Re: [Openstack-operators] Newton consoleauth HA tokens

2017-01-24 Thread Chris Apsey

Saverio,

Here is my pertinent config data:

[default]
memcached_servers = 10.10.6.240:11211

[cache]
enablde = True  <<< I just noticed this as I copied it into the email.

memcache_servers = 10.10.6.240:11211
backend = oslo_cache.memcache_pool

[keystone_authtoken]
memcached_servers = 10.10.6.240:11211


See above - I just noticed my config typo as I was getting ready to send 
this.  Changing it to 'enabled' seems to have solved the issue.  
Spelling counts.  This is a working config for anyone else who runs into 
the same problems.  Be sure to specify the backend under [cache], as the 
default is null.
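For reference, the fragments above can be collected into a single nova.conf sketch. This is only a reconstruction of the thread's working config (the memcached address 10.10.6.240:11211 comes from the posts above); option placement should be verified against the Newton nova/oslo.cache configuration references for your deployment.

```ini
# nova.conf (Newton) - consoleauth token caching via memcached
# Sketch assembled from the fragments in this thread; verify option
# names against your release's documentation.

[DEFAULT]
memcached_servers = 10.10.6.240:11211

[cache]
# 'enabled' (not 'enablde') - the typo discussed above silently
# disables caching, which is what broke consoleauth HA here.
enabled = True
# The backend must be set explicitly; the default is a no-op backend.
backend = oslo_cache.memcache_pool
# Note: oslo.cache uses 'memcache_servers' (no 'd') in [cache].
memcache_servers = 10.10.6.240:11211

[keystone_authtoken]
memcached_servers = 10.10.6.240:11211
```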



Thanks for the assist!

Chris

---
v/r

Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net

On 2017-01-24 11:36, Saverio Proto wrote:
Did you try to restart memcached after changing the configuration to HA?


There are two sections where memcached_servers can be configured:
[DEFAULT]
[keystone_authtoken]

What does your config look like?

Saverio


2017-01-24 6:48 GMT+01:00 Chris Apsey :

All,

I attempted to deploy the nova service in HA, but when users attempt to
connect via the console, it doesn't work about 30% of the time and they
get the 1006 error.  The nova-consoleauth service is reporting their
token as invalid.  I am running memcached, and have tried referencing it
using both the legacy memcached_servers directive and in the new [cache]
configuration section.  No dice.  If I disable the nova-consoleauth
service on one of the nodes, everything works fine.  I see lots of bug
reports floating around about this, but I can't quite get the solutions
I have found reliably working.  I'm on Ubuntu 16.04 LTS+Newton from UCA.

Ideas?

Ideas?

--
v/r

Chris Apsey
bitskr...@bitskrieg.net
https://www.bitskrieg.net

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




Re: [Openstack-operators] [kolla] deprecation/removal of Debian images

2017-01-24 Thread Fox, Kevin M
The issue, as I understand it, is that there are currently no tests to check
whether changes to the Kolla code base break the Debian-based containers, and no
one has stepped up to write those tests in a long time.

So, no one can rely on the containers being in a usable state.

If someone is willing to step up and contribute the missing infrastructure, I 
think everyone would be happy to have Debian continue to be supported.

Is anyone interested in contributing the missing functionality so it can stay?

Thanks,
Kevin

From: Keith Farrar [kfar...@parc.com]
Sent: Tuesday, January 24, 2017 5:04 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [kolla] deprecation/removal of Debian images

Removing Debian support seems odd, in that Debian testing generally has
been the upstream source for Ubuntu OS packages.





Re: [Openstack-operators] [kolla] deprecation/removal of Debian images

2017-01-24 Thread Keith Farrar
Removing Debian support seems odd, in that Debian testing generally has 
been the upstream source for Ubuntu OS packages.






Re: [Openstack-operators] [kolla] deprecation/removal of Debian images

2017-01-24 Thread Michał Jastrzębski
Hello Gilles,

Don't worry, Debian images means Debian *inside* the container; we don't
really care about the host OS. Technically you could now do kolla-build
--base debian, but there is no maintenance of this distro and we're
planning to remove it altogether. That's what this thread is about.
If you are running Debian and using Ubuntu source/binary images on top
of it, you won't be affected in any way.
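To make the distinction concrete (base image inside the container versus host OS), a build invocation might look like the sketch below. The flags shown should be checked against the kolla-build documentation for your release:

```
# Build Kolla images with Debian as the *in-container* base distro.
# Per this thread: unmaintained and slated for removal - use at your own risk.
kolla-build --base debian --type source

# The host OS is independent of the base image: a Debian host can keep
# running the maintained Ubuntu-based images and is unaffected by the removal.
kolla-build --base ubuntu --type binary
```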

On 20 January 2017 at 11:34, Gilles Mocellin wrote:
> On 12/01/2017 at 12:52, Christian Berendt wrote:
>>
>> Hello everybody.
>>
>> In the past we have talked about the removal of Debian images from Kolla.
>> We have postponed the decision.
>>
>> At the moment, there is no visible interest in the Debian images. Therefore I
>> will put the removal to a vote next week, so we can deprecate the
>> images in the current cycle and remove them in the next cycle.
>>
>> Please let me know whether anyone currently uses the Debian images
>> and would like to work on them actively in the future.
>>
>> Christian.
>>
>
> Rhaaa !
>
> Just to give at least one response: I use Kolla on Debian,
> but only at home, for testing.
>
> Anyway, I didn't see any Debian images. I only see CentOS and Ubuntu
> binary images (I use the Ubuntu ones).
> Were they source images, or something else?
>
>
>
>



[Openstack-operators] Plus One...need I say more?

2017-01-24 Thread Melvin Hillsman
Calling all Operators! Help us plan the mid-cycle.

Please add topics to https://etherpad.openstack.org/p/MIL-ops-meetup
or +1 those already there that you are interested in, even if you are
not attending. For each session that makes the cut we will need
volunteer moderators; moderators simply facilitate a conversation.

Please be sure to +1 at least 3 to 5 sessions while you take time to
view the etherpad; your vote(s) count!

The Ops Meetups team meets regularly on IRC at 15:00 UTC on Tuesdays in
#openstack-operators.

More information about the team is here:
https://wiki.openstack.org/wiki/Ops_Meetups_Team
and weekly agendas are posted here:
https://etherpad.openstack.org/p/ops-meetups-team


-- 
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command


Re: [Openstack-operators] Glance image 'visibility' migration in Ocata

2017-01-24 Thread Carlos Konstanski
- There seems to be no difference between public and community, and the
  README does not do an adequate job of explaining the difference.

- There is nothing conceptually wrong with having something be private
  and letting it have a list of members. Private does not have to mean
  "just one user": I can be a member of a private club, for example, and
  that does not mean I sit there and drink all alone. Shared seems
  redundant. Can't we trust users to manage the member list if they
  really want something to be super-duper private?

Just my 2 cents and a first impression of these proposed changes. I
don't want to comment in the review until I'm clear on the purpose of
these changes, which at the moment I am not. Everything these changes
allow can be done now, just with different names.

Brian Rosmaita  writes:

> Hello Operators,
>
> As you may be aware, the discussion about image 'visibility' and its
> migration carried on into 2017 and was only recently settled.  I'm
> writing this because the direction we ultimately took (after much
> discussion--did I mention that?) was *not* the recommendation I made in
> the email below.  I just want to point that out in case the email below
> was last news you'd heard about this issue.
>
> So what actually *did* happen?  There's a patch up for the release note
> explaining the visibility changes in Ocata.  As we'd like it to be
> completely clear, I'd appreciate it if you'd take a few minutes to read
> through it and leave comments on the patch.  Please point out if there's
> anything expressed ambiguously, or if there are any questions that come
> to mind that aren't addressed in the release notes.
>
> https://review.openstack.org/#/c/422897/
>
> thanks,
> brian
>
> On 12/2/16 5:33 PM, Brian Rosmaita wrote:
>> Hello Operators,
>> 
>> Here are the results of the recent operators' poll concerning the
>> upcoming image visibility changes in Glance and the direction we plan to
>> take.  Thanks to all participants for helping us come to a data-driven
>> decision on a contentious issue.
>> 
>> (For background on the operators' poll, see [0].)
>> 
>> The operators' poll was taken to determine the migration strategy:
>> (A) All pre-Ocata 'private' images are migrated to 'shared'.
>> (B) 'private' images with no members stay 'private', 'private' images
>> with members are migrated to 'shared'.
>> (C) No strong preference as long as the documentation is clear.
>> 
>> The results were inconclusive:
>> 9 total responses: 4 for (A), 4 for (B), 1 for (C)
>> 
>> I recommend that we go with option (A).  Here's why (in addition to the
>> arguments made in [2], and my arguments in the "outside" comments on [2]):
>> 
>> (1) There was a comment on the poll that 'private' creates an
>> expectation that we should meet.  This is basically my reason for
>> rejecting Hemanth's otherwise sensible comment on the patch that we
>> should remove 'shared' visibility from Timothy's patch [1] and deal with
>> the issue later.  This is a problem we should deal with now.
>> 
>> (2) We want to respect the principle of least surprise for users, in
>> other words, changes in API behavior as a consequence of changes
>> introduced in Timothy's community/shared images patch [1] are OK if
>> they're a result of something a user *does*, but not OK if they are
>> something that *happens to* a user.  To make this point concrete: if a
>> current image with no members is made 'private' during migration,
>> suddenly the add-member call can't be made on that image in either the
>> v1 or v2 API.  If the image is migrated to 'shared', on the other hand,
>> the current user workflow is not changed.  If the user decides to set
>> the visibility on the image to 'private', then the add-member calls will
>> return 409s, but that's OK because it's a result of an action the user took.
>> 
>> But, you say, all my previously 'private' images being listed as
>> 'shared' could be a big surprise!  I think this is a trade-off we should
>> accept, and address by educating Ocata operators and users of what to
>> expect.  Here's the key thing people need to be aware of:
>> 
>> You can specify a visibility of 'private' at the time of image creation,
>> and it's respected.  An interface could (should?) make this choice clear
>> at the time of image creation.
>> 
>> So to be completely clear, my recommendation is that we go with
>> migration strategy (A) (i.e., the one specified in [2]) and Timothy's
>> current community/shared images patch [1].
>> 
>> What's missing in Glance is an easy way to list images that have
>> visibility 'shared' and no members (and hence, aren't consumable by
>> anyone other than the owner) from images with visibility 'shared' that
>> do have members.  This could be addressed by adding a 'has_members'
>> field to the Image object.  We could use some feedback on how useful
>> such a field would be.
>> 
>> This course of action is a compromise so that we can preserve backward
>> compatibility in the Image APIs.

Re: [Openstack-operators] Glance image 'visibility' migration in Ocata

2017-01-24 Thread Brian Rosmaita
Hello Operators,

As you may be aware, the discussion about image 'visibility' and its
migration carried on into 2017 and was only recently settled.  I'm
writing this because the direction we ultimately took (after much
discussion--did I mention that?) was *not* the recommendation I made in
the email below.  I just want to point that out in case the email below
was last news you'd heard about this issue.

So what actually *did* happen?  There's a patch up for the release note
explaining the visibility changes in Ocata.  As we'd like it to be
completely clear, I'd appreciate it if you'd take a few minutes to read
through it and leave comments on the patch.  Please point out if there's
anything expressed ambiguously, or if there are any questions that come
to mind that aren't addressed in the release notes.

https://review.openstack.org/#/c/422897/

thanks,
brian

On 12/2/16 5:33 PM, Brian Rosmaita wrote:
> Hello Operators,
> 
> Here are the results of the recent operators' poll concerning the
> upcoming image visibility changes in Glance and the direction we plan to
> take.  Thanks to all participants for helping us come to a data-driven
> decision on a contentious issue.
> 
> (For background on the operators' poll, see [0].)
> 
> The operators' poll was taken to determine the migration strategy:
> (A) All pre-Ocata 'private' images are migrated to 'shared'.
> (B) 'private' images with no members stay 'private', 'private' images
> with members are migrated to 'shared'.
> (C) No strong preference as long as the documentation is clear.
> 
> The results were inconclusive:
> 9 total responses: 4 for (A), 4 for (B), 1 for (C)
> 
> I recommend that we go with option (A).  Here's why (in addition to the
> arguments made in [2], and my arguments in the "outside" comments on [2]):
> 
> (1) There was a comment on the poll that 'private' creates an
> expectation that we should meet.  This is basically my reason for
> rejecting Hemanth's otherwise sensible comment on the patch that we
> should remove 'shared' visibility from Timothy's patch [1] and deal with
> the issue later.  This is a problem we should deal with now.
> 
> (2) We want to respect the principle of least surprise for users, in
> other words, changes in API behavior as a consequence of changes
> introduced in Timothy's community/shared images patch [1] are OK if
> they're a result of something a user *does*, but not OK if they are
> something that *happens to* a user.  To make this point concrete: if a
> current image with no members is made 'private' during migration,
> suddenly the add-member call can't be made on that image in either the
> v1 or v2 API.  If the image is migrated to 'shared', on the other hand,
> the current user workflow is not changed.  If the user decides to set
> the visibility on the image to 'private', then the add-member calls will
> return 409s, but that's OK because it's a result of an action the user took.
> 
> But, you say, all my previously 'private' images being listed as
> 'shared' could be a big surprise!  I think this is a trade-off we should
> accept, and address by educating Ocata operators and users of what to
> expect.  Here's the key thing people need to be aware of:
> 
> You can specify a visibility of 'private' at the time of image creation,
> and it's respected.  An interface could (should?) make this choice clear
> at the time of image creation.
> 
> So to be completely clear, my recommendation is that we go with
> migration strategy (A) (i.e., the one specified in [2]) and Timothy's
> current community/shared images patch [1].
> 
> What's missing in Glance is an easy way to list images that have
> visibility 'shared' and no members (and hence, aren't consumable by
> anyone other than the owner) from images with visibility 'shared' that
> do have members.  This could be addressed by adding a 'has_members'
> field to the Image object.  We could use some feedback on how useful
> such a field would be.
> 
> This course of action is a compromise so that we can preserve backward
> compatibility in the Image APIs.  Common to many compromises, no one is
> going to be completely happy with the outcome.  But I really think it's
> the best way to go.
> 
> thanks,
> brian
> 
> 
> [0]
> http://lists.openstack.org/pipermail/openstack-operators/2016-November/012107.html
> [1] https://review.openstack.org/#/c/369110/
> [2] https://review.openstack.org/#/c/396919/
> 

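Migration strategy (A) and the "least surprise" behavior described in point (2) can be sketched in a few lines of Python. This is an illustrative model only; the names migrate_visibility and can_add_member are hypothetical, not actual Glance code.

```python
# Illustrative model of visibility migration strategy (A) and the
# resulting member-API behavior; not actual Glance code.

def migrate_visibility(visibility: str) -> str:
    """Option (A): every pre-Ocata 'private' image becomes 'shared'."""
    return "shared" if visibility == "private" else visibility

def can_add_member(visibility: str) -> bool:
    """Post-Ocata rule sketch: add-member calls succeed only on 'shared'
    images; on 'private' images they return 409."""
    return visibility == "shared"

# A pre-Ocata 'private' image migrates to 'shared', so the existing
# add-member workflow keeps working (nothing *happens to* the user).
migrated = migrate_visibility("private")
assert migrated == "shared"
assert can_add_member(migrated)

# Only after the user explicitly sets visibility back to 'private' does
# add-member start failing with 409 - a result of a user *action*.
assert not can_add_member("private")
```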



[Openstack-operators] [telecom-nfv] Meeting #17 tomorrow

2017-01-24 Thread Curtis
Hi,

The bi-weekly meeting for the Operators Telecom/NFV team is tomorrow [1].

Feel free to have a look at the agenda and make any alterations [2]. :)

Thanks,
Curtis.

[1]: 
http://eavesdrop.openstack.org/#OpenStack_Operators_Telco_and_NFV_Working_Group
[2]: https://etherpad.openstack.org/p/ops-telco-nfv-meeting-agenda



[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC

2017-01-24 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’d like to develop the discussion on what the WG can achieve for 
the Boston OpenStack Summit. We’d also like to share people’s thoughts and 
experiences on frameworks for scientific reproducibility.  There’s also the 
opportunity to propose a hands-on workshop at SC2017.

Best wishes,
Stig

[1] https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_January_24th_2017
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group


Re: [Openstack-operators] OsOps Reboot

2017-01-24 Thread Edgar Magana
I do not think the PTG is the place for this session. However, if there will be
enough participation and you are all already there anyway, it makes sense to get
together.

Edgar

From: Matt Fischer 
Date: Monday, January 23, 2017 at 4:27 PM
To: Mike Dorman 
Cc: OpenStack Operators 
Subject: Re: [Openstack-operators] OsOps Reboot

Will there be enough of us at the PTG for an impromptu session there as well?

On Mon, Jan 23, 2017 at 9:18 AM, Mike Dorman wrote:
+1!  Thanks for driving this.


From: Edgar Magana
Date: Friday, January 20, 2017 at 1:23 PM
To: "m...@mattjarvis.org.uk", Melvin Hillsman
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] OsOps Reboot

I super second this! Yes, looking forward to amazing contributions there.

Edgar

From: Matt Jarvis
Reply-To: "m...@mattjarvis.org.uk"
Date: Friday, January 20, 2017 at 12:33 AM
To: Melvin Hillsman
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] OsOps Reboot

Great stuff Melvin ! Look forward to seeing this move forward.

On Fri, Jan 20, 2017 at 6:32 AM, Melvin Hillsman wrote:
Good day everyone,

As operators, we would like to reboot the efforts started around OsOps. Initial
things that may make sense to work towards: restarting meetings, standardizing
the repos (like having a lib or common folder, and READMEs that include the
release(s) a tool works with), increasing the feedback loop from operators in
general, defining actionable work items, and identifying teams/people with
resources for continuous testing/feedback.

We have got to a great place, so let's increase the momentum and maximize all
the work that has been done for OsOps so far. Please visit the following link [
https://goo.gl/forms/eSvmMYGUgRK901533
] to vote on the day of the week and the time (UTC) you would like to have the
OsOps meeting. Also visit this etherpad [
https://etherpad.openstack.org/p/osops-meeting
] to help shape the initial and ongoing agenda items.

Really appreciate you taking time to read through this email and looking 
forward to all the great things to come.

Also, we started an etherpad for brainstorming around how OsOps could/would
function. It is a very rough draft/outline of ideas right now, so again, please
provide feedback:
https://etherpad.openstack.org/p/osops-project-future


--
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command




Re: [Openstack-operators] Newton consoleauth HA tokens

2017-01-24 Thread Saverio Proto
Did you try to restart memcached after changing the configuration to HA?

There are two sections where memcached_servers can be configured:
[DEFAULT]
[keystone_authtoken]

What does your config look like?

Saverio


2017-01-24 6:48 GMT+01:00 Chris Apsey :
> All,
>
> I attempted to deploy the nova service in HA, but when users attempt to
> connect via the console, it doesn't work about 30% of the time and they get
> the 1006 error.  The nova-consoleauth service is reporting their token as
> invalid.  I am running memcached, and have tried referencing it using both
> the legacy memcached_servers directive and in the new [cache] configuration
> section.  No dice.  If I disable the nova-consoleauth service on one of the
> nodes, everything works fine.  I see lots of bug reports floating around
> about this, but I can't quite get the solutions I have found reliably
> working.  I'm on Ubuntu 16.04 LTS+Newton from UCA.
>
> Ideas?
>
> --
> v/r
>
> Chris Apsey
> bitskr...@bitskrieg.net
> https://www.bitskrieg.net
>



Re: [Openstack-operators] Newton consoleauth HA tokens

2017-01-24 Thread George Paraskevas
Hello,

You should also define the memcached servers in nova's [DEFAULT] section, I
believe. Try that.

Best regards
George

On Tue, 24 Jan 2017, 07:48 Chris Apsey wrote:

> All,
>
> I attempted to deploy the nova service in HA, but when users attempt to
> connect via the console, it doesn't work about 30% of the time and they
> get the 1006 error.  The nova-consoleauth service is reporting their
> token as invalid.  I am running memcached, and have tried referencing it
> using both the legacy memcached_servers directive and in the new [cache]
> configuration section.  No dice.  If I disable the nova-consoleauth
> service on one of the nodes, everything works fine.  I see lots of bug
> reports floating around about this, but I can't quite get the solutions
> I have found reliably working.  I'm on Ubuntu 16.04 LTS+Newton from UCA.
>
> Ideas?
>
> --
> v/r
>
> Chris Apsey
> bitskr...@bitskrieg.net
> https://www.bitskrieg.net
>
>