Re: JAAS confusion

2017-10-12 Thread Pete Vander Giessen
> So just changing your client isn't going to fix the issue, as it is a
> server side issue that is refusing to destroy the models.

Aha. That makes more sense, actually.

I'll look forward to testing things out once things are updated on the JAAS
side :-)

~ PeteVG

On Thu, Oct 12, 2017 at 3:46 PM John Meinel  wrote:

> So just changing your client isn't going to fix the issue, as it is a
> server side issue that is refusing to destroy the models.
>
> https://bugs.launchpad.net/bugs/1714409
>
> is at least one bug that might be relevant to your issue.
>
> I also know that we have:
>  https://bugs.launchpad.net/bugs/1721786
>
> which is a different iteration of remove-model failing (we changed a lot
> of code to use a shared pool of information about models, and it has a
> slightly different issue as models get removed). But that one is 2.3
> specific and I expect it to be fixed by early next week.
>
> John
> =:->
>
> On Thu, Oct 12, 2017 at 11:59 AM, Pete Vander Giessen <
> pete.vandergies...@canonical.com> wrote:
>
>> Hi All,
>>
>> > I think the inability to remove a model that is half-dead might be
>> fixed already in 2.3 but has to do with an issue around 2 critical
>> documents that define a model, and one of them has been removed but not the
>> other, which leads to a bunch of code that gets a different view of whether
>> the model exists or not.
>>
>> I have a few models stuck in a state where I can't remove them. I gave
>> juju 2.3 a try via the edge channel in the snap
>> (2.3-beta2+develop-79cd92d), and it looks like the error message is
>> different, but I still can't remove them.
>>
>> The old error message:
>>
>> ERROR cannot destroy model: failed to destroy model: state changing too
>> quickly; try again soon
>>
>> The new error message:
>>
>> ERROR cannot destroy model: context deadline exceeded
>>
>> Is there an open bug that I can paste error messages and logs to?
>>
>> ~ PeteVG
>>
>>
>>
>> On Mon, Oct 9, 2017 at 4:18 PM John Meinel 
>> wrote:
>>
>>> The "6 accessible models" count is gone in 2.3 (IIRC); it was just
>>> reflecting some locally cached information about model numbers that wasn't
>>> being kept up to date properly.
>>>
>>> I think the inability to remove a model that is half-dead might be fixed
>>> already in 2.3 but has to do with an issue around 2 critical documents that
>>> define a model, and one of them has been removed but not the other, which
>>> leads to a bunch of code that gets a different view of whether the model
>>> exists or not.
>>>
>>> A different explanation could be that you created a model with the same
>>> name from a different client, and thus the underlying UUID doesn't actually
>>> exist, but there is a model name collision. (Your local client knew about a
>>> model named 'mymodel' with UUID 1234, but you had a different client that
>>> deleted that model and created a new 'mymodel' with UUID 3456, and when
>>> you're trying to 'juju destroy-model' we are using the 1234 UUID again. I'm
>>> brainstorming, though, and wouldn't say concretely that it is definitely
>>> true.)
>>>
>>> John
>>> =:->
>>>
>>>
>>> On Mon, Oct 9, 2017 at 9:42 PM, Tom Barber  wrote:
>>>
 Hello folks

 Couple of random questions:

 juju destroy-model mymodel
 WARNING! This command will destroy the "mymodel" model.
 This includes all machines, applications, data and other resources.

 Continue [y/N]? y
 ERROR cannot connect to API: model "mymodel" has been removed from the
 controller, run 'juju models' and switch to one of them.
 There are 6 accessible models on controller "jaas".

 juju models
 Controller: jaas

 Model    Cloud/Region   Status     Machines  Cores  Access  Last connection
 mymodel  aws/eu-west-1  available  5         9      -       never connected



 Two things about this output: firstly, how do I delete the model that seems
 stuck?

 Secondly, what is the "6 accessible models" bit talking about?

 Thanks

 Tom

 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju


>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: JAAS confusion

2017-10-12 Thread John Meinel
So just changing your client isn't going to fix the issue, as it is a
server side issue that is refusing to destroy the models.

https://bugs.launchpad.net/bugs/1714409

is at least one bug that might be relevant to your issue.

I also know that we have:
 https://bugs.launchpad.net/bugs/1721786

which is a different iteration of remove-model failing (we changed a lot of
code to use a shared pool of information about models, and it has a
slightly different issue as models get removed). But that one is 2.3
specific and I expect it to be fixed by early next week.
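
In the meantime, if it helps when attaching things to the bug: capturing the
client-side trace of the failing destroy, plus the model UUIDs the client
thinks it is talking to, gives us something concrete to look at. Roughly (a
sketch, assuming the stuck model is still called "mymodel"; --debug sends
juju's logging to stderr):

    juju destroy-model mymodel --debug 2> destroy-model-debug.txt
    juju models --format yaml > models.yaml

The controller-side logs are what really matter for a server-side failure, but
the UUIDs in the yaml output at least pin down which model document the
controller is refusing to remove.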

John
=:->

On Thu, Oct 12, 2017 at 11:59 AM, Pete Vander Giessen <
pete.vandergies...@canonical.com> wrote:

> Hi All,
>
> > I think the inability to remove a model that is half-dead might be
> fixed already in 2.3 but has to do with an issue around 2 critical
> documents that define a model, and one of them has been removed but not the
> other, which leads to a bunch of code that gets a different view of whether
> the model exists or not.
>
> I have a few models stuck in a state where I can't remove them. I gave
> juju 2.3 a try via the edge channel in the snap
> (2.3-beta2+develop-79cd92d), and it looks like the error message is
> different, but I still can't remove them.
>
> The old error message:
>
> ERROR cannot destroy model: failed to destroy model: state changing too
> quickly; try again soon
>
> The new error message:
>
> ERROR cannot destroy model: context deadline exceeded
>
> Is there an open bug that I can paste error messages and logs to?
>
> ~ PeteVG
>
>
>
> On Mon, Oct 9, 2017 at 4:18 PM John Meinel  wrote:
>
>> The "6 accessible models" count is gone in 2.3 (IIRC); it was just
>> reflecting some locally cached information about model numbers that wasn't
>> being kept up to date properly.
>>
>> I think the inability to remove a model that is half-dead might be fixed
>> already in 2.3 but has to do with an issue around 2 critical documents that
>> define a model, and one of them has been removed but not the other, which
>> leads to a bunch of code that gets a different view of whether the model
>> exists or not.
>>
>> A different explanation could be that you created a model with the same
>> name from a different client, and thus the underlying UUID doesn't actually
>> exist, but there is a model name collision. (Your local client knew about a
>> model named 'mymodel' with UUID 1234, but you had a different client that
>> deleted that model and created a new 'mymodel' with UUID 3456, and when
>> you're trying to 'juju destroy-model' we are using the 1234 UUID again. I'm
>> brainstorming, though, and wouldn't say concretely that it is definitely
>> true.)
>>
>> John
>> =:->
>>
>>
>> On Mon, Oct 9, 2017 at 9:42 PM, Tom Barber  wrote:
>>
>>> Hello folks
>>>
>>> Couple of random questions:
>>>
>>> juju destroy-model mymodel
>>> WARNING! This command will destroy the "mymodel" model.
>>> This includes all machines, applications, data and other resources.
>>>
>>> Continue [y/N]? y
>>> ERROR cannot connect to API: model "mymodel" has been removed from the
>>> controller, run 'juju models' and switch to one of them.
>>> There are 6 accessible models on controller "jaas".
>>>
>>> juju models
>>> Controller: jaas
>>>
>>> Model    Cloud/Region   Status     Machines  Cores  Access  Last connection
>>> mymodel  aws/eu-west-1  available  5         9      -       never connected
>>>
>>>
>>>
>>> Two things about this output: firstly, how do I delete the model that seems
>>> stuck?
>>>
>>> Secondly, what is the "6 accessible models" bit talking about?
>>>
>>> Thanks
>>>
>>> Tom
>>>
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>>
>> --
>> Juju mailing list
>> Juju@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju
>>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: default network space

2017-10-12 Thread Ian Booth
Copying in the Juju list also

On 12/10/17 22:18, Ian Booth wrote:
> I'd like to understand the use case you have in mind a little better. The
> premise of the network-get output is that charms should not think about public
> vs private addresses in terms of what to put into relation data - the other
> remote unit should not be exposed to things in those terms.
> 
> There's some doc here to explain things in more detail
> 
> https://jujucharms.com/docs/master/developer-network-primitives
> 
> The TL;DR is that charms need to care about:
> - what address do I bind to (listen on)
> - what address do external actors use to connect to me (ingress)
> 
> Depending on how the charm has been deployed, and more specifically whether
> it is in a cross model relation, the ingress address might be either the
> public or the private address. Juju will decide based on a number of factors
> (whether models are deployed to the same region, VPC, and other
> provider-specific aspects) and populate the network-get data accordingly.
> NOTE: for now Juju will always pick the public address (if there is one) as
> the ingress value for cross model relations - the algorithm to short-circuit
> to a cloud-local address is not yet finished.
> 
> The content of the bind-addresses block is space-aware, in that the addresses
> are filtered based on the space with which the specified endpoint is
> associated. The network-get output, though, should not include any space
> information explicitly - this is a concern a charm should not care about.
> 
> 
> On 12/10/17 13:35, James Beedy wrote:
>> Hello all,
>>
>> In case you haven't noticed, we now have a network_get() function available
>> in charmhelpers.core.hookenv (in master, not stable).
>>
>> Just wanted to have a little discussion about how we are going to be
>> parsing network_get().
>>
>> I first want to address the output of network_get() for an instance
>> deployed to the default vpc, no spaces constraint, and related to another
>> instance in another model also default vpc, no spaces constraint.
>>
>> {'ingress-addresses': ['107.22.129.65'],
>>  'bind-addresses': [
>>      {'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
>>       'interfacename': 'eth0',
>>       'macaddress': '12:ba:53:58:9c:52'},
>>      {'addresses': [{'cidr': '252.48.0.0/12', 'address': '252.51.59.1'}],
>>       'interfacename': 'fan-252',
>>       'macaddress': '1e:a2:1e:96:ec:a2'}]}
>>
>>
>> The use case I have in mind here is that I want to provide the private
>> network interface address via relation data in the provides.py of my
>> interface to the relating application.
>>
>> This can happen by calling hookenv.network_get('<endpoint>') in the layer
>> that provides the interface in my charm, and parsing the output to get the
>> private interface ip data, to then set that on the provides side of the
>> relation.
>>
>> Tracking?
>>
>> The problem:
>>
>> The problem is that it's not so straightforward to just get the private
>> address from the output of network_get().
>>
>> As you can see above, I could filter on the network interface name, but
>> that's about the worst way one could go about this.
>>
>> Initially, I assumed the network_get() output would look different if you
>> had specified a spaces constraint when deploying your application, but the
>> output was the same as with no spaces, i.e. spaces aren't listed in the
>> output of network_get().
>>
>>
>> All in all, what I'm after is a consistent way to get either the space an
>> interface is bound to, or the public vs private address, from the output of
>> network_get(). I think this applies to just about every provider (at least
>> the ones that use spaces).
>>
>> Instead of the dict above, I was thinking we might namespace the interfaces
>> by what type of interface they are, to make it easier to decipher when
>> parsing the network_get() output.
>>
>> My idea is a schema like the following:
>>
>> {
>>     'private-networks': {
>>         'my-admin-space': {
>>             'addresses': [
>>                 {
>>                     'cidr': '172.31.48.0/20',
>>                     'address': '172.31.51.59'
>>                 }
>>             ],
>>             'interfacename': 'eth0',
>>             'macaddress': '12:ba:53:58:9c:52'
>>         }
>>     },
>>     'public-networks': {
>>         'default': {
>>             'addresses': [
>>                 {
>>                     'cidr': 'publicipaddress/32',
>>                     'address': 'publicipaddress'
>>                 }
>>             ]
>>         }
>>     },
>>     'fan-networks': {
>>         'fan-252': {
>>             ...
>>         }
>>     }
>> }
>>
>> Where all interfaces bound to spaces are considered private addresses, and
>> with the assumption that if you don't specify a space constraint, your
>> private network interface is bound to the "default" space.
>>
>> The key thing here is the schema structure grouping the interfaces bound to
>> spaces inside a private-networks level in the dict, and the introduction of
>> the fact that if you don't specify a space, you get an address bound to an
>> artificial "default" space.
>>
>> I feel this would make things easier to consume, and to interface with, from
>> a developer standpoint.
>>
>> Is this making sense? How do others feel?


Re: default network space

2017-10-12 Thread Ian Booth
I'd like to understand the use case you have in mind a little better. The
premise of the network-get output is that charms should not think about public
vs private addresses in terms of what to put into relation data - the other
remote unit should not be exposed to things in those terms.

There's some doc here to explain things in more detail

https://jujucharms.com/docs/master/developer-network-primitives

The TL;DR is that charms need to care about:
- what address do I bind to (listen on)
- what address do external actors use to connect to me (ingress)

Depending on how the charm has been deployed, and more specifically whether it
is in a cross model relation, the ingress address might be either the public or
the private address. Juju will decide based on a number of factors (whether
models are deployed to the same region, VPC, and other provider-specific
aspects) and populate the network-get data accordingly. NOTE: for now Juju will
always pick the public address (if there is one) as the ingress value for cross
model relations - the algorithm to short-circuit to a cloud-local address is
not yet finished.

The content of the bind-addresses block is space-aware, in that the addresses
are filtered based on the space with which the specified endpoint is
associated. The network-get output, though, should not include any space
information explicitly - this is a concern a charm should not care about.
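
To make that concrete, here is a rough sketch of how a charm hook might consume
this (illustrative only - the 'website' endpoint name is made up, and it
assumes the network_get() helper from charmhelpers.core.hookenv mentioned in
the quoted thread below, returning the dict shape shown there):

    # Sketch: listen on the bound address, advertise the ingress address.
    from charmhelpers.core import hookenv

    info = hookenv.network_get('website')  # 'website' is a hypothetical endpoint

    # Address the unit should bind to (listen on): first bound interface.
    bind_address = info['bind-addresses'][0]['addresses'][0]['address']

    # Address a remote unit should use to reach this unit (ingress).
    ingress_address = info['ingress-addresses'][0]

    hookenv.log('bind on %s, advertise %s' % (bind_address, ingress_address))

The point being that the charm only ever asks "what do I bind to" and "what do
I advertise", and never has to decide public vs private itself.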


On 12/10/17 13:35, James Beedy wrote:
> Hello all,
> 
> In case you haven't noticed, we now have a network_get() function available
> in charmhelpers.core.hookenv (in master, not stable).
> 
> Just wanted to have a little discussion about how we are going to be
> parsing network_get().
> 
> I first want to address the output of network_get() for an instance
> deployed to the default vpc, no spaces constraint, and related to another
> instance in another model also default vpc, no spaces constraint.
> 
> {'ingress-addresses': ['107.22.129.65'],
>  'bind-addresses': [
>      {'addresses': [{'cidr': '172.31.48.0/20', 'address': '172.31.51.59'}],
>       'interfacename': 'eth0',
>       'macaddress': '12:ba:53:58:9c:52'},
>      {'addresses': [{'cidr': '252.48.0.0/12', 'address': '252.51.59.1'}],
>       'interfacename': 'fan-252',
>       'macaddress': '1e:a2:1e:96:ec:a2'}]}
> 
> 
> The use case I have in mind here is that I want to provide the private
> network interface address via relation data in the provides.py of my
> interface to the relating application.
> 
> This can happen by calling hookenv.network_get('<endpoint>') in the layer
> that provides the interface in my charm, and parsing the output to get the
> private interface ip data, to then set that on the provides side of the
> relation.
> 
> Tracking?
> 
> The problem:
> 
> The problem is that it's not so straightforward to just get the private
> address from the output of network_get().
> 
> As you can see above, I could filter on the network interface name, but
> that's about the worst way one could go about this.
> 
> Initially, I assumed the network_get() output would look different if you
> had specified a spaces constraint when deploying your application, but the
> output was the same as with no spaces, i.e. spaces aren't listed in the
> output of network_get().
> 
> 
> All in all, what I'm after is a consistent way to get either the space an
> interface is bound to, or the public vs private address, from the output of
> network_get(). I think this applies to just about every provider (at least
> the ones that use spaces).
> 
> Instead of the dict above, I was thinking we might namespace the interfaces
> by what type of interface they are, to make it easier to decipher when
> parsing the network_get() output.
> 
> My idea is a schema like the following:
> 
> {
>     'private-networks': {
>         'my-admin-space': {
>             'addresses': [
>                 {
>                     'cidr': '172.31.48.0/20',
>                     'address': '172.31.51.59'
>                 }
>             ],
>             'interfacename': 'eth0',
>             'macaddress': '12:ba:53:58:9c:52'
>         }
>     },
>     'public-networks': {
>         'default': {
>             'addresses': [
>                 {
>                     'cidr': 'publicipaddress/32',
>                     'address': 'publicipaddress'
>                 }
>             ]
>         }
>     },
>     'fan-networks': {
>         'fan-252': {
>             ...
>         }
>     }
> }
> 
> Where all interfaces bound to spaces are considered private addresses, and
> with the assumption that if you don't specify a space constraint, your
> private network interface is bound to the "default" space.
> 
> The key thing here is the schema structure grouping the interfaces bound to
> spaces inside a private-networks level in the dict, and the introduction of
> the fact that if you don't specify a space, you get an address bound to an
> artificial "default" space.
> 
> I feel this would make things easier to consume, and to interface with, from
> a developer standpoint.
> 
> Is this making sense? How do others feel?
> 
> 
> 

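For comparison, with the output shaped as in the quote above, the filtering a
charm currently has to do to pull out a "private" address looks roughly like
this (a sketch only - skipping fan-* interfaces and using an RFC1918 test are
my assumptions about what "private" should mean, and it assumes Python 3 for
the stdlib ipaddress module):

    # Sketch: pick a private, non-fan address out of network_get() output.
    import ipaddress

    from charmhelpers.core import hookenv

    def private_address(endpoint):
        info = hookenv.network_get(endpoint)
        for iface in info.get('bind-addresses', []):
            if iface.get('interfacename', '').startswith('fan-'):
                continue  # ignore fan overlay interfaces
            for addr in iface.get('addresses', []):
                if ipaddress.ip_address(addr['address']).is_private:
                    return addr['address']
        return None

which is exactly the sort of heuristic the grouping proposed above would let
charms drop.
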
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: FW: [PIKE] juju based OpenStack --Query

2017-10-12 Thread James Page
Hi Akshay

I think you've tripped over:

  https://bugs.launchpad.net/charm-keystone/+bug/1722909

which I did as well last night - this only impacts the development version
of the charm which you are using with the bundle.

I have a fix up for this; it should land in the next couple of hours (we've
been having some challenges with the infrastructure that runs our gate testing
upstream in OpenStack, which have now been resolved).
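
Once the fixed revision publishes, picking it up should just be a charm upgrade
on the affected application - roughly (a sketch, assuming it is deployed under
the name "keystone", as in the bundle):

    juju upgrade-charm keystone

and, if the unit is stuck in an error state, marking it resolved
(juju resolved keystone/0) so the failed hook can run again.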

Cheers

James

On Thu, 12 Oct 2017 at 07:12 Akshay Ranade 
wrote:

>
>
> Hi All,
>
> We at Veritas Technologies LLC are trying to deploy Juju-based OpenStack
> Pike, from:
> https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-pike/
>
> But it fails at the keystone charm in the 'shared-db-relation-changed' hook,
> giving the following stack trace:
>
>
>
> ***
>
>
>
> Traceback (most recent call last):
>   File "./hooks/shared-db-relation-changed", line 919, in <module>
>     main()
>   File "./hooks/shared-db-relation-changed", line 912, in main
>     hooks.execute(sys.argv)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/hookenv.py", line 784, in execute
>     self._hooks[hook_name]()
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1890, in wrapped_f
>     restart_functions)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/host.py", line 685, in restart_on_change_helper
>     r = lambda_f()
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1889, in <lambda>
>     (lambda: f(*args, **kwargs)), restart_map, stopstart,
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1830, in inner_synchronize_ca_if_changed2
>     ret = f(*args, **kwargs)
>   File "./hooks/shared-db-relation-changed", line 446, in db_changed
>     leader_init_db_if_ready(use_current_context=True)
>   File "./hooks/shared-db-relation-changed", line 420, in leader_init_db_if_ready
>     update_all_identity_relation_units(check_db_ready=False)
>   File "./hooks/shared-db-relation-changed", line 382, in update_all_identity_relation_units
>     ensure_initial_admin(config)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1240, in ensure_initial_admin
>     return _ensure_initial_admin(config)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/decorators.py", line 40, in _retry_on_exception_inner_2
>     return f(*args, **kwargs)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1195, in _ensure_initial_admin
>     create_tenant("admin", DEFAULT_DOMAIN)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 934, in create_tenant
>     manager = get_manager()
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1023, in get_manager
>     api_version)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/decorators.py", line 40, in _retry_on_exception_inner_2
>     return f(*args, **kwargs)
>   File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/manager.py", line 75, in get_keystone_manager
>     for svc in manager.api.services.list():
>   File "/usr/lib/python2.7/dist-packages/keystoneclient/v2_0/services.py", line 35, in list
>     return self._list("/OS-KSADM/services", "OS-KSADM:services")
>   File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 125, in _list
>     resp, body = self.client.get(url, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
>     return self.request(url, 'GET', **kwargs)
>   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 447, in request
>     resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 192, in request
>     return self.session.request(url, method, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner
>     return wrapped(*args, **kwargs)
>   File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 445, in request
>     raise exceptions.from_response(resp, method, url)
> keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 500)
>
>
>
> ***
>
>
>
> Can someone please help us out here?
>
>
>
>
>
> Thanks,
>
> Akshay Ranade
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju

FW: [PIKE] juju based OpenStack --Query

2017-10-12 Thread Akshay Ranade

Hi All,
We at Veritas Technologies LLC are trying to deploy Juju-based OpenStack Pike,
from:
https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-pike/
But it fails at the keystone charm in the 'shared-db-relation-changed' hook,
giving the following stack trace:

***

Traceback (most recent call last):
  File "./hooks/shared-db-relation-changed", line 919, in <module>
    main()
  File "./hooks/shared-db-relation-changed", line 912, in main
    hooks.execute(sys.argv)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/hookenv.py", line 784, in execute
    self._hooks[hook_name]()
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1890, in wrapped_f
    restart_functions)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/host.py", line 685, in restart_on_change_helper
    r = lambda_f()
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/contrib/openstack/utils.py", line 1889, in <lambda>
    (lambda: f(*args, **kwargs)), restart_map, stopstart,
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1830, in inner_synchronize_ca_if_changed2
    ret = f(*args, **kwargs)
  File "./hooks/shared-db-relation-changed", line 446, in db_changed
    leader_init_db_if_ready(use_current_context=True)
  File "./hooks/shared-db-relation-changed", line 420, in leader_init_db_if_ready
    update_all_identity_relation_units(check_db_ready=False)
  File "./hooks/shared-db-relation-changed", line 382, in update_all_identity_relation_units
    ensure_initial_admin(config)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1240, in ensure_initial_admin
    return _ensure_initial_admin(config)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/decorators.py", line 40, in _retry_on_exception_inner_2
    return f(*args, **kwargs)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1195, in _ensure_initial_admin
    create_tenant("admin", DEFAULT_DOMAIN)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 934, in create_tenant
    manager = get_manager()
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/keystone_utils.py", line 1023, in get_manager
    api_version)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/charmhelpers/core/decorators.py", line 40, in _retry_on_exception_inner_2
    return f(*args, **kwargs)
  File "/var/lib/juju/agents/unit-keystone-0/charm/hooks/manager.py", line 75, in get_keystone_manager
    for svc in manager.api.services.list():
  File "/usr/lib/python2.7/dist-packages/keystoneclient/v2_0/services.py", line 35, in list
    return self._list("/OS-KSADM/services", "OS-KSADM:services")
  File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 125, in _list
    resp, body = self.client.get(url, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 447, in request
    resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 192, in request
    return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner
    return wrapped(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystoneclient/session.py", line 445, in request
    raise exceptions.from_response(resp, method, url)
keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 500)

***

Can someone please help us out here?


Thanks,
Akshay Ranade
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju