Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 23:20, Simon Davy  wrote:

> One thing that seems to have landed in 2.1, which is worth noting IMO, is
> the local juju lxd image aliases.
>
> tl;dr: juju 2.1 now looks for the lxd image alias juju/$series/$arch in the
> local lxd server, and uses that if it finds it.
>
> This is amazing. I can now build a local nightly image[1] that pre-installs
> and pre-downloads a whole set of packages[2], and my local lxd units don't
> have to install them when they spin up. Between layer-basic and Canonical
> IS' basenode, for us that's about 111 packages that I don't need to install
> on every machine in my 10 node bundle. Took my install hook times from 5min+
> each to <1min, and probably halves my initial deploy time, on average.

Ooh, thanks for highlighting this! I've needed this feature for a long
time for exactly the same reasons.


> [2] my current nightly cron:
> https://gist.github.com/bloodearnest/3474741411c4fdd6c2bb64d08dc75040

/me starts stealing

-- 
Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: regression: restore-backup broken by recent commit

2017-02-23 Thread Tim Penhey

On 24/02/17 16:17, Tim Penhey wrote:

Also, a CI run of a develop revision just before the gorilla/websocket
reversion hit this:

http://reports.vapour.ws/releases/4922/job/functional-ha-backup-restore/attempt/5045#highlight


cannot create collection "txns": unauthorized mongo access: not
authorized on juju to execute command { create: "txns" }
(unauthorized access)

Not sure why that is happening either. Seems that the restore of mongo
is incredibly fragile.


Actually 4922 was the test run after the reversion.

Clearly there is some issue here, particularly the errors that showed up
in the CI tests. What we need is more data...


Tim

--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: regression: restore-backup broken by recent commit

2017-02-23 Thread Tim Penhey

Hi Curtis (also expanding to juju-dev),

I have been looking into this issue. The good news is that it
doesn't appear to be a real problem with gorilla/websocket at all;
instead, a change in timing exposed an existing issue that hadn't
surfaced before.


I'll be looking into that issue: the restore command, after
bootstrapping, doesn't appear to retry if it gets an error like "denied:
upgrade in progress".


Secondly, I tried to reproduce on lxd, only to find that there is an issue
with the rebootstrap on lxd - it just doesn't work.


Then I tried with AWS, to mirror the CI test as closely as possible. I 
didn't hit the same timing issue as before, but instead got a different 
failure with the mongo restore:


  http://pastebin.ubuntu.com/24056766/

I have no idea why juju.txns.stash failed but juju.txns and 
juju.txns.logs succeeded.


Also, a CI run of a develop revision just before the gorilla/websocket 
reversion hit this:


http://reports.vapour.ws/releases/4922/job/functional-ha-backup-restore/attempt/5045#highlight

cannot create collection "txns": unauthorized mongo access: not
authorized on juju to execute command { create: "txns" }
(unauthorized access)

Not sure why that is happening either. Seems that the restore of mongo 
is incredibly fragile.


Again, this shows errors in the restore code, but luckily they have nothing 
to do with gorilla/websockets.


Tim

On 23/02/17 04:02, Curtis Hovey-Canonical wrote:

Hi Tim, et al.

All the restore-backup tests in all the substrates failed with your
recent gorilla socket commit. The restore-backup command often
fails when bootstrap or connection behaviours change. This new bug is
definitely a connection failure while the client is driving a
restore.

We need the develop branch fixed. As the previous commit was blessed,
we are certain 2.2-alpha1 was in very good shape before the gorilla
change.

Restore backup failed websocket: close 1006
https://bugs.launchpad.net/juju/+bug/1666898

As seen at
http://reports.vapour.ws/releases/issue/5550dda7749a561097cf3d44

All the restore-backup tests failed when testing commit
https://github.com/juju/juju/commit/f06c3e96f4e438dc24a28d8ebf7d22c76fff47e2

We see:
Initial model "default" added.
04:54:39 INFO juju.juju api.go:72 connecting to API addresses:
[52.201.105.25:17070 172.31.15.167:17070]
04:54:39 INFO juju.api apiclient.go:569 connection established to
"wss://52.201.105.25:17070/model/89bcc17c-9af9-4113-8417-71847838f61a/api"
...
04:55:20 ERROR juju.api.backups restore.go:136 could not clean up
after failed restore attempt: 
04:55:20 ERROR cmd supercommand.go:458 cannot perform restore: :
codec.ReadHeader error: error receiving message: websocket: close 1006
(abnormal closure): unexpected EOF

This is seen in aws, prodstack, gce





--
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Juju 2.1.0, and Conjure-up, are here!

2017-02-23 Thread Simon Davy
On Thu, Feb 23, 2017 at 2:48 AM, Curtis Hovey-Canonical <
cur...@canonical.com> wrote:

> A new release of Juju, 2.1.0, and Conjure-up, are here!
>
>
> ## What's new in 2.1.0
>
> - Model migration
> - Interactive `add-cloud`
> - Networking changes
> - Conjure-up
> - LXD credential changes
> - Changes to the GUI
> - Instrumentation of Juju via Prometheus endpoints
> - Improved OpenStack keystone v3 authentication
> - New cloud-regions supported
> - Additional improvements
>

One thing that seems to have landed in 2.1, which is worth noting IMO, is
the local juju lxd image aliases.

tl;dr: juju 2.1 now looks for the lxd image alias juju/$series/$arch in the
local lxd server, and uses that if it finds it.

This is amazing. I can now build a local nightly image[1] that pre-installs
and pre-downloads a whole set of packages[2], and my local lxd units don't
have to install them when they spin up. Between layer-basic and Canonical
IS' basenode, for us that's about 111 packages that I don't need to install
on every machine in my 10 node bundle. Took my install hook times from
5min+ each to <1min, and probably halves my initial deploy time, on average.
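
If you want to try this yourself, a stripped-down sketch of such a nightly
build (the series/arch, container name and package list here are
placeholders; the real thing is the gist in [2]):

import subprocess

ALIAS = 'juju/xenial/amd64'  # the alias juju 2.1 looks up on the local lxd
PACKAGES = ['python3-pip', 'build-essential']  # placeholder package list

def run(*cmd):
    subprocess.check_call(cmd)

run('lxc', 'launch', 'ubuntu:xenial', 'nightly')
# (a real script would wait for cloud-init to finish before exec'ing)
run('lxc', 'exec', 'nightly', '--', 'apt-get', 'update')
run('lxc', 'exec', 'nightly', '--', 'apt-get', 'install', '-y', *PACKAGES)
run('lxc', 'stop', 'nightly')
run('lxc', 'publish', 'nightly', '--alias', ALIAS)  # drop any stale alias first
run('lxc', 'delete', 'nightly')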

Oddly, I only found out about this indirectly via Andrew Wilkins' blog
post[3] on CentOS images, which suggested this was possible. I had to
actually look at the code[4] to figure it out.

For me, this is the single biggest feature in 2.1, and will save me 30mins+
a day, per person who works with juju on my team. But more than raw time,
it reduces the iteration interval and the number of context switches I'm
doing as I wait for things to deploy. ++win.

I couldn't find any mention of this in the 2.1 lxd provider docs, but I
think it'd be worth calling out, as it's a big speed up in local
development.

My thanks to the folks who did this work. Very much appreciated.

[1] well, you could do this with juju 1.x, but it was messier.
[2] my current nightly cron:
https://gist.github.com/bloodearnest/3474741411c4fdd6c2bb64d08dc75040
[3] https://awilkins.id.au/post/juju-2.1-lxd-centos/
[4]
https://github.com/juju/juju/blob/staging/tools/lxdclient/client_image.go#L117

-- 
Simon
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju GUI handling of empty config breaks promulgated charms

2017-02-23 Thread Pete Vander Giessen
On Thu, Feb 23, 2017 at 2:31 AM John Meinel  wrote:

> AFAICT we fixed this in Juju 1.15, but there are 2 places you can pass
> config. The old field keeps the compatibility behaviour where "" means nil;
> the new field handles nil vs "" correctly. The Juju CLI should work
> correctly; it sounds like the GUI missed being updated to use the new field.
>
>
I think that it's a bit more complicated than that. The GUI (and tools like
libjuju that use the websocket api) deploy based on the "plan" that they
get back from the API. So my guess would be that the plan is using the old
values. Or that there's some additional parsing that needs to happen to the
plan, which isn't well documented. ... or that it's very easy to call the
wrong interface in the websocket api. :-)

Regardless, doing what looks like the "right" thing against the websocket
api, given the plan that you get back from the api, with a bundle that has
blank config values seems to lead to crashes. It would be nice if that got
fixed on the Go side of things, rather than getting fixed via special logic
in every client that talks to the websocket api ...
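
As a hypothetical illustration (this helper and the exact field semantics
are guesses on my part, not libjuju's actual API), a client could strip
empty-string values from the plan's config before deploying:

def sanitize_config(options):
    # "" is read as nil/unset by the old API field, so drop empty-string
    # values rather than sending them through verbatim.
    return {k: v for k, v in options.items() if v != ""}

print(sanitize_config({"name": "db", "flags": ""}))  # -> {'name': 'db'}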

~ PeteVG
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread James Beedy
Awesome. Thanks for this!

> On Feb 23, 2017, at 4:15 AM, Stuart Bishop  
> wrote:
> 
>> On 23 February 2017 at 15:46, Stuart Bishop  
>> wrote:
>> On 23 February 2017 at 01:46, James Beedy  wrote:
>> 
>>>> I think I can fix this, but I'll need to make a corresponding
>>>> adjustment in the PostgreSQL charm and the fix will only take effect
>>>> with updated services.
>>>>
>>> +1 for a fix. Thanks.
>> 
>>>> It's related to the above issue. Your charm connects and gets the
>>>> db.master.available state set. But you want to specify the database
>>>> name, so a handler runs calling set_database(). At this point the
>>>> .master and .standbys properties start correctly returning None, but
>>>> set_database() neglected to remove the *.available states so handlers
>>>> got kicked in that shouldn't have.
>>>>
>>> Ok, so a fix coming for this too in that case? This one is borking on my
>>> devs who are deploying my bundles, in turn causing me grief, but also
>>> borking on me too, making me question my own sanity :(
>> 
>> Yes. I'll push a fix out shortly.
> 
> I've pushed a fix for your second issue (the 'available' states not
> being removed when you change the requested database name).
> 
> I won't be able to fix the first issue today. For now, I think you can
> work around it using an extra state.
> 
> @when('db.connected')
> @when_not('dbname.requested')
> def request_database_name(psql):
>     psql.set_database('foobar')
>     reactive.set_state('dbname.requested')
> 
> @when_all('db.master.available', 'dbname.requested')
> def do_stuff_needing_master_db(psql):
>     assert psql.master is not None
>     assert psql.master.dbname == 'foobar'
> 
> 
> -- 
> Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 15:46, Stuart Bishop  wrote:
> On 23 February 2017 at 01:46, James Beedy  wrote:
>
>>> I think I can fix this, but I'll need to make a corresponding
>>> adjustment in the PostgreSQL charm and the fix will only take effect
>>> with updated services.
>>>
>> +1 for a fix. Thanks.
>
>>> It's related to the above issue. Your charm connects and gets the
>>> db.master.available state set. But you want to specify the database
>>> name, so a handler runs calling set_database(). At this point the
>>> .master and .standbys properties start correctly returning None, but
>>> set_database() neglected to remove the *.available states so handlers
>>> got kicked in that shouldn't have.
>>>
>> Ok, so a fix coming for this too in that case? This one is borking on my
>> devs who are deploying my bundles, in turn causing me grief, but also
>> borking on me too, making me question my own sanity :(
>
> Yes. I'll push a fix out shortly.

I've pushed a fix for your second issue (the 'available' states not
being removed when you change the requested database name).

I won't be able to fix the first issue today. For now, I think you can
work around it using an extra state.

from charms import reactive
from charms.reactive import when, when_all, when_not

@when('db.connected')
@when_not('dbname.requested')
def request_database_name(psql):
    psql.set_database('foobar')
    reactive.set_state('dbname.requested')

@when_all('db.master.available', 'dbname.requested')
def do_stuff_needing_master_db(psql):
    assert psql.master is not None
    assert psql.master.dbname == 'foobar'


-- 
Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 14:22, Mark Shuttleworth  wrote:
> On 22/02/17 19:46, James Beedy wrote:
>
>> A client can 'accept the defaults' by not setting any properties on
>> the db relation when it joins (dating back to the original protocol
>> with pyjuju). When the PostgreSQL charm runs its relation-joined and
>> relation-changed hooks, it has no way of telling if the client just
>> wants to 'accept the defaults', or if the client has not yet run its
>> relation-joined or relation-changed hooks yet. So if it sees an empty
>> relation, it assumes 'accept the defaults' and provides a database
>> named after the client service.
>
>
> IIRC we agreed that the full state of a unit would be exposed to it from the
> beginning, if we know that.
>
> We have had ample time to introduce changes in behaviour since pyjuju, so I
> suspect this is just something that slipped through the cracks, not
> something we especially want to preserve. Could you file a bug with the
> proposed change in behaviour that would enable charmers to be more
> definitive in their approach?

I've filed https://bugs.launchpad.net/juju/+bug/1667268.

For exposing full state of a unit,
https://bugs.launchpad.net/juju-core/+bug/1417874 may also be relevant
as clusters don't have enough information or opportunity to
decommission themselves cleanly.
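
To make the 'accept the defaults' ambiguity quoted above concrete, here's
an illustrative sketch (not the PostgreSQL charm's actual code; names are
placeholders):

def pick_database_name(relation_settings, client_service_name):
    # An empty settings dict could mean "accept the defaults" OR "the
    # client's hooks just haven't run yet" -- the charm can't tell the
    # difference, so it assumes the former and names the database after
    # the client service.
    return relation_settings.get('database') or client_service_name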


-- 
Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: PostgreSQL Use Case/Issues

2017-02-23 Thread Stuart Bishop
On 23 February 2017 at 01:46, James Beedy  wrote:

>> I think I can fix this, but I'll need to make a corresponding
>> adjustment in the PostgreSQL charm and the fix will only take effect
>> with updated services.
>>
> +1 for a fix. Thanks.

>> It's related to the above issue. Your charm connects and gets the
>> db.master.available state set. But you want to specify the database
>> name, so a handler runs calling set_database(). At this point the
>> .master and .standbys properties start correctly returning None, but
>> set_database() neglected to remove the *.available states so handlers
>> got kicked in that shouldn't have.
>>
> Ok, so a fix coming for this too in that case? This one is borking on my
> devs who are deploying my bundles, in turn causing me grief, but also
> borking on me too, making me question my own sanity :(

Yes. I'll push a fix out shortly.


>> need more control, you can use the set_roles() method on the interface
>> to have PostgreSQL grant some roles to your user, and then grant
>> permissions explicitly to those roles. But this doesn't really help
>> much from a security POV, so I've been toying with the idea of just
>> having clients all connect as the same user for the common case where
>> people don't want granular permissions (even if it does make security
>> minded people wince).
>
> Will the "common use case" be the only use case?

I think I need to support both approaches. I don't want applications
to outgrow Juju once they become complex enough to warrant per-table
permissions. How this happens, I don't know yet; I haven't yet come up
with a design I like enough to pursue further. I don't think you'll
see any changes here until LTS+1, because it will likely need
backwards-incompatible changes.
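
For completeness, a rough sketch of the set_roles() approach mentioned
above (the handler wiring, and whether set_roles() takes a list, are my
assumptions rather than documented API):

from charms.reactive import when

@when('db.master.available')
def request_roles(psql):
    # Ask the PostgreSQL charm to grant these roles to our user;
    # 'reporting' is a placeholder role name.
    psql.set_roles(['reporting'])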

-- 
Stuart Bishop 

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju