Re: charm for jboss,hazelcast,infinispan

2014-06-17 Thread Matt Bruzek
Hello Sanat,

Linux Containers (LXC) provision very fast and work very well. Using Juju
on your laptop is an excellent way to rapidly prototype the kind of cloud
image development you need to do. There are many resources on the Internet
that describe how to create charms.

https://juju.ubuntu.com/docs/authors-charm-writing.html
http://www.youtube.com/watch?v=ByHDXFcz9Nk

This one is particularly good at showing how to optimize your workflow
using Juju charms and a little trick:

http://www.youtube.com/watch?v=ND-vmtCOEjY
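
To give a rough feel for the shape of a charm (purely an illustrative
sketch, not an existing charm): a charm is essentially a metadata.yaml plus
a set of executable hooks, and a hook can be written in whatever language
you like. A hypothetical install hook for Hazelcast in Python could look
something like this (the package and port below are assumptions, not taken
from any published charm):

    #!/usr/bin/env python
    # hooks/install -- illustrative sketch of a minimal install hook.
    import subprocess

    def install():
        # Install a JRE for the data grid to run on (package is illustrative).
        subprocess.check_call(
            ['apt-get', 'install', '-y', 'openjdk-7-jre-headless'])
        # ... fetch/unpack hazelcast, write its config, start the service ...
        # Tell juju which port the service listens on (5701 assumed here).
        subprocess.check_call(['open-port', '5701'])

    if __name__ == '__main__':
        install()

The hooks for jboss or infinispan would follow the same pattern, with the
relation hooks exchanging connection details via relation-set/relation-get.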

If you have any questions/comments/concerns about creating charms you can
contact us in #juju on irc.freenode.net, or ask a question tagged with
juju on
http://askubuntu.com.

Cheers,

   - Matt Bruzek matthew.bru...@canonical.com


On Tue, Jun 17, 2014 at 2:34 AM, Sanat Meti sanat.m...@bizruntime.com
wrote:

 No, actually the client requirement is to have JBoss, Hazelcast, and
 Infinispan as Juju charms. So how do we create a custom charm for this app
 server (JBoss) and data grid (Hazelcast, Infinispan)? A little help on
 creating this charm would be helpful.


 On Tue, Jun 17, 2014 at 4:54 AM, Matt Bruzek matthew.bru...@canonical.com
  wrote:

 Hello Sanat,

 Most of the charms are Free and Open Source Software.

 We have Wildfly, which is an open source version based on JBoss:
 http://manage.jujucharms.com/charms/precise/wildfly

 We also have memcached:
 http://manage.jujucharms.com/charms/precise/memcached

 Those charms already exist in the Juju Charm store and can relate to
 other Juju Charms in the ecosystem.

 The others would be great submissions if you were willing to charm them
 up!

- Matt Bruzek matthew.bru...@canonical.com


 On Fri, Jun 13, 2014 at 8:04 AM, Sanat Meti sanat.m...@bizruntime.com
 wrote:

 Hi,

 Is there any pre-configured charm for JBoss Fuse, Hazelcast mem-cache, or
 Infinispan, or do I have to write a custom charm for that?



Re: state/api.State authTag vs tag

2014-06-17 Thread roger peppe
On 16 June 2014 09:25, William Reade william.re...@canonical.com wrote:
 On Sun, Jun 15, 2014 at 2:58 PM, John Meinel j...@arbash-meinel.com wrote:

 I feel like we should consolidate these fields. And if we need authTag
 to match Login then we should be setting tag there instead. (That will be
 better for any case where we Login late, given that we probably still would
 want to be able to use anything like DebugLog URLs.)

 They currently match but are not guaranteed to; in particular, when we
 consolidate the agents such that a machine agent is running the uniters
 deployed on that machine, they will definitely not match.

I don't understand this. Surely if a machine agent is running the uniters
deployed on that machine, it will still log in as itself (the machine agent)
and leave it up to the API server to allow that agent the authority
to speak for those unit agents?

I agree that the authTag and tag fields are mutually redundant.
I think we should just delete tag, and make both Open and Login save
authTag and the password (authTag is a somewhat more self-explanatory
name than tag IMHO).

But I may well be missing some aspect of your future plans for the API
here that would make this unreasonable.

John writes:

  (That will be better for any case where we Login late, given that
we probably still
 would want to be able to use anything like DebugLog URLs.)

This is an interesting possibility - I hadn't considered that we might not
want to log in immediately. I guess that if we just want to access
non-websocket-based aspects of the API, logging in is
unnecessary and we can save a round trip by avoiding Login.
But if so, we'd probably want to avoid connecting to the websocket
entirely. One could arrange things so that the first use of any websocket
RPC call makes the actual connection and logs in. But that's
a significant change, and how Open vs Login is managed for that
case could be dealt with if and when that happens.

  cheers,
rog.



Re: Relation addresses

2014-06-17 Thread Andrew Wilkins
On Tue, Jun 17, 2014 at 3:42 PM, William Reade william.re...@canonical.com
wrote:

 On Tue, Jun 17, 2014 at 6:39 AM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 Hi all,

 I've started looking into fixing
 https://bugs.launchpad.net/juju-core/+bug/1215579. The gist is, we
 currently set private-address in relation settings when a unit joins, but
 never update it.

 I've had some preliminary discussions with John, William and Dimiter, and
 came up with the following proposal:
 https://docs.google.com/a/canonical.com/document/d/1jCNvS7sSMZqtSnup9rDo3b2Wwgs57NimqMorXr9Ir-o/edit

 If you're a charm author, particularly if you work on proxy charms,
 please take a look at this and let me know of any concerns or suggestions.
 I have opened up comments on the doc.

 In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.
  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 I think that what the proposal says is subtly different: that is, that we
 *will* update the private-address setting for that foo relation at the
 start of the foo-relation-address-changed hook, but that it won't be
 propagated further until that hook is committed.

 The upshot is that if you're *not* a proxy charm, you can *probably*
 ignore the relation-address-changed hook [0]; but if you *are* a proxy
 charm, you must (almost) certainly overwrite the new private-address
 setting with that of the endpoint you're proxying.


Thanks, you're quite right, and the wording in the doc is what is expected
of charm authors.


 IMO this behaviour is more consistent and predictable than behaving
 differently depending on whether or not a particular hook is implemented.

 Cheers
 William

 [0] some interfaces have their own language -- say host -- and they'll
 have to be responsible for updating themselves.


 Thanks,
 Andrew



Re: state/api.State authTag vs tag

2014-06-17 Thread William Reade
On Tue, Jun 17, 2014 at 10:04 AM, roger peppe rogpe...@gmail.com wrote:

 On 16 June 2014 09:25, William Reade william.re...@canonical.com wrote:
  On Sun, Jun 15, 2014 at 2:58 PM, John Meinel j...@arbash-meinel.com
 wrote:
 
  I feel like we should consolidate these fields. And if we need authTag
  to match Login then we should be setting tag there instead. (That
 will be
  better for any case where we Login late, given that we probably still
 would
  want to be able to use anything like DebugLog URLs.)
 
  They currently match but are not guaranteed to; in particular, when we
  consolidate the agents such that a machine agent is running the uniters
  deployed on that machine, they will definitely not match.

 I don't understand this. Surely if a machine agent is running the uniters
 deployed on that machine, it will still log in as itself (the machine
 agent)
 and leave it up to the API server to allow that agent the authority
 to speak for those unit agents?


Yeah, but the (only, I think) usage of the authTag field is to specialise
relation calls by interested unit. We'll certainly be authed as the machine
agent, but the current form of api/uniter.State requires the unit tag, so
we will need some fix; whether that fix is a change to the relation
methods, or something else, is not especially interesting to me right now.

I agree that the authTag and tag fields are mutually redundant.
 I think we should just delete tag, and make both Open and Login save
 authTag and the password (authTag is a somewhat more self-explanatory
 name than tag IMHO).


So long as we're agreed that we only need one field, I think the choice of
name can be left to the implementer and tweaked in review if necessary.

Cheers
William


Is it ok to Close PRs to indicate WiP?

2014-06-17 Thread John Meinel
We are now trying to have everyone regularly rotate into an on-call
reviewer day, and one of the goals of OCR is that you should try to touch
all open reviews. However, I'm finding a bunch of things that have already
been reviewed quite thoroughly and look much more like we are just waiting
for the person to do what was requested and then ask for review again.

In Launchpad, we used Work in Progress to indicate this. I don't see any
equivalent on Github (you just have whether the current PR is open or
closed). I'm a little concerned that just Closing a request is going to
make it easy for the person who submitted it to forget about it. However, I
also don't think we want all reviewers to have to poll through a large
backlog every day.

I suppose a meta question exists: why do we have such a huge pile of things
that have been reviewed but not actually responded to by the original
person?

Also, I do think we want to follow our old Rietveld behavior, where for
each comment a reviewer made, the submitter can respond (even if just with
Done). I realize this generates a lot of email noise, but it means that
any reviewer can come along and see what has been addressed and what
hasn't. Or at least follow along with the conversation.

Thoughts? Is Closed too big a hammer? Is there something else in our
process that we need to focus on?

John
=:-


Re: Is it ok to Close PRs to indicate WiP?

2014-06-17 Thread roger peppe
On 17 June 2014 10:02, John Meinel j...@arbash-meinel.com wrote:
 Also, I do think we want to follow our old Rietveld behavior, where for each
 comment a reviewer made, the submitter can respond (even if just with
 Done). I realize this generates a lot of email noise, but it means that
 any reviewer can come along and see what has been addressed and what hasn't.
 Or at least follow along with the conversation.

I agree entirely. This is even more important since github doesn't
make it possible to see what changes have been made in response
to a given comment.

 Thoughts? Is Closed too big a hammer? Is there something else in our
 process that we need to focus on?

I think that only the person that created the pull request should close it,
unless it has been merged.

Unfortunately I can't think of a decent way of finding PRs that still
need review.
Perhaps someone could hack up a quick tool that pulls comments from
outstanding PRs and prints any PRs that don't have a review comment.

http://godoc.org/github.com/google/go-github/github#PullRequestsService.ListComments
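
For example, something as small as this would probably do as a starting
point (a rough, untested sketch in Python against the public GitHub REST
API rather than go-github; no auth or pagination, so rate limits apply):

    #!/usr/bin/env python
    # Print open juju/juju pull requests that have no review comments yet.
    import requests

    API = 'https://api.github.com/repos/juju/juju'

    def unreviewed_pulls():
        pulls = requests.get(API + '/pulls', params={'state': 'open'}).json()
        for pr in pulls:
            url = '%s/pulls/%d/comments' % (API, pr['number'])
            if not requests.get(url).json():
                yield pr['number'], pr['title']

    if __name__ == '__main__':
        for number, title in unreviewed_pulls():
            print('#%d %s' % (number, title))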



Re: Is it ok to Close PRs to indicate WiP?

2014-06-17 Thread John Meinel
I think you accidentally replied to just me, so I'm including juju-dev in
my reply.


On Tue, Jun 17, 2014 at 3:53 PM, Wayne Witzel wayne.wit...@canonical.com
wrote:




 On Tue, Jun 17, 2014 at 5:02 AM, John Meinel j...@arbash-meinel.com
 wrote:

 Since we are now trying to have everyone regularly rotate into an on-call
 reviewer day, and one of the goals of OCR is that you should try to touch
 all open reviews. However, I'm finding a bunch of things that have already
 been reviewed quite thoroughly and look much more like we are just waiting
 for the person to do what was requested and then ask for review again.

 In Launchpad, we used Work in Progress to indicate this. I don't see any
 equivalent on Github (you just have whether the current PR is open or
 closed). I'm a little concerned that just Closing a request is going to
 make it easy for the person who submitted it to forget about it. However, I
 also don't think we want all reviewers to have to poll through a large
 backlog every day.


 I've already seen some people changing the title of the pull request to
 WIP: title and then back after they are ready for review again; could we
 make that a convention?



I think that is going to work better. I didn't realize that I only had
Close rights because all the team leads are still superusers on the
juju/juju project. So if general reviewers and submitters can set it to
WIP, then we need another process. Editing the title seems ok here.




 I suppose a meta question exists, why do we have such a huge pile of
 things that have been reviewed but not actually responded to by the
 original person?

 Also, I do think we want to follow our old Rietveld behavior, where for
 each comment a reviewer made, the submitter can respond (even if just with
 Done). I realize this generates a lot of email noise, but it means that
 any reviewer can come along and see what has been addressed and what
 hasn't. Or at least follow along with the conversation.



 +1


 Thoughts? Is Closed too big a hammer? Is there something else in our
 process that we need to focus on?

 Where are we at with moving to another review system altogether? I think
 expediting that process should be a focus.


I asked Ian, and he said Martin is currently looking into it. That still
probably means a week or so before it would actively be useful.

John
=:-


Re: Relation addresses

2014-06-17 Thread Kapil Thangavelu
On Tue, Jun 17, 2014 at 12:39 AM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 Hi all,

 I've started looking into fixing
 https://bugs.launchpad.net/juju-core/+bug/1215579. The gist is, we
 currently set private-address in relation settings when a unit joins, but
 never update it.

 I've had some preliminary discussions with John, William and Dimiter, and
 came up with the following proposal:
 https://docs.google.com/a/canonical.com/document/d/1jCNvS7sSMZqtSnup9rDo3b2Wwgs57NimqMorXr9Ir-o/edit

 If you're a charm author, particularly if you work on proxy charms,
 please take a look at this and let me know of any concerns or suggestions.
 I have opened up comments on the doc.

 In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


This seems less than ideal; we already have standard ways of getting this
data and being notified of its change. Introducing non-orthogonal ways of
doing the same lacks value afaics, or at least any rationale in the document.

The two perspectives of addresses for self vs related also seem to be a bit
muddled. A relation hook is called in notification of a remote unit change,
but now we're introducing one that behaves in the opposite manner of every
other, and we're calling it redundantly for every relation instead of once
for the unit?


  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


Perhaps there is a misunderstanding of proxies, but things that set their
own address have taken responsibility for it, i.e. juju only updates the
private address if it provided it; otherwise it's the charm's responsibility.

fwiw, I think this could use some additional discussion.


Re: Relation addresses

2014-06-17 Thread John Meinel
...


 In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


 This seems less than ideal, we already have standards ways of getting this
 data and being notified of its change. introducing non-orthogonal ways of
 doing the same lacks value afaics or at least any rationale in the document.


So maybe the spec isn't very clear, but the idea is that the new hook is
called on the unit when *its* private address might have changed, to give
it a chance to respond. After which, relation-changed is called on all
the associated units to let them know that the address they need to connect
to has changed.

It would be possible to just roll relation-address-changed into
config-changed.

The reason it is called for each associated unit is because the network
model means we can actually have different addresses (be connected on a
different network) for different things related to me.

e.g. I have a postgres charm related to application on network A, but
related to my-statistics-aggregator on network B. The address it needs to
give to application should be different than the address given to
my-statistics-aggregator. And, I believe, the config in pg_hba.conf would
actually be different.
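
To make that concrete, under the proposal the postgres charm could end up
with a per-relation hook roughly like this (purely a sketch:
relation-address-changed and address-get are the names proposed in the doc
and don't exist in any released juju, and the pg_hba.conf handling is only
gestured at):

    #!/usr/bin/env python
    # hooks/db-relation-address-changed -- sketch of the *proposed* hook.
    import subprocess

    def address_changed():
        # Proposed tool: the address to advertise on *this* relation,
        # which may differ per network (network A vs network B above).
        addr = subprocess.check_output(['address-get']).decode().strip()
        # A regular charm republishes it on the relation; a proxy charm
        # would instead keep (or re-set) the address of the endpoint it
        # is proxying.
        subprocess.check_call(['relation-set', 'private-address=%s' % addr])
        # ... then rewrite pg_hba.conf / listen_addresses and reload ...

    if __name__ == '__main__':
        address_changed()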



 the two perspectives of addresses for self vs related also seem to be a
 bit muddled. a relation hook is called in notification of a remote unit
 change, but now we're introducing one that behaves in the opposite manner
 of every other, and we're calling it redundantly for every relation instead
 of once for the unit?


  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 perhaps there is a  misunderstanding of proxies, but things that set their
 own address have taken responsibility for it. ie juju only updates private
 address if it provided it, else its the charms responsibility.

 fwiw, i think this could use some additional discussion.


So one of the reasons is that it takes some double handling of values to
know whether the existing value is the one we last set. And there is the
possibility that it has changed 2 times, and it matches a value we set, but
that was the address before this one and we just haven't gotten to update
it.
There was a proposal that we could effectively have 2 fields: "this is the
private address you are sharing" (which might be empty) and "this is the
private address we set" (which is where we put our data). And we return the
second value if the first is still nil. Or we set it twice, and we only set
the first one if it matches what was in the second one, etc.
All these things are possible, but in the discussions we had it seemed
simpler not to have to track extra data for marginal benefit. Charms that
are proxy charms know that they are, and they found the right address to
give in the past, and they simply do the same thing again when told that we
want to change their address.

John
=:-


Re: Relation addresses

2014-06-17 Thread Kapil Thangavelu
On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com wrote:

 ...


  In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


 This seems less than ideal, we already have standard ways of getting
 this data and being notified of its change. introducing non-orthogonal ways
 of doing the same lacks value afaics or at least any rationale in the
 document.


 So maybe the spec isn't very clear, but the idea is that the new hook is
 called on the unit when *its* private address might have changed, to give
 it a chance to respond. After which, relation-changed is called on all
 the associated units to let them know that the address they need to connect
 to has changed.

 It would be possible to just roll relation-address-changed into config
 changed.


Or another unit-level change hook (unit-address-changed). Again, the
concerns are that we're changing the semantics of relation hooks to
something fundamentally different for this one case (every other relation
hook is called for a remote unit) and that we're doing potentially
redundant event expansion and hook queuing as opposed to
coalescing/executing the address set change directly at the unit scope
level.


 The reason it is called for each associated unit is because the network
 model means we can actually have different addresses (be connected on a
 different network) for different things related to me.

 e.g. I have a postgres charm related to application on network A, but
 related to my-statistics-aggregator on network B. The address it needs to
 give to application should be different than the address given to
 my-statistics-aggregator. And, I believe, the config in pg_hba.conf would
 actually be different.


Thanks, that scenario would be useful to have in the spec doc. As long as
we're talking about unimplemented features guiding current bug fixes:
realistically there's quite a lot of software that only knows how to listen
on one address, so for network-scoped relations to be more than advisory,
juju would also need to perform some form of nftables/iptables mgmt. It
feels a bit slippery that we'd be exposing the user to new concepts and
features that are half-finished and not backwards-compatible for proxy
charms as part of an imo critical bug fix.




 the two perspectives of addresses for self vs related also seem to be a
 bit muddled. a relation hook is called in notification of a remote unit
 change, but now we're introducing one that behaves in the opposite manner
 of every other, and we're calling it redundantly for every relation instead
 of once for the unit?


  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 perhaps there is a  misunderstanding of proxies, but things that set
 their own address have taken responsibility for it. ie juju only updates
 private address if it provided it, else its the charms responsibility.

 fwiw, i think this could use some additional discussion.


 So one of the reasons is that it takes some double handling of values to
 know if the existing value was the one that was what we last set it. And
 there is the possibility that it has changed 2 times, and it was the value
 we set it to, but that was the address before this one and we just haven't
 gotten to update it.
 There was a proposal that we could effectively have 2 fields this is the
 private address you are sharing, which might be empty and this is the
 private address we set which is where we put our data. And we return the
 second value if the first is still nil. Or we set it twice, and we only set
 the first one if it matches what was in the second one, etc.
 All these things are possible, but in the discussions we had it seemed
 simpler to not have to track extra data for marginal benefit. Things which
 are proxy charms know that they are, and they found the right address to
 give in the past, and they simply do the same thing again when told that we
 want to change their address.


There's lots of other implementation complexity in juju that we don't leak;
we just try to present a simple interface to it. We'd be breaking existing
proxy charms if we update values out from under them. The simple basis of
update being "you touched it, you own it; if you didn't, it updates" is
simple, explicit, and backwards compatible imo.

There's also the question of why the other new hook (relation-created) is
needed or how it relates to this functionality, or why the existing
unit-get private-address needs to be supplemented by address-get.

Cheers,

Kapil


Fwd: [Canonical-juju-qa] Cursed (final): #1484 gitbranch:master:github.com/juju/juju 348c104d (functional-backup-restore, functional-ha-recovery, hp-upgrade-precise-amd64)

2014-06-17 Thread Curtis Hovey-Canonical
I tried to make CI pass by extending timeouts, retrying up to 10
times, and manually cleaning up envs after tests, but these 4 tests
failed.

The HP test fails because HP is lying about the available resources.
We know juju 1.18.4 works with HP, so the upgrade failure is HP's fault
[1]

The functional tests failed in new ways compared to the past:

http://juju-ci.vapour.ws:8080/job/functional-ha-recovery/379/console
passed yesterday with commit ba83d11 (Merge pull request #79 from
perrito666/translate_backup_to_go); restore broke sometime after this
rev. I cannot be more precise at this time. CI was hung on a
joyent-upgrade test for 21 hours :(

http://juju-ci.vapour.ws:8080/job/functional-ha-recovery/
http://juju-ci.vapour.ws:8080/job/functional-ha-backup-restore-devel/
both fail. This test is labeled devel, but that was because I renamed
it when I thought the test had problems. The test has been passing for
weeks with 1.19.x. The only change we have made to this test in 3 weeks
is to extend the timeouts. The errors are about failures to init the
replica set. The error happens with precise and trusty.
http://juju-ci.vapour.ws:8080/job/functional-ha-backup-restore-devel/148/console
http://juju-ci.vapour.ws:8080/job/functional-ha-recovery/379/console




-- Forwarded message --
From: CI & CD Jenkins aaron.bentley+c...@canonical.com
Date: Tue, Jun 17, 2014 at 7:45 PM
Subject: [Canonical-juju-qa] Cursed (final): #1484
gitbranch:master:github.com/juju/juju 348c104d
(functional-backup-restore, functional-ha-recovery,
hp-upgrade-precise-amd64)

Build: #1484 Revision: gitbranch:master:github.com/juju/juju 348c104d
Version: 1.19.4

Failed tests
functional-backup-restore build #954
http://juju-ci.vapour.ws:8080/job/functional-backup-restore/954/console
functional-ha-recovery build #379
http://juju-ci.vapour.ws:8080/job/functional-ha-recovery/379/console
hp-upgrade-precise-amd64 build #1349
http://juju-ci.vapour.ws:8080/job/hp-upgrade-precise-amd64/1349/console


-- 
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui



Re: Relation addresses

2014-06-17 Thread Andrew Wilkins
On Tue, Jun 17, 2014 at 11:35 PM, Kapil Thangavelu 
kapil.thangav...@canonical.com wrote:




 On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com
 wrote:

 ...


  In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


 This seems less than ideal, we already have standard ways of getting
 this data and being notified of its change. introducing non-orthogonal ways
 of doing the same lacks value afaics or at least any rationale in the
 document.


 So maybe the spec isn't very clear, but the idea is that the new hook is
 called on the unit when *its* private address might have changed, to give
 it a chance to respond. After which, relation-changed is called on all
 the associated units to let them know that the address they need to connect
 to has changed.

 It would be possible to just roll relation-address-changed into config
 changed.


 or another unit level change hook (unit-address-changed), again the
 concerns are that we're changing the semantics of relation hooks to
 something fundamentally different for this one case (every other relation
 hook is called for a remote unit) and that we're doing potentially
 redundant event expansion and hook queuing as opposed to
 coalescing/executing the address set change directly at the unit scope
 level.


 The reason it is called for each associated unit is because the network
 model means we can actually have different addresses (be connected on a
 different network) for different things related to me.

 e.g. I have a postgres charm related to application on network A, but
 related to my-statistics-aggregator on network B. The address it needs to
 give to application should be different than the address given to
 my-statistics-aggregator. And, I believe, the config in pg_hba.conf would
 actually be different.


 thanks, that scenario would be useful to have in the spec doc. As long as
 we're talking about unimplemented features guiding current bug fixes,
 realistically there's quite a lot of software that only knows how to listen
 on one address, so for network scoped relations to be more than advisory
 would also need juju to perform some form of nftables/iptables mgmt. Its
 feels a bit slippery that we'd be exposing the user to new concepts and
 features that are half-finished and not backwards-compatible for proxy
 charms as part of a imo critical bug fix.



 the two perspectives of addresses for self vs related also seem to be a
 bit muddled. a relation hook is called in notification of a remote unit
 change, but now we're introducing one that behaves in the opposite manner
 of every other, and we're calling it redundantly for every relation instead
 of once for the unit?


  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 perhaps there is a  misunderstanding of proxies, but things that set
 their own address have taken responsibility for it. ie juju only updates
 private address if it provided it, else its the charms responsibility.

 fwiw, i think this could use some additional discussion.


 So one of the reasons is that it takes some double handling of values to
 know if the existing value was the one that was what we last set it. And
 there is the possibility that it has changed 2 times, and it was the value
 we set it to, but that was the address before this one and we just haven't
 gotten to update it.
 There was a proposal that we could effectively have 2 fields this is the
 private address you are sharing, which might be empty and this is the
 private address we set which is where we put our data. And we return the
 second value if the first is still nil. Or we set it twice, and we only set
 the first one if it matches what was in the second one, etc.
 All these things are possible, but in the discussions we had it seemed
 simpler to not have to track extra data for marginal benefit. Things which
 are proxy charms know that they are, and they found the right address to
 give in the past, and they simply do the same thing again when told that we
 want to change their address.


 there's lots of other implementation complexity in juju that we don't
 leak, we just try to present a simple interface to it. we'd be breaking
 existing proxy charms if we update the values out from the changed values.
 The simple basis of update being you touched you own it and if you didn't
 it updates, is simple, explicit, and backwards compatible imo.

 There's also the question of why the other new hook (relation-created) is
 needed or how it relates to this functionality, or why the existing
 unit-get private-address needs to be supplemented by address-get.


relation-created is not strictly required for this 

Re: [Canonical-juju-qa] Cursed (final): #1484 gitbranch:master:github.com/juju/juju 348c104d (functional-backup-restore, functional-ha-recovery, hp-upgrade-precise-amd64)

2014-06-17 Thread Curtis Hovey-Canonical
On Tue, Jun 17, 2014 at 9:49 PM, Curtis Hovey-Canonical
cur...@canonical.com wrote:
 I tried to make CI pass by extending timeouts, retrying up to 10
 times, and manually cleaning up envs after tests. but these 4 tests
 failed.

 The HP test fails because HP is lying about the available resources.
 We know juju.1.18.4 works with HP so the upgrade failure is HP's fault
 [1]

I have a pass. I placed more restrictions on what can use HP cloud to
preserve the limited resources we have.  Juju deploy and upgrade tests
passed first time for
gitbranch:master:github.com/juju/juju ec7e4843


-- 
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui



Is upgrade-juju provider/cloud dependent?

2014-06-17 Thread Curtis Hovey-Canonical
CI tests deploy and upgrade in every CPC because I *think* these two
scenarios test the provider and the streams that were placed in the
clouds. The upgrade test verifies stable juju understands the new
streams, and can upgrade to the next juju.

But does juju-upgrade have provider nuances? I don't recall seeing
upgrade fail in one provider. It fails in all, or it fails for the
same reason deploy failed. We have several tests with very slow
upgrades.

Maybe we only need one upgrade test, and CI can choose the most stable
cloud to test that on.

-- 
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui



Re: Is upgrade-juju provider/cloud dependent?

2014-06-17 Thread Andrew Wilkins
On Wed, Jun 18, 2014 at 12:27 PM, Curtis Hovey-Canonical 
cur...@canonical.com wrote:

 CI tests deploy and upgrade in every CPC because I *think* these two
 scenario test the provider and the streams that were placed in the
 clouds. The upgrade test verifies stable juju understands the new
 streams, and can upgrade to the next juju.

 But does juju-upgrade have provider nuances? I don't recall seeing
 upgrade fail in one provider. It fails in all, or it fails for the
 same reason deploy failed. We have several tests with very slow
 upgrades.


There is currently a step that only does something for the local provider.


 Maybe we only need one upgrade test, and CI can choose the most stable
 cloud to test that on.


I would say test on local and one CPC, maybe manual.



 --
 Curtis Hovey
 Canonical Cloud Development and Operations
 http://launchpad.net/~sinzui



Re: Is upgrade-juju provider/cloud dependent?

2014-06-17 Thread John Meinel
The steps to upgrade are not changed between providers (though as mentioned,
local and manual might be a bit different). However, what *is* different is
the search paths for tools.
If you remember some of our earlier transitions, we had to have 2
copies of the tools so that (e.g.) the 1.16 tools were where 1.16 wanted
them to be, but they *also* had to be placed where 1.14 could find them so
that it could actually perform the upgrade.
And the location for where to find tools is unique to each cloud, isn't it?

Now, we haven't moved tools in a while, and I don't think we're planning to
move them again. So it is possible that we could drop the redundant tests
at this point. But I think they *did* have usefulness in the past. (In the
case that one of the mirrors wasn't updated in both locations.)

Upgrade *shouldn't* be very slow, so it sounds like something that could
use some investigation to understand why it is being a problem. (Maybe
downloading the new tools is slow, maybe it isn't using a cloud-local
mirror and has to download all the data from streams.canonical.com,
maybe... ?)

John
=:-




On Wed, Jun 18, 2014 at 8:44 AM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 On Wed, Jun 18, 2014 at 12:27 PM, Curtis Hovey-Canonical 
 cur...@canonical.com wrote:

 CI tests deploy and upgrade in every CPC because I *think* these two
 scenario test the provider and the streams that were placed in the
 clouds. The upgrade test verifies stable juju understands the new
 streams, and can upgrade to the next juju.

 But does juju-upgrade have provider nuances? I don't recall seeing
 upgrade fail in one provider. It fails in all, or it fails for the
 same reason deploy failed. We have several tests with very slow
 upgrades.


 There is currently a step that only does something for the local provider.


 Maybe we only need one upgrade test, and CI can choose the most stable
 cloud to test that on.


 I would say test on local and one CPC, maybe manual.



 --
 Curtis Hovey
 Canonical Cloud Development and Operations
 http://launchpad.net/~sinzui



Re: Relation addresses

2014-06-17 Thread John Meinel
Well, given it is unit-get, shouldn't it be more relation-get
private-address?
The issue is that *that* gives you the private-address for the other side of
this relation, which is not quite what you want.
And while I think it is true that many things won't be able to handle
binding to more than one IP address (it's either everything with 0.0.0.0 or
one thing), I think we should at least make it *possible* for well-formed
services to behave the way we would like.
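
i.e. inside a relation hook today the two existing tools answer different
questions - a sketch, assuming relation-get defaults to the remote unit as
it does inside a relation hook:

    #!/usr/bin/env python
    # Sketch: what the existing hook tools return inside a relation hook.
    import subprocess

    def tool(*args):
        return subprocess.check_output(list(args)).decode().strip()

    # Our own unit's single, unit-scoped private address:
    ours = tool('unit-get', 'private-address')

    # The address the *remote* unit published on this relation, i.e. the
    # other side of the relation:
    theirs = tool('relation-get', 'private-address')

    print('ours: %s, theirs: %s' % (ours, theirs))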

John
=:-



On Wed, Jun 18, 2014 at 6:12 AM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 On Tue, Jun 17, 2014 at 11:35 PM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:




 On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com
 wrote:

 ...


  In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


 This seems less than ideal, we already have standard ways of getting
 this data and being notified of its change. introducing non-orthogonal ways
 of doing the same lacks value afaics or at least any rationale in the
 document.


 So maybe the spec isn't very clear, but the idea is that the new hook is
 called on the unit when *its* private address might have changed, to give
 it a chance to respond. After which, relation-changed is called on all
 the associated units to let them know that the address they need to connect
 to has changed.

 It would be possible to just roll relation-address-changed into config
 changed.


 or another unit level change hook (unit-address-changed), again the
 concerns are that we're changing the semantics of relation hooks to
 something fundamentally different for this one case (every other relation
 hook is called for a remote unit) and that we're doing potentially
 redundant event expansion and hook queuing as opposed to
 coalescing/executing the address set change directly at the unit scope
 level.


 The reason it is called for each associated unit is because the network
 model means we can actually have different addresses (be connected on a
 different network) for different things related to me.

 e.g. I have a postgres charm related to application on network A, but
 related to my-statistics-aggregator on network B. The address it needs to
 give to application should be different than the address given to
 my-statistics-aggregator. And, I believe, the config in pg_hba.conf would
 actually be different.


 thanks, that scenario would be useful to have in the spec doc. As long as
 we're talking about unimplemented features guiding current bug fixes,
 realistically there's quite a lot of software that only knows how to listen
 on one address, so for network scoped relations to be more than advisory
 would also need juju to perform some form of nftables/iptables mgmt. Its
 feels a bit slippery that we'd be exposing the user to new concepts and
 features that are half-finished and not backwards-compatible for proxy
 charms as part of a imo critical bug fix.



 the two perspectives of addresses for self vs related also seem to be a
 bit muddled. a relation hook is called in notification of a remote unit
 change, but now we're introducing one that behaves in the opposite manner
 of every other, and we're calling it redundantly for every relation instead
 of once for the unit?


  - The hook will be called when the relation's address has changed,
 and the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 perhaps there is a  misunderstanding of proxies, but things that set
 their own address have taken responsibility for it. ie juju only updates
 private address if it provided it, else its the charms responsibility.

 fwiw, i think this could use some additional discussion.


 So one of the reasons is that it takes some double handling of values to
 know if the existing value was the one that was what we last set it. And
 there is the possibility that it has changed 2 times, and it was the value
 we set it to, but that was the address before this one and we just haven't
 gotten to update it.
 There was a proposal that we could effectively have 2 fields this is
 the private address you are sharing, which might be empty and this is the
 private address we set which is where we put our data. And we return the
 second value if the first is still nil. Or we set it twice, and we only set
 the first one if it matches what was in the second one, etc.
 All these things are possible, but in the discussions we had it seemed
 simpler to not have to track extra data for marginal benefit. Things which
 are proxy charms know that they are, and they found the right address to
 give in the past, and they simply do the same thing again when told that we
 want to change their address.


 there's lots of other implementation complexity in juju that