Re: [charms] Barbican + Identity Standalone - AWS

2016-11-29 Thread Mark Ramm-Christensen (Canonical.com)
Very cool, thanks for sharing!

On Wed, Nov 30, 2016 at 10:36 AM, James Beedy  wrote:

> Another great day of Juju-driven successes - deploying the Barbican
> standalone stack for identity mgmt and secrets mgmt. For those that don't
> know, the Newton release of Horizon brings support for identity only! This
> means you can (as I am) use the openstack-dashboard for mgmt of just users,
> projects, and domains, without a full OpenStack! In previous OpenStack
> releases, if you hooked up Horizon and you didn't have the core OpenStack
> services registered in your service catalogue, Horizon would throw errors
> and be unusable. This is a huge win for those wanting object storage and
> identity mgmt only, too!
>
> AWS Barbican Stack -> http://paste.ubuntu.com/23556001/
>
> LXD Barbican Bundle (with script to help get started setting secrets in
> barbican) -> https://github.com/jamesbeedy/juju-barbican-lxd-bundle
>
> Also, here's a utility function from the barbican-client layer I've been
> using to make getting secrets from barbican containers easy for charms
> (WIP) ->
> https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/lib/charms/layer/barbican_client.py
>
> ~james
>
> --
> Juju-dev mailing list
> juju-...@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju
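[Editor's note: the barbican_client.py helper linked above is a work in progress, but the general shape of such a helper can be sketched in plain Python. The function name and href format below are illustrative assumptions, not the actual layer's API; Barbican v1 secret hrefs conventionally end in the secret's UUID.]

```python
from urllib.parse import urlparse

def secret_uuid_from_href(href):
    """Extract the trailing UUID segment from a Barbican secret href.

    A charm that stores secret references (rather than payloads) needs the
    UUID to ask Barbican for the payload later. This is a hypothetical
    helper; the real barbican-client layer may expose something different.
    """
    path = urlparse(href).path.rstrip("/")
    return path.rsplit("/", 1)[-1]

# Example href in the shape Barbican's v1 API returns:
ref = "http://10.0.0.10:9311/v1/secrets/3b2f0a1c-9d7e-4f57-8c21-0e5a6b7c8d9e"
print(secret_uuid_from_href(ref))  # -> 3b2f0a1c-9d7e-4f57-8c21-0e5a6b7c8d9e
```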


Re: kill-controller, unregister, destroy-controller

2016-09-02 Thread Mark Ramm-Christensen (Canonical.com)
Exactly.

On Friday, September 2, 2016, Casey Marshall <casey.marsh...@canonical.com>
wrote:

> My main use case for killing controllers is development & testing. For
> this, I have a script that force deletes all the juju-* LXC containers, and
> then unregisters all controllers with cloud: lxd. It's *much* faster than
> waiting for each juju controller to tear itself down. It's also nothing I'd
> provide casually to end users.
>
> I think it should be possible, but not trivial, to destroy controllers and
> everything in them. It's not a bad thing to have to type a long command or
> flag name to do something so destructive -- or write a script to do
> precisely what I want. Feels like my use case for destroying controllers
> isn't really as a normal Juju user -- I'm (ab)using Juju with a developer &
> QA tester's mindset.
>
> -Casey
>
> On Fri, Sep 2, 2016 at 6:15 AM, roger peppe <roger.pe...@canonical.com> wrote:
>
>> It seems to me that this kind of thing is exactly what "blocks" are
>> designed for. An explicit unblock command seems better to me than either an
>> explicit flag or an extra prompt, both of which are vulnerable to typing
>> without thinking. Particularly if "throwaway" controllers created for
>> testing purposes are not blocked by default, so you don't get used to
>> typing "unblock" all the time.
>>
>> On 1 Sep 2016 16:14, "Mark Ramm-Christensen (Canonical.com)" <
>> mark.ramm-christen...@canonical.com> wrote:
>>
>> I get the desire to remove friction everywhere we can, but unless
>> destroying controllers is a regular activity, I actually think SOME
>> friction is valuable here.
>>
>> If controllers are constantly getting into a wedged state where they must
>> be killed, that's likely a product of a fast-moving set of betas, and
>> should be addressed *directly* rather than, as they say, "applying
>> lipstick to a pig".
>>
>> On Thu, Sep 1, 2016 at 4:04 PM, Marco Ceppi <marco.ce...@canonical.com> wrote:
>>
>>> On Thu, Sep 1, 2016 at 9:59 AM Mark Ramm-Christensen (Canonical.com) <
>>> mark.ramm-christen...@canonical.com> wrote:
>>>
>>>> I believe keeping the --destroy-all-models flag is helpful in keeping
>>>> you from accidentally destroying a controller that is hosting important
>>>> models for someone without thinking.
>>>>
>>>
>>> What happens if I destroy-controller without that flag? Do I have to go
>>> into my cloud portal to kill those instances? Is there any way to recover
>>> from that to get juju reconnected? If not, it's just a slower death.
>>>
>>>
>>>> On Thu, Sep 1, 2016 at 3:40 PM, Marco Ceppi <marco.ce...@canonical.com> wrote:
>>>>
>>>>> Hey everyone,
>>>>>
>>>>> I know we've had discussions about this over the past few months, but
>>>>> it seems we have three commands that overlap pretty aggressively.
>>>>>
>>>>> Using Juju beta16 and trying to 'destroy' a controller, it looks like
>>>>> this now:
>>>>>
>>>>> ```
>>>>> root@ubuntu:~# juju help destroy-controller
>>>>> Usage: juju destroy-controller [options] 
>>>>>
>>>>> ...
>>>>>
>>>>> Details:
>>>>> All models (initial model plus all workload/hosted) associated with the
>>>>> controller will first need to be destroyed, either in advance, or by
>>>>> specifying `--destroy-all-models`.
>>>>>
>>>>> Examples:
>>>>> juju destroy-controller --destroy-all-models mycontroller
>>>>>
>>>>> See also:
>>>>> kill-controller
>>>>> unregister
>>>>> ```
>>>>>
>>>>> When would you ever want to destroy-controller and not
>>>>> destroy-all-models? I have to specify that flag every time; it seems it
>>>>> should just be the default behavior. Kill-controller seems to do what
>>>>> destroy-controller --destroy-all-models does, but more aggressively?
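[Editor's note: Casey's dev/test cleanup approach above (force-delete the juju-* LXC containers, then unregister the local controllers) can be sketched as follows. This is an illustrative sketch, not anything shipped with Juju; the function names are made up, and the destructive `lxc delete` call is guarded by a dry-run flag. Juju's LXD containers are conventionally named `juju-*`, which is the only selection logic involved.]

```python
import subprocess

def juju_containers(names):
    """Select the LXC containers Juju created: by convention their
    names start with 'juju-'."""
    return [n for n in names if n.startswith("juju-")]

def nuke_local_juju(dry_run=True):
    """Hypothetical sketch of the cleanup script described above:
    force-delete juju-* containers. Destructive; keep dry_run=True
    unless you mean it, and follow up with `juju unregister`."""
    # `lxc list --format csv -c n` prints one container name per line.
    out = subprocess.run(
        ["lxc", "list", "--format", "csv", "-c", "n"],
        capture_output=True, text=True, check=True,
    ).stdout
    targets = juju_containers(out.splitlines())
    for name in targets:
        if dry_run:
            print("would delete", name)
        else:
            subprocess.run(["lxc", "delete", "--force", name], check=True)
    return targets

# Selection logic alone is safe to demonstrate:
print(juju_containers(["juju-a1b2-0", "my-webserver", "juju-a1b2-1"]))
# -> ['juju-a1b2-0', 'juju-a1b2-1']
```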

Re: kill-controller, unregister, destroy-controller

2016-09-01 Thread Mark Ramm-Christensen (Canonical.com)
I get the desire to remove friction everywhere we can, but unless
destroying controllers is a regular activity, I actually think SOME
friction is valuable here.

If controllers are constantly getting into a wedged state where they must
be killed, that's likely a product of a fast-moving set of betas, and
should be addressed *directly* rather than, as they say, "applying lipstick
to a pig".

On Thu, Sep 1, 2016 at 4:04 PM, Marco Ceppi <marco.ce...@canonical.com>
wrote:

> On Thu, Sep 1, 2016 at 9:59 AM Mark Ramm-Christensen (Canonical.com) <
> mark.ramm-christen...@canonical.com> wrote:
>
>> I believe keeping the --destroy-all-models flag is helpful in keeping
>> you from accidentally destroying a controller that is hosting important
>> models for someone without thinking.
>>
>
> What happens if I destroy-controller without that flag? Do I have to go
> into my cloud portal to kill those instances? Is there any way to recover
> from that to get juju reconnected? If not, it's just a slower death.
>
>
>> On Thu, Sep 1, 2016 at 3:40 PM, Marco Ceppi <marco.ce...@canonical.com>
>> wrote:
>>
>>> Hey everyone,
>>>
>>> I know we've had discussions about this over the past few months, but it
>>> seems we have three commands that overlap pretty aggressively.
>>>
>>> Using Juju beta16 and trying to 'destroy' a controller, it looks like
>>> this now:
>>>
>>> ```
>>> root@ubuntu:~# juju help destroy-controller
>>> Usage: juju destroy-controller [options] 
>>>
>>> ...
>>>
>>> Details:
>>> All models (initial model plus all workload/hosted) associated with the
>>> controller will first need to be destroyed, either in advance, or by
>>> specifying `--destroy-all-models`.
>>>
>>> Examples:
>>> juju destroy-controller --destroy-all-models mycontroller
>>>
>>> See also:
>>> kill-controller
>>> unregister
>>> ```
>>>
>>> When would you ever want to destroy-controller and not
>>> destroy-all-models? I have to specify that flag every time; it seems it
>>> should just be the default behavior. Kill-controller seems to do what
>>> destroy-controller --destroy-all-models does, but more aggressively?
>>>
>>> Finally, unregister and destroy-controller (without
>>> --destroy-all-models) do the same thing. Can we consider dropping the
>>> (very long-winded, almost always required) flag for destroy-controller?
>>>
>>> Also, there used to be a pretty good amount of feedback during
>>> destroy-controller; while the text was rolling by, I at least knew what
>>> was happening. Now it's virtually silent. Given it runs for quite a long
>>> time, can we get some form of feedback to the user back into the command?
>>>
>>> ```
>>> root@ubuntu:~# juju destroy-controller --destroy-all-models cabs
>>> WARNING! This command will destroy the "cabs" controller.
>>> This includes all machines, applications, data and other resources.
>>>
>>> Continue? (y/N):y
>>> ERROR failed to destroy controller "cabs"
>>>
>>> If the controller is unusable, then you may run
>>>
>>> juju kill-controller
>>>
>>> to forcibly destroy the controller. Upon doing so, review
>>> your cloud provider console for any resources that need
>>> to be cleaned up.
>>>
>>> ERROR cannot connect to API: unable to connect to API: websocket.Dial
>>> wss://10.0.0.4:17070/api: dial tcp 10.0.0.4:17070: getsockopt: no route
>>> to host
>>> root@ubuntu:~# juju kill-controller cabs
>>> WARNING! This command will destroy the "cabs" controller.
>>> This includes all machines, applications, data and other resources.
>>>
>>> Continue? (y/N):y
>>> Unable to open API: open connection timed out
>>> Unable to connect to the API server. Destroying through provider.
>>> ERROR listing resource groups: azure.ServicePrincipalToken#Refresh:
>>> Failure sending request for Service Principal 
>>> 83d638b0-841c-4bd1-9e7c-868cae3393f4:
>>> StatusCode=0 -- Original Error: http: nil Request.URL
>>> root@ubuntu:~# juju bootstrap cabs azure
>>> ERROR controller "cabs" already exists
>>> ```
>>>
>>> Marco
>>>
>>> --
>>> Juju-dev mailing list
>>> juju-...@lists.ubuntu.com
>>> Modify settings or unsubscribe at: https://lists.ubuntu.com/
>>> mailman/listinfo/juju-dev
>>>
>>>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: kill-controller, unregister, destroy-controller

2016-09-01 Thread Mark Ramm-Christensen (Canonical.com)
I believe keeping the --destroy-all-models flag is helpful in keeping you
from accidentally destroying a controller that is hosting important models
for someone without thinking.

On Thu, Sep 1, 2016 at 3:40 PM, Marco Ceppi 
wrote:

> Hey everyone,
>
> I know we've had discussions about this over the past few months, but it
> seems we have three commands that overlap pretty aggressively.
>
> Using Juju beta16 and trying to 'destroy' a controller, it looks like this
> now:
>
> ```
> root@ubuntu:~# juju help destroy-controller
> Usage: juju destroy-controller [options] 
>
> ...
>
> Details:
> All models (initial model plus all workload/hosted) associated with the
> controller will first need to be destroyed, either in advance, or by
> specifying `--destroy-all-models`.
>
> Examples:
> juju destroy-controller --destroy-all-models mycontroller
>
> See also:
> kill-controller
> unregister
> ```
>
> When would you ever want to destroy-controller and not destroy-all-models?
> I have to specify that flag every time; it seems it should just be the
> default behavior. Kill-controller seems to do what destroy-controller
> --destroy-all-models does, but more aggressively?
>
> Finally, unregister and destroy-controller (without --destroy-all-models)
> do the same thing. Can we consider dropping the (very long-winded, almost
> always required) flag for destroy-controller?
>
> Also, there used to be a pretty good amount of feedback during
> destroy-controller; while the text was rolling by, I at least knew what was
> happening. Now it's virtually silent. Given it runs for quite a long time,
> can we get some form of feedback to the user back into the command?
>
> ```
> root@ubuntu:~# juju destroy-controller --destroy-all-models cabs
> WARNING! This command will destroy the "cabs" controller.
> This includes all machines, applications, data and other resources.
>
> Continue? (y/N):y
> ERROR failed to destroy controller "cabs"
>
> If the controller is unusable, then you may run
>
> juju kill-controller
>
> to forcibly destroy the controller. Upon doing so, review
> your cloud provider console for any resources that need
> to be cleaned up.
>
> ERROR cannot connect to API: unable to connect to API: websocket.Dial
> wss://10.0.0.4:17070/api: dial tcp 10.0.0.4:17070: getsockopt: no route
> to host
> root@ubuntu:~# juju kill-controller cabs
> WARNING! This command will destroy the "cabs" controller.
> This includes all machines, applications, data and other resources.
>
> Continue? (y/N):y
> Unable to open API: open connection timed out
> Unable to connect to the API server. Destroying through provider.
> ERROR listing resource groups: azure.ServicePrincipalToken#Refresh:
> Failure sending request for Service Principal 
> 83d638b0-841c-4bd1-9e7c-868cae3393f4:
> StatusCode=0 -- Original Error: http: nil Request.URL
> root@ubuntu:~# juju bootstrap cabs azure
> ERROR controller "cabs" already exists
> ```
>
> Marco
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at: https://lists.ubuntu.com/
> mailman/listinfo/juju-dev
>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Mark Ramm-Christensen (Canonical.com)
My point is not to advocate for a specific solution but rather to suggest
that *any* sensible incremental approach will produce real results.

--Mark Ramm



On Mon, Mar 28, 2016 at 7:51 PM, David Cheney <david.che...@canonical.com>
wrote:

> On Tue, Mar 29, 2016 at 10:42 AM, Mark Ramm-Christensen
> (Canonical.com) <mark.ramm-christen...@canonical.com> wrote:
> > Never a good time to stop feature work entirely and fix what amounts to a
> > race-prone set of tests.
> >
> >
> > But I would advocate building in some practices to improve the situation
> > incrementally:
> >
> > fixing one major issue per team per week
>
> SGTM. How do we know which of the millions of private lists of bugs
> are the critical ones? Which of the hundred "critical", "papercut",
> "urgent" LP tags are the critical ones?
>
> > promoting all issues which fail CI more than x times per week to Critical
> > blocking all branches on Fridays except for fixes for bugs on the top
> > issues list
> > list
>
> Which timezone is Friday?
>
> > or some other similar policy
> >
> > Over time, any of the above policies will bring the total number of
> > test failures down significantly, and would still allow progress on
> > feature work.
> >
> > On Mon, Mar 28, 2016 at 1:05 PM, Nate Finch <nate.fi...@canonical.com>
> > wrote:
> >>
> >> I'll just note that we've had flaky tests for as long as I've been
> working
> >> on Juju, and there's never a "good" time to fix them. :)
> >>
> >> On Mon, Mar 28, 2016 at 11:48 AM Aaron Bentley
> >> <aaron.bent...@canonical.com> wrote:
> >>>
> >>> -BEGIN PGP SIGNED MESSAGE-
> >>> Hash: SHA256
> >>>
> >>> On 2016-03-28 09:03 AM, Katherine Cox-Buday wrote:
> >>> > Generally +1 on this, but I'm also intrigued by Martin's
> >>> > statistic... do we currently weight test failures by how likely
> >>> > they are to fail (i.e. how likely they are flaky)? That seems like
> >>> > it would be a great metric to use to decide which to fix first.
> >>>
> >>> We don't do it on the likelihood of failure, but we do it on the
> >>> frequency of failure.
> >>>
> >>> http://reports.vapour.ws/releases/top-issues
> >>>
> >>> I report on these on the cross-team call, and once the 2.0 settles
> >>> down, I'll be reporting them on the release call again.
> >>>
> >>> Aaron
> >>> -BEGIN PGP SIGNATURE-
> >>> Version: GnuPG v2
> >>>
> >>> iQEcBAEBCAAGBQJW+VJcAAoJEK84cMOcf+9hWrwH/0JradfscIE0wnt+yCW9nNCR
> >>> 9hTHI2U19v1VuP6pWI4UiC7srfojPI8EXXEXrrAhF9rT8tpVK4EcJRJK9RvWvvz5
> >>> BEquHMS0+eROFOqDJFavEB8hU7BKHErzkSwSG8uKq7JuwHs9gNtQO9z9fIhVKjnr
> >>> aP4z2IliCqbYfXbupfSTD8TmqhI0AipQymTg3QB4C3sJdXzc5GjzIIckUo/X7aJj
> >>> zH1tEtlwOdP0c9F+8ZVs1j6AAkb+uDGc/1Qr4MT1kInqGkli2UNF4TOX/AihNPyH
> >>> iwYgq6O7uOkijFTrL9obRfbXxIFw1WCc9cYzxbRYnGfQff47Dyj7/BUStPPH0i0=
> >>> =8FQ6
> >>> -END PGP SIGNATURE-
> >>>
> >>> --
> >>> Juju-dev mailing list
> >>> Juju-dev@lists.ubuntu.com
> >>> Modify settings or unsubscribe at:
> >>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>
> >>
> >> --
> >> Juju-dev mailing list
> >> Juju-dev@lists.ubuntu.com
> >> Modify settings or unsubscribe at:
> >> https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >>
> >
> >
> > --
> > Juju-dev mailing list
> > Juju-dev@lists.ubuntu.com
> > Modify settings or unsubscribe at:
> > https://lists.ubuntu.com/mailman/listinfo/juju-dev
> >
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Go 1.6 is now in trusty-proposed

2016-03-28 Thread Mark Ramm-Christensen (Canonical.com)
Never a good time to stop feature work entirely and fix what amounts to a
race-prone set of tests.


But I would advocate building in some practices to improve the situation
incrementally:


   - fixing one major issue per team per week
   - promoting all issues which fail CI more than x times per week to
   Critical
   - blocking all branches on Fridays except for fixes for bugs on the top
   issues list
   - or some other similar policy

Over time, any of the above policies will bring the total number of test
failures down significantly, and would still allow progress on feature
work.

On Mon, Mar 28, 2016 at 1:05 PM, Nate Finch 
wrote:

> I'll just note that we've had flaky tests for as long as I've been working
> on Juju, and there's never a "good" time to fix them. :)
>
> On Mon, Mar 28, 2016 at 11:48 AM Aaron Bentley <
> aaron.bent...@canonical.com> wrote:
>
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA256
>>
>> On 2016-03-28 09:03 AM, Katherine Cox-Buday wrote:
>> > Generally +1 on this, but I'm also intrigued by Martin's
>> > statistic... do we currently weight test failures by how likely
>> > they are to fail (i.e. how likely they are flaky)? That seems like
>> > it would be a great metric to use to decide which to fix first.
>>
>> We don't do it on the likelihood of failure, but we do it on the
>> frequency of failure.
>>
>> http://reports.vapour.ws/releases/top-issues
>>
>> I report on these on the cross-team call, and once the 2.0 settles
>> down, I'll be reporting them on the release call again.
>>
>> Aaron
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG v2
>>
>> iQEcBAEBCAAGBQJW+VJcAAoJEK84cMOcf+9hWrwH/0JradfscIE0wnt+yCW9nNCR
>> 9hTHI2U19v1VuP6pWI4UiC7srfojPI8EXXEXrrAhF9rT8tpVK4EcJRJK9RvWvvz5
>> BEquHMS0+eROFOqDJFavEB8hU7BKHErzkSwSG8uKq7JuwHs9gNtQO9z9fIhVKjnr
>> aP4z2IliCqbYfXbupfSTD8TmqhI0AipQymTg3QB4C3sJdXzc5GjzIIckUo/X7aJj
>> zH1tEtlwOdP0c9F+8ZVs1j6AAkb+uDGc/1Qr4MT1kInqGkli2UNF4TOX/AihNPyH
>> iwYgq6O7uOkijFTrL9obRfbXxIFw1WCc9cYzxbRYnGfQff47Dyj7/BUStPPH0i0=
>> =8FQ6
>> -END PGP SIGNATURE-
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>>
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev
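[Editor's note: the promotion policy in the list above ("promote all issues which fail CI more than x times per week to Critical") is mechanical enough to sketch. This is purely an illustration of the proposed policy, not anything in Juju's actual CI tooling; the threshold value and issue ids are assumptions.]

```python
def promote_critical(failures_per_week, threshold=3):
    """Given a mapping of issue-id -> CI failures this week, return the
    issue ids the policy would promote to Critical, most frequent first.
    The threshold 'x' is an assumed parameter of the policy."""
    promoted = [(n, issue) for issue, n in failures_per_week.items()
                if n > threshold]
    return [issue for n, issue in sorted(promoted, reverse=True)]

# Hypothetical week of CI failure counts per Launchpad bug:
week = {"lp-1001": 5, "lp-1002": 1, "lp-1003": 9, "lp-1004": 3}
print(promote_critical(week))  # -> ['lp-1003', 'lp-1001']
```

This matches the spirit of the frequency-based top-issues report mentioned later in the thread (reports.vapour.ws/releases/top-issues): rank by observed failure frequency, then triage from the top.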


Re: Use of Jujucharms to launch non-opensource code

2016-02-09 Thread Mark Ramm-Christensen (Canonical.com)
Thanks Marco!

On Tue, Feb 9, 2016 at 1:33 PM, Marco Ceppi 
wrote:

> This is actually a non-issue. The codebase was moved to LGPL over a year
> ago; there were just two places this was not updated: first Launchpad, and
> second the setup.py file. I've corrected both.
>
> On Tue, Feb 9, 2016 at 7:09 AM Marco Ceppi 
> wrote:
>
>> I'll take a look into this. charm-helpers seems to suffer a bit from
>> license schizophrenia. LP lists it as GPLv3, the code has both GPL and LGPL
>> license in root and setup.py lists it as AGPL. I will investigate this a
>> bit more and email the list in a separate thread when this is resolved.
>>
>> On Tue, Feb 9, 2016 at 7:07 AM Mark Shuttleworth  wrote:
>>
>>>
>>> OK, let's explore moving that to LGPL which I think would be more
>>> appropriate for things like that and layers.
>>>
>>> Mark
>>>
>>> On 09/02/16 12:04, John Meinel wrote:
>>> > I agree, I was a bit surprised that charmhelpers was AGPL instead of
>>> > LGPL. I think it makes sense as you still would contribute back to the
>>> > layers you touch, but it doesn't turn your entire charm into GPL.
>>> >
>>> > John
>>> > =->
>>> >
>>> >
>>> > On Tue, Feb 9, 2016 at 3:38 PM, Mark Shuttleworth 
>>> wrote:
>>> >
>>> >> On 09/02/16 09:25, John Meinel wrote:
>>> >>> The more edge case is that charmhelpers itself is AGPL, so if your
>>> >>> charm imported charmhelpers, then that is more of a grey area. You
>>> >>> likely need to open source the actual charm, which sets up
>>> >>> configuration, etc. of the program. However, you still don't have to
>>> >>> give out the source to the program you are configuring.
>>> >> For stuff that we publish as libraries, we tend to prefer LGPL, which
>>> >> doesn't force a license on the end product or codebase. So if we need
>>> to
>>> >> revisit the charmhelpers license we will do so.
>>> >>
>>> >> Mark
>>> >>
>>> >>
>>>
>>>
>>> --
>>> Juju mailing list
>>> Juju@lists.ubuntu.com
>>> Modify settings or unsubscribe at:
>>> https://lists.ubuntu.com/mailman/listinfo/juju
>>>
>>
> --
> Juju mailing list
> Juju@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
>
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Juju for Redhat linux

2015-11-16 Thread Mark Ramm-Christensen (Canonical.com)
>> Does Juju work on the Red Hat Linux platform?
> No, unfortunately not. The only supported OSs are Ubuntu and CentOS, and
> I'm not aware of any work in progress towards Red Hat. I'm sorry!

Exactly: RHEL isn't supported yet, but if CentOS is good enough for you,
the work is already done. If you want RHEL, that is not a large technical
distance at all from CentOS -- and if there is a strong desire to go there,
Juju is completely open source, and patches are always welcome. Also, given
evidence of customer demand, I'm confident our product roadmap will be
adjusted accordingly.

--Mark Ramm


Re: How to make Juju High Availability work properly?

2015-10-26 Thread Mark Ramm-Christensen (Canonical.com)
Thanks for following up!

We will take a look at the logs and get back to you soon.

--Mark Ramm

On Mon, Oct 26, 2015 at 4:41 AM, 曾建銘  wrote:

> Hi Mark & Marco,
>
> Really appreciate your help. I have already provide the logs to Marco.
>
> Hope it could help you to enhance the Juju HA feature.
>
> Sincerely yours,
> Leon
>
> On Sat, Oct 24, 2015 at 7:03 AM, Mark Shuttleworth 
> wrote:
>
>> On 23/10/15 00:54, 曾建銘 wrote:
>> > Was the juju doing something for fixing specific problem? I think that
>> > service on failed node should only become lost and not interfere
>> services
>> > on workings nodes. But it didn't act as I expected.
>> >
>> > By the way, I used Juju to deploy OpenStack, so I deployed a lot of
>> charms
>> > on it. Is that matter?
>>
>> No, the scale of the model you're managing should not affect HA in any
>> way.
>>
>> The team is trying to reproduce your situation by repeatedly causing
>> failover, but I'm told has not seen anything like your symptoms. Can you
>> provide copies of logs to Marco?
>>
>> Mark
>>
>
>


Re: What's the future of Juju?

2015-03-25 Thread Mark Ramm-Christensen (Canonical.com)
Well, I can provide a few things off the top of my head that should help.

   - Canonical is fully committed to Juju as the way we deploy software
   internally and the way we deploy OpenStack clouds for our largest clients.
   - Windows workloads are supported in the current beta version of Juju
   and, after a bit of real-world testing, should be fully supported in one
   of the next (bi-monthly) production-ready releases.
   - CentOS support is nearly feature complete and should enter a beta
   release of Juju for testing within the next month.  Like Windows, it will
   flow to a production release after it's had some real-world tests.

There are quite a few big companies working on juju charms.  IBM for
example is delivering quite a few charms and has committed multiple full
time development resources to working with juju.

There are also quite a few other big names working on juju charms -- many
of them in the OpenStack space.  I'll get a list of folks who are already
public about being part of our juju based openstack integration labs for
you as soon as I can.

We also have some big plans for products built on top of juju.  The first
of which is the OpenStack Autopilot which automates the deployment,
scale-out, and management of OpenStack clouds.   But, we are also building
more products on top of Juju right now, and it is core to our future plans
in the cloud.

So, to make a long story short, I think juju is gaining traction with some
big enterprise players, Canonical is fully committed to Juju, and we are
seeing momentum pick up in the marketplace.   So, I personally would
definitely bet on a bright future for Juju.

--Mark Ramm





On Wed, Mar 25, 2015 at 4:01 PM, Merlijn Sebrechts 
merlijn.sebrec...@gmail.com wrote:

 Hi


 I'm interested in what the future of Juju is. From the small experience
 I've had with it, it seems like a product with a lot of potential. It fills
 a gap in our project that no other technology can fill. Its biggest
 strength is how relations between services are managed. This is something
 that, to my knowledge, does not exist in any tool today. It enables a very
 modular approach and solves a lot of problems we would have with other
 tools.

 However, I've also seen some things that worry me. Even after three years,
 there are still a lot of bugs in the project. The documentation is lacking,
 especially in the parts of Juju that are the most competitive. The
 community is also very small. The fact that it can still only manage Ubuntu
 servers worries me too. I could go more into detail here, but I don't think
 it is relevant to this question.

 I'm considering starting a big long-term project on top of Juju. The
 project would be very dependent on Juju, so I don't want to do this if
 there is a chance that Juju will be abandoned in 5 years...

 What can you tell me about the future of Juju? Things I'm interested in:

 - Big companies building services on top of Juju

 - Statements of long-term commitment from Canonical

 - Usage statistics

 - Statements of commitment to support other distro's

 - .. or else, signs that Juju doesn't have a bright future.


 Thanks





Re: disabling upgrades on new machines by default?

2014-10-02 Thread Mark Ramm-Christensen (Canonical.com)
In practice, it almost never happens.

I don't remember seeing anything actually from the archive break a charm on
update -- though the cloud tools pocket did have a breakage about 8 months
ago which did bad things.

--Mark Ramm

On Thu, Oct 2, 2014 at 10:14 AM, Matt Rae matt@canonical.com wrote:

 Thanks Dave!

 A downside to disabling upgrades is that users may have a bad experience
 if broken packages get installed and their services don't work. Not sure
 how often this would happen.

 On Thu, Oct 2, 2014 at 6:19 AM, David Cheney david.che...@canonical.com
 wrote:



 On Thu, Oct 2, 2014 at 10:55 PM, Matt Rae matt@canonical.com wrote:

 I don't think the upgrade matters as much as speed. I feel like most
 users know to manage updates already, with their own policies, and that the
 fast user experience is important.

 Even if juju upgrades initially, users will still need to manage updates
 after, so I'm not sure how much the initial upgrade gains.

 "Juju is blazing fast!" is more exciting than "Juju makes sure I'm
 updated initially!"

 There is something to be said for having the exact same packages on
 every unit of a service rather than a few units having some versions, then
 units added later getting different versions.


 That happens anyway. Units added later may be built from later releases
 of the cloud image.




 Matt

 On Thu, Oct 2, 2014 at 12:27 AM, Samuel Cozannet 
 samuel.cozan...@canonical.com wrote:

 Why not put our perception to the test?

 Here
 https://docs.google.com/a/canonical.com/spreadsheets/d/1T-8rf_XxXbvCCRRHT69KtRM5k4oJiHyTEuzbENBU0Js/edit#gid=0
 is a spreadsheet where you can compile your variables. The top line
 summarizes the sum of values. The column that gets green is the one we
 should go for [assuming we are representative]

 Sam

 On Thu, Oct 2, 2014 at 7:45 AM, John Meinel j...@arbash-meinel.com
 wrote:

 So there is the question of what is the user experience, and people
 trying out Juju and it seems slow. Though if it is slow, doesn't that mean
 that images are out of date?

 I just bootstrapped a fresh Ubuntu from Amazon's web interface today,
 and I noticed that apt-get upgrade on there installed a new bash to fix
 the newest major security hole. It seems like it is good to at least apply
 security updates, and I'm not sure if it is easy to only install those.

 John
 =:-

 On Thu, Oct 2, 2014 at 7:51 AM, José Antonio Rey j...@ubuntu.com
 wrote:

 I believe that, as Jorge mentioned, most users do value having
 everything up to date by default, especially when they may go directly to
 production environments. Devs may also want to use this switch, as it will
 save time during the deployment for testing the charms they have developed.

 I believe that turning on upgrades as a default would be more valued
 by end-users, but that's just a personal opinion.

 --
 José Antonio Rey
 On Oct 1, 2014 2:34 PM, Jorge O. Castro jo...@ubuntu.com wrote:

 On Wed, Oct 1, 2014 at 3:26 PM, Kapil Thangavelu
 kapil.thangav...@canonical.com wrote:
  juju can save minutes per machine (especially against release
 images) if we
  turn off upgrades by default.

 There are some updates coming to how we build cloud images that might
 be relevant to this discussion:

 http://blog.utlemming.org/2014/08/archive-triggered-cloud-image-builds.html

 IMO "safer and slower" makes sense for most people; those of us who need
 speed for demos/conferences will know about this switch.

 --
 Jorge Castro
 Canonical Ltd.
 http://juju.ubuntu.com/ - Automate your Cloud Infrastructure










 --
 Samuel Cozannet
 Cloud, Big Data and IoT Strategy Team
 Strategic Program Manager
 Changing the Future of Cloud
 Ubuntu http://ubuntu.com / Canonical http://canonical.com UK LTD
 samuel.cozan...@canonical.com
 +33 616 702 389













Re: Juju is still too hard

2014-09-22 Thread Mark Ramm-Christensen (Canonical.com)
I think we need to make sure that we do the best error reporting we can, so
if Juju isn't working because of Azure issues, we should find some way to
let users know that so that they can try another cloud, contact microsoft,
or otherwise find another way forward.

--Mark Ramm

On Mon, Sep 22, 2014 at 9:16 AM, Michael Schwartz m...@gluu.org wrote:

 Juju,

 The note I got from Nate on Azure is below (after wasting a day trying to
 get it working...)

 At OSCON, I worked with Jorge and even he couldn't get my local VM
 working, but I'm willing to give it a try again. I'd be willing to schedule
 a meeting (please email me off the list to schedule).

 But please note, it's not just me. I am reflecting the consensus of
 customers and partners about Juju. And it's hard for me to dismiss their
 concerns given my own experience.

 - Mike



  Original Message 
 Subject: Re: Problems with Azure
 Date: 2014-08-28 11:35
 From: Nate Finch nate.fi...@canonical.com

 Azure is in a bad state today.  Our tests won't run on Azure today
 either.  So, there's not much we can do except complain to Microsoft.

 -
 Michael Schwartz
 Gluu
 Founder / CEO
 m...@gluu.org




Re: Unit Tests & Integration Tests

2014-09-12 Thread Mark Ramm-Christensen (Canonical.com)
On Thu, Sep 11, 2014 at 3:41 PM, Gustavo Niemeyer gust...@niemeyer.net
wrote:

 Performance is the second reason Roger described, and I disagree that
 mocking code is cleaner.. these are two orthogonal properties, and
 it's actually pretty easy to have mocked code being extremely
 confusing and tightly bound to the implementation. It doesn't _have_
 to be like that, but this is not a reason to use it.


It is easy to do that, though often that is a sign of not having clean
separations of concerns.   Messy mocking can (though does not always)
reflect messiness in the code itself.  Messy, poorly isolated code is bad,
and messy mocks often mean you have not one but two messes to clean up.

 Like any tools, developers can over-use, or mis-use them.   But, if you
  don't use them at all,



 That's not what Roger suggested either. A good conversation requires
 properly reflecting the position held by participants.


You are right, I wasn't precise about the details of his suggestion to not
use them, but he did suggest not using mocks unless there is *no other
choice.* And it is that rule against them that I was trying to make a case
against.

With that said, I definitely agree with the experience that both of you are
trying to highlight about the dangers of over-reliance on mocks.  I think
everybody who has written a significant amount of test code knows that
passing a test against a mock is not the same thing as actually working
against the mocked out library/function/interface.


  you often end up with what I call the "binary test suite", in which one
  coding error somewhere creates massive test failures.

 A coding error that creates massive test failures is not a problem, in
 my experience using both heavily mocking and heavily non-mocking code
 bases.


It's not a problem for new code, but it makes refactorings and cleanup
harder because you change a method, and rather than the test suite telling
you which things depend on that and therefore need to be updated, and how
far you need to go, you get 100% test failures and you're not quite sure
how many changes are needed, or where they are needed -- until suddenly you
fix the last thing and *everything* passes again.

 My belief is that you need both small, fast, targeted tests (call them
 unit
  tests) and large, realistic, full-stack tests (call them integration
 tests)
  and that we should have infrastructure support for both.

 Yep, but that's besides the point being made. You can do unit tests
 which are small, fast, and targeted, both with or without mocking, and
 without mocking they can be realistic, which is a good thing. If you
 haven't had a chance to see tests falsely passing with mocking, that's
 a good thing too.. you haven't abused mocking too much yet.


Sorry, I was transitioning back to the main point of the thread, raised by
Matty at the beginning.  And I was agreeing that there are two very
different *kinds of tests* and we should have a place for large tests to
go.

I think the two issues ARE related because a bias against mocks, and a
failure to separate out functional tests, in a large project leads to a
test suite that has lots of large slow tests, and which developers can't
easily run many, many, many times a day.

By allowing explicit ways to write larger functional tests as well as small
(unitish) tests you get to let the two kinds of tests be what they need to
be, without trying to have one test suite serve both purposes.  And the
creation of a place for those larger tests was just as much a part of the
point of this thread, as Roger's comments on mocking.

--Mark Ramm

PS, if you want to fit this into the Martin Fowler terminology I'm just
using mocks as a shorthand for all of the kinds of doubles he describes.
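[To make the fake-versus-mock trade-off in this thread concrete, here is a minimal sketch. It is illustrative Python only -- all class names are hypothetical, and Juju's real dummy provider is Go code -- showing how a hand-rolled fake keeps realistic behaviour while a Mock binds the test to the implementation's call signature.]

```python
from unittest import mock


class Deployer:
    """Toy service under test: it talks to a cloud provider."""

    def __init__(self, provider):
        self.provider = provider

    def deploy(self, charm):
        machine = self.provider.start_instance()
        return f"{charm} on {machine}"


class FakeProvider:
    """A hand-rolled fake (think "dummy provider"): fast, deterministic,
    no network, and it keeps provider-like behaviour."""

    def start_instance(self):
        return "machine-0"


# Small, fast, targeted test against the fake:
assert Deployer(FakeProvider()).deploy("wordpress") == "wordpress on machine-0"

# The same test with a Mock. It passes too, but it is coupled to the
# call signature: a Mock invents any attribute it is asked for, so the
# test can keep "passing" after a refactoring that would break a real
# provider -- the false-pass risk discussed above.
provider = mock.Mock()
provider.start_instance.return_value = "machine-1"
assert Deployer(provider).deploy("mysql") == "mysql on machine-1"
provider.start_instance.assert_called_once_with()
```

The fake stays valid as long as real providers behave the same way; the mock stays valid only as long as the code under test makes the same calls, which is exactly the coupling being warned about.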
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Unit Tests & Integration Tests

2014-09-12 Thread Mark Ramm-Christensen (Canonical.com)
On Fri, Sep 12, 2014 at 12:25 PM, Gustavo Niemeyer gust...@niemeyer.net
wrote:

 On Fri, Sep 12, 2014 at 12:00 PM, Mark Ramm-Christensen
 (Canonical.com) mark.ramm-christen...@canonical.com wrote:
  I think the two issues ARE related because a bias against mocks, and a
  failure to separate out functional tests, in a large project leads to a
 test
  suite that has lots of large slow tests, and which developers can't
 easily
  run many, many, many times a day.

 There are test doubles in the code base of juju since pretty much the
 start (dummy provider?). If you have large slow tests, this should be
 fixed, but that's orthogonal to having these or not.

 Then, having a bias against test doubles everywhere is a good thing.
 Ideally the implementation itself should be properly factored out so
 that you don't need the doubles in the first place. Michael Foord
 already described this in a better way.


Hmm, there seems to be some nuance missing here.  I see the argument as
originally made as saying "don't have doubles anywhere unless you
absolutely have to for performance reasons, or because a non-double is the
only possible way to do a test".

I disagree with that.

I know there are good uses of doubles in the code, and bad ones.


 If you want to have a rule "Tests are slow, you should X", the best X
 is "think about what you are doing", rather than "use test doubles".


Agreed. I did not and would never argue otherwise.

 By allowing explicit ways to write larger functional tests as well as
 small
  (unitish) tests you get to let the two kinds of tests be what they need
 to
  be, without trying to have one test suite serve both purposes.  And the
  creation of a place for those larger tests was just as much a part of
 the
  point of this thread, as Roger's comments on mocking.

 If by "functional test" you mean "test that is necessarily slow",
 there should not be _a_ place for them, because you may want those in
 multiple places in the code base, to test local logic that is
 necessarily expensive. Roger covered that by suggesting a flag that is
 run when you want to skip those. This is a common technique in other
 projects, and tends to work well.


I agree with tagging.  "A place" wasn't necessarily intended to be
prescriptive.  My point, which I feel has already been made well enough, is
that there needs to be a way to separate out long-running tests.

--Mark Ramm
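[The flag-plus-tagging technique agreed on above can be sketched in a few lines. Illustrative Python only -- Juju's own suite is Go, and the JUJU_FUNCTIONAL_TESTS variable name is made up for the example:]

```python
import os
import unittest

# Hypothetical opt-in flag: slow functional tests run only when it is
# set, so the default suite stays small, fast, and targeted.
RUN_FUNCTIONAL = os.environ.get("JUJU_FUNCTIONAL_TESTS") == "1"


class FastUnitTest(unittest.TestCase):
    def test_logic(self):
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])


@unittest.skipUnless(RUN_FUNCTIONAL, "set JUJU_FUNCTIONAL_TESTS=1 to run")
class SlowFunctionalTest(unittest.TestCase):
    def test_full_stack(self):
        # A real version would boot a backend and exercise the full stack.
        self.assertTrue(True)


# Running the slow suite with the flag unset reports it as skipped:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlowFunctionalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With the flag unset the slow case is skipped rather than run, which is the "run when you want, skip otherwise" behaviour described in the thread.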


Re: getting rid of all-machines.log

2014-08-26 Thread Mark Ramm-Christensen (Canonical.com)
 you'll have to be more specific, there's been a shotgun of statements in
 this thread, touching on logstash, aggregation removal, rsyslog removal,
 log rotation, deferring to stderr/stdout, 12factor apps, working with ha
 state servers, etc.

 I was referring to Nate's lumberjack package (PR seems to be gone) and
 the syslog streaming change. Lumberjack works on Windows as well.


Just so it is easily accessible from this thread, the URL for the
aforementioned lumberjack is here:

https://godoc.org/gopkg.in/natefinch/lumberjack.v2
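[lumberjack gives Go programs size-capped, self-rotating log files. As a rough, language-neutral illustration of that rolling behaviour -- not Juju code, and with file names invented for the example -- the Python standard library's RotatingFileHandler does the equivalent job:]

```python
import logging
import logging.handlers
import os
import tempfile

# Size-based rotation: once the active file would exceed maxBytes it is
# renamed machine-0.log.1, .2, ... and a fresh file is started, so the
# log can never grow without bound.
logdir = tempfile.mkdtemp()
logfile = os.path.join(logdir, "machine-0.log")

handler = logging.handlers.RotatingFileHandler(
    logfile, maxBytes=1024, backupCount=3)
logger = logging.getLogger("rotation-demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(200):
    logger.info("unit-mysql-0: message %d", i)

handler.close()
print(sorted(os.listdir(logdir)))  # active file plus rotated backups
```

backupCount caps the number of rotated copies, so the oldest messages are discarded once all backups are full -- the same bounded-disk guarantee lumberjack provides.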



Re: Juju induction sprint summary

2014-07-14 Thread Mark Ramm-Christensen (Canonical.com)
Great work,

I am particularly happy to see that we have an incremental, and useful plan
to take care of some of the technical debt in state.State.   This is the
classic form of technical debt as described by Ward Cunningham -- we have
learned a good bit about the problem space and where flexibility is needed
since the code was originally written and the way we would do it today is
different than the best way we knew how to build it two years ago.

--Mark Ramm


On Mon, Jul 14, 2014 at 4:43 AM, Ian Booth ian.bo...@canonical.com wrote:

 Hi all

 So last week we had a Juju induction sprint for Tanzanite and Moonstone
 teams to
 welcome Eric and Katherine to the Juju fold. Following is a summary of
 some key
 outcomes from the sprint that are relevant to others working on Juju (we
 also
 did other stuff not generally applicable for this email). Some items will
 interest some folks, while others may not quite be so relevant to you, so
 scan
 the topics to see what you find interesting.

 * Architectural overview - and a cool new tool

 The sprint started with an architectural overview of the Juju moving parts
 and
 how they interacted to deploy and maintain a Juju environment. Katherine
 noted
 that our in-tree documentation has lots of text and no diagrams. She
 pointed out
 a great tool for easily putting together UML diagrams using a simple
 text-based syntax - PlantUML (http://plantuml.sourceforge.net). Check it
 out, it's pretty cool. We'll be adding a diagram or two to the in-tree
 docs to show how it works.

 * Code review (replacement for Github's native code review)

 We are going to use Review Board. When we first looked at it before the
 sprint,
 a major show stopper was lack of an auth plugin which worked with Github.
 Eric
 has stepped up and written the necessary plugin. We'll have something
 deployed
 this week or early next week, once some more tooling to finish the Github
 integration is done. The key features:
 - Login with Github button on main login screen
 - pull requests automatically imported to Review Board and added to review
 queue
 - diffs can be uploaded to Review Board as WIP and submitted to Github when
 finalised

 * Fixing the Juju state.State mess

 state is a mess of layering violations and intermingled concerns. The
 result is
 slow and fragile unit tests, scalability issues, hard to understand code,
 code
 which is difficult to extend and refactor (to name a few issues).

 The correct layering should be something like:
 * remote service interface (aka apiserver)
 * juju services for managing machines, services, units etc
 * juju domain model
 * model persistence (aka state)

 The persistence layer above is all that should be in the state package.
 The plan
 is to incrementally extract Juju service business logic out of state and
 pull it
 up into a services layer. The first to be done is the machine placement and
 deployment logic. Wayne has a WIP branch for this. The benefit of this work
 can't be overstated, and the sprint allowed both teams to be able to work
 together to understand the direction and intent of the work.

 * Mongo 2.6 support

 The work to port Juju to Mongo 2.6 is pretty much complete. The newer Mongo
 version offers a number of bug fixes and  improvements over the 2.4
 series, and
 we need to be able to run with an up-to-date version.

 * Providers don't need to have a storage implementation (almost)

 A significant chunk of old code which was to support agents connecting
 directly
 to mongo was removed (along with the necessary refactoring). This then
 allowed
 the Environ interface to drop the StateInfo() method and instead implement
 a
 method which returns the state server instances (not committed yet but
 close).
 The next step is to remove the Storage() interface from Environ and make
 storage
 an internal implementation detail which is not mandatory, so long as
 providers
 have a way to figure out their state servers (this can be done using
 tagging for
 example).

 * Juju 1.20.1 release (aka juju/mongo issues)

 A number of issues with how Juju and mongo interact became apparent when
 replicasets were used for HA. Unfortunately Juju 1.20 shipped with these
 issues
 unfixed. Part of the sprint was spent working on some urgent fixes to ship
 a bug
 fix 1.20.1 release. There's still an outstanding mongo session issue that
 needs
 to be fixed this week for a 1.20.2 release. Michael is working on it. The
 tl;dr
 is that we are holding onto sessions and not refreshing, which means that
 the
 underlying socket can time out and Juju loses connection to mongo.

 * Add support for Juju in China for Amazon (almost)

 The supported regions for the EC2 provider are hard coded and so new
 regions in
 China were not supported. The Chinese regions also use a new signing
 algorithm.
 There should be a fix in place this week. Since all the changes are in the
 goamz
 library, the change to juju-core is merely a dependency update. So this
 feature
 should be 

Re: Port ranges - restricting opening and closing ranges

2014-06-26 Thread Mark Ramm-Christensen (Canonical.com)
My belief is that as long as the error messages are clear, and it is easy
to close 8000-9000 and then open 8000-8499 and 8600-9000, we are fine.
 Of course it is nicer if we can do that automatically for you, but I
don't see why we can't add that later, and I think there is a value in
keeping a port-range as an atomic data-object either way.

--Mark Ramm
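[The automatic splitting mentioned above can be sketched quickly. This is illustrative Python only, not Juju's Go implementation: closing a sub-range out of a previously opened range keeps the two remainders open.]

```python
def close_subrange(open_ranges, close_start, close_end):
    """Close [close_start, close_end] against a list of open (start, end)
    port ranges, splitting any range the closed span falls inside.

    Sketch of the "close 8500-8600 out of 8000-9000" case from the thread.
    """
    result = []
    for start, end in open_ranges:
        if close_end < start or close_start > end:
            result.append((start, end))               # no overlap: keep as-is
            continue
        if start < close_start:
            result.append((start, close_start - 1))   # left remainder
        if close_end < end:
            result.append((close_end + 1, end))       # right remainder
    return result


print(close_subrange([(8000, 9000)], 8500, 8600))
# prints [(8000, 8499), (8601, 9000)]
```

Keeping the port range as an atomic data object, as suggested above, is still compatible with this: the split simply replaces one atomic range with two.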


On Thu, Jun 26, 2014 at 2:11 PM, Domas Monkus domas.mon...@canonical.com
wrote:

 Hi,
 me and Matthew Williams are working on support for port ranges in juju.
 There is one question that the networking model document does not answer
 explicitly and the simplicity (or complexity) of the implementation depends
 greatly on that.

 Should we only allow units to close exactly the same port ranges that they
 have opened? That is, if a unit opens the port range [8000-9000], can it
 later close ports [8500-8600], effectively splitting the previously opened
 port range in half?

 Domas





Re: Juju is on github

2014-06-04 Thread Mark Ramm-Christensen (Canonical.com)
WOOOT!!!


On Tue, Jun 3, 2014 at 7:38 PM, Martin Packman martin.pack...@canonical.com
 wrote:

 Juju development is now done on github:

 https://github.com/juju/juju

 See the updated CONTRIBUTING doc for the basics. To land code you want
 the magic string $$merge$$ in a comment on the pull request so the
 jenkins bot picks it up.

 Note that the bot is currently taking around three times as long to
 run the tests, as we can't run suites in parallel due to mongo test
 flakiness. We'll work on improving that, but feel free to find and fix
 slow tests while you wait. :)

 As a team we're going to have some pain and discovery this week,
 please share any git tips and information you find useful as you go
 along. Thanks!

 Martin




Re: --constraints root-disk=16384M fails in EC2

2014-05-29 Thread Mark Ramm-Christensen (Canonical.com)
The link to the bug is here:

https://bugs.launchpad.net/juju-core/+bug/1324729


On Thu, May 29, 2014 at 7:26 PM, Ian Booth ian.bo...@canonical.com wrote:

 Hi Stein

 This does appear to be a bug in Juju's constraints handling for EC2.
 I'd have to do an experiment to confirm, but certainly reading the code
 appears to show a problem.

 Given how EC2 works, in that Juju asks for the specified root disk size
 when starting an instance, I don't have a workaround that I can think
 of to share with you.

 The fix for this would be relatively simple to implement and so can be
 done in time for the next stable release (1.20) which is due in a few
 weeks. Alternatively, we hope to have a new development release out
 next week (1.19.3).  I'll try to get any fix done in time for that also.

 I've raised bug 1324729 for this issue.

 On Fri 30 May 2014 09:29:15 EST, GMail wrote:
  Trying to deploy a charm with some extra root disk space. When using the
 root-disk constraint defined above I get the following error:
 
  '(error: no instance types in us-east-1 matching constraints
 cpu-power=100 root-disk=16384M)'
 
  I'm deploying a bundle with the following constraints: "constraints:
 mem=4G arch=amd64", but need more disk space than the default provided.
 
  Any suggestions ?
 
 
  Stein Myrseth
  Bjørkesvingen 6J
  3408 Tranby
  mob: +47 909 62 763
  mailto:stein.myrs...@gmail.com
 
 
 
 

