On 14 August 2014 07:31, Menno Smits wrote:
> I like the idea of being able to trigger failures using the juju command line.
>
> I'm undecided about how the need to fail should be stored. An obvious
> location would be in a new collection managed by state, or even as a field
> on existing state objec
Just to back up Dave's arguments - all sys admins I know would be a big -1 on
Juju doing its own log rolling. It's a recipe for lost log files, missing data,
etc. It's a mixing of responsibilities that should be handled separately.
Just on the volume point Dave raised - we do log a lot but that's
Ian asked me to post my thoughts here.
I am not in favour of applications rolling their own logs. I believe
that applications should not know anything about their log output:
they should just dump it all to stdout, and another process should take
care of shuttling the data, logging it, culling it,
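To make the separation concrete, here is a minimal sketch (not Juju's actual
logging code) of the approach: the process only writes to stdout and never
touches log files itself, leaving rotation, shipping and retention to an
external tool such as logrotate or rsyslog.

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Write everything to stdout; rotation and retention are somebody
        // else's problem (logrotate, rsyslog, the init system, ...).
        // The process never opens, renames or truncates a log file.
        logger := log.New(os.Stdout, "unit-mysql-0: ", log.LstdFlags)
        logger.Println("starting install hook")
    }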
I like the idea of being able to trigger failures stochastically. I'll
integrate this into whatever we settle on for Juju's failure injection.
On 14 August 2014 02:29, Gustavo Niemeyer
wrote:
> Ah, and one more thing: when developing the chaos-injection mechanism
> in the mgo/txn package, I als
I like the idea of being able to trigger failures using the juju command line.
I'm undecided about how the need to fail should be stored. An obvious
location would be in a new collection managed by state, or even as a field
on existing state objects and documents. The downside of this approach is
tha
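For what it's worth, a record in such a collection might look something like
the sketch below; the struct and field names are purely illustrative and not
part of any agreed design.

    package state

    // failureDoc is a hypothetical shape for a failure-injection record
    // kept in a new collection; names are illustrative only.
    type failureDoc struct {
        DocID  string  `bson:"_id"`    // e.g. "uniter/deploy-charm"
        Chance float64 `bson:"chance"` // probability of triggering, 0.0-1.0
        Once   bool    `bson:"once"`   // remove the record after it first fires
    }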
Not much to add except to say I really like this work and I think it is
going to help us make Juju much better at handling failures. I
also like the idea of providing easy access to triggering failures through
CLI commands.
On Wed, Aug 13, 2014 at 10:29 AM, Gustavo Niemeyer <
gustav
We also have UpgradeRequired, which is what upgrade triggers to force a
restart without treating it as an error.
John
=:->
On Aug 11, 2014 6:02 AM, "Menno Smits" wrote:
> On 11 August 2014 13:48, Andrew Wilkins
> wrote:
>
>> On Mon, Aug 11, 2014 at 5:41 AM, Menno Smits
>> wrote:
>>
>>> How this
Ah, and one more thing: when developing the chaos-injection mechanism
in the mgo/txn package, I also added both a "chance" parameter for
either killing or slowing down a given breakpoint. It sounds like it
would be useful for juju's mechanism too. If you kill every time, it's
hard to tell whether t
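For reference, the chaos hooks Gustavo is describing are exposed via
txn.SetChaos in gopkg.in/mgo.v2/txn. From memory, usage looks roughly like
this; treat the field and breakpoint names as approximate rather than
definitive.

    package main

    import (
        "time"

        "gopkg.in/mgo.v2/txn"
    )

    func main() {
        // Enable fault injection inside the txn runner. KillChance and
        // SlowdownChance are probabilities (0.0-1.0); Breakpoint limits
        // the chaos to one named checkpoint in the algorithm.
        txn.SetChaos(txn.Chaos{
            KillChance:     0.2,
            SlowdownChance: 0.5,
            Slowdown:       50 * time.Millisecond,
            Breakpoint:     "set-applying", // illustrative breakpoint name
        })
        defer txn.SetChaos(txn.Chaos{}) // turn chaos off again
    }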
That's a nice direction, Menno.
The main thing that comes to mind is that it sounds quite inconvenient
to turn the feature on. It may sound otherwise because it's so easy to
drop files at arbitrary places on our local machines, but when dealing
with a distributed system that knows how to spawn its
Mages, shamans, and practitioners of high mana magic.
All critical bugs affecting stable and devel are fixed, but devel has
yet to pass CI. Sorry. There was a bout of cloud failures that I
discounted by replaying the tests. run-unit-tests-precise-amd64 failed
4 times in a row, but the test failur
There's been some discussion recently about adding a feature to Juju to
allow developers or CI tests to intentionally trigger otherwise
hard-to-induce failures in specific parts of Juju. The idea is that sometimes we
need some kind of failure to happen in a CI test or when manually testing
but t
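As a rough illustration of what such an injection point could look like
(hypothetical names throughout, not an actual Juju API), a check like the one
below could be dropped into otherwise hard-to-reach error paths and armed only
during a CI run.

    package failpoint

    import (
        "math/rand"
        "sync"
    )

    // A hypothetical failure-injection registry; none of these names
    // exist in Juju today.
    var (
        mu       sync.Mutex
        failures = map[string]float64{} // failure point -> chance of firing
    )

    // Enable arms the named failure point with the given probability (0.0-1.0).
    func Enable(name string, chance float64) {
        mu.Lock()
        defer mu.Unlock()
        failures[name] = chance
    }

    // ShouldFail reports whether the named point should fail this time.
    // Call sites would look something like:
    //
    //     if failpoint.ShouldFail("provisioner/start-instance") {
    //         return errors.New("injected failure")
    //     }
    func ShouldFail(name string) bool {
        mu.Lock()
        defer mu.Unlock()
        chance, ok := failures[name]
        return ok && rand.Float64() < chance
    }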
Well, iirc 'go get' looks for a branch matching the version of go (e.g.
'go1'), but there are no additional ways of specifying branches.
Domas
On Tue, Aug 12, 2014 at 12:13 PM, Nate Finch
wrote:
> Nope, go get always gets "master" for git branches.
> On Aug 12, 2014 4:25 AM, "roger peppe" wr
If I may make a suggestion, how about looking into Gerrit as a review system:
http://gerrit-review.googlesource.com/Documentation/
It's the review system of choice for the OpenStack project and allows
interdependent pull requests. You can see it in action here:
https://review.openstack.org/
Che