Re: juju api and authenticated request

2014-02-06 Thread John Arbash Meinel

On 02/07/2014 05:09 AM, Adam Stokes wrote:
 I read through the docs/api.txt to try and get an understanding of
 how to connect to juju's api server and what I've come up with so
 far is the following:
 
 #!/usr/bin/env perl
 
 use Mojo::UserAgent;
 use 5.14.0;
 use DDP;
 
 my $ua = Mojo::UserAgent->new;
 
 $ua->websocket('wss://192.168.122.16:17070' => json => {
     'RequestId' => 1,
     'Type'      => 'Admin',
     'Request'   => 'Login',
     'Params'    => {
         'Tag'      => 'machine-0',
         'Password' => 'f0d44f279b47cc8b5f7ea291f5e3b30a',
         'Nonce'    => 'fake_nonce',
     },
 } => sub {
     my ($ua, $tx) = @_;
     say "failed " . $tx->error;
     p $tx->req;
     p $tx->res;
 });
 Mojo::IOLoop->start unless Mojo::IOLoop->is_running;

The Nonce is used by machine/unit agents, and not by Users. I'm a
bit surprised by Perl, given we have something called Mojo that is
written in Python.

apiInfo := api.Info{
    Addrs:    endpoint.Addresses,
    CACert:   []byte(endpoint.CACert),
    Tag:      names.UserTag(info.APICredentials().User),
    Password: info.APICredentials().Password,
}

You generally shouldn't be able to log in as a machine agent
(machine-0 in your above name). Instead you would expect to log in as
user-admin.

So something more like:

'Params' => {'Tag' => 'user-admin',
  'Password' => # Value taken as admin-secret from environments.yaml}

In the Go code above, the reason to supply the CACert is that we do
strict certificate checking on the connection; it isn't something that
goes over the wire.
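
As a rough illustration (untested, and just a sketch): the same Login
call as a JSON frame over the websocket, in Python using the third-party
'websocket-client' package. The address and admin-secret below are
placeholders, and the field names are the ones from your message above:

  import json
  import ssl

  from websocket import create_connection  # third-party 'websocket-client' package

  # Placeholder endpoint and password; take the real values from your .jenv.
  endpoint = "wss://192.168.122.16:17070/"
  admin_secret = "your-admin-secret"

  # Skip certificate verification for the sketch; a real client should
  # verify against the environment's CA cert instead.
  ws = create_connection(endpoint, sslopt={"cert_reqs": ssl.CERT_NONE})

  login = {
      "RequestId": 1,
      "Type": "Admin",
      "Request": "Login",
      "Params": {"Tag": "user-admin", "Password": admin_secret},
  }
  ws.send(json.dumps(login))
  print(ws.recv())  # a successful login comes back as {"RequestId": 1, "Response": {...}}
  ws.close()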


 
 This is very early stages and the code doesn't work as it returns
 a 403. My question is am I on the right track for accessing the 
 apiserver over a websocket connection? Should I be passing the
 params as json? The port, and params used are obtained through

I do believe the params should be JSON content, but there is a fair
bit of work to figure out the formatting of the content on the wire.

 ~/.juju/environments/local.jenv after a `juju bootstrap`. Should I
 be passing the certs through as well? I went through some of the
 test cases and attempted to decipher how that worked but now I'm a
 bit stuck as to where to go next. The errors returned so far have
 just been 403 forbidden.
 
 Also, is this even the right place I should be for messing around
 with RPC and juju? :)
 
 Thanks!
 
 

That seems a reasonable place, though there is already Python code in
https://launchpad.net/python-jujuclient
and
https://launchpad.net/canonical-mojo

that already have the ability to do most of the stuff you probably
want to do. I realize that isn't in Perl, but you could at least use
it as a starting point/reference code?

John
=:-



Re: Landing bot back online

2014-02-05 Thread John Arbash Meinel

...

 We should probably set up a new environment with a current juju
 after fixing up the tarmac charm to have some more of the things we
 needed to manually hack in last time around.
 
 Martin
 

I would agree, except I have to do the whole thing differently in 2-3
weeks anyway. So it isn't worth spending tons of time on this. We'll
be looking at a jenkins lander for our next steps.

John
=:-



Re: Relation dependency

2014-01-30 Thread John Arbash Meinel

On 2014-01-30 5:49, Sameer Zeidat wrote:
 Hello,
 
 I'm relatively new to juju, and starting to write some charms.
 
 One thing I'm missing is relation dependency. Say service A can
 join a relation with B and C. I would like for juju to stop A from
 joining C if it has not joined B first. Same also, stop A from
 departing C if it hasn't departed B first.

I'm pretty sure most of this is taken care of in terms of charm
configuration. So if charm A sees C but doesn't yet see B, then it
doesn't configure its connection to C. This is outside of what Juju
specifically controls.

 
 If this isn't possible then I'd like to at least be able to run a
 hook on C, for instance, if A's relation with B changes.

Generally this is done by reflecting some sort of configuration change
that C can see. So when A is connected to B it sets a value in its
relation with C that C then reacts to.
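
As a rough sketch (not from any real charm), C's hook for its relation
with A could simply gate on a flag that A only sets once it has joined
B. The hook name and the 'backend-ready' key here are made up:

  #!/usr/bin/env python
  # hooks/a-relation-changed on service C (illustrative only)
  import subprocess
  import sys

  def relation_get(key):
      # 'relation-get' is the standard juju hook tool; prints '' if the key is unset.
      return subprocess.check_output(["relation-get", key]).strip()

  if not relation_get("backend-ready"):  # hypothetical flag set by A
      # A hasn't joined B yet; do nothing and wait for the next
      # relation-changed event.
      sys.exit(0)

  # ...configure C's connection to A here...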

 
 Does it make sense?
 
 Thanks,
 
 

Having specific examples would help clarify what you are actually
trying to accomplish and how it would be modeled with Juju's relations.

John
=:-



Re: Juju on existing VM

2014-01-22 Thread John Arbash Meinel

On 2014-01-23 10:33, Gandhi, Ibha (HP Software) wrote:
 Hi,
 
 
 
 I want to deploy an application on a specific VM and don't want to
 provision a new VM.
 
 This is something I have been doing with chef; please let me know if
 it's possible to deploy an application on a particular VM through juju
 without provisioning a new one.
 
 
 
 Thanks,
 
 -Ibha

We are currently polishing some work we call "manual provisioning",
where you have a machine which you then add into an existing
environment. The syntax is something like:
  juju add-machine ssh:user@host

I believe that work isn't very polished in the latest stable release
(1.16.5) but is quite a bit better in our current unstable snapshot
(1.17.0) and should be polished for the next stable (1.18).

John
=:-



Re: Closed network support

2014-01-06 Thread John Arbash Meinel

On 2014-01-06 5:30, Tim Penhey wrote:
 Hi folks,
 
 We have a rather nebulous work item called closed network support
 where we mention supporting proxies.
 
 Do we have a list of what proxies we need to support and where we 
 currently fall down?


I don't know of a specific list, but apt and http(s) proxies are what
has shown up in the past.

 
 Obviously there is apt. What proxy configuration or exports are we
 missing?

We need to be able to configure the proxies in the environment config,
such that they get propagated to the environment. (Preferably early on
in the cloud-init step.)

 
 HTTP proxies I guess are also needed. Do we have a document
 somewhere, or shall I start one?
 
 What proxies are used by the keyservers?

At this point, we shouldn't need one, as we should have the
cloud-archive *key* added to the cloud-init script, rather than the
key ID (which was then used to download the key).

 
 What else should we be considering?
 
 Tim
 

Charm store access I think. And setting up some sort of area so we can
actually use juju without outbound access and see how good/bad/broken
things are.

I think charms themselves can try to download remote things. For
priority charms, (eg Openstack) we want to make sure that they *can*
be installed without outside access.

John
=:-



Re: Dealing with bugs that are only in trunk.

2013-12-12 Thread John Arbash Meinel

On 2013-12-12 22:09, Curtis Hovey-Canonical wrote:
 I think the number of open critical and high bugs is inflated. The 
 open critical bugs are an incentive to release soon, but many of
 the critical bugs were really closed the moment trunk merged the
 fix.
 
 We close bugs when we know we have delivered a fix to the affected 
 users. Our practice of targeting to milestones/releases is a
 little misleading if we assume the bug was in the previous release.
 Some bugs only affect users of trunk, such as developers and
 testers.
 
 There are several bugs targeted to 1.17.0 that were introduced in 
 trunk and were fixed a few days later. I think these bugs are fix 
 released since developers and CI are no longer affected. I want to 
 close the bugs. It would then be easy to see which critical bugs
 we want to release to users.

I'm perfectly happy marking Fix Released for bugs that only existed
in trunk.

I would actually go far enough to say that most Critical bugs are
going to be of this form, because Critical is sort of "we cannot
release with this bug", which almost always means it is a regression
vs the last release.

 
 I have previously downgraded bugs to high when the fix hit trunk,
 or if the fix was delivered in a stable point release. I don't like
 this practice because it looks like critical importance was used 
 deceptively. Since we review all the open bugs to write release
 notes, we are constantly re-reading bugs that don't affect anyone
 who will get the release.
 

I think this falls more into "do we remove the milestone target if the
fix was done in trunk?" Or are you thinking you don't have to worry
about Fix Released bugs when you are going over the milestone list?

I actually like to have milestones in that it gives you a quick
indication of when the bug existed, but I realize Launchpad doesn't
have Infestations to track where the bug exists, vs where it was fixed.

John
=:-


Re: Dealing with bugs that are only in trunk.

2013-12-12 Thread John Arbash Meinel

On 2013-12-12 22:16, Curtis Hovey-Canonical wrote:
 On Thu, Dec 12, 2013 at 1:09 PM, Curtis Hovey-Canonical 
 cur...@canonical.com wrote:
 We close bugs when we know we have delivered a fix to the
 affected users. Our practice of targeting to milestones/releases
 is a little misleading if we assume the bug was in the previous
 release. Some bugs only affect users of trunk, such as developers
 and testers.
 
 Another facet of releases is that we do devel and stable releases.
 We mark bugs as fix released, but we advise users to not use the
 fix in production. Surely this is confusing. The fix is not really
 released until 1.18.0.
 

I think that is a fair point, and I see 2 possible ways of doing it.

One is that we could add a stable release series and add a bug
target for that series for things that are important fixes relative to
the last stable release.

Or we could only mark things Fix Released when it goes to a stable
release.

Although, if we have it in a dev release someone *could* get access to
it if it was critically important (vs having to build from source).
But the fact that we aren't guaranteeing they'll be able to upgrade
away from a dev release means that may not be something we can ever
point them to.

John
=:-



Re: juju bootstrap fails in Azure (BadRequest - The affinity group name is empty or was not specified.)

2013-12-09 Thread John Arbash Meinel

On 2013-12-10 4:11, max cantor wrote:
 Thanks.  Filed a report at
 https://lists.ubuntu.com/archives/juju/2013-December/003323.html
 
 max
 

I think that was just a copypaste error, as that is a link to
Robbie's email to you. I think you meant
  https://bugs.launchpad.net/bugs/1259350

Thanks for filing a bug report.

John
=:-



Re: Does juju work with a JavaScript-less mongodb?

2013-12-09 Thread John Arbash Meinel

...

 Did you check the 'mgo' source as well? I thought I remember when
 you first connect it uses eval to get date stamps to check for
 clock skew.
 
 I think you might be thinking of the state/presence package there, 
 which uses eval for pre-2.4 mongo to do that.
 

I just remember seeing it, not where. :)

John
=:-


Do we care about 1.16 compatibility for 'juju add-machine ssh:user@host'?

2013-12-08 Thread John Arbash Meinel

So I'm trying to finish the work to maintain compatibility between
trunk and 1.16 for all commands.

I think I've finished everything except for manual provisioning.
However, this rabbit hole keeps getting deeper, so I wanted to ask how
much we really care.

The issues I've run into so far:
  lp:~jameinel/juju-core/add-machine-manual-compat-1253631

1) Code factoring. We have a call to 'recordMachineInState' which
actually does a bit of work to figure out what state the machine is in
(series, arch, etc) and then directly calls the API. I have one change
which I think makes it much nicer, which is to have 1 call to gather
the information, and then a *different* function to actually record
it. (That way the gather step can be reused when we have to fall back
to direct DB manipulation).

2) Client.InjectMachine was not in 1.16, not a big deal, we can invoke
the State.InjectMachine directly (after casting some parameters
because state.MachineJob is not a params.MachineJob)

3) Client.DestroyMachines was not in 1.16. We might be adding it in
1.16.5, but for now it doesn't exist.

The code that *used* to exist in state was moved solely into the API
server. Now for "juju destroy-machine" we copied the old code into
'cmd/juju/destroymachine.go' as destroyMachines.

I *could* put that code back into State, since we now have two places
that want it.

However, it actually doesn't really do what we want anyway. The code
in question detects that we had an error after allocating a machineId,
and then tries to clean up by calling DestroyMachine. It is fairly
likely, though, that the machine agent will never actually come up.
And 1.16 *also* doesn't have ForceDestroyMachines. Which means that
while it tries to clean up, it really only sets the machine to
"agent-state: Pending, life: Dying", and it never really goes away.

It seems like a lot of busy work to end up with a machine that is in a
bad state.

4) Client.MachineConfig didn't exist in 1.16. This is probably the
biggest deal. The API actually does a lot of work. It grabs the API
and State addresses, looks up tools for the machine (in the provider),
and sets up a new Machine ID + Password combination for the agent.

The big thing is that having to reproduce all of that code seems like
a PITA, and searching for tools from the client is annoying to do again.

John
=:-


Re: juju bootstrap w/ gccgo built cli and tools

2013-12-05 Thread John Arbash Meinel

On 2013-12-06 6:24, David Cheney wrote:
 Ok, good news first.
 
 gccgo compiled tools work fine.
 
 statically linking libgo also makes the tools as self contained as
 our gc compiled ones.
 
 Bad news,
 
 jujud is 40mb; -Os or -O2 have little effect, in fact the latter makes
 it a bit larger
 
 the binaries cannot be stripped, the debug data is required for 
 operation of the program, i'm guessing gccgo's reflection
 implementation requires the debug symbols.
 
 Dave

Two tidbits from me:

1) jujud on my system is 20MB, so this is approx 2x larger. A fair
amount, but I think with compression we saw it wasn't quite as big of
a deal (4.6MB vs 5.5MB in tgz form for the last test, but that was a
smaller binary).

2) The Ubuntu security team would *really* like it if we used a system
libgo.so so that they could supply security fixes for it (like the
built in SSL libs) without having to have us rebuild all of our jujud
binaries. Which would save us some of that size (I think you said
approx 10MB was libgo.so), at a cost of having to have a libgo package
for all supported platforms. If we name the package appropriately
(libgo-1.1, etc) then we can probably even still migrate to a
different version of the runtime when it becomes useful (we just
install a different package in cloud-init).

John
=:-


Re: KVM provisioning now in trunk

2013-12-04 Thread John Arbash Meinel

...

 Also landed recently is a KVM option for the local provider.
 For the truly trivial, add container: kvm to the local 
 configuration.
 
 Is it possible to mix LXC and KVM containers under the local
 provider?
 
 I can think of one use case where this would be useful.
 
 Cheers
 
 James
 
 

You could probably use juju deploy --to lxc:1 to put an LXC instance
inside a KVM container (you can do it with lxc too, but it breaks your
env).

John
=:-


Re: New juju-mongodb package

2013-12-02 Thread John Arbash Meinel

On 2013-12-02 13:01, James Page wrote:
 Morning folks
 
 so far I have requests for:
 
 mongo (default shell)
 
 mongoexport (used in new juju backup plugin)
 
 mongodump/mongorestore (useful)
 
 I'm keen we don't bloat out the 'server' package too much - the 
 binaries are quite large due to the requirement to statically link
 v8.
 
 Which of these is core to the function of juju, and which are
 useful for debugging?  Please note also that the mongodb-clients package
 will still be available in universe so you will always be able to
 get any of the client tools that way as well (I'll probably keep
 those packages maintained to the latest point release for the
 released series).
 

Is mongodump/mongorestore going to be in the mongodb-client or
mongodb-server?

How is this going to work if we have to release a custom version of
MongoDB (2.6 instead of 2.4)?

The only things we actively use on a regular basis (AFAIK) are mongos
and mongod. However, if we set up a regular backup process, then we'll
need one of mongodump or mongoexport (I don't know what the difference
is between them).
And obviously dump isn't very useful if you can't restore, though the
point at which you are restoring is a reasonable time to bring in the
extra tools.

I don't fully understand the difference from mongoexport vs mongodump
(it looks like dump generates a binary snapshot compatible with
restore, while export/import generate text representations of the data.)

I'd *really* like us to stick with *one* of them as the recommended
method for backing up the content.

At which point, mongos, mongod and mongoexport/mongodump become
critical for the regular operation of the system (assuming we set up
backups as a regular process), and then mongo,
mongoimport/mongorestore become tools that you install when you want
to inspect/restore/probe/etc.

If you're happier splitting them, I'm personally fine with that. I
*am* a bit concerned about them being split and then we bump the
version of juju-mongodb and then we end up without the corresponding
tools available.

John
=:-


Re: New juju-mongodb package

2013-12-02 Thread John Arbash Meinel

On 2013-12-02 13:01, James Page wrote:
 Morning folks
 
 so far I have requests for:
 
 mongo (default shell)
 
 mongoexport (used in new juju backup plugin)
 
 mongodump/mongorestore (useful)
 
 I'm keen we don't bloat out the 'server' package too much - the 
 binaries are quite large due to the requirement to statically link
 v8.
 
 Which of these is core to the function of juju, and which are
 useful for debugging?  Please note also that the mongodb-clients package
 will still be available in universe so you will always be able to
 get any of the client tools that way as well (I'll probably keep
 those packages maintained to the latest point release for the
 released series).

Looking at the juju-backup script, it is intended to use both dump and
export. The export is so that we end up with JSON format for our
settings table so we can extract content from there as part of
restore (what credentials were you using, etc).

Are they both really big? If so, we can probably use "mongo --eval
STUFF" instead of mongoexport, but that would again presume that we
have a 'mongo' binary available.

I *believe* the data in the mongo dump is just BSON encoded, so it
would be possible to post-filter it with a bson aware tool to get it
into human-consumable form (rather than using mongoexport at backup time).
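
As a rough sketch of that post-filtering idea (assuming the 'bson'
module that ships with pymongo, and a settings.bson file out of a
mongodump output directory; the path and names here are illustrative):

  import bson                      # ships with pymongo
  from bson.json_util import dumps

  # Convert one mongodump output file into human-readable JSON.
  with open("dump/juju/settings.bson", "rb") as f:
      docs = bson.decode_all(f.read())

  print(dumps(docs, indent=2))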

I guess the question is, how expensive is it to include it? For Juju
core folks the cost of including it is entirely an externality, so *we*
aren't particularly motivated to keep our toolset minimal. But if there
is a use case we want to support, we're willing to help.

John
=:-


Re: New juju-mongodb package

2013-12-02 Thread John Arbash Meinel

On 2013-12-02 16:39, roger peppe wrote:
 On 2 December 2013 11:40, Ian Booth ian.bo...@canonical.com
 wrote:
 
 
 I don't fully understand the difference from mongoexport vs
 mongodump (it looks like dump generates a binary snapshot
 compatible with restore, while export/import generate text
 representations of the data.)
 
 I'd *really* like us to stick with *one* of them as the
 recommended method for backing up the content.
 
 
 Both export and dump are used by the Juju backup tool. dump is
 used to do a full bson export of the database and is required for
 a database recovery. export is used to write to json the
 environment settings document so that there is a human readable
 copy of the full environment settings contained in the backup
 tarball.
 
 ISTM that the mongo command could be used to similar effect, as
 John suggests, couldn't it? Then we would not need mongexport at
 all.
 

For what we are using, this page:

http://stackoverflow.com/questions/11255630/how-to-export-all-collection-in-mongodb

has something like:
  mongo --eval 'printjson(db.getCollectionNames())'

We can easily change the internal one to something like
db.find('session') or whatever it needs to be.
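
For the settings document specifically, something along these lines
ought to do it (the database and collection names here are from memory,
so double-check them):

  mongo --quiet juju --eval 'printjson(db.settings.find().toArray())'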

Now, we wouldn't need 'mongo' otherwise, but I think if we have to
pick a tool 'mongo' is more generally useful, and it doesn't bundle v8
so it is a smaller binary as well.

So +1 to switching to 'mongo' instead of 'mongoexport'.

John
=:-


Re: How to make juju aware of IP address changes?

2013-11-28 Thread John Arbash Meinel

On 2013-11-28 17:54, Peter Waller wrote:
 Hi All,
 
 We needed to attach an elastic IP to some of our machines, and it 
 seems juju hasn't recognized that they have new addresses.
 
 How can I force juju to notice?
 
 Thanks,
 
 - Peter

I know in the 1.16 series we have an "addressupdater" worker that
polls every 15 minutes. I'm not sure what version you're using.

If you're using 1.14 I can't really say how to trigger it to poll the
IP addresses again.

If you *are* on the 1.16 series, then definitely file a bug with
details so we can figure out what is going on.

John
=:-


Re: How to make juju aware of IP address changes?

2013-11-28 Thread John Arbash Meinel

On 2013-11-28 18:58, Peter Waller wrote:
 Our clients are 1.16. How do I check the version of the thing which
 does the addressupdatering and how do I update it?
 
 If it's the bootstrap node, how do I find the juju binary? It
 doesn't seem to be in the $PATH.
 
 

Well, if you do juju status it will tell you the version of all the
agents that are running.

Otherwise the 'jujud' binary is in /var/lib/juju/tools/*/jujud

It does multi-version with symlinks for each agent to what tool set it
is running.

John
=:-



Re: How to make juju aware of IP address changes?

2013-11-28 Thread John Arbash Meinel

On 2013-11-28 21:03, Peter Waller wrote:
 On 28 November 2013 16:54, Kapil Thangavelu 
 kapil.thangav...@canonical.com
 mailto:kapil.thangav...@canonical.com wrote:
 
 $ juju upgrade-juju is used for updating the agents within the 
 environment. What version of juju is on the agents in the
 environment?
 
 
 0.13.2.

0.13? or 1.13.2 ?

From 1.13.2 you need to:

 juju upgrade-juju --version 1.14.1
and then
 juju upgrade-juju --version 1.16.0

There were some changes from 1.13 that need to be applied by the 1.14
upgrade, and then more that need to be applied by the move to 1.16.
1.13 was a 'dev' release so we don't support all possible upgrades.

John
=:-


Re: How to make juju aware of IP address changes?

2013-11-28 Thread John Arbash Meinel

You should be able to force a version with "juju upgrade-juju
--version=$XYZ", but the client should have detected what versions
were available before setting the target version.

It is possible you are running into the problem of jumping too many
versions at once.

John
=:-

On 2013-11-28 21:13, Peter Waller wrote:
 I've done `juju upgrade-juju` and it seems to not be working
 correctly. For all of my machines I'm seeing something like this in
 my logs every few seconds. What now?
 
 machine-44:2013-11-28 17:09:59 INFO juju runner.go:253 worker:
 start machiner machine-44:2013-11-28 17:09:59 INFO juju
 runner.go:253 worker: start upgrader machine-44:2013-11-28
 17:09:59 INFO juju runner.go:253 worker: start deployer 
 machine-44:2013-11-28 17:09:59 INFO juju.worker.deployer
 deployer.go:103 checking unit foo-dev/0 machine-44:2013-11-28
 17:09:59 INFO juju.worker.deployer deployer.go:103 checking unit
 bar-dev/0 machine-44:2013-11-28 17:10:00 INFO
 juju.worker.machiner machiner.go:51 machine-44 started 
 machine-44:2013-11-28 17:10:00 INFO juju.worker.deployer
 deployer.go:103 checking unit baz-dev/0 machine-44:2013-11-28
 17:10:01 INFO juju.worker.deployer deployer.go:103 checking unit
 blarg-dev/0 machine-44:2013-11-28 17:10:01 INFO
 juju.worker.deployer deployer.go:103 checking unit baf-dev/0 
 machine-44:2013-11-28 17:10:01 INFO juju.worker.deployer
 deployer.go:103 checking unit yoyo-dev/0 machine-44:2013-11-28
 17:10:01 ERROR juju runner.go:200 worker: fatal upgrader: no
 matching tools available machine-44:2013-11-28 17:10:01 DEBUG juju
 runner.go:234 worker: killing machiner machine-44:2013-11-28
 17:10:01 DEBUG juju runner.go:234 worker: killing deployer 
 machine-44:2013-11-28 17:10:01 ERROR juju runner.go:211 worker:
 exited api: no matching tools available machine-44:2013-11-28
 17:10:01 INFO juju runner.go:245 worker: restarting api in 3s 
 machine-44:2013-11-28 17:10:04 INFO juju runner.go:253 worker:
 start api
 
 
 
 On 28 November 2013 17:03, Peter Waller pe...@scraperwiki.com 
 mailto:pe...@scraperwiki.com wrote:
 
 On 28 November 2013 16:54, Kapil Thangavelu 
 kapil.thangav...@canonical.com 
 mailto:kapil.thangav...@canonical.com wrote:
 
 $ juju upgrade-juju is used for updating the agents within the 
 environment. What version of juju is on the agents in the 
 environment?
 
 
 0.13.2.
 
 



Re: How to make juju aware of IP address changes?

2013-11-28 Thread John Arbash Meinel

On 2013-11-28 21:28, Mark Shuttleworth wrote:
 On 28/11/13 17:25, Peter Waller wrote:
 Is it safe to run that whilst my agents appear to be spinning as
 I described in my previous e-mail?
 
 A very good question.
 
 If updates are required to be applied in a sequence, surely Juju
 would know that better than any given user of Juju, and either
 instruct the user accordingly or JFDTRT?
 
 Mark
 

In trunk we prevent you from upgrading past unsupported boundaries.

John
=:-



Re: Connection performance

2013-11-24 Thread John Arbash Meinel

...

 5) So if we found a way to pipeline requests, and get rid of gaps, 
 instead of taking 10 RTT, we could potentially get down to 5 RTT.
 (2 gaps, calling Upgrade + Login + SetEnvironmentConstraints
 immediately rather than waiting for responses, not waiting for
 Close).
 
 Is it worth pushing further on this? I certainly think the first
 thing to do is start caching the API, as we know we want to do
 that, but we potentially can speed it up *another* 2x by doing
 better pipelining.

I was curious, so I threw together a Python script that just uses
socket.connect and ssl.wrap_socket, and then replays the bytes on the
wire that I captured with wireshark. Some interesting results:

1) s = socket.connect((address, port)); sslSock = ssl.wrap_socket(s)
is the same speed as
  s = socket.socket(); sslSock = ssl.wrap_socket(s); sslSock.connect()

Consistently 800ms for me (w/ 222ms average ping). (I've seen spikes
of 2s to ssl connect.)
Anyway, it just means that it really does require TCP to round trip
the connection before SSL starts talking.


2) It *is* possible to pipeline the HTTP GET/Upgrade request and the
Login request. Such that I can do:

sslSock.sendall((HTTPUPGRADE % (opts.address, opts.port)) + LOGIN)

And everything works fine.

However, you cannot send the LOGIN request and a PING/SETCONSTRAINTS
request without a pause in the middle.

I ran into some really strange behavior here. If I did:
   sslSock.sendall(HTTPUPGRADE)
   sslSock.sendall(LOGIN)
   sslSock.sendall(PING)
   sslSock.sendall(SETCONSTRAINTS)

Then it fails unless I wait a really long time after login (200+ms).
My best guess is the LOGIN bytes don't actually go out the pipe very
quickly when they are sent as two separate sendall calls, which ends up
putting the PING into approximately the same time slot as the LOGIN.
(And if you send LOGIN+PING together it fails because go hasn't changed
the srvRoot for the connection when it tries to serve the PING request.)

However, I *can* do:
   sslSock.sendall(HTTPUPGRADE + LOGIN)
   sslSock.sendall(PING + SETCONSTRAINTS + CLOSE)

I tried a small sleep() between the two requests but it isn't actually
required. Since whatever is going on does cause the second set of
packets to get sent at a later time.

Maybe this is an SSL thing? It has to finish encrypting whatever bytes
you give it, so that partially frames the requests?

3) ssl recv() has a strange behavior in Python, whereby the first call
always seems to return 1 byte of the content, and the next call
returns the rest of the body for a given request. recv() does seem to
partition the response based on the actual response frame from the
server, even when you pipeline everything.


4) Pipelining does help, quite measurably:

$ time juju set-constraints -e amz-jam mem=2G
real0m2.164s


# Issue each one-by-one and wait for a response
$ time py test_set_constraints.py
connecting to 184.73.69.243:17070
0.222s 0.222s connected
0.934s 0.713s ssl connected to ('184.73.69.243', 17070) ...
1.157s 0.222s HTTP upgraded to WebSocket: HTTP/1.1 101 Switch...
1.512s 0.356s WebSocket Login: \x81\x1d{RequestId:1,Response:{}}
1.734s 0.222s WebSocket Ping: \x81\x1d{RequestId:2,Response:{}}
1.960s 0.226s WebSocket SetConstraints: \x81\x1d{RequestId:3,Res...
2.182s 0.222s WebSocket Close: \x88\x02\x03\xe8


# sendall(HTTPUPGRADE + LOGIN); sendall(PING+SETCONSTRAINTS+CLOSE)
$ time py test_set_constraints.py --pipelined
0.843s 0.843s ssl connected
1.067s 0.224s response: H
1.067s 0.000s response: TTP/1.1 101 Switching ...
1.192s 0.125s response: \x81
1.192s 0.000s response: \x1d{RequestId:1,Response:{}}
1.289s 0.096s response: \x81
1.289s 0.000s response: \x1d{RequestId:2,Response:{}}
1.292s 0.003s response: \x81
1.292s 0.000s response: \x1d{RequestId:3,Response:{}}
1.292s 0.000s response: \x88
1.292s 0.000s response: \x02\x03\xe8

So that is 1.3s to do everything, rather than closer to 2s. If you
subtract out the 0.8s for SSL handshake that we can't get rid of, it
is 0.5s vs 1.2s :).

5) Ping seems to have a negligible real-world effect. Taking it out of
the Pipelined version might take us from 129ms to 128ms.
Certainly it isn't something I can measure given the variability in
real-world RTT.

6) Is this worth doing? I'm not really sure. It would be nice for
commands that only have 1 thing to do (SetConstraints) to make them
take half the time.
I think we already have api.Open() that connects and logs in, so if it
could collapse the HTTP GET with the Login bytes that could shave at
least one Round trip off.

Something to think about, at least.
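
For anyone who wants to poke at this themselves, the skeleton of the
script looks roughly like the following (a from-memory sketch, not the
exact script; the LOGIN/PING/SETCONSTRAINTS values are the raw websocket
frames captured with wireshark, which I haven't reproduced here):

  import socket
  import ssl
  import time

  ADDRESS, PORT = "184.73.69.243", 17070

  # Standard RFC 6455 upgrade request; the key value doesn't matter for timing.
  HTTPUPGRADE = (
      "GET / HTTP/1.1\r\n"
      "Host: %s:%d\r\n"
      "Upgrade: websocket\r\n"
      "Connection: Upgrade\r\n"
      "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==\r\n"
      "Sec-WebSocket-Version: 13\r\n\r\n"
  )

  # Raw websocket frames captured off the wire (elided here).
  LOGIN = PING = SETCONSTRAINTS = CLOSE = b""

  start = time.time()
  sock = socket.create_connection((ADDRESS, PORT))
  sslSock = ssl.wrap_socket(sock)
  print("%.3fs ssl connected" % (time.time() - start))

  # Pipeline the HTTP upgrade and the Login frame in one write...
  sslSock.sendall((HTTPUPGRADE % (ADDRESS, PORT)).encode() + LOGIN)
  # ...then everything else in a second write, without waiting for replies.
  sslSock.sendall(PING + SETCONSTRAINTS + CLOSE)

  while True:
      data = sslSock.recv(65536)
      if not data:
          break
      print("%.3fs response: %r" % (time.time() - start, data))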

John
=:-


Positional vs optional arguments vs consistency

2013-11-22 Thread John Arbash Meinel

I'd like to bring up the discussion of what guidelines we're using
when we design CLI commands. There are 2 commands that just came up
that I'd like to use as examples, and see what we think of as a team.

1) juju set-constraints [-s service] [key=value]
vs juju get-constraints [service]

In this case service *is* optional (if you don't supply it you get the
environment constraints). It is supplied as a positional argument to
get-constraints IMO because there was nothing else to conflict with.
But it would potentially be confusing to parse the command line for
set-constraints (is this a constraint we don't know about, or a service
name?), so we supply the -s flag.

I'd like to make the case that "juju get-constraints -s service" would
actually fit better than a positional argument, mostly because then it
is consistent with "juju set-constraints".

2) juju destroy-environment [envname]

This one is probably a bit more controversial, and my opinion is
slightly based on how I actually use juju.

The main thing here is every command except for juju
destroy-environment takes the environment as a -e envname
parameter. Now *I* purposefully don't set a default environment (nor
use switch or JUJU_ENV). Instead I pass the environment I want the
action to occur to each command. (juju bootstrap -e amz-jam; juju
deploy -e amz-jam foobar).

Am I just unique in this? I can see that if you *aren't* used to
passing the '-e' flag to commands, then you haven't really
internalized "to pass an environment I do -e ENV", and only because I
have does it seem backwards that I *wouldn't* pass -e to juju
destroy-environment.

I can understand that required arguments should be positional, not
flags. However, I also feel that this breaks the argument for
consistency of parameters across commands.

Thoughts?

John
=:-


Connection performance

2013-11-22 Thread John Arbash Meinel

I was curious, so I did some benchmarking of CLI performance. The good
news is that we're doing a lot better in Trunk than in 1.16, and we
have a clear win coming up.

For testing I just did:
  time for i in `seq 10`; do juju set-constraints mem=4G; done

143s juju-1.16
 69s juju-trunk
 23s juju-trunk w/ API caching

For API caching I just filled in the user, password, state-servers,
and ca-cert fields in the ~/.juju/environments/ENV.jenv.

So the great news is that we're 2x faster than direct DB access today,
and the even better news is that we can be 3x faster than we are
*today* if we start caching the API parameters.

These measurements were taken with an average 244ms ping time to the
instance. Which means with API caching we're at about 10 round trips.

SSL by itself takes 3-4 round trips. Wireshark says that there are
about 43 frames involved.

I managed to grab the server.pem and debug the traffic a bit.
Unfortunately the Wireshark in Precise (1.6.7) doesn't support
WebSocket traffic, but I was able to get an updated version.

Anyway, the summary breakdown is:

Time (ms)   Action
    0       TCP Connect
  222       ACK, TLS Client Hello
  444       Server Hello, Certificate, Server Done, Client Key
            Exchange, Change Cipher Spec
*** 2 RTT here
  959       Change Cipher Spec, Client GET / Upgrade: Websocket
 1183       HTTP Switching Protocols, Login request
*** 2 RTT
 1530       Login Response {}, Request Pinger.Ping(), Request
            SetEnvironmentConstraints
 1767       Pinger response
 1991       SetEnvironmentConstraints Response, WebsocketClose()
 2216       Websocket Connection Close Response, TCP Close

So a couple of things to note:

1) We have 7 RTT before we have logged in.
1 TCP Connect,
3 TLS including a gap
1 Websocket upgrade request
2 Login including a gap

   I don't know why we have those 2 gaps. It doesn't seem like it
   should take an extra 200ms to setup TLS or Login. It is possible I
   have dropped packets or something, but I don't think I do.

   We also lose a round trip waiting for the Websocket response to come
   back. Is there any reason we can't just pipeline it and assume the
   Websocket request is going to succeed?

   Potentially we can shave 3 RTT here. If we can understand why there
   are 2 gaps, and we can pipeline the first Login request right after
   the please upgrade to a Websocket.

2) Why are we calling Pinger.Ping() and why does that seem to trigger
an extra round trip? We seem to call Ping and
SetEnvironmentConstraints at the same time (1.530516 for Ping, and
1.530686 for SetEnvironmentConstraints), but the response for Ping
comes a full RTT before the response for SetEnvironmentConstraints.

I don't think we need the client to Ping. I think we were doing that
as part of "give me an updated view of all the agents", but we don't
actually care about that here.
Alternatively, we *do* want to Ping for all connections to make sure
they stay alive, but it doesn't seem like we need to *start* with a
Ping, do we? (If it didn't delay the SetEnvironmentConstraints
response, I wouldn't care)

3) Similarly to Login, can't we just pipeline our
SetEnvironmentConstraints request? We don't actually seem to need to
wait for Login to finish. There is no Response content, so there isn't
anything like a Token that we need to do the next step.

4) We seem to be asking the Websocket to Close, and then wait for it
to actually be closed. Why are we waiting?

5) So if we found a way to pipeline requests, and get rid of gaps,
instead of taking 10 RTT, we could potentially get down to 5 RTT. (2
gaps, calling Upgrade + Login + SetEnvironmentConstraints immediately
rather than waiting for responses, not waiting for Close).

Is it worth pushing further on this? I certainly think the first thing
to do is start caching the API, as we know we want to do that, but we
potentially can speed it up *another* 2x by doing better pipelining.

John
=:-


Re: API compatibility policy and practices between juju versions

2013-11-20 Thread John Arbash Meinel

On 2013-11-20 20:15, Curtis Hovey-Canonical wrote:
 On Tue, Nov 19, 2013 at 8:43 PM, Tim Penhey
 tim.pen...@canonical.com wrote:
 It was my understanding that the api server needs to be at least
 as advanced as any client.
 
 This means that a 1.18 server should be able to support a 1.16.x
 client.
 
 However we don't support 1.18 clients on a 1.16.x server.
 
 Does this change your thinking?
 
 Newer clients with older servers is going to happen. How will
 devops upgrade their production deployments?
 
 A. Nominate one deployment machine to go to 1.18. From another
 machine use juju 1.16 to call upgrade-juju. Run juju status on both
 machines to watch agent switch to the next version, or if they
 fail, intervene with the appropriate client? Scripting this is
 awkward (the kindest thing I can say).
 
 B. We package co-installable juju clients. The deployment machine 
 installs juju1.16 and juju1.18. Devops and scripts take care to 
 specify the client version? Devops remove the unused clients when
 they remember. I am not sure how this would work with Windows. I am
 certain it wont work with OS X because homebrew only builds the
 most recent stable release.
 
 Our current promise is that you can always upgrade to the next 
 version. I think we need to ensure this case works from 16 and 18 
 clients.
 

Yes. I think that is one of the primary caveats of "we don't
guarantee all cross-version compatibility": we *do* guarantee that
upgrade works. (It is the one blocker for making a .ODD release into a
.EVEN, which has delayed 1.14 and 1.16, IIRC.)

John
=:-



Re: Automatic tools-upload/sync during bootstrap

2013-11-18 Thread John Arbash Meinel

On 2013-11-18 10:47, Andrew Wilkins wrote:
 All,
 
 I'm in the process of landing the following MP which removes the 
 --source flag from juju bootstrap; this should go into the
 release notes. As discussed, you can still use --source with
 sync-tools. 
 https://code.launchpad.net/~axwalk/juju-core/lp1236691-null-provider-default-series-take2/+merge/190032

  In case you missed other discussions: the Environ interface's
 Bootstrap method no longer takes a set of possible tools, and is
 now responsible for locating tools itself. The primary driver for
 this is manual provisioning, where the series of the machine
 dictates the tools, and not the other way around like with the
 other providers.
 
 Cheers, Andrew

So *if* we are going to get rid of --source from bootstrap, I would
propose that we might want to get rid of automatic sync completely. I
suppose if streams.canonical.com is official enough that we can just
use it, maybe we're fine.

Specifically we had --source for the MaaS case where they are semi
likely to not have outbound access. Which is also why we defaulted to
automatic syncing.

John
=:-




Re: Automatic tools-upload/sync during bootstrap

2013-11-18 Thread John Arbash Meinel

On 2013-10-11 10:19, Ian Booth wrote:
 
 So I'd like to see what people think of this: - Keep juju
 bootstrap --upload-tools, but make that the only time we attempt
 to upload during juju bootstrap. There will not be any
 automatic upload if tools aren't found and syncing fails.
 
 The auto upload was added to avoid the less than user friendly
 scenario whereby people running from source could not bootstrap
 without specifying --upload-tools. My opinion is that the current
 functionality is warranted because it makes bootstrap Just Work and
 removes a common failure path. It also unifies the bootstrap
 command for dev vs prod and consistency is good.

Actually that isn't true. It was added to help the MaaS case where
every environment needs to have the tools bootstrapped. Then Tim
noticed that in the local provider he could cheat and auto-supply
--upload-tools and make it find the jujud next to the juju binary. And
then *you* noticed: hey, why aren't we just always supplying
--upload-tools when we can't find the tools we want elsewhere?

Personally, I've seen mixed results: cases where bootstrap actually
fails because it can't get the tools (the MaaS case it was trying to
solve, where there is no outbound network access). What *is* helped is
us developers, but I think it helps in a fashion that is actually an
ultimate negative for us.

If we want 'juju bootstrap' to be grabbing the jujud next to it, then
that should *be the way* that we distribute tools. Just having it so
it is easier to test the code you're working on is not a good answer,
when it is *not* the way that we have people use Juju in production.

John
=:-


 
 - Remove the option to specify a tools source. If you really want
 to do that, use juju sync-tools.
 
 
 You mean tools-url in config? We need this to allow private clouds
 (and anyone else) to serve the tools using an arbitrary http server
 (or from a shared directory). In any case, sync-tools only copies
 to the private storage (for ec2 and openstack at least)  and such
 tools are not generally available to anyone else. The tools-url is
 intended to be configured with a shared public location from which
 to get tools. So for the above reasons, we need to keep this
 option.
 
 



Re: High Availability command line interface - future plans.

2013-11-08 Thread John Arbash Meinel

On 2013-11-08 14:15, roger peppe wrote:
 On 8 November 2013 08:47, Mark Canonical Ramm-Christensen 
 mark.ramm-christen...@canonical.com wrote:
 I have a few high level thoughts on all of this, but the key
 thing I want to say is that we need to get a meeting setup next
 week for the solution to get hammered out.
 
 First, conceptually, I don't believe the user model needs to
 match the implementation model.  That way lies madness -- users
 care about the things they care about and should not have to
 understand how the system works to get something basic done.
 See: 
 http://www.amazon.com/The-Inmates-Are-Running-Asylum/dp/0672326140
 for reasons why I call this madness.
 
 For that reason I think the path of adding a --jobs flag to
 add-machine is not a move forward.  It is exposing implementation
 detail to users and forcing them into a more complex conceptual
 model.
 
 Second, we don't have to boil the ocean all at once. An
 ensure-ha command that sets up additional server nodes is
 better than what we have now -- nothing.  Nate is right, the box
 need not be black, we could have an juju ha-status command that
 just shows the state of HA.   This is fundamentally different
 than changing the behavior and meaning of add-machines to know 
 about juju jobs and agents and forcing folks to think about
 that.
 
 Third, I think it is possible to chart a course from ensure-ha
 as a shortcut (implemented first) to the type of syntax and
 feature set that Kapil is talking about.  And let's not kid
 ourselves, there are a bunch of new features in that proposal:
 
 * Namespaces for services
 * support for subordinates to state services
 * logging changes
 * lifecycle events on juju jobs
 * special casing the removal of services that would kill the environment
 * special casing the stats to know about HA and warn for even state
   server nodes
 
 I think we will be adding a new concept and some new syntax when
 we add HA to juju -- so the idea is just to make it easier for
 users to understand, and to allow a path forward to something
 like what Kapil suggests in the future.   And I'm pretty solidly
 convinced that there is an incremental path forward.
 
 Fourth, the spelling ensure-ha is probably not a very good
 idea, the cracks in that system (like taking a -n flag, and
 dealing with failed machines) are already apparent.
 
 I think something like Nick's proposal for add-manager would be
 better. Though I don't think that's quite right either.
 
 So, I propose we add one new idea for users -- a state-server.
 
 then you'd have:
 
 juju management --info
 juju management --add
 juju management --add --to 3
 juju management --remove-from
 
 This seems like a reasonable approach in principle (it's
 essentially isomorphic to the --jobs approach AFAICS which makes me
 happy).
 
 I have to say that I'm not keen on using flags to switch the basic
 behaviour of a command. The interaction between the flags can then
 become non-obvious (for example a --constraints flag might be
 appropriate with --add but not --remove-from).
 
 Ah, but your next message seems to go along with that.
 
 So, to couch your proposal in terms that are consistent with the 
 rest of the juju commands, here's how I see it could look, in terms
 of possible help output from the commands:
 
 usage: juju add-management [options]
 purpose: Add Juju management functionality to a machine, or start a
 new machine with management functionality. Any Juju machine can
 potentially participate as a Juju manager - this command adds a new
 such manager. Note that there should always be an odd number of
 active management machines, otherwise the Juju environment is
 potentially vulnerable to network partitioning. If a management
 machine fails, a new one should be started to replace it.

I would probably avoid putting such an emphasis on "any machine can be
a manager machine". But that is my personal opinion. (If you want HA
you probably want it on dedicated nodes.)

 
 options:
 --constraints (= )
     additional machine constraints. Ignored if --to is specified.
 -e, --environment (= local)
     juju environment to operate in
 --series (= )
     the Ubuntu series of the new machine. Ignored if --to is specified.
 --to (=)
     the id of the machine to add management to. If this is not
     specified, a new machine is provisioned.
 
 usage: juju remove-management [options] machine-id
 purpose: Remove Juju management functionality from the machine with
 the given id. The machine itself is not destroyed. Note that if there
 are less than three management machines remaining, the operation of
 the Juju environment will be vulnerable to the failure of a single
 machine. It is not possible to remove the last management machine.
 

I would probably also remove the machine if the only thing on it was
the management. Certainly that is how people want us to do juju
remove-unit.


 options:
 -e, --environment (= local)
     juju environment to operate in
 
 

Last round of Scale testing

2013-11-07 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I did one last round of scale testing before handing this over to Dave.

I was using trunk @2037 with 2 patches (one to set GOMAXPROCS, and one
to allow juju add-unit -n X --to Y.)

I started with:

juju bootstrap --constraints="mem=7G cpu-cores=4"
which started the state server on an m1.xlarge (4CPU 15GB RAM)

then I did

juju deploy -n 15 ubuntu --constraints="mem=7G cpu-cores=2"
which creates 15 m1.larges.

In this case, 1 of those machines just never came up (ended up showing
as Terminated as soon as I started it).

I increased the total number of agents over 1500:

for i in `seq 15`; do juju add-unit ubuntu -n 100 --to $i & done; time wait
2min50s

I then brought the number of agents up over 4000 with:
for i in `seq 15`; do juju add-unit ubuntu -n 200 --to $i & done; time wait
9min26s

It took another 3 minutes (12 total) for the log to become quiet
indicating all agents had reached steady state.


I then triggered sudo restart jujud-machine-0

It took 6m41s before the log became quiet again. Which is actually
pretty good.

I then added another 3000 agents with:
for i in `seq 15`; do juju add-unit ubuntu -n 200 --to $i & done; time wait
19min8s

It took 27 minutes for all units to come up.
We ended up with 2.6GB resident on the API server.

juju status shows 7029 started agents, 502 pending (the ones on the
dead machine), 0 down, and lsof shows 7048 open file handles on jujud.


I then tried adding nrpe-external-master to all the units.
juju deploy nrpe-external-master
juju add-relation nrpe-external-master ubuntu

This didn't take very long, but at the end of it things seem jammed.
The API server responds when another agent tries to connect to it
(tested by restarting a unit agent on another machine and seeing the
login request locally). However, the Login request never returns and
there is 0 CPU being consumed on the machine.

So I restarted jujud again restart jujud-machine-0

I see it hit 14GB of resident memory (and this machine has no swap)
and jujud+mongodb are able to hit 400% CPU, but the agents show as
started again.

But that says 7000 agents + nrpe-external-master has killed the
machine. But at something smaller than that we'd probably be ok.

The big concern is that I do see the CPU go all the way down to 5%
while there are requests still waiting to be processed.

Some interesting bits. While spinning up the 4000 agents, this is the
summary of API requests made:
...
  1724 Type:Machiner,Request:Life
  4214 Type:Deployer,Request:APIAddresses
  4214 Type:Deployer,Request:Life
  4214 Type:Deployer,Request:SetPasswords
  4214 Type:Deployer,Request:StateAddresses
  4214 Type:StringsWatcher,Id:6,Request:Stop
  4214 Type:Uniter,Request:CharmArchiveSha256
  4214 Type:Uniter,Request:CharmArchiveURL
  4214 Type:Uniter,Request:CurrentEnvironUUID
  4214 Type:Uniter,Request:ProviderType
  4214 Type:Uniter,Request:SetCharmURL
  4214 Type:Uniter,Request:SetPrivateAddress
  4214 Type:Uniter,Request:SetPublicAddress
  4214 Type:Uniter,Request:WatchConfigSettings
  4215 Type:NotifyWatcher,Id:7,Request:Next
  4226 Type:NotifyWatcher,Id:2,Request:Next
  4228 Type:Agent,Request:SetPasswords
  4228 Type:StringsWatcher,Id:6,Request:Next
  4229 Type:Agent,Request:GetEntities
  4229 Type:Logger,Request:WatchLoggingConfig
  4229 Type:Upgrader,Request:SetTools
  4229 Type:Upgrader,Request:WatchAPIVersion
  4230 Type:StringsWatcher,Id:8,Request:Next
  4230 Type:Upgrader,Request:DesiredVersion
  4243 Type:NotifyWatcher,Id:3,Request:Next
  4260 Type:Admin,Request:Login
  8428 Type:Uniter,Request:SetStatus
  8428 Type:Uniter,Request:Watch
  8428 Type:Uniter,Request:WatchServiceRelations
  8459 Type:Logger,Request:LoggingConfig
  8780 Type:NotifyWatcher,Id:4,Request:Next
 12642 Type:Uniter,Request:APIAddresses
 12642 Type:Uniter,Request:PrivateAddress
 12642 Type:Uniter,Request:PublicAddress
 13003 Type:Uniter,Request:Resolved
257797 Type:NotifyWatcher,Id:5,Request:Next
268730 Type:Uniter,Request:CharmURL
294375 Type:Uniter,Request:Life

So you can see we're doing better than N^2 (because I'm using the
-n100), but we are still spending a lot of time in this portion.
Everything else is quite constant time (1-3 calls per agent).

on restarting jujud (triggering each agent to reconnect) the log looks
like:
...
494 Type:NotifyWatcher,Id:7,Request:Next
508 Type:StringsWatcher,Id:6,Request:Next
   3721 Type:NotifyWatcher,Id:6,Request:Next
   3721 Type:StringsWatcher,Id:7,Request:Next
   4214 Type:Uniter,Request:APIAddresses
   4214 Type:Uniter,Request:CurrentEnvironUUID
   4214 Type:Uniter,Request:PrivateAddress
   4214 Type:Uniter,Request:ProviderType
   4214 Type:Uniter,Request:PublicAddress
   4214 Type:Uniter,Request:SetPrivateAddress
   4214 Type:Uniter,Request:SetPublicAddress
   4214 Type:Uniter,Request:SetStatus
   4214 Type:Uniter,Request:WatchConfigSettings
   4214 Type:Uniter,Request:WatchServiceRelations
   4228 Type:NotifyWatcher,Id:5,Request:Next
   4229 

Re: Last round of Scale testing

2013-11-07 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 2013-11-07 17:18, John Arbash Meinel wrote:
 I did one last round of scale testing before handing this over to
 Dave.
 

...

 But that says 7000 agents + nrpe-external-master has killed the
 machine. But at something smaller than that we'd probably be ok.
 

I did try an m1.xlarge with 1000 agents connected (across 10 actual
machines). And then I added nrpe-external-master and added the relation.

We pretty steadily hit 200% CPU (often up over 300%). And then I saw
memory grow over 14GB and the process was killed by the OOM killer.

So something about having a subordinate is spiking memory enough that
it is killing jujud, even with only 1000 units.

Definitely something I think we'll want to investigate more closely.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJ7wXMACgkQJdeBCYSNAAMPKgCcCJe9CzWKpgYkYCXMuB5w/yQP
IAwAnieCej6geikZbq+rO55I1i7wsYjE
=U5TT
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: API work

2013-11-06 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

...
 
 Long term, I'd like to deprecate the root as a way of talking to
 the API and mandate (for instance) /api as a URL path. Short term,
 we can't do that, but we *can* treat, for instance /charm
 differently and serve PUT (and potentially GET requests) on that. 
 It's just a matter of adding a ServeMux  in apiserver.Server.run.
 
 The other issue we'd need to deal with is authentication. I'd
 suggest we would specify that the URL parameters should include
 the user name and password. If we didn't want to do that, there
 are alternatives which I won't go into here as they're more
 complex.
 
 So to summarise, the URL might look something like:
 
 https://my.api.server/charm?user=admin&password=adminpassword&charmurl=cs:precise/wordpress

  We'd do a PUT request on the above URL, sending the charm bundle.
 We may well want to add a secure hash of the charm bundle too, so
 that we can tell if the user accidentally drops the connection too 
 early.
 
 I don't think that charms are generally big enough to warrant
 adding resumable uploads, at this point anyway.
 
 How does that sound?
 

Given all the stuff that you describe that actually needs to be done,
I don't quite see what we actually *gain* over just putting it into
the RPC. We don't have to do auth again, we don't have to have the
actual client connection suddenly switch from talking in one spot to
talking in another, etc.

I would be perfectly happy with PUT if we were already a RESTful API,
but it seems a bit strange to just tack that on, and will be a
one-more-special case that we run into when trying to debug, etc.
(logs will likely be different, working in the code will have to think
about multiple paths, etc.)

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJ6S1gACgkQJdeBCYSNAANS5wCgjgrvK9NYNsFrqwK4EpWZor8H
KUIAoK8y9qmw5OYaFjusM5qh3CzhIfyz
=oOmi
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: API work

2013-11-06 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

...

 
 I would be perfectly happy with PUT if we were already a RESTful
 API, but it seems a bit strange to just tack that on, and will be
 a one-more-special case that we run into when trying to debug,
 etc. (logs will likely be different, working in the code will
 have to think about multiple paths, etc.)
 
 The reason is that if you've got a large charm (and we'll probably
 end up uploading tools through this mechanism at some point) PUT
 streams the bytes nicely, but we *really* don't want a single RPC
 containing the entire charm as an arbitrarily large blob, so we'd
 have to add quite a bit more mechanism to stream the data with
 RPC, and even then you have to work out how big your data packets
 are, and you incur round trip latency for as many packets as you
 send - this would make charm upload quite a bit slower.
 
 I suspect that the amount of work outlined above is actually quite
 a bit less than would need to be done to implement charm streaming
 uploads over the RPC interface.
 

The chunked implementation in golang just uses io.Copy which reads and
writes everything in 32kB chunks. We could just as easily do the same
thing, or just make them 1MB chunks or whatever. We can just as easily
pipeline the RPC requests which is what is being done with
transfer-encoding: chunked.

I think your statement about how big your data packets are etc is
making it more complex than exists in reality. It would be nice if
things did that, but they clearly don't.
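
To be concrete about how little magic there is on the HTTP side: the server
end of such a PUT endpoint is basically an io.Copy of the request body. A
standalone sketch (the /charm path, the password check and the destination
file are all made up here, not a proposal for the actual handler):

package main

import (
    "fmt"
    "io"
    "net/http"
    "os"
)

// charmHandler is a sketch only: the /charm path, the query-parameter
// auth and the destination file are invented for illustration.
func charmHandler(w http.ResponseWriter, r *http.Request) {
    if r.Method != "PUT" {
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
        return
    }
    if r.URL.Query().Get("password") != "sekrit" { // placeholder auth check
        http.Error(w, "unauthorized", http.StatusUnauthorized)
        return
    }
    f, err := os.Create("/tmp/incoming.charm") // placeholder destination
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer f.Close()
    // io.Copy streams the body through a small (32kB) buffer, so the
    // whole charm never sits in memory as one blob.
    n, err := io.Copy(f, r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    fmt.Fprintf(w, "stored %d bytes\n", n)
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/charm", charmHandler)
    http.ListenAndServe(":8080", mux) // plain HTTP, just for the sketch
}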

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJ6UlsACgkQJdeBCYSNAAPgDwCgqjvid7hAv/IFpl1rgLgUHYAz
mkYAn0wgh/2S/Yp33pFKIG8Rzf0YbPPH
=gcI1
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: API work

2013-11-05 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

...

 api-endpoints: (I don't recall why we put this in the list at all? 
 What needs to be done here?)

juju api-endpoints is a CLI command, seems like it should use
APIAddresses via the API so that it can return all possible API servers.

 
 upgrade-juju: this requires get/set-environment
 
 deploy, upgrade-charm: These require some way of uploading charms. 
 By the way, what's our maximum size for charms, and for RPC 
 messages?

AFAIK there isn't a maximum size for charms (we know there are ones
that are 100s of MB).
RPC doesn't specifically have a maximum size (AFAICT). Though it
doesn't really do streaming. So user experience uploading 100MB of a
charm is going to be a bit poor.

I know Roger had the idea of switching to a plain HTTP PUT/POST for
uploading charms. However, I don't know that it is actually *easier*
to switch out of the RPC object and do something else.

We could do things like upload in chunks as part of the RPC structure
itself (so each RPC.put is a 1MB upload sort of thing).
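
To sketch what "each RPC.put is a 1MB upload" could look like in practice
(the chunk type and the send call below are invented for illustration,
nothing like this exists in the codebase today):

package main

import (
    "fmt"
    "io"
    "strings"
)

// chunk is a hypothetical RPC payload; neither the type nor the request
// it would map to exists in juju today.
type chunk struct {
    Offset int64
    Data   []byte
}

// sendChunk stands in for one RPC round trip; here it just reports what
// it would send.
func sendChunk(c chunk) error {
    fmt.Printf("would send %d bytes at offset %d\n", len(c.Data), c.Offset)
    return nil
}

// uploadInChunks reads from r and issues one RPC call per chunkSize bytes,
// so no single request ever holds the whole charm in memory.
func uploadInChunks(r io.Reader, chunkSize int) error {
    buf := make([]byte, chunkSize)
    var offset int64
    for {
        n, err := r.Read(buf)
        if n > 0 {
            if serr := sendChunk(chunk{Offset: offset, Data: buf[:n]}); serr != nil {
                return serr
            }
            offset += int64(n)
        }
        if err == io.EOF {
            return nil
        }
        if err != nil {
            return err
        }
    }
}

func main() {
    // 1MB chunks in practice; a few bytes here so the demo output is visible.
    uploadInChunks(strings.NewReader("pretend this is a charm bundle"), 8)
}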

 
 status: haven't looked into in detail, looks like a big task.
 don't forget filters :)

Well, we don't support filters today, so we don't have to do all the
design in the first pass.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJ4w1IACgkQJdeBCYSNAAPz8gCgtceZL3lPy1+/36oX3sw5Ae8h
i0IAoKrz5d/uIAB9AusM6P3mExNZ5D/s
=Pbh7
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Scale Testing: Now with profiling!

2013-10-31 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

So I managed to instrument a jujud with both CPU and Mem profiling
dumps. I then brought up 1000 units and did some poking around.
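
For reference, the instrumentation itself is nothing exotic. One way to get
equivalent dumps (not necessarily exactly what I patched in) is the stock
net/http/pprof handler on a side port:

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* on the default mux
)

func main() {
    // Profiles can then be fetched from
    //   http://localhost:6060/debug/pprof/profile  (30s CPU profile)
    //   http://localhost:6060/debug/pprof/heap     (heap profile)
    // and fed to go tool pprof.
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()
    select {} // stand-in for the real agent's work
}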

The results were actually pretty enlightening.


1) Guess what the #1 CPU time was. I know I was surprised:
Total: 25469 samples
   14380  56.5%  56.5%    14404  56.6%  crypto/sha512.block
    1261   5.0%  61.4%     1261   5.0%  crypto/hmac.(*hmac).tmpPad
    1219   4.8%  66.2%    15737  61.8%  crypto/sha512.(*digest).Write
    1208   4.7%  70.9%     9548  37.5%  crypto/sha512.(*digest).Sum
     439   1.7%  72.7%    19046  74.8%  launchpad.net/juju-core/thirdparty/pbkdf2

So we spend most of our CPU resources on sha512 work. With 72% of that
being caused by pbkdf2. Essentially the fact that on every Login
attempt we have to call PasswordHash which requires many iterations of
sha512. (Essentially we can DOS the juju agent by just asking for lots
of Login requests.)

I know that pbkdf2 is great for user passwords, because it makes it
very hard to do brute force search. However, we aren't getting user
input for Agent passwords. We are giving them nice long random
strings. I don't know if we can change it at this point, but that does
become the bottleneck when dealing with lots of agents. It will slow
down with time, but any time you restart/upgrade/etc you spend
*minutes* just verifying another 1000 PasswordHashes.
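
To make the cost concrete, here is a standalone comparison of one pbkdf2
password hash against a single sha512 sum. The iteration count and key
length below are placeholders rather than the values we actually use, and
the import path is the standalone x/crypto package rather than the copy we
vendor under thirdparty/, so treat the output purely as an illustration:

package main

import (
    "crypto/sha512"
    "fmt"
    "time"

    "golang.org/x/crypto/pbkdf2"
)

func main() {
    password := []byte("long-random-agent-password")
    salt := []byte("fixed-salt")
    const iterations = 8192 // placeholder figure, just to show the shape of the cost

    start := time.Now()
    _ = pbkdf2.Key(password, salt, iterations, 18, sha512.New)
    fmt.Println("pbkdf2 hash:  ", time.Since(start))

    start = time.Now()
    _ = sha512.Sum512(password)
    fmt.Println("single sha512:", time.Since(start))
}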

2) And what about memory consumption:
(pprof) top5
Total: 94.1 MB
31.5  33.5%  33.5% 31.5  33.5% newdefer
28.1  29.8%  63.3% 28.1  29.8% cnew
 5.0   5.3%  68.6%  5.0   5.3% runtime.malg
 4.5   4.8%  73.4%  4.5   4.8% crypto/rc4.NewCipher
 2.0   2.1%  75.6%  2.0   2.1% compress/flate.NewReader

The best I could dig up for that is:
https://codereview.appspot.com/10784043/
and
https://groups.google.com/forum/#!topic/golang-nuts/I2c7wO0SR3I

The former is that defer() functions seem to be trapping their
references and preventing garbage collection. Or to quote the patch:

  It should help with a somewhat common idiom:
  x, cleanup := foo()
  defer cleanup()
  Currently anything referenced by cleanup is not collected with high
  probability.

This has landed in go trunk, but I don't think it is in 1.1.1.
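
The idiom in question, spelled out as a toy example (this is not a leak I
have actually located in our code, just the shape of the pattern the patch
talks about):

package main

import "fmt"

// open returns a resource plus a cleanup closure. The closure keeps buf
// reachable, and the patch above says anything captured like this was
// "not collected with high probability" until the fix landed.
func open() ([]byte, func()) {
    buf := make([]byte, 10<<20) // 10MB held alive by the closure below
    cleanup := func() { fmt.Println("releasing", len(buf), "bytes") }
    return buf, cleanup
}

func work() {
    x, cleanup := open()
    defer cleanup()
    _ = x // use the resource...
}

func main() {
    work()
}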

for 'cnew' a strong possibility is the second link. Which mentions:
 Ah, that makes sense.
 
 For what it's worth, I think I found the issue.
 
 If you look at http.Transport:
 http://golang.org/src/pkg/net/http/transport.go?m=text
 
 getIdleConnCh() is called for every round trip (called by getConn,
 which is called by RoundTrip), regardless of whether keepAlives are
 on. In my case, with an app that's connecting to hundreds of
 thousands of different hosts, that idleConnCh map just keeps
 filling up and filling up forever.
 
 
 To verify I'm going to build my own, very simple and specialized
 RoundTripper, with pretty much no state stored between requests,
 and see if that takes care of the issue.

I don't know if that is what is actually going on.

I was a bit surprised that all of our TLS connections from the agents
back to the API server are actually using rc4, and that this is
triggering about 5% of our total memory consumption.



I'm still looking into the PasswordHash stuff, because the CPU profile
is hinting that it is being called by SetPassword (4000 times), but
that doesn't make sense, since all of those agents are running externally.

At the least, we might have a memory leak in some of our defer()
calls. Though Dave Cheney approved the patch that fixed it, so he
might know what version of golang it landed in.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJyAqIACgkQJdeBCYSNAAP9gwCffPN1QtoXkRM+oXLUOnHrAv0Y
MMwAnim8+peLrmKKKkaFhtNqLRDy6uwD
=f9Jx
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Notes from Scale testing

2013-10-30 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I'm trying to put together a quick summary of what I've found out so
far with testing juju in an environment with thousands (5000+) agents.


1) I didn't ever run into problems with connection failures due to
socket exhaustion. The default upstart script we write for jujud has
limit nofile 2 2 and we seem to properly handle that 1 agent
== 1 connection (vs the old 1 agent == 2 mongodb connections).


2) Agents seem to consume about 17MB resident according to 'top'. That
should mean we can run ~450 agents on an m1.large. Though in my
testing I was running ~450 and still had free memory, so I'm guessing
there might be some copy-on-write pages (17MB is very close to the
size of the jujud binary).


3) On the API server, with 5k active connections resident memory was
2.2G for jujud (about 400kB/conn), and only about 55MB for mongodb. DB
size on disk was about 650MB.

The log file could grow pretty big (up to 2.5GB once everything was up
and running though it does compress to 200MB), but I'll come back to
that later.

Once all the agents are up and running, they actually are very quiet
(almost 0 log statements).


4) If I bring up the units one by one (for i in `seq 500`; do for j in
`seq 10` do juju add-unit --to $j ; time wait; done), it ends up
triggering O(N^2) behavior in the system. Each unit agent seems to
have a watcher for other units of the same service. So when you add 1
unit, it wakes up all existing units to let them know about it. In
theory this is on a 5s rate limit (only 1 wakeup per 5 seconds). In
practice it was taking 3s per add unit call [even when requesting
them in parallel]. I think this was because of the load on the API
server of all the other units waking up and asking for details at the
same time.

From what I can tell, all units take out a watch on their service so
that they can monitor its Life and CharmURL. However, adding a unit to
a service triggers a change on that service, even though Life and
CharmURL haven't changed. If we split out Watching the
units-on-a-service from the lifetime and URL of a service, we could
avoid the thundering N^2 herd problem while starting up a bunch of
units. Though UpgradeCharm is still going to thundering herd.
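
To make the proposed split concrete, here is a toy sketch using plain
channels rather than our actual state watcher types; the names are made up
and none of this is real juju code:

package main

import "fmt"

// serviceWatchers splits "something about the service changed" into two
// independent notification streams.
type serviceWatchers struct {
    lifeOrCharm chan struct{} // what unit agents actually care about
    membership  chan struct{} // what cares about the set of units
}

func notify(ch chan struct{}) {
    select {
    case ch <- struct{}{}:
    default: // a notification is already pending, coalesce
    }
}

func (w *serviceWatchers) addUnit()      { notify(w.membership) } // does NOT wake unit agents
func (w *serviceWatchers) setCharmURL()  { notify(w.lifeOrCharm) }
func (w *serviceWatchers) setLifeDying() { notify(w.lifeOrCharm) }

func main() {
    w := &serviceWatchers{
        lifeOrCharm: make(chan struct{}, 1),
        membership:  make(chan struct{}, 1),
    }
    w.addUnit() // with the split, only the membership stream fires
    select {
    case <-w.lifeOrCharm:
        fmt.Println("unit agents woken (the N^2 case)")
    default:
        fmt.Println("unit agents left alone")
    }
    <-w.membership
    fmt.Println("membership watcher fired once")
}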

Response in log from last AddServiceUnits call:
http://paste.ubuntu.com/6329753/

Essentially it triggers 700 calls to Service.Life and CharmURL (I
think at this point one of the 10 machines wasn't responding, so it
was 1k Units running)


5) Along with load, we weren't caching the IP address of the API
machine, which caused us to read the provider-state file from object
storage and then ask EC2 for the IP address of that machine.
Log of 1 unit agent's connection: http://paste.ubuntu.com/6329661/

Eventually while starting up the Unit agent would make a request for
APIAddresses (I believe it puts that information into the context for
hooks that it runs). Occasionally that request gets rate limited by EC2.
When that request fails it triggers us to stop the
  WatchServiceRelations
  WatchConfigSettings
  Watch(unit-ubuntu-4073) # itself
  Watch(service-ubuntu)   # the service it is running

It then seems to restart the Unit agent, which goes through the steps
of making all the same requests again. (Get the Life of my Unit, get
the Life of my service, get the UUID of this environment, etc., there
are 41 requests before it gets to APIAddress)


6) If you restart jujud (say after an upgrade) it causes all unit
agents to restart the 41 requests for startup. This seems to be rate
limited by the jujud process (up to 600% CPU) and a little bit Mongo
(almost 100% CPU).

It seems to take a while but with enough horsepower and GOMAXPROCS
enabled it does seem to recover (IIRC it took about 20minutes).


7) If I juju deploy nrpe-external-master; juju add-relation ubuntu
nrpe-external-master, very shortly thereafter juju status reports
all agents (machine and unit agents) as agent-state: down. Even the
machine-0 agent. Given I was already close to capacity for even the
unit machines there could be any sort of problem here. I would like to
try another test where we are a bit farther away from capacity.


8) We do end up CPU throttled fairly often (especially if we don't set
GOMAXPROCS). It is probably worth spending some time profiling what
jujud is doing. I have the feeling all of those calls to CharmURL are
triggering DB reads from Mongo, which is a bit inefficient.

I would be fine doing max(1, NumCPUs()-1) or something similar. I'd
rather do it inside jujud rather than in the cloud-init script,
because computing NumCPUs is easier there. But we should have *a* way
to scale up the central node that isn't just scaling out to more API
servers.
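
In other words, something along these lines in jujud's startup; treating
state servers differently from other machines is just one way of spelling
the "or something similar" part:

package main

import (
    "fmt"
    "runtime"
)

// procsFor picks how many cores to let the Go runtime use. Leaving one
// core free on state servers keeps some headroom for mongod; ordinary
// machines stay at 1 so we don't compete with user workloads.
func procsFor(isStateServer bool, numCPU int) int {
    if !isStateServer || numCPU <= 1 {
        return 1
    }
    return numCPU - 1
}

func main() {
    n := procsFor(true, runtime.NumCPU())
    runtime.GOMAXPROCS(n)
    fmt.Println("GOMAXPROCS set to", n)
}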

9) We also do seem to hit MongoDB limits. I ended up at 100% CPU for
mongod, and I certainly was never above 100%. I didn't see any way to
configure mongo to use more CPU. I wonder if it is limited to 1 CPU
per connection, or if it is just always 1 CPU.

I 

Re: Notes from Scale testing

2013-10-30 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

...

 
 From what I can tell, all units take out a watch on their
 service so that they can monitor its Life and CharmURL. However,
 adding a unit to a service triggers a change on that service,
 even though Life and CharmURL haven't changed. If we split out
 Watching the units-on-a-service from the lifetime and URL of a
 service, we could avoid the thundering N^2 herd problem while
 starting up a bunch of units. Though UpgradeCharm is still going
 to thundering herd.
 
 Where is N^2 coming from?

If you add N units one-by-one each new add triggers all existing units
to wake up and ask for the Life and CharmURL of the service again. So
first unit asks, 2nd unit asks and causes the first unit to ask again.
3rd unit asks and causes the first 2 to ask again. Nth unit asks and
causes N-1 units to ask. Thus N adds = N*(N) requests for CharmURL.

In theory it is gated at a 5-sec delay between add unit and triggering
requests. In practice it took 3+s for add-unit to complete, so it was
pretty much 1-add = 1-trigger.

The log I have for bringing up 1000 nodes has the result of the
CharmURL 2,183,716 times. There are other triggers for this, but
1000*1000 = 1M.

Put another way, in a file with 16M lines, 2.1M lines are the Request
of CharmURL (another 2.1M the response), and 2.5M lines are the
Request for Life and another 2.5M response lines.

So 9.2M lines of it is just busy work: adding units to a service causes
the existing units of that service to ask whether the Life or CharmURL
of that service has changed.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJxHmMACgkQJdeBCYSNAAPVUgCffRWSn9ERhU8KjS8tFfNGXO/l
OmMAnjHN90MWBlAIfL4J+Uprvpdc/QQC
=wtoN
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Notes from Scale testing

2013-10-30 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


...

 
 4) If I bring up the units one by one (for i in `seq 500`; do for j
 in `seq 10` do juju add-unit --to $j ; time wait; done), it ends
 up triggering O(N^2) behavior in the system. Each unit agent seems
 to have a watcher for other units of the same service. So when you
 add 1 unit, it wakes up all existing units to let them know about
 it.
 
 
 I tried to talk about this in the hangout this morning, but I'm not
 sure if I got my point across.  I don't know that this really
 qualifies as N^2 given that no single machine sends or receives
 more than N messages. The network takes an N^2 hit. It's really
 only O(N) per unit agent.  It might be N^2 for the state server if
 each agent pings the state server when it receives the unit-add
 message... but it seems unlikely that we'd do that (and if we do,
 we should fix that).
 

Adding 1000 units to a service triggers 1000*1000 (*4) lines written
to the log file. And that is 1M requests against the API server. Seems
N^2 to me.

All Units Watch for changes in their Service. When you add a unit, it
triggers the units to ask the API server what the CharmURL and
Service.Life is.

So yes, it is N^2, I have the 2GB log file if you want to investigate. :)

I did paste a couple snippets, where you can see adding 1 unit caused
several hundred requests for CharmURL to come back.


 
 
 8) We do end up CPU throttled fairly often (especially if we don't
 set GOMAXPROCS). It is probably worth spending some time profiling
 what jujud is doing. I have the feeling all of those calls to
 CharmURL are triggering DB reads from Mongo, which is a bit
 inefficient.
 
 I would be fine doing max(1, NumCPUs()-1) or something similar.
 I'd rather do it inside jujud rather than in the cloud-init
 script, because computing NumCPUs is easier there. But we should
 have *a* way to scale up the central node that isn't just scaling
 out to more API servers.
 
 
 It seems as though GOMAXPROCS = NumCPUs is probably better, and
 just let the OS handle scheduling.
 

I'm happy for that on the API servers, though I would consider always
leaving some free space for Mongo. However, I would consider
throttling more for non-JobHostsState machines, given they are serving
user workloads. (They also shouldn't ever be trying to do as much work
as the state server nodes, though.)
...

John
=:-

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJxH4wACgkQJdeBCYSNAAPKpgCgtEXgZhxZFflodfeCXbhc9lU1
orsAn0jrbN/dyBUs2VPskYjR+0qknmDl
=EdO3
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Notes from Scale testing

2013-10-30 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 2013-10-30 18:11, Nate Finch wrote:
 
 On Wed, Oct 30, 2013 at 9:23 AM, John Arbash Meinel 
 j...@arbash-meinel.com wrote:
 
 2) Agents seem to consume about 17MB resident according to 'top'.
 That should mean we can run ~450 agents on an m1.large. Though in
 my testing I was running ~450 and still had free memory, so I'm
 guessing there might be some copy-on-write pages (17MB is very
 close to the size of the jujud binary).
 
 
 17MB seems just fine for an agent. I don't think it's worth
 worrying much about that size, since it's fairly static and you
 generally aren't going to run 450 copies on the same machine :)

Yeah, I'm not worried about unit-agent memory size. Though it does
give some rough estimates of what we can get away with when doing this
sort of testing. (We need to have enough memory/bandwidth/cpu to be
able to not cause artificial bottlenecks when doing testing.)
AFAICT disk space and memory end up the primary bottlenecks with this
testing. Disk space I hopefully have an answer for (don't have all
units download the same object simultaneously and unpack it concurrently).

 
 
 3) On the API server, with 5k active connections resident memory
 was 2.2G for jujud (about 400kB/conn), and only about 55MB for
 mongodb. DB size on disk was about 650MB.
 
 
 400kB per connection seems atrocious.  Goroutines take about 4k on
 their own.  I have a feeling we're keeping copies of a lot of stuff
 in memory per connection that doesn't really need to be copied for
 each connection.  It would be good to get some profiling on that,
 to see if we can get it down to something like 1/10th that size,
 which would be more along the lines of what I'd expect per
 connection.

Each Unit agent triggers ~5 Watch objects in the API server. And the
API server doesn't do any pooling. We could probably shave a lot of
this off if we did something like 1-Watcher per object being watched,
or something like that. The only caveat is that the Watch object on
the server side is the one that tracks the current pointers into
actions that have happened that haven't been reported yet.

Each Watch object is going to be at least 1 goroutine, so at roughly 4k
per goroutine you're already at ~20k per connection right there, without
any actual data bookkeeping.

Again, profiling is probably the best thing at this point (dump a
memory profile when 5000 units have reached steady state).
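
For example, a debug hook along these lines in the API server would let us
grab a heap snapshot once things reach steady state. The signal choice and
file path are arbitrary; this is only a sketch:

package main

import (
    "log"
    "os"
    "os/signal"
    "runtime/pprof"
    "syscall"
)

// dumpHeapOnSignal writes a heap profile to path every time SIGUSR1
// arrives; the result can then be inspected with go tool pprof.
func dumpHeapOnSignal(path string) {
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, syscall.SIGUSR1)
    go func() {
        for range ch {
            f, err := os.Create(path)
            if err != nil {
                log.Println("heap dump:", err)
                continue
            }
            pprof.Lookup("heap").WriteTo(f, 0)
            f.Close()
            log.Println("heap profile written to", path)
        }
    }()
}

func main() {
    dumpHeapOnSignal("/tmp/jujud.heap")
    select {} // stand-in for the real server loop
}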

 
 
 The log file could grow pretty big (up to 2.5GB once everything was
 up and running though it does compress to 200MB), but I'll come
 back to that later.
 
 
 interesting question - are our log calls asynchronous, or are we
 waiting for them to get written to disk before continuing?  Wonder
 if that might cause some slowdowns.

I'm pretty sure they are synchronous. I did see 50% of all cycles
consumed by VM time when testing on an m1.small (50% user, 50% VM). I
don't know whether that is I/O or something else.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJxIOYACgkQJdeBCYSNAAMTqACbB+IK5R2s3J3XE0rshnagvFkz
2XsAn0Zb84rpmRL6ysObL396G/xvIVyE
=wun5
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Notes from Scale testing

2013-10-30 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

...

 
 The log size didn't come up again in this email. Not sure if you
 meant separately or just got lost in the message length.

I didn't explicitly enumerate it, but it is because of section (4).
Namely, bringing up 1000 units triggers 1000*1000 (1M) calls to
Service.Life and CharmURL. Each request+response is 2 lines. So adding
1000 units generates at least 4M lines of log.

Each line averages ~189 bytes. So 1000 units = 2*2*1M*189 = 758MB.

 Once all the agents are up and running, they actually are very
 quiet (almost 0 log statements).
 
 
 4) If I bring up the units one by one (for i in `seq 500`; do for j
 in `seq 10` do juju add-unit --to $j ; time wait; done), it ends
 up triggering O(N^2) behavior in the system. Each unit agent seems
 to have a watcher for other units of the same service. So when you
 add 1 unit, it wakes up all existing units to let them know about
 it. In theory this is on a 5s rate limit (only 1 wakeup per 5
 seconds). In
...

 
 5) Along with load, we weren't caching the IP address of the API 
 machine, which caused us to read the provider-state file from
 object storage and then ask EC2 for the IP address of that
 machine. Log of 1 unit agent's connection:
 http://paste.ubuntu.com/6329661/
 
 
 Just to be clear for other readers (wasn't clear to me without
 checking the src)  this isn't the agent resolving the api server
 address from provider-state which would mean provider credentials
 available to each agent, but each agent periodically requesting
 via the api the address of the api servers. So the cache here is
 on the api server.

The cache does need to be either in the DB or on the API server. The
trigger is that running a hook includes the API Addresses in the hook
context. So every hook triggers a call to API Addresses (not sure if
hooks fired in sequence cache the state between calls).

And that triggers the API server to make a request from EC2.
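
Even a dumb time-based cache on the server side would take EC2 out of that
hot path. A sketch with made-up names (not the actual apiserver code):

package main

import (
    "fmt"
    "sync"
    "time"
)

// addrCache memoizes an expensive lookup (here, "ask the provider for the
// state server addresses") for ttl. Hypothetical helper, not juju code.
type addrCache struct {
    mu      sync.Mutex
    ttl     time.Duration
    fetched time.Time
    addrs   []string
    lookup  func() ([]string, error)
}

func (c *addrCache) Addresses() ([]string, error) {
    c.mu.Lock()
    defer c.mu.Unlock()
    if c.addrs != nil && time.Since(c.fetched) < c.ttl {
        return c.addrs, nil // served from cache; no provider call
    }
    addrs, err := c.lookup()
    if err != nil {
        if c.addrs != nil {
            return c.addrs, nil // fall back to stale data on provider errors
        }
        return nil, err
    }
    c.addrs, c.fetched = addrs, time.Now()
    return addrs, nil
}

func main() {
    calls := 0
    cache := &addrCache{
        ttl: 10 * time.Minute,
        lookup: func() ([]string, error) {
            calls++ // stands in for the EC2 round trip
            return []string{"10.0.0.1:17070"}, nil
        },
    }
    for i := 0; i < 1000; i++ { // 1000 hook contexts asking for APIAddresses
        cache.Addresses()
    }
    fmt.Println("provider lookups:", calls) // 1, not 1000
}

Falling back to slightly stale addresses on provider errors would also
sidestep the rate-limit failure mode mentioned below.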

Dave Cheney has a bug where hooks that trigger lots of relation-changed
events end up DOSing your EC2 account: EC2 starts rate limiting the
account, and then you are unable to use your EC2 creds to kill the
service.
...

 
 
 6) If you restart jujud (say after an upgrade) it causes all unit 
 agents to restart the 41 requests for startup. This seems to be
 rate limited by the jujud process (up to 600% CPU) and a little bit
 Mongo (almost 100% CPU).
 
 It seems to take a while but with enough horsepower and GOMAXPROCS 
 enabled it does seem to recover (IIRC it took about 20minutes).
 
 
 It might be worth exploring how we do upgrades to keep the client
 socket open (ala nginx) to avoid the extra thundering herd on
 restart, ie serialize extant watch state and exec with open fds.
 Upgrade is effectively already triggering a thundering herd with
 the agents as they restart individually, and then the api server
 restart does a restart for another herd.
 
 There's also an extant bug  that restart of juju agents causes 
 unconditional config-changed hook execution even if there is no
 delta on config to the unit.

We've had a few discussions around upgrade. One option is to bring up
all units in upgrade-pending mode. Which is a slow-starting new
herd, but which would prevent the double-thunder at least.

...

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJxIo8ACgkQJdeBCYSNAAP79QCeOcUboSG2R6x5pm3FbDyyunZW
diEAoKPluc3EauJIkTTQR2MUdrw0TOrT
=hBPP
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Sharing a DB user password among units of the app

2013-10-29 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 2013-10-29 16:38, Andreas Hasenack wrote:
 Hi,
 
 There is this charm with a db-admin relation to postgresql. In
 this relation, the charm gets access to a super user in postgresql
 and can do whatever it wants with it.
 
 When the first unit of the app comes up, it creates a random
 password and uses the DB admin user to create an unprivileged DB
 user with that password.
 
 Now I run add-unit, and another unit comes up. How can it get that
 password?
 
...


 
 Is this a case for a peer relation?
 
 

Yes. As I understand it, that is exactly what a peer relation is for.

John
=:-

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJvtAgACgkQJdeBCYSNAAN6fQCeLBJoTNNrWtIc2pnJdDLI80Kh
0VIAoIR31G9ip7f9lCYYHTY/7ZvM2zbo
=a8bp
-END PGP SIGNATURE-

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Enabling GOMAXPROCS for jujud

2013-10-29 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

My patch uses runtime.GOMAXPROCS(runtime.NumCPU()) which means it
sets GOMAXPROCS=1 for machines that have 1 core.

I don't quite see what the problem is with this particular patch.
(taking advantage of multiple cores when they are available).

John
=:-

On 2013-10-29 10:25, David Cheney wrote:
 Not lgtm. The number of CPU cores available on the default ec2 
 bootstrap machine is 1.
 
 On Tue, Oct 29, 2013 at 5:07 PM, John Arbash Meinel 
 j...@arbash-meinel.com wrote:

 Do we want to enable multiprocessing for Jujud? I have some
 evidence that it would actually help things.

 I'm soliciting feedback about this patch:

 === modified file 'cmd/jujud/main.go'
 --- cmd/jujud/main.go 2013-09-13 14:48:13 +
 +++ cmd/jujud/main.go 2013-10-28 17:47:52 +
 @@ -8,6 +8,7 @@
      "net/rpc"
      "os"
      "path/filepath"
 +    "runtime"

      "launchpad.net/juju-core/cmd"
      "launchpad.net/juju-core/worker/uniter/jujuc"
 @@ -107,6 +108,7 @@
  func Main(args []string) {
      var code int = 1
      var err error
 +    runtime.GOMAXPROCS(runtime.NumCPU())
      commandName := filepath.Base(args[0])
      if commandName == "jujud" {
          code, err = jujuDMain(args)
 
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJvZ1MACgkQJdeBCYSNAAObQwCgwx6Ze+rlhoEvMpVpF4aHcVNi
vTkAoLp3D+f4ALYTbLyalqYCKj6JShmF
=mXYg
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Building juju into staging and testing archives

2013-10-29 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

The conversation I remember from SFO was that in order to get armhf
builds on non-virtualized builders, we needed to a have a highly
restricted team which would build into a semi-private ppa and then
copy the binaries from there into the devel/stable ppas. As such, it
may be that we already have set up a staging PPA that we can give you
access to for doing releases.

At least, it sounds a lot like what you're proposing here.
John
=:-


On 2013-10-30 0:32, Curtis Hovey-Canonical wrote:
 Hi James, et al.
 
 I know an RT was just closed about setting up the Juju PPAs to
 build for armhf. I wonder if we are in an Uh oh, spaghettios
 moment.
 
 A few weeks ago there was a discussion about the risky period when
 the first package is published, and the time all packages are
 published and the juju tools are also published. There is a period
 of between 15 minutes and maybe 15+ hours where users can get the
 latest Juju, but the tools are not published.
 
 I was just setting up PPAs for staging [1] and testing. The PPAs
 will warn users not to add them to their system. I want to build
 into those PPAs. When all packages are built and published, we
 would then assemble and publish the juju tools, then lastly copy
 the packages to the stable/devel PPA.
 
 Do I need to do anything more than ask webops to enable arm for
 the PPAs? Do we need another RT?
 
 [1] https://launchpad.net/~juju/+archive/staging
 

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJwhxcACgkQJdeBCYSNAAPLuACgju9cajr/xKc5P4D4phaJ1PiW
L4UAnjdKTKl39DTHJlnZ+g5hq6Utkpk7
=Feuz
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: access a unit running in an lxc

2013-10-07 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 2013-10-07 11:47, Mike Sam wrote:
 Thanks. This is actually for when I have deployed a unit on ec2
 machine but within a new lxc on that machine so not for local. Does
 this still apply? I am not quite sure how these network bridges
 need to be configured so if anybody is familiar with their setup
 and how to access the units within them through the machine public
 ip, I would really appreciate that.

We don't currently support using an LXC on EC2 because we don't have a
way to route to the LXC machine. We are looking to add support with
VPC to allow you to request an IP address for the LXC container.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlJSZ3oACgkQJdeBCYSNAAOjTwCgqxLMn5YldsauJ4WpfrtODTZ5
3HwAnjtUhtQ9zUKvJLgtDNYw9Io0Ev+r
=g0jC
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Rietveld cleanup

2013-09-10 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

...


 Is that actually intended to be part of the workflow? It ends up 
 with several more clicks and delays for round trips (you have to 
 wait for your patch to land, then click back through and close
 the Rietveld ticket, etc.)
 
 It just takes a second to close the CL after you submitted it for
 landing. Actually I always do that after I run bzr rv-submit on
 the approved branch.

There is a small issue that it isn't actually the right time to close
it (it should be closed when it has landed, not when submitted). I
find it interesting that you use rv-submit to handle the LP side of
things, and then manually click on a different website. It *is*
overhead. If it is worth doing, then we should ask everyone to do it,
I just question the premise that it is actually worth adding that
overhead to every submission.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlIu/8AACgkQJdeBCYSNAANUYQCfXrMNXRcaUQ8NnKHWzsimxwBz
KnIAoMh9Gpg3xFdWDbw//3QfqTj/6oFd
=SGaj
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Code Review for September 4 from Australia

2013-09-04 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

(forwarded from Andrew who summarized what the other group discussed)

Here's what we found today. I've left off the list so we don't prejudice
anyone who hasn't yet reviewed.

- State is not a very meaningful name. One possible alternative that
came up was BootstrapState.
- There's 0% code coverage for the StateInfo function (it may be tested
elsewhere, but not in the provider package).
- There are some error code paths that are not tested (no tests for
unmarshalling failures, or for storage failures other than IsNotExist).
- If there were a test for StateInfo, we'd get noisy output - there's no
embedded LoggingSuite in StateSuite.
- LoadState and LoadStateFromURL are not consistent with the errors
returned: LoadState checks for bootstrap-state existence and returns
a NotBootstrappedError; LoadStateFromURL does no such thing. This may be
too difficult to do, though (which HTTP return codes mean that the
environment is not bootstrapped?)
- In state_test.go, there's a function makeDummyStorage which gets
called in most of the tests. Instead, we should use SetUp/TearDown
methods on the suite, and split out the tests that don't care about
storage into a separate suite.

Tim thought it might be useful to pass on how I came up with the
coverage analysis. I wrote a tool a while ago called gocov:
http://github.com/axw/gocov. If you run gocov test you'll get a JSON
file, which you can feed back into another gocov command (report or
annotate).

Example report: http://paste.ubuntu.com/6061352/
Example source annotation: http://paste.ubuntu.com/6061360/

There are third-party tools for generating prettier output: HTML,
Cobertura XML for importing into Jenkins, and coveralls.io
http://coveralls.io.

The go tool itself is going to have support for coverage analysis in
the 1.2 release (I think). It does essentially the same thing, though I
believe it may be a bit faster.

Cheers,
Andrew
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlInMhgACgkQJdeBCYSNAAOb1QCgvo8mmy0RhvLBbViEzCGcZ5+L
ylAAoKMR6AQiP11j0gFHS1ZAQ4mwLrqY
=IE1W
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju is slow to do anything

2013-08-30 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 2013-08-30 14:28, Peter Waller wrote:
 For the record, I sent the link privately. The run took about 22s 
 but I have measured 30s to 1m.

Some thoughts, nothing that I can give absolute confirmation on.

1) Next week we have a group sprinting on moving a lot of the command
line operations from being evaluated by the client into being
evaluated by the API server (running in the cloud) and then returned.
The explicit benefits that I've seen in other commands are pretty good.

'juju status' is going to be a command that should see a rather large
improvement. Because it does round trip queries for a lot of things
(what machines are there, what are the details of each machine, what
are the units running on each one, etc).

I've prototyped doing those queries in parallel, or trying to do bulk
ops, which actually helped a lot in testing (this was for hundreds of
units/machines).

Doing it on the API server means any round trips are local rather
than from your machine out to Amazon.

2) From here, 'time juju status' with a single instance running on ec2
is 10s, which breaks down as roughly 4s to look up the IP address, 2s to
establish the state connection, and 4s to finish up (the resolution here
is 1s granularity).

Similarly time juju-1.13.2 get not-service takes 8.5s to run. 4s to
lookup the address, 2s to connect, and 3s to give the final 'not
found' result.

With trunk, time ./juju get not-service is 4.6s. 2s to lookup IP
address, 2s to connect, and the not-found result is instantaneous.

So I would expect the 10s of a generic juju status to easily drop
down to sub 5s. Regardless of any round-trip issues.

3) We are also looking to cache the IP address of the API server, to
shave off another ~2-4s for the common case that the address hasn't
changed. (We'll fall back to our current discovery mechanism.)

4) There seems to be an odd timing of your screen cast. It does report
22s which matches the times reported in the debug output. But the
total time of the video is 20s including typing. Is it just running
2:1 speed?

You can see from the debug that you have 7s to lookup the address to
connect to, and then about 1s to connect. The rest is time spent
gathering the information.

I expect it to get a whole lot faster in a couple more weeks, but I'm
not going to guarantee that until we've finished the work.

5) If I counted correctly, you have about 23 machines that are being
considered. A bunch of them down/pending/errored.

I would think for the errored ones you could do some sort of juju
destroy-machine. It might make things better (less time spent
checking on machines you don't care about.)

What happens when you try it? (There may be other issues that make us
think we are waiting for something to happen with a machine that we
don't want to destroy it.)


Anyway in summary, this should be getting better, but I won't have
explicit numbers until the work is done.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlIg4UIACgkQJdeBCYSNAAOfxQCeMaRQdqvdyQ11WyRnJ/WPAccp
IysAniDrUq6IDtM0fu9SuZg+2AQto8rP
=JaZw
-END PGP SIGNATURE-

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Shared Review for August 28

2013-08-27 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Since Roger was kind enough to do this one in advance, this week's
code review will be:

  state/constraints.go

Remember, we'd like you to spend about an hour going over that file
and its associated tests. Thinking about things that you would comment
on in a normal code review. We'll take an hour on Wednesday to talk
about it.

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlIciigACgkQJdeBCYSNAAOPIgCgjo5QEtE/Uu/t+4wdNdnqngyV
ZLAAoMPrM6qIAOizrLqHy3qx/GuiXvrH
=/eQc
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: juju Windows installer

2013-08-27 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 2013-08-27 0:51, Nate Finch wrote:
 The Windows installer for juju is complete, modulo what might be
 asked for in response to this email. I went with Inno Setup on
 John's suggestion, since the learning curve was minimal, it does
 everything we need it to do, and the installer generation script is
 nice plain text. Bonus points for being open source.
 
 Some notes:
 
 1. This creates an exe installer, not an msi. Msi's can be more
 easily packaged together and are somewhat easier to push out to
 many computers on a Windows Domain, but otherwise, the differences
 are minimal.

I think one useful case for msi is automated (unattended) install, but
it may be that juju-setup.exe supports that as well.

 2. I'm not sure where to put the installer files (there's two
 installer scripts, two bitmaps, and an icon) in source control.
 Suggestions welcome.

We *could* have a 'juju-installer' project, but I think just putting
it into juju-core in the scripts/ directory, like we have for building
the tarballs, etc., is fine.


 3. I'm not sure how/if this should get tied into the build process 
 (note that it does require a Windows box to run Inno Setup).

You could probably run it under Wine. And as for alternatives:
  http://www.osalt.com/inno-setup

Lists Nullsoft, Launch4j and InstallJammer. Not that I'm advocating
them, but we might think about it.

I think we treat it like we treat copying tools today. When a release
happens Nate is tasked with running the script and uploading the
juju-setup-1.X.X.exe to launchpad.


 4. I called the setup file juju-setup.exe  'cause it seemed
 like a good idea.

I'd like it to have the version number (juju-setup-1.13.2.exe) but
otherwise fine by me.

I'm also not sure if Windows prefers juju-1.13.2-setup.exe, but
whatever pushes the right buttons.


 5. Right now, the installer is not signed, so you get the ugly UAC 
 dialog box shown below, if we have a cert that Windows can verify, 
 it would look more professional to have the installer signed.

To use U1 as an example, we have
ubuntuone-3.0.1-windows-installer.exe and it has a Canonical UK Ltd
signature on it. We should talk to those guys about how to set up a
process for getting signed installers out, but it isn't something we
need to do today.


 6. I didn't include a way for users to install to a non-default 
 location. It's not hard to do, I just wasn't sure if it would be 
 important, and in general I like as few steps in my installers as 
 possible.

Most installers I've seen have an install with default settings vs a
custom install that lets you set those things. I'd like to see that
if it isn't hard to do.

Especially since juju doesn't really need Admin privileges, it would
be really good if we could let people install it to their local paths.


 7. The installer requires 64 bit Windows (it's a 64 bit go 
 compilation... we could make an executable and installer for a 32 
 bit one as well if people think that's important).

We chatted about this on IRC. We *could* build both, but the client
doesn't really benefit from 64-bits, and the 32-bit version can be
installed everywhere. So it is probably best to just build 32-bit for now.


 8. There's no license file in the installer... I wasn't really
 sure what license file to include, but if there's an appropriate
 one, please let me know.

There is a LICENSE in juju-core/LICENSE which shows juju is AGPLv3. So
probably we should include that.


 9. The setup doesn't create desktop or start menu shortcuts, since 
 they're pretty useless with a command line tool. However, this
 does mean that it's not exactly clear how to start using Juju
 after running the installer. It might be useful to extract the
 relevant test from the code's README and give the user the ability
 pop that up after install.

That seems ok. If we do create a shortcut it should be to cmd.exe
that starts with a PATH that has juju in the path. Otherwise we can
get there when we get there and not worry too much about it today.

 10. The juju.exe executable currently has no icon (go doesn't
 produce executables with icons). Since it's a command line tool,
 that's not really a big deal, but I figured I'd mention it anyway.
 It's fixable after the fact with some third party tools, but I
 wasn't going to bother unless someone asked for it.

My google-fu turns up:
http://stackoverflow.com/questions/673203/add-icon-to-existing-exe-file-from-the-command-line

Which says that the RCEDIT.exe is provided by WinRun4J (which sounds
like it would be related to the above Launch4j).

I would put it under something that we want to do. So at least file
a bug about it if we don't want to spend time today doing it.


 11. The uninstaller won't remove the user's .juju directory. This
 is fairly common, since that is, in effect, user data and not part
 of the installed program itself. However, I wanted to point it
 out.

Sounds perfectly reasonable.

John
=:-

-BEGIN PGP 

api.Client().ServiceGet() and int types

2013-08-26 Thread John Arbash Meinel
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I'm implementing juju get via the API. In doing so, I moved over one
of our tests which did (pseudocode)

 set value=int64(0)
 DeepEquals(result, value)

And it used to work when ServiceGet was connecting directly to State.

Now that it goes via the API, the value is now a float64 (probably
because of JSON deserialization).

My guess is that the Mongo BSON type encodes int64 directly, but that
is lost when going via the JSON rpc.

So do we care? Are int types preserved in the rest of the system? I'm
guessing this will affect charms when they grab their settings.
However, can charms be strict about this today?
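
A tiny repro of the behaviour, independent of juju, showing that
encoding/json hands back float64 for any JSON number decoded into an
interface{} value:

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Whatever integer types go in on the server side, encoding/json
    // decodes untyped JSON numbers into float64 on the client side.
    settings := map[string]interface{}{"workers": int64(0)}
    data, _ := json.Marshal(settings)

    var decoded map[string]interface{}
    json.Unmarshal(data, &decoded)
    fmt.Printf("%T\n", decoded["workers"]) // float64, not int64
}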

John
=:-
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.13 (Cygwin)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlIbkSgACgkQJdeBCYSNAAN2AwCgi7+/oJtlywwAFqcNuW9NdcxM
y3IAoIvrVLruv0nmIcvjH1GjGvoxP+gg
=tQB9
-END PGP SIGNATURE-

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev