Re: Control different relation sequence

2013-09-04 Thread Kapil Thangavelu
The missing command here is relation-ids, which lists the relation ids of a
given relation name. JUJU_RELATION_ID gives the relation the current hook
is being executed for; to trigger/inspect state on other relations, the
relation-ids command can be used to find their ids, which can be passed to
relation-get/set with -r. Any mutations of the other relation's state via
relation-set will trigger change hooks on the remote related units.
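
For example, from a hook executing for one relation, a charm can push state
onto every relation of another endpoint (a minimal sketch; the "db" relation
name and "ready" key are illustrative):

    for rid in $(relation-ids db); do
        relation-set -r "$rid" ready=true
    done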

cheers,
Kapil


On Tue, Sep 3, 2013 at 10:33 PM, Gustavo Niemeyer 
gustavo.nieme...@canonical.com wrote:

 The relation-set command accepts a -r parameter which takes the relation
 id to act upon. You can pick the relation id of an executing hook from
 the JUJU_RELATION_ID environment variable. This way you can act across
 relations.

 Hopefully this will be better documented at some point.

 On Tue, Sep 3, 2013 at 11:23 PM, Mike Sam mikesam...@gmail.com wrote:
  Thanks Gustavo but I did not quite get your point. The problem is that
  for the new unit of service A, the dependent hooks are on two different
  independent relationships. I mean I can control when the new unit of
  service A has properly established a relation with all the units of
  service B on, say, x_relation_changed, but how do I make all the units
  of service C now trigger the y_relation_changed hook of the service A
  unit because the unit is ready to process them? How do I make the
  y_relation_changed hook get triggered AGAIN (in case it has already
  been triggered but ignored because the relation with service B was not
  done setting up) when x_relation_changed sees fit? Would you please
  explain your point in the Service A, B, C context of my example?
 
 
 
 
  On Tue, Sep 3, 2013 at 6:38 PM, Gustavo Niemeyer
  gustavo.nieme...@canonical.com wrote:
 
  Hi Mike,
 
  You cannot control the sequence in which the hooks are executed, but
  you have full control over what you do when the hooks do execute. You
  can choose to send nothing to the other side of the relation until it's
  time to report that a connection may now be established, and when you
  do change the relation, the remote hook will run again to report the
  change.
 
  On Tue, Sep 3, 2013 at 10:17 PM, Mike Sam mikesam...@gmail.com wrote:
   Imagine a unit needs to be added to an existing service like service A.
   Service A is already in relations with other services like service B
   and service C on different requires.

   For the new unit on service A to work, it needs to first process the
   relation_joined and relation_changed with the units of service B
   before it could process relation_joined and relation_changed with the
   units of service C.

   Is there a way to enforce such a desired sequence of relationship
   establishment at the charm level? In other words, I do not think we
   can control the hook execution sequence of different relationships
   officially, but then I am wondering how we can handle a situation like
   the above nicely?

   Thanks,
   Mike
 
  --
  gustavo @ http://niemeyer.net
 
 



 --
 gustavo @ http://niemeyer.net




Re: Logging changes coming with juju 1.15

2013-09-26 Thread Kapil Thangavelu
I don't think the log collection is the issue, though this is still useful
to help mitigate some of the log accumulation on disk (despite
rotation/archival). The problem is the primitive log display, which amounts
to tailing the aggregated syslog. As an end user, i don't care about
framework internals; i care about seeing the provider mutations (create,
destroy, maybe security groups), hook execution, and errors. With juju-core
there isn't any ability to filter the log from the client on a given
channel/hierarchy or level, and currently any usage of the log basically
fills up with ping api traffic obscuring the actual content.

-k




On Thu, Sep 26, 2013 at 6:03 PM, Tim Penhey tim.pen...@canonical.com wrote:

 Hi All,

 Last night I was pointed at a bug where one of the main problems was
 poor communication on my part as to changes, so here are the
 comprehensive details.

 For a long time (all of juju-core up to now), all the machine agents
 logged everything at debug.  Logging at the debug level is used
 primarily by developers.  Warnings and errors are of more interest to
 the users, and sometimes the informational messages.  Some time back, a
 new hierarchical logging facility was added.  This allows logging to be
 grouped and potentially have different groups of log messages output at
 different severity levels.  These groups are dotted names that should be
 familiar to people who have used log4j, python logging or many other
 logging frameworks.  We call these logical groups modules.  Modules
 have parents (the next level up), and there is a root which
 is the parent of all first-level modules.

 For example:
   juju.agent is the parent of juju.agent.tools
   juju is the parent of juju.agent
   root is the parent of juju

 The default logging configuration is:
   root=WARNING

 This means that only warnings and above (error and critical) are logged
 by default.

 On the command line, there are two ways to change this.  You can specify
 an environment variable JUJU_LOGGING_CONFIG, or you can specify
 --log-config on the command line.  The default for --log-config is the
 JUJU_LOGGING_CONFIG environment variable, so if you specify --log-config
 you override what the environment has set.

 The environment variable gives developers a simple way to have the
 default logging set to what they are currently working on.  For example,
 if I cared about the provisioner and the azure provider, I could do the
 following:
   export JUJU_LOGGING_CONFIG='juju.provider.azure=DEBUG;juju.worker.provisioner=DEBUG'

 This would get combined with the default of root=WARNING.

 NOTE: setting --log-config doesn't show the log file on the command
 line, but does propagate the logging configuration into the environment,
 so all agents that start up get that logging config.

 To show the log on the command line, now use --show-log.  The --debug
 flag has been kept as shorthand for --log-config=root=DEBUG
 --show-log, and --verbose has been deprecated with its current meaning
 (we intend to have --verbose mean 'show me more output for the command I
 am running', as logging and command output have different audiences
 almost all the time).
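
 For example, to watch provisioner debug output while running a command
 (a sketch; the module name is illustrative):

   juju status --show-log --log-config='juju.worker.provisioner=DEBUG'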

 If you are looking at the agent log files, or the debug-log command, you
 will notice that the agents all start logging at the DEBUG level, but
 once the internal workers are up and running, you'll see a line like this:

 2013-09-23 15:56:21 DEBUG juju.worker.logger logger.go:45 reconfiguring
 logging from root=DEBUG to root=WARNING

 Every agent now has a worker that allows the logging configuration to be
 changed on a running environment.  The root=WARNING that the
 configuration is being changed to is the default log-config specified at
 environment bootstrap time, and if one isn't specified, it defaults to
 warnings.

 To change the logging config of a running environment, you can use the
 existing set-environment (or set-env) command.

 juju set-env 'logging-config=root=INFO;juju.provider=DEBUG'

 and this is then propagated through to all the running agents.  Note
 that this is NOT additive.  So if the existing logging config was
 juju.provider.azure=DEBUG, and it was changed to juju.agent=DEBUG,
 the juju.provider.azure would default back to the parent's level, which
 would be WARNING.
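
 So, for example (a sketch), to keep the azure setting from above while
 adding the agent one, respecify both:

   juju set-env 'logging-config=juju.provider.azure=DEBUG;juju.agent=DEBUG'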

 You only need to specify the root logger if you want to change it from
 WARNING to something else, as by default it is set to WARNING.

 Tim




Re: local provider issues with trunk (2 fixed)

2013-09-29 Thread Kapil Thangavelu
On Fri, Sep 27, 2013 at 12:33 AM, Tim Penhey tim.pen...@canonical.com wrote:

 On 27/09/13 14:59, Kapil Thangavelu wrote:
  On the topic of local provider bootstrap i ran into this bug earlier
 today
 
  Bootstrapping a local provider first prevents bootstrap on any other
  environment
  https://bugs.launchpad.net/juju-core/+bug/1231724

 This is a new addition; I've assigned it to Rog, as he has been adding
 this (I think).



 I can't replicate the other issues people are having with LXC on raring,
 so I'm upgrading to saucy (wish me luck).


fwiw, i was able to solve my issues with the local provider on saucy,
though the cause still eludes me. I had originally thought it was an
inotify race in upstart.Install and tried a sleep there, but it turns out
that without using sudo explicitly the job wasn't found. Adding sudo to
the cli params for upstart.Start resolved it. Fwiw, my env is
JUJU_HOME=/opt/juju (due to encrypted home and restarts for the local
provider), juju-core tip, and the golang release tag, and i'm
bootstrapping with sudo -E ~/bin/juju bootstrap -v --debug
--upload-tools. The original error i would get from bootstrap:
http://pastebin.ubuntu.com/6172383/

cheers,

Kapil


Re: Juju HA

2013-09-29 Thread Kapil Thangavelu
Hi Mike,

The plan for sharing the work between jujud processes for the
EnvironmentJobs (provisioning, firewall) is still being discussed. It's
either just failover with heartbeat, or partitioning the work via a lock
or queue.

cheers,

Kapil


On Fri, Sep 27, 2013 at 9:34 PM, Mike Sam mikesam...@gmail.com wrote:

  I wanted to ask how juju HA is going to work. I can understand
 that the state server can be a replica set, but how do you run multiple
 jujud processes on different machines? Aren't they going to step on each
 other without partitioning? What is the plan for HA?

 Thanks,
 Mike





Re: access a unit running in an lxc

2013-10-07 Thread Kapil Thangavelu
Expanding a bit: internal communication between services and units isn't
along channels/ports defined by the charm, so in effect without vpc you'll
need a separate network overlay (ala tinc or gre), which has some overhead.
It's much more efficient to just use the provider (aws) network
abstractions that directly map interfaces/addresses onto the containers.
For some restricted set of use cases, ie http app containers, a name-based
virtual host proxy (hipache, nginx/redis, etc) could do some of the work,
but again that's probably coordination and port forwarding (host port to
container) that's outside of juju.

-k



On Mon, Oct 7, 2013 at 9:46 AM, Kapil Thangavelu 
kapil.thangav...@canonical.com wrote:

 Theoretically yes, in juju probably not.


 On Mon, Oct 7, 2013 at 9:43 AM, Mike Sam mikesam...@gmail.com wrote:

 Sure, no worries, thank you for clarifying this.

 I am curious, in terms of lxc work on general ec2 and not vpc, is this
 going to be doable at all?


 On Oct 7, 2013, at 1:43 AM, William Reade william.re...@canonical.com
 wrote:

 Hi Mike

 I'm sorry, it looks like we never hooked up the early validation that
 would have told you it wouldn't work right now. It's my fault -- it
 languished in review a bit, and I didn't think through the consequences of
 leaving it out. At the moment containerization only works against the MAAS
 provider.

 Cheers
 William


 On Mon, Oct 7, 2013 at 10:08 AM, Mike Sam mikesam...@gmail.com wrote:

  the command

 juju deploy --to lxc:1

 on 1.15.1 worked for me, and the lxc is running on the machine when I ssh
 to it, but I did not check whether the actual unit had been deployed on it
 or not. Are you saying the accessibility is not supported yet?


 On Oct 7, 2013, at 12:49 AM, John Arbash Meinel j...@arbash-meinel.com
 wrote:

  On 2013-10-07 11:47, Mike Sam wrote:
  Thanks. This is actually for when I have deployed a unit on ec2
  machine but within a new lxc on that machine so not for local. Does
  this still apply? I am not quite sure how these network bridges
  need to be configured so if anybody is familiar with their setup
  and how to access the units within them through the machine public
  ip, I would really appreciate that.
 
  We don't currently support using an LXC on EC2 because we don't have a
  way to route to the LXC machine. We are looking to add support with
  VPC to allow you to request an IP address for the LXC container.
 
  John
  =:-









Re: access a unit running in an lxc

2013-10-07 Thread Kapil Thangavelu
Vpcs have gotten a lot of attention from amazon in the last year. The
August 2013 feature for attaching public addresses to public-subnet
instances addresses quite a lot of the access/management issues afaics for
simple/dev vpc usage (no need for bastion hosts in public subnets for
access, or eip usage per instance, or nat instances for private subnets'
outbound network connectivity); effectively you can now get something very
similar to a non-vpc setup, with the added benefit of vpc capabilities.

http://aws.typepad.com/aws/2013/08/additional-ip-address-flexibility-in-the-virtual-private-cloud.html


On Mon, Oct 7, 2013 at 1:21 PM, Mike Sam mikesam...@gmail.com wrote:

 Thanks for expanding on it. Vpc is great for production, but during
 development it makes accessing/managing things somewhat more involved,
 which can add friction to the process in larger dev teams.


 On Oct 7, 2013, at 9:55 AM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:

 I expanded on this in a separate email, but perhaps the real question is
 what's wrong with vpc usage in ec2?

 -k


 On Mon, Oct 7, 2013 at 9:53 AM, Mike Sam mikesam...@gmail.com wrote:

 Could you please elaborate as to why more specifically?

 Also anything we can do within the charm to do this?

 On Oct 7, 2013, at 9:46 AM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:

 Theoretically yes, in juju probably not.


 On Mon, Oct 7, 2013 at 9:43 AM, Mike Sam mikesam...@gmail.com wrote:

 Sure, no worries, thank you for clarifying this.

 I am curious, in terms of lxc work on general ec2 and not vpc, is this
 going to be doable at all?


 On Oct 7, 2013, at 1:43 AM, William Reade william.re...@canonical.com
 wrote:

 Hi Mike

 I'm sorry, it looks like we never hooked up the early validation that
 would have told you it wouldn't work right now. It's my fault -- it
 languished in review a bit, and I didn't think through the consequences of
 leaving it out. At the moment containerization only works against the MAAS
 provider.

 Cheers
 William


 On Mon, Oct 7, 2013 at 10:08 AM, Mike Sam mikesam...@gmail.com wrote:

  the command

 juju deploy --to lxc:1

 on 1.15.1 worked for me and the lxc is running on the machine when I
 ssh to it but did not check if the actual unit has been deployed on it or
 not. Are you saying the accessibility is not supported yet?


 On Oct 7, 2013, at 12:49 AM, John Arbash Meinel j...@arbash-meinel.com
 wrote:

  On 2013-10-07 11:47, Mike Sam wrote:
  Thanks. This is actually for when I have deployed a unit on ec2
  machine but within a new lxc on that machine so not for local. Does
  this still apply? I am not quite sure how these network bridges
  need to be configured so if anybody is familiar with their setup
  and how to access the units within them through the machine public
  ip, I would really appreciate that.
 
  We don't currently support using an LXC on EC2 because we don't have a
  way to route to the LXC machine. We are looking to add support with
  VPC to allow you to request an IP address for the LXC container.
 
  John
  =:-










Re: Notes from Scale testing

2013-10-30 Thread Kapil Thangavelu
Hi John,

This is awesome, its great to see this scale testing and analysis. Some
additional questions/comments inline.


On Wed, Oct 30, 2013 at 6:23 AM, John Arbash Meinel j...@arbash-meinel.com wrote:


 I'm trying to put together a quick summary of what I've found out so
 far with testing juju in an environment with thousands (5000+) agents.


 1) I didn't ever run into problems with connection failures due to
 socket exhaustion. The default upstart script we write for jujud has
 "limit nofile 20000 20000" and we seem to properly handle that 1 agent
 == 1 connection (vs the old 1 agent == 2 mongodb connections).


 2) Agents seem to consume about 17MB resident according to 'top'. That
 should mean we can run ~450 agents on an m1.large. Though in my
 testing I was running ~450 and still had free memory, so I'm guessing
 there might be some copy-on-write pages (17MB is very close to the
 size of the jujud binary).


 3) On the API server, with 5k active connections resident memory was
 2.2G for jujud (about 400kB/conn), and only about 55MB for mongodb. DB
 size on disk was about 650MB.

 The log file could grow pretty big (up to 2.5GB once everything was up
 and running though it does compress to 200MB), but I'll come back to
 that later.


The log size didn't come up again in this email. Not sure if you meant to
cover it separately or it just got lost in the message length.



 Once all the agents are up and running, they actually are very quiet
 (almost 0 log statements).


 4) If I bring up the units one by one (for i in $(seq 500); do for j in
 $(seq 10); do juju add-unit --to $j & done; time wait; done), it ends up
 triggering O(N^2) behavior in the system. Each unit agent seems to
 have a watcher for other units of the same service. So when you add 1
 unit, it wakes up all existing units to let them know about it. In
 theory this is on a 5s rate limit (only 1 wakeup per 5 seconds). In
 practice it was taking 3s per add unit call [even when requesting
 them in parallel]. I think this was because of the load on the API
 server of all the other units waking up and asking for details at the
 same time.



 From what I can tell, all units take out a watch on their service so
 that they can monitor its Life and CharmURL. However, adding a unit to
 a service triggers a change on that service, even though Life and
 CharmURL haven't changed. If we split out Watching the
 units-on-a-service from the lifetime and URL of a service, we could
 avoid the thundering N^2 herd problem while starting up a bunch of
 units. Though UpgradeCharm is still going to thundering herd.

 Response in log from last AddServiceUnits call:
 http://paste.ubuntu.com/6329753/

 Essentially it triggers 700 calls to Service.Life and CharmURL (I
 think at this point one of the 10 machines wasn't responding, so it
 was 1k Units running)


 5) Along with load, we weren't caching the IP address of the API
 machine, which caused us to read the provider-state file from object
 storage and then ask EC2 for the IP address of that machine.
 Log of 1 unit agent's connection: http://paste.ubuntu.com/6329661/


Just to be clear for other readers (it wasn't clear to me without checking
the src): this isn't the agent resolving the api server address from
provider-state, which would mean provider credentials being available to
each agent, but each agent periodically requesting the address of the api
servers via the api. So the cache here is on the api server.



 Eventually while starting up the Unit agent would make a request for
 APIAddresses (I believe it puts that information into the context for
 hooks that it runs). Occasionally that request gets rate limited by EC2.
 When that request fails it triggers us to stop the
   WatchServiceRelations
   WatchConfigSettings
   Watch(unit-ubuntu-4073) # itself
   Watch(service-ubuntu)   # the service it is running

 It then seems to restart the Unit agent, which goes through the steps
 of making all the same requests again. (Get the Life of my Unit, get
 the Life of my service, get the UUID of this environment, etc., there
 are 41 requests before it gets to APIAddress)




 6) If you restart jujud (say after an upgrade) it causes all unit
 agents to restart the 41 requests for startup. This seems to be rate
 limited by the jujud process (up to 600% CPU) and a little bit Mongo
 (almost 100% CPU).

 It seems to take a while but with enough horsepower and GOMAXPROCS
 enabled it does seem to recover (IIRC it took about 20minutes).


It might be worth exploring how we do upgrades so as to keep the client
socket open (ala nginx) to avoid the extra thundering herd on restart, ie
serialize extant watch state and exec with open fds. Upgrade already
effectively triggers a thundering herd as the agents restart individually,
and then the api server restart causes another herd.

There's also an extant bug that restart of juju agents causes an
unconditional config-changed hook execution.

Re: API work

2013-11-01 Thread Kapil Thangavelu
I'd suggest creating a bug per command (some already have extant bugs);
there's a vastly differing amount of work in them, and it's easier to
track progress with appropriately sized tasks, ie in kanban style "cli
api" is a story.


On Fri, Nov 1, 2013 at 2:26 AM, Andrew Wilkins andrew.wilk...@canonical.com
 wrote:

 I've created a bug to do the CLI-API work. If there's something existing,
 or a more appropriate place to do this, let me know and this can disappear.

 https://bugs.launchpad.net/juju-core/+bug/1246983

 I'm going to look at doing destroy-machine, which looks pretty trivial.

 Cheers,
 Andrew





Re: High Availability command line interface - future plans.

2013-11-06 Thread Kapil Thangavelu
On Thu, Nov 7, 2013 at 2:49 AM, roger peppe rogpe...@gmail.com wrote:

 The current plan is to have a single juju ensure-ha-state
 command. This would create new state server machines if there are fewer
 than the required number (currently 3).

 Taking that as given, I'm wondering what we should do
 in the future, when users require more than a single
 big On switch for HA.

 How does the user:

 a) know about the HA machines so the costs of HA are not hidden, and that
 the implications of particular machine failures are clear?

 b) fix the system when a machine dies?

 c) scale up the system to x thousand nodes?

 d) scale down the system?

For a), we could tag a machine in the status as a state server, and
 hope that the user knows what that means.

 For b) the suggestion is that the user notice that a state server machine
 is non-responsive (as marked in status) and runs destroy-machine on it,
 which will notice that it's a state server machine and automatically
 start another one to replace it. Destroy-machine would refuse to work
 on a state server machine that seems to be alive.

 For c) we could add a flag to ensure-ha-state suggesting a desired number
 of state-server nodes.

 I'm not sure what the suggestion is for d) given that we refuse to
 destroy live state-server machines.

 Although ensure-ha-state might be a fine way to turn
 on HA initially I'm not entirely happy with expanding it to cover
 all the above cases. It seems to me like we're going
 to create a leaky abstraction that purports to be magic (just wave the
 HA wand!) and ends up being limiting, and in some cases confusing
 (Huh? I asked to destroy that machine and there's another one
 just been created)

 I believe that any user that's using HA will need to understand that
 some machines are running state servers, and when things fail, they
 will need to manage those machines individually (for example by calling
 destroy-machine).

 I also think that the solution to c) is limiting, because there is
 actually no such thing as a state server - we have at least three
 independently scalable juju components (the database servers (mongodb),
 the API servers and the environment managers) with different scaling
 characteristics. I believe that in any sufficiently large environment,
 the user will not want to scale all of those at the same rate. For example
 MongoDB will allow at most 12 members of a replica set, but a caching API
 server could potentially usefully scale up much higher than that. We could
 add more flags to ensure-ha-state (e.g. --state-server-count) but then
 we'd lack the capability to suggest which might be grouped with which.

 PROPOSAL

 My suggestion is that we go for a slightly less magic approach
 that provides the user with the tools to manage
 their own high availability setup, adding appropriate automation in time.

 I suggest that we let the user know that machines can run as juju server
 nodes, and provide them with the capability to *choose* which machines
 will run as server nodes and which can host units - that is, what *jobs*
 a machine will run.

 Here's a possible proposal:

 We already have an add-machine command. We'd add a --jobs flag
 to allow the user to specify the jobs that the new machine(s) will
 run. Initially we might have just two jobs, manager and unit
 - the machine can either host service units, or it can manage the
 juju environment (including running the state server database),
 or both. In time we could add finer levels of granularity to allow
 separate scalability of juju server components, without losing backwards
 compatibility.

 If the new machine is marked as a manager, it would run a mongo
 replica set peer. This *would* mean that it would be possible to have
 an even number of mongo peers, with the potential for a split vote
 if the nodes were partitioned evenly, and resulting database stasis.
 I don't *think* that would actually be a severe problem in practice.
 We would make juju status point out the potential problem very clearly,
 just as it should point out the potential problem if one of an existing
 odd-sized replica set dies. The potential problems are the same in both
 cases, and are straightforward for even a relatively naive user to avoid.

 Thus, juju ensure-ha-state is almost equivalent to:

 juju add-machine --jobs manager -n 2

 In my view, this command feels less magic than ensure-ha-state - the
 runtime implication (e.g. cost) of what's going on are easier for the
 user to understand and it requires no new entities in a user's model of
 the system.

 In addition to the new add-machine flag, we'd add a single new command,
 juju machine-jobs, which would allow the user to change the jobs
 associated with an existing machine.  That could be a later addition -
 it's not necessary in the first cut.

 With these primitives, I *think* the responsibilities of the system and
 the model to the user become clearer.  Looking back to the original
 user questions:

Re: Software Defined Networking as a Charm in OpenStack (or others)

2013-11-11 Thread Kapil Thangavelu
Hi Tim,

That sounds great. I'd suggest breaking the charms up into two components:
the hypervisor agent as a subordinate charm deployed against the
nova-compute service, with a relation to the neutron service. Subordinate
services 'live' in the same machine as their 'parent' services and are
scaled out along with the parent service. The other part sounds like it
would be a neutron charm plugin to configure neutron with the midokura
backend.
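
For example (a sketch; the midokura-agent charm name is hypothetical and
the relation endpoints are illustrative):

    juju deploy midokura-agent
    juju add-relation midokura-agent nova-compute
    juju add-relation midokura-agent neutron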

cc'ing our openstack charmers.

cheers,

Kapil


On Thu, Nov 7, 2013 at 12:47 AM, Tim Fall t...@midokura.com wrote:

 Howdy from OpenStack Hong Kong everyone.

 Some of you may be aware that MidoKura is working on JuJu support for our
 SDN virtual network. On that front, I have a question for consideration.

 Since networking isn’t really a straightforward “component” of OpenStack
 that fits nicely inside a box, I was wondering what would be best way to go
 about integrating it for general use. The standard install process involves
 installing agents on each hypervisor, and doing upstream calls through the
 neutron API. Would it be wise to try to include this sort of thing as a
 pure charm and then establish service connections to the relevant
 components, or to try and include it as a networking “option” within other
 charms (like Neutron)?

 Thanks for the thoughts!





Re: Initial ssh key management functionality in trunk

2013-12-13 Thread Kapil Thangavelu
whether it supports gh:username is somewhat dependent on distro version;
afaics the precise version of ssh-import-id does not support it. if we
want to support the large repository of keys and users from gh on precise,
we should just implement the lookup and addition in go: key retrieval from
either lp/gh is a simple http get away.

On Fri, Dec 13, 2013 at 7:33 PM, Ian Booth ian.bo...@canonical.com wrote:


 On 14/12/13 00:22, Aaron Bentley wrote:
  On 13-12-13 03:55 AM, Ian Booth wrote:
  I'm guessing people will mostly use import to pull in ssh keys from
  Launchpad or Github eg juju authorised-keys import lp:wallyworld.
  But for clouds which do not have access to the internet, add is
  useful since it allows a full key to be imported directly.
 
  If lp: URLs are supported, I recommend using lp:~wallyworld for
  consistency with other lp: URLs.
 

 The utility which retrieves the keys is /usr/bin/ssh-import-id, so the
 key id format is determined by that. As well as lp:username, it also
 supports retrieving keys from Github using gh:username.




Re: API Changes to AllWatcher / Environment Tags

2014-02-18 Thread Kapil Thangavelu
how does the api client know the uuid prior to connection? jenv parsing on
the cli where applicable?


On Tue, Feb 18, 2014 at 11:11 AM, Dimiter Naydenov 
dimiter.nayde...@canonical.com wrote:


 On 18.02.2014 17:03, John Meinel wrote:
  Can we make the API /uuid/api ? That makes them all peer paths.

 Sure we can, I like that better in fact!

 Dimiter
 
  John =:-
 
  On Feb 18, 2014 7:43 PM, Dimiter Naydenov
  dimiter.nayde...@canonical.com wrote:
 
  Hi all,
 
  This is an announcement / request for comments for upcoming
  juju-core API changes to the way AllWatcher works and also what
  URIs/paths the API server listens to.
 
  Very soon we'll make a few changes to the way AllWatcher work in
  the API, and also will add a different endpoint for the API
  server.
 
  1) Annotation changes to the environment entity will no longer be
  returned with the environment tag as "environment-uuid", but
  instead with just "environment". This most likely affects the
  GUI/Landscape/CLI that use the API. It's a minor change, and it's
  needed because we are making all API connections specific to a
  single environment (see the related point 2).

  The code that depends on having an environment tag with UUID will
  need to change so that it accepts both "environment" and
  "environment-uuid" as valid. We'll change juju-core to send only
  UUID-less environment tags, most likely before the next release
  (1.18), but not before juju API clients are notified.
 
  2) Right now the API server's URIs for websocket and HTTPS
  connections are plain ("/" for the API and "/charms" for HTTPS,
  soon to have "/log" for access to the consolidated debug logs).
  We'll change the API server to start accepting URIs in the form
  "/uuid/" for the websocket API and "/uuid/charms" for HTTPS
  respectively. The UUID in the URL must match the environment that
  the client wants to connect to, and the client will get a 404 if it
  does not match the one in state. The old URIs will still be usable,
  but deprecated and about to be removed in a future release (likely
  before 14.04).
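
  For example (a sketch; the host and UUID are illustrative):

    wss://apiserver:17070/c35bffe7-9466-4148-a123-1e0e038c3ccd/
    https://apiserver:17070/c35bffe7-9466-4148-a123-1e0e038c3ccd/charms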
 
  Thoughts, comments are welcome!
 
  Regards, juju-core team
 
 





Re: Two new bugs blocking CI

2014-02-19 Thread Kapil Thangavelu
i ran into this as well; there's a bug in the manual provider on trunk
which effectively requires --upload-tools to bootstrap correctly (filed
previously as https://launchpad.net/bugs/1280678)

-k



On Wed, Feb 19, 2014 at 6:56 PM, Curtis Hovey-Canonical 
cur...@canonical.com wrote:

 Juju CI found this regression today:
 upgrade-juju broken in lxc
 https://bugs.launchpad.net/juju-core/+bug/1282224

 We are setting up manual provider tests. We get failures though. We
 are not certain if CI has found a bug, or if the tests need to be
 set up differently.
 manual provider: juju bootstrap fails with '102 bootstrapping
 failed, removing state file: exit status 1'
 https://bugs.launchpad.net/juju-core/+bug/1282235

 --
 Curtis Hovey
 Canonical Cloud Development and Operations
 http://launchpad.net/~sinzui




Re: manual bootstrap in 1.18

2014-03-03 Thread Kapil Thangavelu
On Mon, Mar 3, 2014 at 11:54 AM, William Reade william.re...@canonical.com wrote:

 Hi all

 We've been talking about what we need to do to finalise 1.18, and the
 issue with manual bootstrap addresses in environments.yaml [0] has been
 causing us some problems; in particular, we think it was a mistake to put
 the bootstrap node address in environments.yaml in the first place.


agreed.. i've had to do some nasty workarounds for the manual provider to
deal with this when automating (temp juju_home, bootstrap, and drop the
.jenv from that temp into the actual home environments).
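
A sketch of that workaround (paths and environment name illustrative):

    export JUJU_HOME=$(mktemp -d)
    cp ~/.juju/environments.yaml $JUJU_HOME/
    juju bootstrap -e my-manual-env
    cp $JUJU_HOME/environments/my-manual-env.jenv ~/.juju/environments/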


 and that what we should have done was to allow for `juju bootstrap --name
 foo --to ssh:blah.blah` to create a .jenv without any reference to
 environments.yaml at all.


+1



 However, blocking 1.18 on this change would be most unhelpful to many
 people, so we would like to simply switch off manual provider bootstrap for
 the 1.18 release; this would *not* be a regression, because we didn't have
 manual bootstrap in 1.16.





 If this would make you seriously unhappy, please speak now so we can try
 to figure out a path that minimises global sadness.


ugh.. no.. please keep it. i'm shipping partner integrations and plugins
(digitalocean and lxc) that all use the manual provider.


cheers,
Kapil


Re: What happened to pinned bootstrap

2014-03-30 Thread Kapil Thangavelu
sounds like a great case being made for --upload-tools by default.



On Sun, Mar 30, 2014 at 12:23 AM, John Meinel j...@arbash-meinel.com wrote:

 I thought at one point we were explicitly requiring that we bootstrap
 exact versions of tools (so juju CLI 1.17.2 would only bootstrap a 1.17.2
 set of tools). We at least did "1.17 will only bootstrap 1.17", but
 looking at the code we still always deploy the latest 1.17 (which broke
 all the 1.17 series of CLI because 1.17.7 has an incompatible required
 flag).

 There is an argument that we can't get away with such a thing in a stable
 series anyway, so it isn't going to be a problem. Mostly, though, I had
 thought that we did exact matching, but I can see from the code that is
 clearly not true.

 Would it be very hard to do so? I think William had a very interesting
 idea that CLI bootstrap would always only bootstrap the exact version of
 tools, but could set the AgentVersion to the latest stable minor version,
 so it essentially bootstraps and then immediately upgrades. (With the big
 benefit that the upgrade process to migrate from old versions to new
 versions gets run.)

 This could be a distraction from the other stuff we're working on, but it
 doesn't look that hard to implement, and would avoid some of these
 semi-accidental breaking of old tools.

 John
 =:-





Re: What happened to pinned bootstrap

2014-03-30 Thread Kapil Thangavelu
On Sun, Mar 30, 2014 at 4:17 PM, Ian Booth ian.bo...@canonical.com wrote:


 On 31/03/14 02:11, Kapil Thangavelu wrote:
  sounds like a great case being made for --upload-tools by default.
 

 --upload-tools does happen automatically on bootstrap, but only if no
 matching,
 pre-built tools are found. So, if a 1.19 client were used to bootstrap and
 only
 1.18 tools were available, upload-tools would be done automatically.


On trunk it currently just falls back to the latest tools with a
major/minor version match found in streams afaics (ie a 1.17.8 trunk
client bootstraps a 1.17.7 env), which may or may not be compatible and is
a backwards version movement, though it matches up with the major/minor
match you write of.



 As John points out, tools matching is done based on major.minor version
 number.
 My understanding was that X.Y.Z should be compatible with X.Y.W where W !=
 Z. So
 1.17.6 clients should have been compatible with 1.17.7 tools. If we break
 compatibility, then we should have incremented the minor version number.
 Or, in
 this case, given we didn't want to do that, ensure 1.17.7 tools were
 backwards
 compatible with 1.17.6 clients.

 Note that we used to just match tools on major version. This was correctly
 deemed unworkable, and so the move to major.minor matching was at the time
 considered to be sufficient so long as we coded for such compatibility. I
 think
 the core issue here was just a simple mistake and/or misunderstanding of
 the
 version compatibility policies in place. If the situation has highlighted
 the
 need for a change in policy, that's fine, but we then need to agree that
 we need
 to be stricter on tools matching.


there's still a few issues with simplestreams vs upload-tools even when it
works perfectly: additional steps and maintenance for private clouds, and
zero visibility into the version chosen when upgrading juju (ie. no dry
run). thankfully as of last week private cloud setup is publicly
documented for initial bootstrap.

still, all told, the simplicity of "just use the binary i'm running" is
that it's incredibly transparent and obvious what the result will be and
always works, and i've basically hardwired it to avoid ambiguity. i know
many of us have spent a few hrs helping users debug tool issues. but
perhaps this is just the last step on the road to working reliably and
transparently. in that case, i'd suggest for dev versions we default to a
major/minor/micro match, and stable can keep the major/minor match or do
the same, and never go backwards on versions when bootstrapping. For the
transparency aspect, having a flag/plugin/cli to find what version juju
will pick for a given env on bootstrap & upgrade would be good.

cheers,

kapil





 
 
  On Sun, Mar 30, 2014 at 12:23 AM, John Meinel j...@arbash-meinel.com
 wrote:
 
  I thought at one point we were explicitly requiring that we bootstrap
  exact versions of tools (so juju CLI 1.17.2 would only bootstrap a
 1.17.2
  set of tools). We at least did 1.17 will only bootstrap 1.17, but
 looking
  at the code we still always deploy the latest 1.17 (which broke all the
  1.17 series of CLI because 1.17.7 has an incompatible required flag).
 
  There is an argument that we can't get away with such a thing in a
 stable
  series anyway, so it isn't going to be a problem. Mostly, though, I had
  thought that we did exact matching, but I can see from the code that is
  clearly not true.
 
  Would it be very hard to do so? I think William had a very interesting
  idea that CLI bootstrap would always only bootstrap the exact version of
  tools, but could set the AgentVersion to the latest stable minor
 version,
  so it essentially bootstraps and then immediately upgrades. (With the
 big
  benefit that the upgrade process to migrate from old versions to new
  versions gets run.)
 
  This could be a distraction from the other stuff we're working on, but
 it
  doesn't look that hard to implement, and would avoid some of these
  semi-accidental breaking of old tools.
 
  John
  =:-
 
 
 
 
 
 



Re: What happened to pinned bootstrap

2014-04-18 Thread Kapil Thangavelu
On Fri, Apr 18, 2014 at 11:34 AM, Aaron Bentley aaron.bent...@canonical.com
 wrote:


 On 14-04-18 06:28 AM, William Reade wrote:
  As for automatically upgrading: it's clearly apparent that there's
  a compelling case for not *always* doing so. But the bulk of patch
  releases *will* be server-side bug fixes, and it's not great if we
  generally fail to deliver those to casual users.

 I think that users should upgrade their clients in order to get bug
 fixes.  I think that users who don't upgrade their client are
 expecting to get a lock-down experience, bugs and all.


And how does that work with multi-user environments? Divergence is
inevitable.


 I don't think it's a good idea to default to deploying untested
 software combinations, especially when using a tested software
 combination will give a superior experience (i.e. client-side bug fixes).


We need better client api compatibility on minor versions. The exception
is bootstrap, for which an exact match seems reasonable for *dev*
versions. For stable versions we should be compatible across micro
releases and testing the same; --version still seems good for users who
want exact behavior/reproduction.


 Even though you don't intend to introduce incompatibilities with old
 clients in patchlevel updates, we're human and mistakes happen.  CI
 found lots of compatibility-breaking mistakes in the 1.17 series, and
 I'm sure there were many more that were caught by code review and
 juju-core's unit tests.



The way to be certain we don't introduce such incompatibilities is
 testing with every patchlevel of the client, and that scales an
 already-big workload linearly with the number of patchlevels.


for stable i think we need to go there; client versions across multiple
clients and server versions will diverge across a stable series. Afaik we
don't throw incompatibility flags/errors for older clients using the 1.16
api against 1.18 api servers, but perhaps we should.

To me a followup to that is to distribute binaries (statically linked) for
the client as well, so that people can get a newer client version as
needed for their platform.



 There is value in using the latest patchlevel of the agent code.
 There is risk in using untested client/agent combinations.  It is hard
 to weigh one against the other, and I say we don't have to-- we can
 get the value without introducing the risk by upgrading the client.



  I'm inclined to go with (1) a --version flag for bootstrap,
  accepting only major.minor = client tools and (2) a --dry-run flag
  for upgrade-juju (with the caveat that you'd need to pass its
  result in as --version to the next command, because new tools
  *could* be published in the gap between the dry-run and the, uh,
  wet one). Aaron, even though this doesn't promote your use case to
  the default, I think this'll address your issues; confirm?

 --version would be an improvement, but we have a workaround, so it's
 not /that/ important.  It's really the users I'm thinking of, the ones
 who care about reproducibility.  I'd honestly rather have
 --bootstrap-host, because the lack of it is making our testing of the
 manual provider a bit weird.

 Aaron




Re: [Proposal] Requiring Go 1.2 across the board

2014-05-16 Thread Kapil Thangavelu
fwiw the only interim release still under support is S  (along with lts
releases of L, P, T).  interim releases get 9 months of support, and S
expires in July.

https://wiki.ubuntu.com/Releases


On Fri, May 16, 2014 at 9:15 AM, Gustavo Niemeyer gust...@niemeyer.net wrote:

 On Fri, May 16, 2014 at 4:08 AM, David Cheney
 david.che...@canonical.com wrote:
  This is a proposal that we raise the minimum Go spec from Go 1.1 to Go
 1.2.

 Sounds sensible.

  [1] I am ignoring the intermediate, non LTS series', as there are no
  charms for them, nor do CTS offer support for them. If this is
  unacceptable, anything which applies to Precise wrt. backports, also
  applies to Q, R and S.

 I suppose we do want people using these releases on their own machines
 to be able to use juju, at least as a client. What's the proposed mechanism
 for getting it to them?


 gustavo @ http://niemeyer.net




Re: This is why we should make go get work on trunk

2014-06-06 Thread Kapil Thangavelu
just as it fails for many other projects: etcd, docker, serf, consul,
etc. most larger projects are going to run afoul of trying to do cowboy
dependency management, and either adopt one of the extant tools for
managing deps, with a non-standard install explained to users in the
readme, or else vendor their deps.
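
The kind of two-step install such projects document looks like this (a
sketch; the godeps path and dependencies file name are assumptions based
on juju's own readme):

    go get -d github.com/juju/juju/...
    go get launchpad.net/godeps
    godeps -u $GOPATH/src/github.com/juju/juju/dependencies.tsv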

-k





On Fri, Jun 6, 2014 at 5:05 PM, Nate Finch nate.fi...@canonical.com wrote:

 (Resending since the list didn't like my screenshots)

 https://twitter.com/beyang/statuses/474979306112704512

 https://github.com/juju/juju/issues/43

 Any tooling that exists for go projects is going to default to doing go
 get.  Developers at all familiar with go, are going to use go get.

 People are going to do

 go get github.com/juju/juju

 and it's going to fail to build, and that's a terrible first impression.

 Yes, we can update the README to tell people to run godeps after running
 go get, but many people are not going to read it until after they get the
 error building.

 Here's my suggestion:

 We make go get work on trunk and still use godeps (or whatever) for
 repeatable builds of release branches.

 There should never be a time when tip of trunk and all dependent repos
 don't build.  This is exceedingly easy to avoid.

 Go crypto (which I believe is what is failing above) is one of the few
 repos we rely on that isn't directly controlled by us.  We should fork it
 so we can control when it updates (since the people maintaining it seem to
 not care about making breaking API changes).
  -Nate





Re: Relation addresses

2014-06-17 Thread Kapil Thangavelu
On Tue, Jun 17, 2014 at 12:39 AM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 Hi all,

 I've started looking into fixing
 https://bugs.launchpad.net/juju-core/+bug/1215579. The gist is, we
 currently set private-address in relation settings when a unit joins, but
 never update it.

 I've had some preliminary discussions with John, William and Dimiter, and
 came up with the following proposal:
 https://docs.google.com/a/canonical.com/document/d/1jCNvS7sSMZqtSnup9rDo3b2Wwgs57NimqMorXr9Ir-o/edit

 If you're a charm author, particularly if you work on proxy charms,
 please take a look at this and let me know of any concerns or suggestions.
 I have opened up comments on the doc.

 In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


This seems less than ideal; we already have standard ways of getting this
data and being notified of its change. Introducing non-orthogonal ways of
doing the same lacks value afaics, or at least any rationale in the
document.

the two perspectives of addresses for self vs related also seem to be a
bit muddled: a relation hook is called in notification of a remote unit
change, but now we're introducing one that behaves in the opposite manner
to every other, and we're calling it redundantly for every relation
instead of once for the unit?


  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changs (e.g. for
 proxy charms, probably just don't do anything.)


perhaps there is a misunderstanding of proxies, but things that set their
own address have taken responsibility for it, ie juju only updates
private-address if it provided it; otherwise it's the charm's
responsibility.
fwiw, i think this could use some additional discussion.


Re: Relation addresses

2014-06-17 Thread Kapil Thangavelu
On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com wrote:

 ...


  In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


 This seems less than ideal, we already have standards ways of getting
 this data and being notified of its change. introducing non-orthogonal ways
 of doing the same lacks value afaics or at least any rationale in the
 document.


 So maybe the spec isn't very clear, but the idea is that the new hook is
 called on the unit when *its* private address might have changed, to give
 it a chance to respond. After which, relation-changed is called on all
 the associated units to let them know that the address they need to connect
 to has changed.

 It would be possible to just roll relation-address-changed into config
 changed.


or another unit-level change hook (unit-address-changed); again, the
concerns are that we're changing the semantics of relation hooks to
something fundamentally different for this one case (every other relation
hook is called for a remote unit) and that we're doing potentially
redundant event expansion and hook queuing as opposed to
coalescing/executing the address change directly at the unit scope
level.


 The reason it is called for each associated unit is because the network
 model means we can actually have different addresses (be connected on a
 different network) for different things related to me.

 e.g. I have a postgres charm related to application on network A, but
 related to my-statistics-aggregator on network B. The address it needs to
 give to application should be different than the address given to
 my-statistics-aggregator. And, I believe, the config in pg_hba.conf would
 actually be different.


thanks, that scenario would be useful to have in the spec doc. As long as
we're talking about unimplemented features guiding current bug fixes:
realistically there's quite a lot of software that only knows how to
listen on one address, so for network-scoped relations to be more than
advisory, juju would also need to perform some form of nftables/iptables
mgmt. It feels a bit slippery that we'd be exposing the user to new
concepts and features that are half-finished and not backwards-compatible
for proxy charms as part of an imo critical bug fix.




 the two perspectives of addresses for self vs related also seem to be a
 bit muddled. a relation hook is called in notification of a remote unit
 change, but now we're introducing one that behaves in the opposite manner
 of every other, and we're calling it redundantly for every relation instead
 of once for the unit?


  - The hook will be called when the relation's address has changed, and
 the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 perhaps there is a misunderstanding of proxies, but things that set
 their own address have taken responsibility for it. ie juju only updates
 the private address if it provided it; otherwise it's the charm's
 responsibility.

 fwiw, i think this could use some additional discussion.


 So one of the reasons is that it takes some double handling of values to
 know if the existing value is the one we last set. And there is the
 possibility that it has changed twice, and it was a value we set, but that
 was the address before this one and we just haven't gotten around to
 updating it.
 There was a proposal that we could effectively have two fields: "this is
 the private address you are sharing", which might be empty, and "this is
 the private address we set", which is where we put our data. And we return
 the second value if the first is still nil. Or we set it twice, and we only
 set the first one if it matches what was in the second one, etc.
 All these things are possible, but in the discussions we had it seemed
 simpler to not have to track extra data for marginal benefit. Things which
 are proxy charms know that they are, and they found the right address to
 give in the past, and they simply do the same thing again when told that we
 want to change their address.


there's lots of other implementation complexity in juju that we don't leak;
we just try to present a simple interface to it. we'd be breaking existing
proxy charms if we update the values out from under them. The simple basis
of update being "you touched it, you own it, and if you didn't, it updates"
is simple, explicit, and backwards-compatible imo.
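
to make that rule concrete, a minimal sketch of a proxy charm taking
ownership of its advertised address (the relation name and address here are
assumptions for illustration):

    # once the charm writes this key itself, it owns it; per the rule
    # above, juju should then leave the value alone
    for rel in $(relation-ids website); do
        relation-set -r "$rel" private-address=proxy.example.com
    done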

There's also the question of why the other new hook (relation-created) is
needed or how it relates to this functionality, or why the existing
unit-get private-address needs to be supplemented by address-get.

cheers,

Kapil


Re: Relation addresses

2014-06-18 Thread Kapil Thangavelu
addresses are just keys in a unit relation data bag. relation-get is the
cli tool to retrieve either self or related units' databag key/values (ie
for self's address in the rel: $ relation-get private-address
$JUJU_UNIT_NAME). unit-get is used to retrieve the current iaas properties
of a unit.
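
for reference, the two retrieval paths side by side, from inside a relation
hook (a sketch; note relation-get takes the key first, then the unit):

    # the iaas-provided address for this unit
    unit-get private-address
    # the address this unit is advertising in the current relation's bag
    relation-get private-address $JUJU_UNIT_NAME
    # the address a remote unit is advertising to us
    relation-get private-address $JUJU_REMOTE_UNIT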

my point regarding binding and addresses was more that we're designing bug
fixes around forward-looking features, introducing a bunch of user-facing
stuff without having completely thought through/designed or started
implementing the proposed solution that is the reason we're exposing
additional things to users. instead i'd rather we just fix the bug, and
actually implement the feature when we get around to implementing the
feature. By the time we get around to it (which for this cycle is against a
single provider) we may have a different implementation and
end-user-exposed surface in mind.

moreover the user-facing (charm author) aspects of the changes as
currently in the spec are going to be confusing, ie. relation hooks are
always called for remote units, except for this one case, which is special.

additionally i'd prefer we have a plan for maintaining backwards
compatibility with proxy charms that are already extant.

-k



On Wed, Jun 18, 2014 at 1:44 AM, John Meinel j...@arbash-meinel.com wrote:

 Well, given it is unit-get shouldn't it be more relation-get
 private-address ?
 The issue is *that* is give me the private-address for the other side of
 this relation.
 Which is not quite what you want.
 And while I think it is true that many things won't be able to handle
 binding to more than one ip address (its either everything with 0.0.0.0 or
 one thing), I think we should at least make it *possible* for well formed
 services to behave the way we would like.

 John
 =:-



 On Wed, Jun 18, 2014 at 6:12 AM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 On Tue, Jun 17, 2014 at 11:35 PM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:




 On Tue, Jun 17, 2014 at 9:29 AM, John Meinel j...@arbash-meinel.com
 wrote:

 ...


  In a nutshell:
  - There will be a new hook, relation-address-changed, and a new tool
 called address-get.


 This seems less than ideal; we already have standard ways of getting
 this data and being notified of its change. introducing non-orthogonal
 ways of doing the same thing lacks value afaics, or at least any rationale
 in the document.


 So maybe the spec isn't very clear, but the idea is that the new hook
 is called on the unit when *its* private address might have changed, to
 give it a chance to respond. After which, relation-changed is called on
 all the associated units to let them know that the address they need to
 connect to has changed.

 It would be possible to just roll relation-address-changed into
 config-changed.


 or another unit-level change hook (unit-address-changed); again the
 concerns are that we're changing the semantics of relation hooks to
 something fundamentally different for this one case (every other relation
 hook is called for a remote unit), and that we're doing potentially
 redundant event expansion and hook queuing as opposed to
 coalescing/executing the address change directly at the unit scope
 level.


 The reason it is called for each associated unit is because the network
 model means we can actually have different addresses (be connected on a
 different network) for different things related to me.

 e.g. I have a postgres charm related to application on network A, but
 related to my-statistics-aggregator on network B. The address it needs to
 give to application should be different than the address given to
 my-statistics-aggregator. And, I believe, the config in pg_hba.conf would
 actually be different.


 thanks, that scenario would be useful to have in the spec doc. As long
 as we're talking about unimplemented features guiding current bug fixes,
 realistically there's quite a lot of software that only knows how to listen
 on one address, so for network-scoped relations to be more than advisory
 juju would also need to perform some form of nftables/iptables mgmt. It
 feels a bit slippery that we'd be exposing the user to new concepts and
 features that are half-finished and not backwards-compatible for proxy
 charms as part of an imo critical bug fix.



 the two perspectives of addresses for self vs related also seem to be
 a bit muddled. a relation hook is called in notification of a remote unit
 change, but now we're introducing one that behaves in the opposite manner
 of every other, and we're calling it redundantly for every relation 
 instead
 of once for the unit?


  - The hook will be called when the relation's address has changed,
 and the tool can be called to obtain the address. If the hook is not
 implemented, the private-address setting will be updated. Otherwise it is
 down to you to decide how you want to react to address changes (e.g. for
 proxy charms, probably just don't do anything.)


 perhaps

Re: Relation addresses

2014-06-18 Thread Kapil Thangavelu
On Wed, Jun 18, 2014 at 5:21 PM, William Reade william.re...@canonical.com
wrote:

 On Wed, Jun 18, 2014 at 7:05 PM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:


 addresses are just keys in a unit relation data bag. relation-get is the
 cli tool to retrieve either self or related units' databag key/values (ie
 for self's address in the rel: $ relation-get private-address
 $JUJU_UNIT_NAME). unit-get is used to retrieve the current iaas properties
 of a unit.


 Yes: unit-get retrieves iaas properties; and relation-get retrieves unit
 properties; but self's private-address is *not* put in self's relation data
 bag for the benefit of self; it's for the remote units that *react* to
 changes in that data bag.


It's not a write-only bag and we don't constrain reads. Charms can retrieve
their own relation properties when evaluating a remote relation change;
address is simply a key in that bag. The benefit to the self/local unit,
and to all charm authors, was that the one boilerplate property every
single one of them needed to provide/relation-set was effectively handled
by the framework. Afaics it also makes it easier for us to do some of the
sdn relation binding because we provide that value; else we'd be rewriting
all extant charms to support it.


 Using `relation-get private-address $JUJU_UNIT_NAME` is Doing It Wrong:
 the canonical way to get that data is `unit-get private-address`, and the
 problem is not that we don't magically update the relation data bag: the
 problem is that we don't provide a means to know when the relation's data
 bag should be updated.


it sort of depends why you're retrieving it as to whether it's wrong: if a
unit wants its own address then retrieving it directly from unit-get is
clearly correct; if it wants to reason about the address it's advertising
to related units, then retrieving from the relation is valid. Agreed re the
issue being lack of updates. but adding -r to unit-get seems to further
conflate the relation data bags and the iaas properties associated with a
set of unit addresses. per the original network sketch i'd imagine that in
a multiple-network-and-address world unit-get would grow facilities for
retrieving the list of networks and addresses. as for relation-to-network
or route binding, it also seems to be missing the notion of retrieving the
named network on the rel.. ie either more framework relation properties..
or ideally this could get shuffled into relation-config or exposed more
explicitly.


 Honestly, it's kinda bad that we prepopulate private-address *anyway*.
 It's helpful in the majority of cases, but it's straight-up wrong for proxy
 charms.


it's debatable: given that it would simply be boilerplate for the majority,
it seems reasonable, and it's been easy for proxy charms to explicitly set
what they want the actual value to be. Afaics the only real issue to date
has been that juju isn't updating the property it's populating.


 I don't want to take on the churn caused by reversing that decision; but
 equally I don't want to fix it with magical rewrites of the original
 magic writes.


to me it's a question of ownership.. if the framework owned the value by
providing it, then the framework is responsible for updating the value till
such time as the charm takes ownership by writing a new one.


 my point regarding binding and addresses was more that we're designing
 bug fixes around forward-looking features, introducing a bunch of
 user-facing stuff without having completely thought through/designed or
 started implementing the proposed solution that is the reason we're
 exposing additional things to users. instead i'd rather we just fix the
 bug, and actually implement the feature when we get around to implementing
 the feature. By the time we get around to it (which for this cycle is
 against a single provider) we may have a different implementation and
 end-user-exposed surface in mind.


 That's not impossible; but I don't think it's a good reason to pick an
 approach at odds with our current best judgment of where we're heading.


but we're not heading there yet; at best we're still doing plumbing afaics,
and we're going to expose all this end-user machinery, which we'll have to
support, before we've even started down the path, under the seeming aegis
of providing a bug fix that could be addressed much more simply without
exposing additional concepts and hooks to charm authors. ie. to me the
analogy is: the plumbing to the sink is stopped up, and instead of calling
a plumber to clean the pipes, we're doing a renovation. yes, we may want to
do a renovation in the future, but that's no reason not to just fix the
sink until we start it.



 moreover the user-facing (charm author) aspects of the changes as
 currently in the spec are going to be confusing, ie. relation hooks are
 always called for remote units, except for this one case, which is special.


 I don't agree that this one case is special; relation hooks are called in
 response to changes in a local unit's view

Re: api/cli compatibility between juju minor versions

2014-07-29 Thread Kapil Thangavelu
There's an extant version incompatibility between 1.18 and 1.20 that was
highlighted during the 1.19 dev cycle and is unaddressed till the
unreleased 1.21 (http://pad.lv/1311227). We should treat compatibility
breakage as a blocker for stable releases.

Also, in addition to the api and cli, environment .jenv files need to
adhere to compatibility constraints, as they're the only reasonable
mechanism for a client to connect (credentials, certs, addresses).



On Mon, Jul 28, 2014 at 10:44 AM, Curtis Hovey-Canonical 
cur...@canonical.com wrote:

 Thank you everyone.

 I am glad we are promising compatibility. We are not fully testing
 minor-to-minor compatibility yet. The feature isn't even scheduled to
 start yet, but we have added some tests and a lot of infrastructure to
 support it. I may need to stop work on other things to deliver more of
 these tests now.


 --
 Curtis Hovey
 Canonical Cloud Development and Operations
 http://launchpad.net/~sinzui




Re: Port ranges - restricting opening and closing ranges

2014-08-05 Thread Kapil Thangavelu
imo, no, it's a no-op; the end state is still the same. if it's an error,
then we have partial failure modes to consider against ranges.



On Tue, Aug 5, 2014 at 1:25 PM, David Cheney david.che...@canonical.com
wrote:

 Yes, absolutely.

 On Tue, Aug 5, 2014 at 8:33 PM, Domas Monkus domas.mon...@canonical.com
 wrote:
  A follow-up question: should closing a port that was not previously
  opened result in an error?
 
  Domas
 
 
  On Fri, Jun 27, 2014 at 2:13 PM, Matthew Williams
  matthew.willi...@canonical.com wrote:
 
  +1 on an opened-ports hook tool, I've added it to the task list
 
 
  On Fri, Jun 27, 2014 at 9:41 AM, William Reade
  william.re...@canonical.com wrote:
 
  Agreed. Note, though, that we'll want to give charms a way to know what
  ports they have already opened: I think this is a case where
  look-before-you-leap maybe beats easier-to-ask-forgiveness-than-permission
  (and the consequent requirement that error messages be parsed...). An
  opened-ports hook tool should do the trick.
 
 
  On Thu, Jun 26, 2014 at 9:18 PM, Gustavo Niemeyer 
 gust...@niemeyer.net
  wrote:
 
  +1 to Mark's point. Handling exact matches is much easier, and does
  not prevent a fancier feature later, if there's ever the need.
 
  On Thu, Jun 26, 2014 at 3:38 PM, Mark Ramm-Christensen (Canonical.com)
  mark.ramm-christen...@canonical.com wrote:
   My belief is that as long as the error messages are clear, and it is
   easy to
   close 8000-9000 and then open 8000-8499 and 8600-9000, we are fine.
   Of
   course it is nicer if we can do that automatically for you, but I
   don't
   see why we can't add that later, and I think there is a value in
   keeping a
   port-range as an atomic data-object either way.
  
   --Mark Ramm
  
  
   On Thu, Jun 26, 2014 at 2:11 PM, Domas Monkus
   domas.mon...@canonical.com
   wrote:
  
   Hi,
   Matthew Williams and I are working on support for port ranges in juju.
   There is one question that the networking model document does not answer
   explicitly, and the simplicity (or complexity) of the implementation
   depends greatly on that.

   Should we only allow units to close exactly the same port ranges that
   they have opened? That is, if a unit opens the port range [8000-9000],
   can it later close ports [8500-8600], effectively splitting the
   previously opened port range in half?
  
   Domas
  
  
 
 
 
  --
 
  gustavo @ http://niemeyer.net
 
 




Re: Port ranges - restricting opening and closing ranges

2014-08-06 Thread Kapil Thangavelu
agreed. to be clear .. imo, close-port shouldn't error unless there's a
type mismatch on inputs. ie none of the posited scenarios in this thread
should result in an error.
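
concretely, the semantics being argued for (a sketch; the range syntax is
per the in-progress port-ranges work):

    open-port 8000-9000/tcp    # juju now tracks the range
    close-port 8000-9000/tcp   # closes it
    close-port 8000-9000/tcp   # already closed: a no-op, not an error
    close-port 8080/tcp        # never opened by itself: also a no-op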
-k



On Tue, Aug 5, 2014 at 8:34 PM, Gustavo Niemeyer gust...@niemeyer.net
wrote:

 On Tue, Aug 5, 2014 at 4:18 PM, roger peppe rogpe...@gmail.com wrote:
  close ports 80-110 - error (mismatched port range?)

 I'd expect ports to be closed here, and also on 0-65536.


 gustavo @ http://niemeyer.net




Re: getting rid of all-machines.log

2014-08-15 Thread Kapil Thangavelu
On Fri, Aug 15, 2014 at 7:36 AM, Gabriel Samfira 
gsamf...@cloudbasesolutions.com wrote:

  I think this thread has become a bit lengthy, and we have started to
 lose perspective on what we are actually trying to accomplish.


agreed. afaics that's "how do we support logging on windows".


 Gustavo's idea to save the logs to mongo is awesome and it works across
 platforms, allows immense flexibility and would give people a powerful
 tool. We should definitely aspire to get that done sooner rather than
 later. However, at this point in time it's only an idea, without a clear
 blueprint.


agreed.


 What Nate is proposing *already exists*; it's tangible, proposed as a PR,
 and improves the way juju handles logs.


you'll have to be more specific, there's been a shotgun of statements in
this thread, touching on logstash, aggregation removal, rsyslog removal,
log rotation, deferring to stderr/stdout, 12factor apps, working with ha
state servers, etc.



 The only thing I see missing that might ease people's minds is a
 --log-file option (that works with --debug) to actually enforce the usage
 of a log file. If we omit that option, then juju should just log to
 stdout/stderr. So we get to keep what we have, but also solve a huge PITA
 on Windows or any other platform that has limitations in this respect,
 with a minimal change...


afaics you're referencing your branch (
https://github.com/gabriel-samfira/syslog), which will send logs directly
to a remote aggregating syslog server from windows nodes. with regard to
the default service behavior on windows, where do stdout/stderr go for a
service? isn't the expected behavior for a windows service to use the event
log facility? ie. to be in line with expected windows behavior and extant
juju semantics, shouldn't we have multiple handlers on the log facility:
one to rsyslog, and one to the event log on windows?


 Please keep in mind that it's better to move forward, no matter how small
 the steps, than to just stand still while we figure out the perfect
 logging system. I would much rather have windows support today than 2
 months from now, when someone actually gets around to implementing a *new*
 logging system.

 This should not be a discussion about which logging system is _best_. This
 should be a discussion about which logging system is _better_ and available
 *now*. Otherwise we risk getting caught up in details and losing sight of
 our actual goal.

 Just my 2 cents.

 Regards,
 Gabriel


Thanks,

Kapil




 On 14.08.2014 23:47, Kapil Thangavelu wrote:




 On Thu, Aug 14, 2014 at 2:14 PM, Nate Finch nate.fi...@canonical.com
 wrote:

 I didn't bring up 12 factor, it's irrelevant to my argument.

  I'm trying to make our product simpler and easier to maintain.  That is
 all.  If there's another cross-platform solution that we can use, I'd be
 happy to consider it.  We have to change the code to support Windows.  I'd
 rather the diff be +50 -150  than +75 -0.  I don't know how to state it
 any simpler than that.


 the abrogation of responsibility, which is what i see you advocating for
 in this thread, also makes our product quite a lot less usable imo... Our
 product is a distributed system with emergent behavior. Having a debug log
 is one of the most useful things you can have to observe the system, and
 back in py days it was one of the most used features, and it was just a
 simple dump to the db with querying. It's unfortunate that the ability to
 use it usefully didn't land in core till recently, and did so in broken
 fashion (still requiring internal tag names for filtering).. or lots more
 people would be using it. Gustavo's suggestion of storing the structured
 log data in mongo sounds really good to me. Yes, features are work and
 require code, but that sort of implementation is also cross-platform
 portable. The current implementation and proposed alternatives I find
 somewhat ridiculous in that we basically dump structured data into an
 unstructured format only to reparse it every time we look at it (or ingest
 it into logstash), given that we already have the structured data. Asking
 people to set up one of those distributed log aggregation systems and
 configure it is a huge task, and anyone suggesting punting that to an end
 user or charm developer has never set one up themselves, i suspect. ie. an
 analogy imo: http://xahlee.info/comp/i/fault-tolerance_NoSQL.png As for
 the operations folks who do have them.. we can continue sending messages
 to local syslog and let them collect per their preference.

  -k





 On Thu, Aug 14, 2014 at 1:35 PM, Gustavo Niemeyer 
 gustavo.nieme...@canonical.com wrote:

  On Thu, Aug 14, 2014 at 1:35 PM, Nate Finch nate.fi...@canonical.com
 wrote:
  On Thu, Aug 14, 2014 at 12:24 PM, Gustavo Niemeyer
  gustavo.nieme...@canonical.com wrote:
 
   Why support two things when you can support just one?
 
  Just to be clear, you really mean why support two existing and well
  known things when I can implement a third thing, right

Re: First customer pain point pull request - default-hook

2014-08-18 Thread Kapil Thangavelu
That doc implies a completely different style of authoring, ie. a rewrite
of most extant (95%) charms that use symlinks to a single implementation.
There is a minority that do indeed reconsider all current state from juju
on each hook invocation, in which case this level of optimization is
useful, but it's orthogonal to solving the tedium for current authors that
the pull request is addressing.

-k



On Sun, Aug 17, 2014 at 6:28 AM, John Meinel j...@arbash-meinel.com wrote:

 The main problem with having a hook that just fires instead of the others
 is that you end up firing a hook a whole bunch of times where it
 essentially does nothing because it is still waiting for some other hook
 for it to actually be ready. The something-changed proposal essentially
  collapses the 10 calls to various hooks into a single firing.

 William has thought much more about it, so I'd like him to fill in any
 details I've missed.

 John
 =:-



 On Sun, Aug 17, 2014 at 1:59 PM, Nate Finch nate.fi...@canonical.com
 wrote:

 That's an interesting document, but I feel like it doesn't really explain
 the problem it's trying to solve.

 Why does a single entry point cause a lot of boilerplate (I presume he
 means code boilerplate)? Isn't it just a switch on the name of the hook?
  What does it mean when a new hook is introduced?  Doesn't the charm
 define what hooks it has?  And wouldn't the aforementioned switch mean that
 any new hook (whatever that means) would be ignored the same way it would
 if the hook file wasn't there?

 Can someone explain to me what exactly the problem is?


 On Sun, Aug 17, 2014 at 1:30 AM, John Meinel j...@arbash-meinel.com
 wrote:

 I'd just like to point out that William has thought long and hard about
 this problem, and what semantics make the most sense (does it get called
 for any hook, does it always get called, does it only get called when the
 hook doesn't exist, etc).
  I feel like he had some really good decisions on it:
 https://docs.google.com/a/canonical.com/document/d/1V5G6v6WgSoNupCYcRmkPrFKvbfTGjd4DCUZkyUIpLcs/edit#

 default-hook sounds (IMO) like it may run into problems where we do
 logic based on whether a hook exists or not. There are hooks being designed
 like leader-election and address-changed that might have side effects, and
 default-hook should (probably?) not get called for those.

 I'd just like us to make sure that we actually think about (and
 document) what hooks will fall into this, and make sure that it always
 makes sense to rebuild the world on every possible hook (which is how charm
 writers will be implementing default-hook, IMO).

 John
 =:-



 On Sat, Aug 16, 2014 at 1:02 AM, Aaron Bentley 
 aaron.bent...@canonical.com wrote:

 On 14-08-15 04:36 PM, Nate Finch wrote:
   There's a new hook in town: default-hook.  If it exists and a hook
  gets called that doesn't have a corresponding hook file,
  default-hook gets called with the name of the original hook as its
  first argument (arg[1]).
 
  That's it.

 Nice!  Thank you.

 Aaron












Re: First customer pain point pull request - default-hook

2014-08-19 Thread Kapil Thangavelu
hmm.. there are three distinct threads here.

default-hook - charms that do so symlink 0-100% of their hooks to one
implementation.. in practice everything, sometimes minus install (as the
hook infrastructure needs pkgs).. and most typically implemented via a
dispatch table.

something-changed - completely orthogonal to the default-hook merge
request, and in practice full of exceptions, but useful as an optimization
of event collapsing around charms capable of event coalescence

periodic, async, framework-invoked - metrics and health checks. afaics best
practice around these would be not invoking default-hook, which is a
lifecycle event handler, while these are periodic polls; yes it's possible,
but it conflates different roles.

cheers,
Kapil

also +1 to default-hook using JUJU_HOOK_NAME.
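
for illustration, the dispatch-table style as a single default-hook (a
sketch only; the hook names and helper functions are hypothetical, the hook
name arrives as $1 per the pull request, and JUJU_HOOK_NAME is the
alternative +1'd above):

    #!/bin/sh
    set -e
    # accept either the proposed env var or the PR's positional argument
    hook=${JUJU_HOOK_NAME:-$1}
    case "$hook" in
        install)             apt-get install -y mypkg ;;  # hypothetical pkg
        config-changed)      render_config ;;             # hypothetical helper
        db-relation-changed) configure_db ;;              # hypothetical helper
        *)                   juju-log "no handler for $hook" ;;
    esac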





On Tue, Aug 19, 2014 at 6:10 PM, Gustavo Niemeyer gust...@niemeyer.net
wrote:

 On Tue, Aug 19, 2014 at 6:58 PM, Matthew Williams
 matthew.willi...@canonical.com wrote:
  Something to be mindful of is that we will shortly be implementing a new
  hook for metering (likely called collect-metrics). This hook differs
  slightly from the others in that it will be called periodically (e.g. once
  every hour) with the intention of sending metrics for that unit to the
  state server.
 
  I'm not sure it changes any of the details in this feature or the pr -
 but I
  thought you should be aware of it

  Yeah, that's a good point. I wonder how reliable the use of
 default-hook will be, as it's supposed to run whenever any given hook
 doesn't exist, so charms using that feature should expect _any_ hook
 to be called there, even those they don't know about, or that don't
 even exist yet. The charms that symlink into a single hook seem to be
 symlinking a few things, not everything. It may well turn out that
 default-hook will lead to brittle charms.


 gustavo @ http://niemeyer.net




Re: Thoughts on Dense Container testing

2014-08-27 Thread Kapil Thangavelu
On Wed, Aug 27, 2014 at 9:17 AM, John A Meinel john.mei...@canonical.com
wrote:

 So I played around with manually assigning IP addresses to a machine, and
 using BTRFS to make the LXC instances cheap in terms of disk space.

  I had success bringing up LXC instances that I created directly; I haven't
 gotten to the point where I could use Juju for the intermediate steps. See
 the attached document for the steps I used to set up several addressable
 containers on an instance.

 However, I feel pretty good that Container Addressability would actually
 be pretty straightforward to achieve with the new Networker. We need to
 make APIs for requesting an Address for a new container available, but then
 we can configure all of the routing stuff without too much difficulty.

  Also of note is that because we are using MASQUERADE in order to route
  the traffic, it doesn't require putting the bridge (br0) directly onto
  eth0. So it depends on whether MaaS will play nicely with routing rules:
  if you assign an IP address to a container on a machine, will the routes
  end up routing the traffic there? (I think it will, but we'd have to test
  to confirm it.)

  Ideally, I'd rather do the same thing everywhere, rather than have
  containers routed one way in MaaS and a different way on EC2.

 It may be that in the field we need to not Masquerade, so I'm open to
 feedback here.

 I wrote this up a bit like how I would want to use dense containers for
 scale testing, since you can then deploy actual workloads into each of
 these LXCs if you wanted (and had the horsepower :).

 I succeeded in putting 6 IPs on a single m3.medium and running 5 LXC
 containers and was able to connect to them from another machine running
 inside the VPC.



Thanks for exploring this John. I'm excited about utilizing something like
this for regular scale testing on the cheap (10 instances for 1 hr on spot
markets with 200 containers per test ~ a 2k machine/unit env). Fwiw, i use
ansible to automate the provisioning and machine setup (aws/lxc/btrfs/ebs
volume for btrfs) in ec2 via
https://github.com/kapilt/juju-lxc/blob/master/ec2.yml .. There are some
other scripts in there (add.py) for provisioning the container with
userdata (ie. automating key installation and machine setup) which can
obviate/automate several of these steps. Either ebs or instance ephemeral
disk (ssd) is preferable i think to a loopback dev for perf testing. Re
uniform networking handling, it still feels like we're exploring here; it's
unclear if we have the knowledge base to dictate a common mechanism yet.
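
for reference, the MASQUERADE-based plumbing described above might look
roughly like this per container (a sketch only; the addresses, interfaces
and bridge name are assumptions, not from the attached document):

    # the secondary IP is assigned to the instance at the provider level;
    # on the host, route it onto the container bridge and masquerade egress
    ip route add 10.0.1.10 dev lxcbr0
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE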

cheers,

Kapil


Re: Thoughts on Dense Container testing

2014-08-31 Thread Kapil Thangavelu
i went with a different approach and wrote a charm to do the overlay
network using coreos's newly released rudder
http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/trunk/view/head:/readme.txt


it works across all providers (including manual and the digitalocean and
softlayer manual based plugins) using udp for encapsulation.

cheers,

Kapil

ps. ec2 is/was broken due to archive error across regions today.




On Wed, Aug 27, 2014 at 9:52 AM, Kapil Thangavelu 
kapil.thangav...@canonical.com wrote:





 On Wed, Aug 27, 2014 at 9:17 AM, John A Meinel john.mei...@canonical.com
 wrote:

 So I played around with manually assigning IP addresses to a machine, and
 using BTRFS to make the LXC instances cheap in terms of disk space.

  I had success bringing up LXC instances that I created directly; I
 haven't gotten to the point where I could use Juju for the intermediate
 steps. See the attached document for the steps I used to set up several
 addressable containers on an instance.

 However, I feel pretty good that Container Addressability would actually
 be pretty straightforward to achieve with the new Networker. We need to
 make APIs for requesting an Address for a new container available, but then
 we can configure all of the routing stuff without too much difficulty.

 Also of note is that because we are using MASQUERADE in order to route
 the traffic, it doesn't require putting the bridge (br0) directly onto
 eth0. So it depends on whether MaaS will play nicely with routing rules:
 if you assign an IP address to a container on a machine, will the routes
 end up routing the traffic there? (I think it will, but we'd have to test
 to confirm it.)

 Ideally, I'd rather do the same thing everywhere, rather than have
 containers routed one way in MaaS and a different way on EC2.

 It may be that in the field we need to not Masquerade, so I'm open to
 feedback here.

 I wrote this up a bit like how I would want to use dense containers for
 scale testing, since you can then deploy actual workloads into each of
 these LXCs if you wanted (and had the horsepower :).

 I succeeded in putting 6 IPs on a single m3.medium and running 5 LXC
 containers and was able to connect to them from another machine running
 inside the VPC.



 Thanks for exploring this John. I'm excited about utilizing something like
 this for regular scale testing on the cheap (10 instances for 1 hr on spot
 markets with 200 containers per test ~ a 2k machine/unit env). Fwiw, i use
 ansible to automate the provisioning and machine setup (aws/lxc/btrfs/ebs
 volume for btrfs) in ec2 via
 https://github.com/kapilt/juju-lxc/blob/master/ec2.yml .. There are some
 other scripts in there (add.py) for provisioning the container with
 userdata (ie. automating key installation and machine setup) which can
 obviate/automate several of these steps. Either ebs or instance ephemeral
 disk (ssd) is preferable i think to a loopback dev for perf testing. Re
 uniform networking handling, it still feels like we're exploring here;
 it's unclear if we have the knowledge base to dictate a common mechanism
 yet.

 cheers,

 Kapil





Re: juju local bootstrap from tip

2014-09-01 Thread Kapil Thangavelu
on a similar topic (local on tip), i was debugging pitti's lxc environment
on a utopic host earlier today, and the culmination of several rounds of
debugging and bug filing revealed this one:
Bug #1364069: local provider must transform localhost in apt proxy address
amd64 apport-bug utopic juju-core (Ubuntu):New 
https://launchpad.net/bugs/1364069


On Mon, Sep 1, 2014 at 2:53 AM, John Meinel j...@arbash-meinel.com wrote:

 Right, I think it has to *know* about the target, which is obviously an
 issue here. But we still do *heavily* encourage (probably just outright
 require) cloud-archive:tools for running Juju agents on Precise.

 John
 =:-


 On Mon, Sep 1, 2014 at 10:46 AM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 On Mon, Sep 1, 2014 at 2:04 PM, John Meinel j...@arbash-meinel.com
 wrote:

 I thought --target-release was supposed to just change the priorities
 and prefer a target, not require it.
 We need it because we add cloud-archive:tools but we explicitly pin it
 to lower priority because we don't want to mess up charms that we are
 installing.


 Sorry, I should have included the error message.

 andrew@precise:~$ sudo apt-get --option=Dpkg::Options::=--force-confold
 --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install
 --target-release precise-updates/cloud-tools mongodb-server
 Reading package lists...
 E: The value 'precise-updates/cloud-tools' is invalid for
 APT::Default-Release as such a release is not available in the sources

 (if I apt-add-repository cloud-archive:tools, it's happy)

 John
 =:-


 On Mon, Sep 1, 2014 at 9:57 AM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 On Mon, Sep 1, 2014 at 1:50 PM, John Meinel j...@arbash-meinel.com
 wrote:

 The version of mongodb in Precise is too old (2.2.4?),


 Ah, so it is. The entry in apt-cache I was looking at has main in it,
 but it's actually from the juju/stable PPA.


  we require at least version 2.4.6 (which is in cloud-archive:tools
  and is what we use when bootstrapping Precise instances in the cloud).
  It is recommended that if you are running local on Precise you
  should have cloud-archive:tools in your apt list.


 The problem is, code-wise it's currently a requirement. Should we drop
 --target-release for local? I'm not apt-savvy enough to know what the right
 thing to do here is.


 John
 =:-


 On Mon, Sep 1, 2014 at 9:16 AM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 On Mon, Sep 1, 2014 at 12:53 PM, Andrew Wilkins 
 andrew.wilk...@canonical.com wrote:

 Works fine on my trusty laptop, but I'm also getting a new error
 when I try bootstrapping on precise:

 2014-09-01 04:51:27 INFO juju.utils.apt apt.go:132 Running: [apt-get
 --option=Dpkg::Options::=--force-confold
 --option=Dpkg::options::=--force-unsafe-io --assume-yes --quiet install
 --target-release precise-updates/cloud-tools mongodb-server]
 2014-09-01 04:51:37 ERROR juju.utils.apt apt.go:166 apt-get command
 failed: unexpected error type *errors.errorString
 args: []string{apt-get,
 --option=Dpkg::Options::=--force-confold,
 --option=Dpkg::options::=--force-unsafe-io, --assume-yes, --quiet,
 install, --target-release, precise-updates/cloud-tools,
 mongodb-server}

 2014-09-01 04:51:37 ERROR juju.cmd supercommand.go:323 cannot
 install mongod: apt-get failed: unexpected error type 
 *errors.errorString
 Bootstrap failed, destroying environment

 I'm looking into it at the moment.


  So that error message was unhelpful, and I'll fix that, but the
  underlying issue is that the agent is expecting to install mongodb-server
  from cloud-archive:tools, and the Makefile does not add that repo. I'm
  not sure it *should* add it either. Is there something wrong with the one
  in main? After all, that's where the juju-local package's dependency was
  resolved.

 Cheers,
 Andrew


 On Sat, Aug 30, 2014 at 8:19 PM, Matthew Williams 
 matthew.willi...@canonical.com wrote:

 Hi Folks,

  I thought I'd try looking into the "lxc failing to create machines"
  bug: https://bugs.launchpad.net/juju-core/+bug/1363143

  If I wanted to do a local deploy using tip, I thought it would be as
  simple as doing make install then juju bootstrap - is that correct? It
  doesn't seem to work for me; are there any steps I'm missing?

  Just to be annoying - I've just shut down my precise vm so I can't
  paste the errors I get here. I'll follow up with pastes next week.

 Matty
















psa pls don't use transactions for single doc unobserved atomic updates

2014-09-21 Thread Kapil Thangavelu
It sort of misses the point of why we're doing client-side transactions.
Mongodb has builtin atomic operations on an individual document. We use
client-side txns (multiple orders of magnitude slower) for multi-document
txns *and/or* things we want to observe for watches.
-k


Re: ACTION: if you have admin hardcoded in scripts, this is a warning that things will change soon(ish)

2014-09-28 Thread Kapil Thangavelu
On Fri, Sep 26, 2014 at 12:57 AM, Tim Penhey tim.pen...@canonical.com
wrote:

 Hi folks,

 All environments that exist so far have had an admin user being the
 main (and only) user that was created in the environment, and it was
 used for all client connections.

 Code has landed in master now that makes this initial username
 configurable.  The juju client is yet to take advantage of this, but
 there is work due to be finished off soon that does exactly that.

 Soon, the 'juju bootstrap' command will use the name of the currently
 logged in user as the initial username to create [1].  So, for me

juju bootstrap

 would create the initial user tim (or thumper if I am logged in as
 my other user).

 If the current username is not translatable to a valid username, the
 command will fail and require the user to specify the name of the
 initial user on the command line.

   juju bootstrap --user eric

 After talking with Rick this morning, he mentioned that 'juju
 quickstart' had admin hard coded, and there are bound to be other
 places too.


You're about to break a lot of api-using programs out there without having
given a reason why. Ie. can we support both admin and the logged-in user
for the bootstrapping user? I've moved jujuclient to using jenv files for
connections (as they're the only way to get the username, password, api
servers, and cert info); i'll see about issuing an update for it to use the
login user from there as well.

The issue is that for remote environments, ie servers interacting with
juju, there is no logged-in environment user (and no jenv), and no clear
way to get one without changing user interfaces for the user to input one,
if they even know it (was it thumper or tim)?
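
for the curious, a rough sketch of what jenv-based clients read today (key
names assumed from 1.20-era files; that the layout isn't guaranteed is
exactly the problem):

    # user, password, state-servers and ca-cert live in the jenv; a crude peek:
    sed -n -e 's/^user: //p' -e 's/^password: //p' \
        ~/.juju/environments/myenv.jenv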

cheers,

Kapil


Re: ACTION: if you have admin hardcoded in scripts, this is a warning that things will change soon(ish)

2014-09-29 Thread Kapil Thangavelu
That could be useful, assuming it has the property of not hanging on dead
envs.. etc. at the moment jenv-parsing clients are responsible for manually
verifying connectivity to servers.

Although it doesn't really address the issue for servers interacting with
the api, ie they'll need to have their ui or config modified to take user
info or the output of api-info.

Realistically, as i pointed out in a previous api compatibility discussion,
jenvs are part of the api atm. Fwiw, there are roughly a half-dozen
programs i can think of off the top of my head that use the api (landscape,
community cloud installer, cloudfoundry orchestrator, deployer, gui, and
others). In general we should try to maintain, or version, compatibility
with non-juju-binary clients of the api, unless we're saying the api is
private. And if we need to break compatibility, at minimum we should state
why we're doing so, which is still missing in this thread. ie. why can't
the bootstrap user have an alias to 'admin' for compatibility in this case?

fwiw, looks like the latest jujuclient Environment.connect already uses the
user in the jenv instead of hardcoding admin.

-k






On Mon, Sep 29, 2014 at 5:10 AM, John Meinel j...@arbash-meinel.com wrote:

 I think we want a simpler single-command to get everything you need to
 connect to the API. juju api-info or something like that, which
 essentially gives you the structured .jenv information that you would use
 (cert information, username, password, IP addresses, etc)

 John
 =:-


 On Mon, Sep 29, 2014 at 12:54 AM, Tim Penhey tim.pen...@canonical.com
 wrote:

 On 26/09/14 20:39, Bjorn Tillenius wrote:
  On Fri, Sep 26, 2014 at 04:57:17PM +1200, Tim Penhey wrote:
  Hi folks,
 
  All environments that exist so far have had an admin user being the
  main (and only) user that was created in the environment, and it was
  used for all client connections.
 
  Code has landed in master now that makes this initial username
  configurable.  The juju client is yet to take advantage of this, but
  there is work due to be finished off soon that does exactly that.
 
  Soon, the 'juju bootstrap' command will use the name of the currently
  logged in user as the initial username to create [1].
 
  What's the official way of getting the username in 1.20.8? I see 'juju
  api-endpoints' which returns the state servers, and 'juju
  get-environment' that returns a bunch of information, except the
  username.
 
  The only way I see is to get the .jenv file and parse it, but it feels a
  bit dirty. Is it guaranteed that the location and name of the file won't
  change, and that the format of it won't be changed in a way that breaks
  backwards-compatibility?

 We don't have one yet, but one command that was proposed was
juju whoami

 This would be pretty trivial to implement.  There are a bunch of user
 commands that will be coming on-line soon.

 We won't land the change to change the admin user until there is an easy
 way to determine what that name is.

 The change will not change the user for any existing environment, only
 newly bootstrapped ones.

 Tim









Re: Juju Scale Test

2014-10-02 Thread Kapil Thangavelu
Unfortunately that's not very representative of the current implementation,
as it was based on pyjuju, while the current implementation is in go and
uses mongodb instead of zookeeper.

-kapil

On Thu, Oct 2, 2014 at 9:40 AM, Charles Butler charles.but...@canonical.com
 wrote:

 There's this article which was published a while ago:


 https://maas.ubuntu.com/2012/06/04/scaling-a-2000-node-hadoop-cluster-on-ec2ubuntu-with-juju/

 Hope this helps,

 Charles

 On Thu, Oct 2, 2014 at 9:02 AM, Mike Sam mikesam...@gmail.com wrote:

 I was wondering what is the largest vm count that has been provisioned
 and deployed with juju in testing so far? In other words, what is the
 demonstrated scale that juju has proven to handle well so far?

 Thanks,
 Mike








Re: mongodb log file size

2014-10-20 Thread Kapil Thangavelu
That should be fine; the defaults here come from mongodb's default
semantics. we've tweaked them minorly, but for the most part they're per
upstream recommendations. The amount of data juju uses is miniscule
(1-2mb).. till juju 1.21, where we store charms in mongodb.
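
(for reference, the knob in question is mongod's --oplogSize flag, in
megabytes; a sketch only, since juju generates its own mongod service
configuration and the exact invocation here is an assumption:)

    mongod --replSet juju --oplogSize 128 --dbpath /var/lib/juju/db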

cheers,

Kapil


On Sun, Oct 19, 2014 at 2:35 PM, Vasiliy Tolstov v.tols...@selfip.ru
wrote:

 Hi again =). I found a discussion about the mongodb repl log size saying
 that the minimum is 512Mb and the maximum 1024Mb (the max may be
 wrong...). Is it possible to minimize it to 128Mb?
 I understand the negative sides of this, but right now i deploy a single
 machine via juju and don't want to keep these logs big. Is it possible to
 minimize it now, and set a new size later if i need a big cluster?

 --
 Vasiliy Tolstov,
 e-mail: v.tols...@selfip.ru
 jabber: v...@selfip.ru




Re: mongodb log file size

2014-10-20 Thread Kapil Thangavelu
we're trying to make providers easier to write. currently we use provider
object storage, but a number of low-cost providers don't have it, and it
complicates writing a provider (ie. not all openstack installs have object
storage either); obviating the need by making juju handle its own very
limited object-storage needs makes things a bit simpler. most charms are
fairly small, minus those that bundle binaries.


On Mon, Oct 20, 2014 at 4:53 PM, Vasiliy Tolstov v.tols...@selfip.ru
wrote:

 2014-10-20 21:16 GMT+04:00 Kapil Thangavelu 
 kapil.thangav...@canonical.com:
  That should be fine, the dictates here are from mongodb default
 semantics,
  we've tweaked them minorly but for the most part there per upstream
  recommends. The amount of data juju uses is miniscule (1-2mb).. till juju
  1.21 where we store charms in mongodb.


  Hmm, why do you need to store charms in mongodb =( ?

 --
 Vasiliy Tolstov,
 e-mail: v.tols...@selfip.ru
 jabber: v...@selfip.ru



Re: Azure US West having problems

2014-10-20 Thread Kapil Thangavelu
possibly related to http://azure.microsoft.com/en-us/status/

Starting at approximately 19:00 on the 18th Oct, 2014 UTC a limited subset
of customers may experience intermittent errors when attempting to access
Azure Virtual Networks. Engineers are continuing with their manual recovery
and have validated significant improvement as a result of their action
plan. Customers may begin to see improvements to availability of their
Virtual Networks. The next update will be provided in 2 hours or as events
warrant.

-k


On Mon, Oct 20, 2014 at 5:05 PM, Nate Finch nate.fi...@canonical.com
wrote:

 This is a pretty major problem; it *seems* like it must be Azure's
 fault, but it would be good to get more information about it. If anyone
 cares to investigate, here's the bug:

 https://launchpad.net/bugs/1383310






Re: supplement open-port/close-port with ensure-these-and-only-these-ports?

2014-11-01 Thread Kapil Thangavelu
On Sat, Nov 1, 2014 at 12:58 PM, John Meinel j...@arbash-meinel.com wrote:

 I believe there is already opened-ports to tell you what ports Juju is
 currently tracking.


That's cool and news to me; it looks like it landed in trunk earlier, on
october 2nd (ie 1.21), and hasn't made release notes or docs yet.
Especially for charm environment changes we really need corresponding docs,
as charm env changes are not easily discoverable otherwise. Really great to
see that land, as it's been a common issue for charms and one that
previously forced them into state management.
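
for charm authors, this finally allows the stateful reconcile without
tracking ports separately, roughly (a sketch; it assumes opened-ports
prints one entry per line in the same port/protocol form open-port
accepts):

    desired="80/tcp 8080/tcp"
    # close anything juju is tracking that is no longer wanted
    for p in $(opened-ports); do
        case " $desired " in
            *" $p "*) ;;             # still wanted, keep it
            *) close-port "$p" ;;    # stale, close it
        esac
    done
    # open anything missing (a no-op if already open)
    for p in $desired; do
        open-port "$p"
    done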

cheers,

Kapil


 As for the rest, open-port only takes a single port (or range), which
 means that if you wanted only 80 and 8080 open, you would need a different
 syntax. (something that lets you specify multiple ports/ranges to be
 opened).

 I can see a point to it, but we do already have opened-ports if you're
 looking for the behavior you want.

 John
 =:-

 On Sat, Nov 1, 2014 at 6:13 PM, Aaron Bentley aaron.bent...@canonical.com
  wrote:

 Hi all,

 I take a stateful approach to writing charms.  That is, my charms
 determine what the target state is, then take whatever actions are
 needed to get the unit into that state, paying as little attention to
 the current state as is possible.

 open-port / close-port require knowledge of the current state; if I
 know that I want only port 314 open, then I need to know whether any
 other ports are open and close them.  In most cases, a charm only
 opens specific ports, so I know which ports to close.

 Right now, I'm writing an update to the Apache2 charm that would allow
 the user to specify which ports to serve http on, which means that
 when a user changes the port, I may need to close the old port and
 open the new one.  If I want to use close-port / open-port, I need to
 track what ports are open.  But juju already knows this, so I
 shouldn't have to track it separately-- that violates DRY.

 The smallest change would be to provide a way to list the open ports,
 so that charms can close any open ports they no longer want open.  But
 that leaves a bunch of work for a stateful charm author.  What they
 actually want is a command that ensures specific ports are open and
 closes all others.

 ensure-these-and-only-these-ports was the first thing I thought of,
 but we could extend open-port instead.  open-port would need to accept
 multiple ports, not just ranges, and it would need to accept a
 --close-all-others flag that would close all open ports not listed.

 Does that seem like a sensible change?

 Aaron








Re: Feature Request: show running relations in 'juju status'

2014-11-17 Thread Kapil Thangavelu
On Mon, Nov 17, 2014 at 11:23 PM, Ian Booth ian.bo...@canonical.com wrote:



 On 17/11/14 15:47, Stuart Bishop wrote:
  On 17 November 2014 07:13, Ian Booth ian.bo...@canonical.com wrote:
 
   The new Juju Status work planned for this cycle will hopefully address
   the main concern about knowing when a deployed charm is fully ready to
   do the work for which it was installed, ie the current situation whereby
   a unit is marked as Started but is not ready. Charms are able to mark
   themselves as Busy and also set a status message to indicate they are
   churning and not ready to run. Charms can also indicate that they are
   Blocked and require manual intervention (eg a service needs a database
   and no relation has been established yet to provide the database), or
   Waiting (the database on which the service relies is busy but will
   resolve automatically when the database is available again).
 
   As long as the 'ready' state is managed by juju and not the unit, I'll
   stand happily corrected :-) The focus I'd seen had been on the unit
   declaring its own status, and there is no way for a unit to know that it
   is ready, because it has no way of knowing that, for example, there are
   another 10 peer units being provisioned that will need to be related.
 

 You are correct that the initial scope of work is more about the unit, and
 less
 about the deployment as a whole. There are plans though to address the
 issue.
 We're throwing around the concept of a goal state, which is conceptually
 akin
 to looking forward in time to be able to inform units what relations they
 will
 expect to participate in and what units will be deployed. There'd likely be
 something like a relation-goals hook tool (to complement relation-list and
 relation-ids), as well as hook(s) for when the goal state changes. There's
 ongoing work in the uniter by William to get the architecture right so
 this work
 can be considered. There's still a lot of value in the current Juju Status
 work,
 but as you point out, it's not the full story.


For clusters it's not a question of futures but of being informed of the
known unit count to establish quorum, i.e. 1 to 3 or n+1. Leader election
helps, but actually knowing the unit count is critical to establishing a
clear state without throwing away data (i.e. the race between a peer
knowing the quorum size and knowing the leader), as ad hoc leader election
has to throw away data from non-leaders that may already be serving
clients due to lack of quorum knowledge.
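
To make the race concrete, here is a minimal sketch of the workaround
charms use today in a peer relation hook. relation-list, config-get and
juju-log are real hook tools; the "cluster-count" config option is
hypothetical and has to be supplied by the operator:

    import subprocess

    def hook_tool(*cmd):
        return subprocess.check_output(cmd).decode().strip()

    # Operator-declared cluster size; the charm cannot discover this.
    expected = int(hook_tool("config-get", "cluster-count"))
    # Peers visible so far, plus this unit; more may still be provisioning.
    seen = len(hook_tool("relation-list").split()) + 1

    if seen < expected // 2 + 1:
        # Don't elect a leader yet: doing so risks discarding data from
        # units that are already serving clients.
        hook_tool("juju-log", "waiting for quorum: %d/%d" % (seen, expected))
    else:
        pass  # quorum reached; safe to bootstrap/join the cluster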


 
  So although there are not currently plans to show the number of running
 hooks in
  the first phase of this work, mechanisms are being provided to allow
 charm
  authors to better communicate the state of their charms to give much
 clearer and
  more accurate feedback as to 1) when a charm is fully ready to do work,
 2) if a
  charm is not ready to do work, why not.
 
  A charm declaring itself ready is part of the picture. What is more
  important is when the system is ready. You don't want to start pumping
  requests through your 'ready' webserver, only to have it torn away as
  a new block device is mounted on your database when its storage-joined
  hook is invoked and returned to 'ready' state again once the
  storage-changed hook has completed successfully.
 

 Also being thrown around is the concept of a new agent-state called Idle,
 which would be used when there are no pending hooks to run. There are
 plans as
 well for the next phase of the Juju status work to allow collaborating
 services
 to notify when they are busy, and mark relationships as down. So if the
 database
 had its storage-attached hook invoked, it would mark itself as Busy, mark
 its
 relation to the webserver as Down, thus allowing the webserver to put
 itself
 into Waiting. Or, if we are talking about the initial install phase, the
 database would not initially mark itself as Running until its declared
 storage
 requirements were met, so the webserver would go from Installing to
 Waiting and
 then to Running once the database became Running.



Status per the future implementation helps, as does explicitly marking
units, but the pending cluster count is a missing and important property:
properly establishing quorum in a peer relation from one unit to n can
only be resolved by knowing the recorded unit count for a service.

cheers,

Kapil
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: local provider

2014-12-12 Thread Kapil Thangavelu
On Fri, Dec 12, 2014 at 11:26 AM, Nate Finch nate.fi...@canonical.com
wrote:

 It seems like a lot of people get confused by Juju, because it is
 different than the tools they know. They want to deploy stuff with Juju,
 and so they get a machine from AWS/Digital Ocean/whatever, ssh into the
 machine, install juju, and run the local provider and then wonder why
 they can't access their services from outside the machine.

 I think this stems from two things - one is that people are used to
 chef/puppet/etc where you ssh into the machine and then run the install
 there (note: I know nothing about these tools, so may be mis-characterizing
 them).  Whereas with Juju, you are perfectly able to orchestrate an install
 on a remote machine in the cloud from your laptop.

 The other is the local provider.  The intent of the local provider is to
 give users a way to easily try out Juju without needing to spin up real
 machines in the cloud. It's also very useful for testing out charms during
 charm development and/or testing service deployments.  It's not very useful
 for real production environments... and yet some people still try to
 shoehorn it into that job.

 I think one easy thing we could do to better indicate the purpose of the
 local provider is to simply rename it.  If we named it the demo provider,
 it would be much more clear to users that it is not expected to be used for
 deploying a production environment. This could be as easy as aliasing
 local to demo and making the default environments.yaml print out with
 the local provider renamed to demo.  (feel free to s/demo/testing/ or any
 other not ready for production word)

 What do you think?


No, that's a bad idea, imo.

First, as you say, it's many people's first experience with Juju, and its
deployment model fits some folks' production needs very well (i.e. "I have
a big machine in the corner and Juju can deploy workloads on it"). I think
the issue is primarily one of implementation, and the mindset among
developers/implementers that we don't support it.

Most of the reasons why it's different at an implementation level disappear
with LXD, at which point we should support it for both dev and prod.

-k
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


separating out golang juju client from

2014-12-19 Thread Kapil Thangavelu
One of the issues with having the client in tree is that client usage falls
under the AGPL. We want the client to be used widely, under a more
permissive license. I've already had contributions to other projects NACKed
due to the license on our libraries. I'd like to see the client moved to a
separate repo so that's possible. Thoughts?

cheers,
Kapil
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: separating out golang juju client from

2014-12-19 Thread Kapil Thangavelu
On Fri, Dec 19, 2014 at 7:02 AM, Nate Finch nate.fi...@canonical.com
wrote:

 While I am generally for using more permissive licenses, I'm not sure how
 useful that might be... most significant changes require modifications to
 both the client and the server, or at least to libraries used by both.


That sort of misses the point of building apps that use the Juju APIs. Yes,
the two packages need to be updated together for new changes, same as
today.


 There's not that much code under cmd/juju compared to the whole rest of
 the repo.


Again, it's not about that code; it's about building other applications and
facilitating integrations.


cheers,
Kapil


 On Fri, Dec 19, 2014 at 6:03 AM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:

 One of the issues with having the client in tree is that client usage
 falls under the AGPL. We want the client to be used widely, under a more
 permissive license. I've already had contributions to other projects
 NACKed due to the license on our libraries. I'd like to see the client
 moved to a separate repo so that's possible. Thoughts?

 cheers,
 Kapil





-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New charms client

2015-01-26 Thread Kapil Thangavelu
The root issue ended up being a little different: my client wasn't
explicitly passing API facade versions, which meant I was getting version 0
of each facade, per the Go default int value. All of which worked fine
except when the facade didn't have a version 0, as is the case for
Annotations, Charms, and HA on trunk.
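
For anyone hitting the same thing, a minimal sketch of the fix in the
jujuclient style quoted below; pinning "Version" to a value the server
advertised at login (Charms reports [1]) is my reading of the RPC request
format:

    # Hedged sketch: pin the facade version rather than relying on the
    # implicit zero value. Version 1 is what the login data below reports
    # for the Charms facade.
    def list_charms(self, names):
        return self.rpc._rpc({
            "Type": "Charms",
            "Version": 1,
            "Request": "List",
            "Params": {"Names": names}})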

thanks
Kapil

On Sun, Jan 25, 2015 at 4:18 PM, roger peppe rogpe...@gmail.com wrote:

 On 25 January 2015 at 16:53, Kapil Thangavelu
 kapil.thangav...@canonical.com wrote:
  odd, i don't show any deltas (godeps/install and output below).. and i'm
  only getting it on a few of the facades (charms and annotations) not all.
  i'll play around with it a bit more in a bit. good to know about the
  functional api tests ( i was wondering). thanks for the tips.
 
  kapil@realms-slice:~/src/github.com/juju/juju$ godeps -u
 dependencies.tsv
  kapil@realms-slice:~/src/github.com/juju/juju$ godeps -u
 dependencies.tsv
  kapil@realms-slice:~/src/github.com/juju/juju$ go install -v
 github.com/juju/juju
  kapil@realms-slice:~/src/github.com/juju/juju$

 Note that that last line only installs the top level juju Go package,
 not the juju command.

 Better would be go install github.com/juju/juju/... (or within
 the juju project directory, just go install ./...) to install
 everything.

 go install github.com/juju/juju/cmd/juju would install
 the command only.

   cheers,
 rog.

 
 
 
  On Sun, Jan 25, 2015 at 10:04 AM, Andrew Wilkins
  andrew.wilk...@canonical.com wrote:
 
  On Fri, Jan 23, 2015 at 11:32 PM, Kapil Thangavelu
  kapil.thangav...@canonical.com wrote:
 
  I'm having some problems actually using this api, is it enabled? or
 does
  it need a feature flag?
 
  return self.rpc._rpc({
      "Type": "Charms",
      "Request": "List",
      "Params": {"Names": names}})
 
  gets
 
  jujuclient.EnvError: Env Error - Details:
   {   u'Error': u'unknown object type Charms',
  u'ErrorCode': u'not implemented',
  u'RequestId': 1,
  u'Response': {   }}
 
  same code works for every other facade, using a trunk checkout. I do
 see
  the Charms facade in the login data, ie.
 
 
  Did you run godeps -u dependencies.tsv? I was seeing weird behaviour
  similar to this (different facade tho), updated dependencies and it went
  away.
 
  Cheers,
  Andrew
 
 
  {u'EnvironTag': u'environment-fb933e3d-5293-486a-8ff9-7ac565271c35',
   u'Facades': [{u'Name': u'Action', u'Versions': [0]},
{u'Name': u'Agent', u'Versions': [0, 1]},
{u'Name': u'AllWatcher', u'Versions': [0]},
{u'Name': u'Annotations', u'Versions': [1]},
{u'Name': u'Backups', u'Versions': [0]},
{u'Name': u'CharmRevisionUpdater', u'Versions': [0]},
{u'Name': u'Charms', u'Versions': [1]},
 
 
  On Mon, Jan 19, 2015 at 1:59 AM, Anastasia Macmood
  anastasia.macm...@canonical.com wrote:
 
  Hi
 
  I have just landed a new charms client.
 
  This client can list charms.
 
  The intention is to have a dedicated charms client for 1.23,
 deprecating
  the old client. However, at the moment the only method ported from the
  old client is CharmInfo.
 
  Sincerely Yours,
 
  Anastasia
 
 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: New charms client

2015-01-25 Thread Kapil Thangavelu
odd, i don't show any deltas (godeps/install and output below).. and i'm
only getting it on a few of the facades (charms and annotations) not all.
 i'll play around with it a bit more in a bit. good to know about the
functional api tests ( i was wondering). thanks for the tips.

kapil@realms-slice:~/src/github.com/juju/juju$ godeps -u dependencies.tsv
kapil@realms-slice:~/src/github.com/juju/juju$ godeps -u dependencies.tsv
kapil@realms-slice:~/src/github.com/juju/juju$ go install -v
github.com/juju/juju
kapil@realms-slice:~/src/github.com/juju/juju$



On Sun, Jan 25, 2015 at 10:04 AM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 On Fri, Jan 23, 2015 at 11:32 PM, Kapil Thangavelu 
 kapil.thangav...@canonical.com wrote:

 I'm having some problems actually using this api, is it enabled? or does
 it need a feature flag?

 return self.rpc._rpc({
     "Type": "Charms",
     "Request": "List",
     "Params": {"Names": names}})

 gets

 jujuclient.EnvError: Env Error - Details:
  {   u'Error': u'unknown object type Charms',
 u'ErrorCode': u'not implemented',
 u'RequestId': 1,
 u'Response': {   }}

 same code works for every other facade, using a trunk checkout. I do see
 the Charms facade in the login data, ie.


 Did you run godeps -u dependencies.tsv? I was seeing weird behaviour
 similar to this (different facade tho), updated dependencies and it went
 away.

 Cheers,
 Andrew


 {u'EnvironTag': u'environment-fb933e3d-5293-486a-8ff9-7ac565271c35',
  u'Facades': [{u'Name': u'Action', u'Versions': [0]},
   {u'Name': u'Agent', u'Versions': [0, 1]},
   {u'Name': u'AllWatcher', u'Versions': [0]},
   {u'Name': u'Annotations', u'Versions': [1]},
   {u'Name': u'Backups', u'Versions': [0]},
   {u'Name': u'CharmRevisionUpdater', u'Versions': [0]},
   {u'Name': u'Charms', u'Versions': [1]},


 On Mon, Jan 19, 2015 at 1:59 AM, Anastasia Macmood 
 anastasia.macm...@canonical.com wrote:

 Hi

 I have just landed a new charms client.

 This client can list charms.

 The intention is to have a dedicated charms client for 1.23, deprecating
 the old client. However, at the moment the only method ported from the old
 client is CharmInfo.

 Sincerely Yours,

 Anastasia




-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: A cautionary tale of names

2015-01-12 Thread Kapil Thangavelu
On Mon, Jan 12, 2015 at 10:03 AM, roger peppe roger.pe...@canonical.com
wrote:

 On 12 January 2015 at 15:43, Gustavo Niemeyer gust...@niemeyer.net
 wrote:
  A few quick notes:
 
  - Having an understandable name in a resource is useful

 It's also good to be clear about what a name actually signifies.

 Currently (unless things have changed since I last looked)
 it's entirely possible to start an environment with one name,
 then send the resulting .jenv file to someone else, who can
 store it under some other name and still access the environment
 under the different name.

 Local aliases/names are nice - no worry about global name space
 clashes.

 But I agree that meaningful resource names are useful too.

 One possibility is that the UUID could incorporate the original
 environment name (I guess it would technically no longer be
 a UUID then, but UUID standards are overrated IMHO).

 Another possibility is to provide some other way to give
 a name at environment bootstrap time (e.g. a config option)
 that would be associated with resources created by the environment.


This is effectively what happens, albeit implicitly: the name is associated
at bootstrap and is used by the state server when provisioning resources.
I.e. in this context (AWS) we don't actually use the native tag facilities
(part of why all instances allocated by Juju are missing names in the AWS
console), but instead use a security group for implicit tagging. The
secgroup name corresponds to this initial bootstrap name; other users can
name the env how they want, as further provisioning is done by the state
servers, which will continue to use the initial bootstrap name. There are
still niggles here around destroy-environment --force if it's client-side.
The secgroup name in AWS can be up to 255 chars. It would be good if we
used tags better for AWS resources (instances, drives, etc.), as that can
help usability (the AWS console) and cost accounting (it's very common to
roll up charges by tags for chargeback).
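
For illustration, a minimal boto3 sketch of the kind of native tagging
being suggested; the instance ID and tag values are hypothetical:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical values: give a Juju-allocated instance a Name tag
    # (shown in the AWS console) plus a tag chargeback tooling can roll
    # up on.
    ec2.create_tags(
        Resources=["i-0123456789abcdef0"],
        Tags=[
            {"Key": "Name", "Value": "prod-machine-1"},
            {"Key": "juju-environment", "Value": "prod"},
        ],
    )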

-k
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: any ways to update unit public address ?

2015-05-26 Thread Kapil Thangavelu
On Tue, May 26, 2015 at 4:36 PM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 On Tue, May 26, 2015 at 11:45 PM, Vasiliy Tolstov v.tols...@selfip.ru
 wrote:

 Hi! Users can sometimes change their server IP address; is it
 possible to change a unit's public address?
 Or is the only way to edit the mongodb database on the state server?


 Hi Vasiliy,

 Do you mean in relation settings? The charm's config-changed hook
 will be invoked when the machine addresses change. You can use this
 hook to update a unit's relation settings.

 There was a long thread about automatically updating the address,
 but we didn't go there because it would break proxy charms; charms
 that manage remote services, presenting addresses for remote machines.


Relation settings only propagate the private IP, which Juju sets, and imo
it's a bug: Juju should update the value and invoke the relation-changed
hook if the address changes and the setting still holds the previously set
value (which thus keeps proxy charms working). At least that was my summary
going in and out of that monster thread. At the moment Juju doesn't update
the addresses it set, but it will invoke config-changed when addresses
change, afaicr, though effectively zero charms are ready to handle that.

Re the public address: unit-get public-address makes it available as info
to the charm, but there isn't a mechanism per se to modify the address or
be notified about changes to it.

You can manually modify the relation settings that convey private/public
address info with juju run, a la
https://gist.github.com/kapilt/a61efcb4eaef9e685397
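
Roughly, that approach looks like the following sketch (I haven't
reproduced the gist; the unit name, relation name and address are
placeholders). Each command runs in the unit's hook context, so
relation-ids and relation-set -r behave as they would inside a hook:

    import subprocess

    def juju_run(unit, cmd):
        return subprocess.check_output(["juju", "run", "--unit", unit, cmd])

    unit = "haproxy/0"  # placeholder unit
    rel_ids = juju_run(unit, "relation-ids reverseproxy").decode().split()
    for rel_id in rel_ids:
        juju_run(unit, "relation-set -r %s private-address=10.0.1.42"
                 % rel_id)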

It might be helpful if you could clarify the context in which the public
address changed and how it's causing a problem, with a concrete example.

cheers,

Kapil



 Cheers,
 Andrew


 --
 Vasiliy Tolstov,
 e-mail: v.tols...@selfip.ru



-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Upcoming change in 1.24: tags in EC2

2015-05-25 Thread Kapil Thangavelu
On Thu, May 21, 2015 at 10:26 PM, Andrew Wilkins 
andrew.wilk...@canonical.com wrote:

 Hi all,

 Just a small announcement, in case anyone cares. In the EC2 provider, from
 1.24, we will start tagging instances and volumes with their Juju-internal
 names and the Juju environment UUID. Instances, for example, will have a
 name of machine-0, machine-1, etc., corresponding to the ID in juju
 status. There is not currently an upgrade step to tag existing resources,
 so only newly created resources will be tagged.

 The environment UUID can be used to identify all instances and volumes
 that are part of an environment. This could be used to for billing
 purposes; to charge the infrastructure costs for a Juju environment to a
 particular user/organisation.

 We will be looking at doing the same for OpenStack for 1.24 also.

 Cheers,
 Andrew


That's super awesome, and very helpful for real-world usage. A few
suggestions: for users with multiple environments, seeing a bunch of
machine-0 entries in the UI is rather confusing; I'd suggest prefixing with
the env name. Potentially even more useful is naming the machines not by
their pet names but by their cattle names (workload name), i.e. name each
machine after the primary unit that caused it to exist, or the first unit
assigned to the machine (minus state servers).
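
For example, a boto3 sketch of the billing rollup described above; the tag
key name and environment UUID are assumptions, so check which keys 1.24
actually writes before relying on this:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Assumed tag key and a placeholder environment UUID.
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "tag:juju-env-uuid",
                  "Values": ["fb933e3d-5293-486a-8ff9-7ac565271c35"]}])
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                print(instance["InstanceId"], instance["InstanceType"])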

cheers,
Kapil
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: "environment" vs "model" in the code

2016-01-18 Thread Kapil Thangavelu
Out of curiosity, is there any public explanation of the reason for the
change? Environments map fairly naturally to various service topology
stages, i.e. my prod, qa, and dev environments, while model is a rather
opaque term that doesn't convey much.

On Thu, Jan 14, 2016 at 7:16 PM, Menno Smits 
wrote:

> Hi all,
>
> We've committed to renaming "environment" to "model" in Juju's CLI and API
> but what do we want to do in Juju's internals? I'm currently adding
> significant new model/environment related functionality to the state
> package which includes adding new database collections, structs and
> functions which could include either "env/environment" or "model" in their
> names.
>
> One approach could be that we only use the word "model" at the edges - the
> CLI, API and GUI - and continue to use "environment" internally. That way
> the naming of environment related things in most of Juju's code and
> database stays consistent.
>
> Another approach is to use "model" for new work[1] with a hope that it'll
> eventually become the dominant name for the concept. This will however
> result in a long period of widespread inconsistency, and it's unlikely
> that we'll ever completely get rid of all uses of "environment".
>
> I think we need to arrive at some sort of consensus on the way to tackle
> this. FWIW, I prefer the former approach. Having good, consistent names for
> things is important[2].
>
> Thoughts?
>
> - Menno
>
> [1] - but what defines "new" and what do we do when making significant
> changes to existing code?
> [2] - http://martinfowler.com/bliki/TwoHardThings.html
>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: "environment" vs "model" in the code

2016-01-20 Thread Kapil Thangavelu
On Mon, Jan 18, 2016 at 8:24 PM, Rick Harding <rick.hard...@canonical.com>
wrote:

> No, there's not been a public note yet. It's work going into the 2.0
> updates currently.
>
> The gist of the reason is that as support for things such as networking,
> storage, and workloads expands out, the idea is that Juju is doing more to
> model your infrastructure and workloads vs an environment.
>

Networking and storage are very much part of an application's deployment
environment, i.e. they differ between dev, stage, and prod. I'm not sure
what workloads are (renamed or built on actions, I presume).


> So far it's helped with one of the issues Juju has had, in that it takes
> time to explain what it's actually doing before folks 'get it'.
>
> Starting from the point of 'take what you have running and let Juju model
> it' seems to be clicking with new folks more.
>

So it's a verb and an instance/noun; does it also apply to templates
(previously known as bundles)?

I'm curious to try out the re-branding on some guinea pigs. Re what's
commonly running to model: autoscale groups, ELBs, multiple networks,
security groups, IAM roles, RDS.

thanks,

Kapil



>
> On Mon, Jan 18, 2016 at 9:15 AM Kapil Thangavelu <kap...@gmail.com> wrote:
>
>> Out of curiosity, is there any public explanation of the reason for the
>> change? Environments map fairly naturally to various service topology
>> stages, i.e. my prod, qa, and dev environments, while model is a rather
>> opaque term that doesn't convey much.
>>
>> On Thu, Jan 14, 2016 at 7:16 PM, Menno Smits <menno.sm...@canonical.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> We've committed to renaming "environment" to "model" in Juju's CLI and
>>> API but what do we want to do in Juju's internals? I'm currently adding
>>> significant new model/environment related functionality to the state
>>> package which includes adding new database collections, structs and
>>> functions which could include either "env/environment" or "model" in their
>>> names.
>>>
>>> One approach could be that we only use the word "model" at the edges -
>>> the CLI, API and GUI - and continue to use "environment" internally. That
>>> way the naming of environment related things in most of Juju's code and
>>> database stays consistent.
>>>
>>> Another approach is to use "model" for new work[1] with a hope that
>>> it'll eventually become the dominant name for the concept. This will
>>> however result in a long period of widespread inconsistency, and it's
>>> unlikely that we'll ever completely get rid of all uses of
>>> "environment".
>>>
>>> I think we need to arrive at some sort of consensus on the way to tackle
>>> this. FWIW, I prefer the former approach. Having good, consistent names for
>>> things is important[2].
>>>
>>> Thoughts?
>>>
>>> - Menno
>>>
>>> [1] - but what defines "new" and what do we do when making significant
>>> changes to existing code?
>>> [2] - http://martinfowler.com/bliki/TwoHardThings.html
>>>
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Planning for Juju 2.2 (16.10 timeframe)

2016-03-19 Thread Kapil Thangavelu
On Tue, Mar 8, 2016 at 6:51 PM, Mark Shuttleworth  wrote:

> Hi folks
>
> We're starting to think about the next development cycle, and gathering
> priorities and requests from users of Juju. I'm writing to outline some
> current topics and also to invite requests or thoughts on relative
> priorities - feel free to reply on-list or to me privately.
>
> An early cut of topics of interest is below.
>
>
>
> *Operational concerns*
> * LDAP integration for Juju controllers now we have multi-user controllers
> * Support for read-only config
> * Support for things like passwords being disclosed to a subset of
> user/operators
> * LXD container migration
> * Shared uncommitted state - enable people to collaborate around changes
> they want to make in a model
>
> There has also been quite a lot of interest in log control - debug
> settings for logging, verbosity control, and log redirection as a systemic
> property. This might be a good area for someone new to the project to lead
> design and implementation. Another similar area is the idea of modelling
> machine properties - things like apt / yum repositories, cache settings
> etc, and having the machine agent setup the machine / vm / container
> according to those properties.
>
>
LDAP++. As brought up in the user list: better support for AWS
best-practice credential management, i.e. bootstrapping with transient
credentials (STS role assumption, which needs AWS_SECURITY_TOKEN support),
and instance roles for state servers.
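
Concretely, the transient-credential flow being asked for looks roughly
like this boto3 sketch (the role ARN is a placeholder); AWS_SECURITY_TOKEN
is the variable name older SDKs use for the session token:

    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/juju-bootstrap",
        RoleSessionName="juju-bootstrap")["Credentials"]

    # The three values a tool must honour for transient credentials.
    env = {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SECURITY_TOKEN": creds["SessionToken"],  # aka AWS_SESSION_TOKEN
    }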



>
>
> *Core Model*
> * modelling individual services (i.e. each database
> exported by the db application)
>  * rich status (properties of those services and the application itself)
>  * config schemas and validation
>  * relation config
>
> There is also interest in being able to invoke actions across a relation
> when the relation interface declares them. This would allow, for example, a
> benchmark operator charm to trigger benchmarks through a relation rather
> than having the operator do it manually.
>
>
In priority order: relation config, config schemas/validation, rich status.
Relation config is a huge boon to services that are multi-tenant to other
services, as the workaround today is to create either copies per tenant or
intermediaries.


> *Storage*
>
>  * shared filesystems (NFS, GlusterFS, CephFS, LXD bind-mounts)
>  * object storage abstraction (probably just mapping to S3-compatible APIS)
>
> I'm interested in feedback on the operations aspects of storage. For
> example, whether it would be helpful to provide lifecycle management for
> storage being re-assigned (e.g. launch a new database application but reuse
> block devices previously bound to an old database  instance). Also, I think
> the intersection of storage modelling and MAAS hasn't really been explored,
> and since we see a lot of interest in the use of charms to deploy
> software-defined storage solutions, this probably will need thinking and
> work.
>
>
It may be out of band, but with storage comes backups/snapshots. Also of
interest is encryption on block and object storage, using cloud-native
mechanisms where available.


>
>
> *Clouds and providers *
>  * System Z and LinuxONE
>  * Oracle Cloud
>
> There is also a general desire to revisit and refactor the provider
> interface. Now we have seen many cloud providers get done, we are in a
> better position to design the best provider interface. This would be a
> welcome area of contribution for someone new to the project who wants to
> make it easier for folks creating new cloud providers. We also see constant
> requests for a Linode provider that would be a good target for a refactored
> interface.
>
>
>
>
> *Usability*
> * expanding the set of known clouds and regions
>  * improving the handling of credentials across clouds
>


Autoscaling: either tighter integration with cloud-native features or a
Juju-provided abstraction.
-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Support AWS EIP

2016-11-07 Thread Kapil Thangavelu
Hi James

What's the use case you're using them for? Elastic IPs in AWS are a very
limited commodity: you get around 5 per region per account by default. IMO
it's generally not recommended practice to depend on them, as effectively
they represent public endpoints mapping to a single instance in AWS. Using
ELB or Route 53 is typically better for scale-out. Note EIPs are distinct
from ENIs (re multiple private addresses) and NATs (re shadow IPs).

PS. My wishlist for Juju and AWS would be support for non-static
credentials, per best practices.
On Sun, Nov 6, 2016 at 5:18 AM Mark Shuttleworth  wrote:

On 05/11/16 17:42, James Beedy wrote:
> How does everyone feel about extending the AWS provider to support
> elastic IPs?
>
> Capability to attach EIPs using Juju would alleviate one more manual step
> I have to perform from the AWS console every time I spin up an instance.
>
> I have created a feature request here ->
> https://bugs.launchpad.net/juju/+bug/1639459
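
(For reference, the manual step amounts to roughly the following boto3
call; the instance and allocation IDs are placeholders.)

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Attach an already-allocated Elastic IP to a freshly spun-up instance.
    ec2.associate_address(
        InstanceId="i-0123456789abcdef0",
        AllocationId="eipalloc-0a1b2c3d4e5f67890")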

Yes, this would be excellent. Conceptually at least, in Juju, we have a
place for "the internet" as a dedicated Network (networks are
collections of spaces) and for "shadow-ip addresses" (which are
addresses on one network that tunnel to addresses on another network).
These concepts give us elastic IPs very naturally, but they also are
important for cross-model relations in the private cloud, and I think we
should map out and implement this carefully as one coherent hybrid cloud
operations story.

Mark

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev