Re: [Review Queue] hpcc charm

2014-08-27 Thread Mark Shuttleworth
On 27/08/14 00:10, Matt Bruzek wrote:
  First and most importantly the hpcc charm deploys according to the readme
 file! I had to increase the memory constraints on the HP-cloud to 4GB per
 machine (juju set-constraints mem=4GB) so all the services had enough
 memory to start up. After that I was able to cluster by adding units of
 hpcc.

We have a couple of charms which break on tiny instances on some clouds
because of this sort of disconnect. Would it be helpful to be able to
encode minimum requirements in the charm metadata?

Obviously, real requirements are configuration and load dependent, but I
think we could avoid the obvious 'try it, then debug it' cycle if we had
some explicit minimum requirements up front.
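Purely as a sketch of the kind of stanza I have in mind (nothing like this
exists in metadata.yaml today, and the field names are made up):

    name: hpcc
    summary: HPCC Systems platform
    ...
    min-requirements:
      mem: 4G
      cpu-cores: 2
      root-disk: 8G

Juju could translate that into implicit constraints at deploy time, or at
least surface it before provisioning.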

Thanks for the review commentary and advice to charmers!

Mark

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: [Review Queue] hpcc charm

2014-08-27 Thread José Antonio Rey
It is a nice idea, but it should definitely fire up a warning saying
that the machine will have larger specs, as well as asking for
confirmation. I don't want to see any surprise charges in my AWS bills!

On 08/27/2014 02:34 AM, Mark Shuttleworth wrote:
 On 27/08/14 00:10, Matt Bruzek wrote:
  First and most importantly the hpcc charm deploys according to the readme
 file! I had to increase the memory constraints on the HP-cloud to 4GB per
 machine (juju set-constraints mem=4GB) so all the services had enough
 memory to start up. After that I was able to cluster by adding units of
 hpcc.
 
 We have a couple of charms which break on tiny instances on some clouds
 because of this sort of disconnect. Would it be helpful to be able to
 encode minimum requirements in the charm metadata?
 
 Obviously, real requirements are configuration and load dependent, but I
 think we could avoid the obvious try it then debug it cycle if we had
 some explicit minimum requirements up front.
 
 Thanks for the review commentary and advice to charmers!
 
 Mark
 

-- 
José Antonio Rey



Re: [Review Queue] hpcc charm

2014-08-27 Thread Matt Bruzek
I am tentatively +1 on adding charm metadata for minimum requirements. We
could certainly put that kind of information in the README, but having it in
the metadata would make it automatic (magic).

José makes a great point about how something like this could increase the
cloud bill unexpectedly. Perhaps some kind of informational message
requiring a response from the user would be good here. Users could accept
or override the constraints.
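Something along these lines is what I am picturing (the output is entirely
hypothetical; nothing like it exists today):

    $ juju deploy hpcc
    WARNING: charm "hpcc" declares a minimum of mem=4G cpu-cores=2, which is
    larger than the environment's default constraints.
    Deploy with the charm's minimums? [y/N]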

The number of charms that would take advantage of this metadata would be a
small subset of what we have. The big data charms and other
resource-intensive charms could set this optional metadata to give users a
good experience.

Are there any other concerns that people have about this metadata idea?

   - Matt Bruzek matthew.bru...@canonical.com


On Wed, Aug 27, 2014 at 7:21 AM, José Antonio Rey j...@ubuntu.com wrote:

 It is a nice idea, but it should definitely fire up a warning saying
 that the machine will have larger specs, as well as asking for
 confirmation. I don't want to see any surprise charges in my AWS bills!

 On 08/27/2014 02:34 AM, Mark Shuttleworth wrote:
  On 27/08/14 00:10, Matt Bruzek wrote:
   First and most importantly the hpcc charm deploys according to the readme
  file! I had to increase the memory constraints on the HP-cloud to 4GB per
  machine (juju set-constraints mem=4GB) so all the services had enough
  memory to start up. After that I was able to cluster by adding units of
  hpcc.
 
  We have a couple of charms which break on tiny instances on some clouds
  because of this sort of disconnect. Would it be helpful to be able to
  encode minimum requirements in the charm metadata?
 
  Obviously, real requirements are configuration and load dependent, but I
  think we could avoid the obvious try it then debug it cycle if we had
  some explicit minimum requirements up front.
 
  Thanks for the review commentary and advice to charmers!
 
  Mark
 

 --
 José Antonio Rey



Juju Hortonworks Big Data Solution

2014-08-27 Thread Charles Butler
Amir Sanjar and I have been hard at work grinding out Hadoop bundles for
mass consumption. For those of you who have never deployed Hadoop before,
it can be a long-winded process that spans many days when done manually.

We've distilled the process down to dragging and dropping in the GUI, and
12 minutes later you have a Hortonworks Big Data stack ready for you to
plug in your MapReduce applications, complete with distributed file
storage, data warehousing, and a powerful, scalable MapReduce cluster.

https://www.youtube.com/watch?v=f9yTWK7Z9Wg&feature=youtu.be

In this 10-minute video, I give a brief introduction to Juju, deploy the
Hortonworks Hadoop/Hive/HDFS bundle, and inspect each of the moving
components, briefly showing how it's all put together.
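If you would rather drive it from the command line than the GUI, juju-deployer
should work as well; roughly (the file and deployment names below are just
placeholders for wherever you have the bundle exported):

    $ juju-deployer -c hortonworks-hadoop.yaml hdp-hadoop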

Thanks!

- Charles


Re: Juju Hortonworks Big Data Solution

2014-08-27 Thread Akash Chandrashekar
This is freaking sweet!

Every day, the more we talk to customers on the Partner and
Enterprise side, the #1 conversation that captures their interest is:
reference architectures.

These architectures spur ideas, prompt improvements in architectural
design, help drive conversations around optimization of workloads, and
help promote our ability to sell SOLUTIONS rather than tools and
widgets. I applaud the work done here, and really look forward to seeing
more of them.

Regards, Akash
On Aug 27, 2014 6:10 PM, Antonio Rosales antonio.rosa...@canonical.com
wrote:

 Chuck,

 Thanks also for posting this to your blog at
 http://blog.dasroot.net/juju-3s-big-data/

 Thanks Chuck and Amir for distilling your Big Data knowledge into
 these bundles to enable folks to get past deploying, configuring, and
 connecting and focus on crunching data.

 -Antonio

 On Wed, Aug 27, 2014 at 2:25 PM, Charles Butler
 charles.but...@canonical.com wrote:
  Amir Sanjar and I have been hard at work on grinding out Hadoop bundles
  for mass consumption. To those of you that have never deployed hadoop
  before, it can be a long winded process that spans many days when done
  manually.
 
  We've distilled the process down to dragging and dropping on the GUI, and
  12 minutes later you have a Hortonworks Big Data stack ready for you to
  plug in your map reduce applications, complete with distributed file
  storage, data warehousing, and a powerful and scale-able Map Reduce
  cluster.
 
  https://www.youtube.com/watch?v=f9yTWK7Z9Wg&feature=youtu.be
 
  In this 10 minute video, I give a brief introduction to Juju, deploy the
  Hortonworks Hadoop/Hive/HDFS bundle, and inspect each of the moving
  components briefly showing it's put together.
 
  Thanks!
 
  - Charles
 
 



 --
 Antonio Rosales
 Juju Ecosystem
 Canonical




Re: Juju Hortonworks Big Data Solution

2014-08-27 Thread José Antonio Rey
Grabbing for next UWN issue!

--
José Antonio Rey
On Aug 27, 2014 8:10 PM, Antonio Rosales antonio.rosa...@canonical.com
wrote:

 Chuck,

 Thanks also for posting this to your blog at
 http://blog.dasroot.net/juju-3s-big-data/

 Thanks Chuck and Amir for distilling your Big Data knowledge into
 these bundles to enable folks to get past deploying, configuring, and
 connecting and focus on crunching data.

 -Antonio

 On Wed, Aug 27, 2014 at 2:25 PM, Charles Butler
 charles.but...@canonical.com wrote:
  Amir Sanjar and I have been hard at work on grinding out Hadoop bundles
  for mass consumption. To those of you that have never deployed hadoop
  before, it can be a long winded process that spans many days when done
  manually.
 
  We've distilled the process down to dragging and dropping on the GUI, and
  12 minutes later you have a Hortonworks Big Data stack ready for you to
  plug in your map reduce applications, complete with distributed file
  storage, data warehousing, and a powerful and scale-able Map Reduce
  cluster.
 
  https://www.youtube.com/watch?v=f9yTWK7Z9Wg&feature=youtu.be
 
  In this 10 minute video, I give a brief introduction to Juju, deploy the
  Hortonworks Hadoop/Hive/HDFS bundle, and inspect each of the moving
  components briefly showing it's put together.
 
  Thanks!
 
  - Charles
 
 



 --
 Antonio Rosales
 Juju Ecosystem
 Canonical




Re: getting rid of all-machines.log

2014-08-27 Thread Gabriel Samfira
Hi David,

(some comments in-line)

As a user who wants to deploy a charm on a Windows machine, I want to 
be able to have a local log file on that machine for the machine agent 
and for the units deployed to it. I also want to be able to aggregate 
all those logs the same way Ubuntu workloads do (at the moment, to the 
syslog on the state machine(s)).

Right now, the debug-log that juju generates works the same way on both 
platforms. However, Windows services cannot redirect stdout to a file 
the way we do it using upstart. So when starting juju as a service, 
that log does not get written.

So I guess the issues right now are:

* how can we have a local logfile on both Ubuntu and Windows? 
(lumberjack is one option that works on both platforms; a rough sketch below)
* how can we aggregate logs from both platforms? (using a Go package to 
write directly to syslog is one option that works cross-platform)
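To make the lumberjack option concrete, here is a rough sketch against its
v2-style API (the exact import path and field names may differ between
versions, and the paths are only examples):

    package main

    import (
        "log"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    func main() {
        // A local, size-rotated log file; the same code runs on Ubuntu and
        // Windows, only the Filename needs to change per platform.
        log.SetOutput(&lumberjack.Logger{
            Filename:   "/var/log/juju/machine-0.log",
            MaxSize:    50, // megabytes before rotation
            MaxBackups: 5,
            MaxAge:     28, // days to keep old files
        })
        log.Println("machine agent started")
    }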

On 27.08.2014 04:32, David Cheney wrote:
 Hi Horatio,

 I don't see a way to resolve the very disparate set of opinions you've
 highlighted below. It's also not clear from your email who is
 responsible for making a decision.

 I suggest reframing the discussion as user stories. ie

 * As a Juju user with a Windows workstation I want to use juju
 debug-log (this should work today)
 * As a Juju user who has deployed a mixed environment (to the best of
 my knowledge there is no requirement for the state servers to run on
 Windows, this seems contrary to Canonical's goal of Free Software)
The goal is to eventually have a 1-to-1 feature set on both platforms. 
This includes a working state machine on Windows as well (to be honest, 
it probably works already; we just need to use WinRM instead of ssh to 
bootstrap it), containers using Hyper-V, and anything else that is 
supported on Ubuntu and feasible on Windows.

Thanks,
Gabriel

 containing a windows workload charm I want to view the logs from that
 charm.

 Dave

 On Wed, Aug 27, 2014 at 5:35 AM, Horacio Duran
 horacio.du...@canonical.com wrote:
 Hey, in an effort to move forward with juju's Windows integration I have
 summarized what seem to be the core points of this discussion to the best
 of my ability (please excuse me if I missed or misunderstood something).
 The two core points of discussion on this thread are:
 * should we remove all-machines.log: this has been voted against, at least
 for the moment, since it is used for debug-log.
 * how do we support logging on Windows: the strongest suggestions here are a
 syslog package by Gabriel and logging into MongoDB by Gustavo.

 We do require some decision on the front of Windows logging in order to have
 complete Windows support. Ideally we need senior citizens of the juju dev
 community to weigh in on this in order to get a clear path to follow.

 Here is a summary I made to help myself while following this discussion:

 Nate's original suggestion:
 * Remove all-machines.log: claiming it takes a lot of space and is not a
 multi-platform solution.

 Tim, John, Aaron, etc:
 * all-machines.log is required for debug-log
 * it gets big, and it would be nice to rotate it.

 Nate, Gabriel:
 * keep all-machines.log
 * use a Go-only solution (syslog package with ports from Gabriel for
 Windows)
 John:
 * agrees.

 Nate, Gabriel:
 * remove rsyslog from all OSes in favor of one solution that fits all OSes
 * replace with a Go-only solution.

 Dave:
 * Don't mind about the logs; just make it output and let external tools
 handle logging and rotation.
 * all-machines.log might be a bit bloated, and it could contain less data
 that is more useful.
 (Here is the reference to 12factor that will later be attributed to Nate.)
 Ian:
 * Agrees with Dave, yet thinks we should provide a rolling mechanism.

 Gabriel:
 * Windows does not support capturing stdout as a logging mechanism; it
 requires explicitly logging to the event log.
 * Thinks that using rsyslog to stream logs from agents to the state server
 is too much overhead on external tools.
 * Proposes replacing external rsyslog with an in-app solution for the case
 of streaming logs.
 * Alternative solution, which he does not recommend: create (and bundle
 with jujud.exe) a wrapper for Windows only.

 Gustavo:
 * Presents a possible alternative using a MongoDB capped collection, which
 would suit our use cases, but does not recommend it because the idea needs
 maturing on some details.

 Matt:
 * We should provide the option to log to stdout or syslog.

 Kapil:
 * Supports Gustavo's idea of logging in a structured form into Mongo, as it
 makes sense to dump structured data with structure instead of serializing
 it only to de-serialize it later.
 * We can also send messages to syslog and let ops people collect them
 themselves.

 Gabriel (summarizing):
 * I will be looking into the event log for local Windows logging. This will
 probably require writing a package.
 * the syslog change will solve, in the short term, the aggregation issue
 from Windows nodes (when something better comes along, I will personally send 

Re: getting rid of all-machines.log

2014-08-27 Thread Gabriel Samfira
On 27.08.2014 08:12, John Meinel wrote:
...
 I may be misremembering, but at the time that was the preferred approach. But
 then someone said Go's inbuilt syslog APIs were broke, so the compromise was 
 to
 use rsyslog forwarding.

 Does anyone else recall why it may have been said that Go's syslog APIs are 
 broken?

The reconnect logic is broken in all versions of the syslog API.
The general consensus is that the package is a mistake and should not be
used.


I believe there is also an issue where we couldn't format the logs the way we 
wanted to. (The prefix/timestamp are added by the package and cannot be 
configured).


I think that may have been in an older version of Go. For example:

http://paste.ubuntu.com/8158001/

will appear in syslog as:

Aug 27 13:01:36 rossak testing[3812]: hello
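
(In case the paste expires, it is roughly a minimal log/syslog snippet along
these lines, with "testing" as the tag:)

    package main

    import "log/syslog"

    func main() {
        // Connects to the local syslog daemon. "testing" becomes the tag in
        // the log line; the PID is appended by the package itself.
        w, err := syslog.New(syslog.LOG_INFO|syslog.LOG_USER, "testing")
        if err != nil {
            panic(err)
        }
        defer w.Close()
        w.Info("hello")
    }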

An example of log output streamed to all-machines.log using the syslog package:

unit-vanilla2-0[1424]: 2014-08-27 09:34:14 INFO juju.worker.uniter 
uniter.go:324 deploying charm local:win2012hvr2/vanilla2-0
unit-vanilla2-0[1424]: 2014-08-27 09:34:14 DEBUG juju.worker.uniter.charm 
manifest_deployer.go:126 preparing to deploy charm 
local:win2012hvr2/vanilla2-0
unit-vanilla2-0[1424]: 2014-08-27 09:34:14 DEBUG juju.worker.uniter.charm 
manifest_deployer.go:102 deploying charm local:win2012hvr2/vanilla2-0
unit-vanilla2-0[1424]: 2014-08-27 09:34:14 DEBUG juju.worker.uniter.filter 
filter.go:583 no new charm event

I have not looked at the reconnect part though.


Gabriel



John
=:-



-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: getting rid of all-machines.log

2014-08-27 Thread Gabriel Samfira
That one is an easy fix in any case. We are using a forked version of the 
syslog package. Removing the pid from the writeString() method should be 
trivial.


Gabriel

On 27.08.2014 13:45, John Meinel wrote:
So at the very least our default logging *doesn't* include the PID, though the 
rest seems sane to me.

machine-0: 2014-04-08 02:00:53 INFO juju.cmd supercommand.go:296 running 
juju-1.19.0-precise-amd64 [gc]
machine-0: 2014-04-08 02:00:53 INFO juju.cmd.jujud machine.go:129 machine agent 
machine-0 start (1.19.0-precise-amd64 [gc])

John
=:-


On Wed, Aug 27, 2014 at 2:13 PM, Gabriel Samfira 
gsamf...@cloudbasesolutions.com wrote:
On 27.08.2014 08:12, John Meinel wrote:
...
 I may be misremembering, but at the time that was the preferred approach. But
 then someone said Go's inbuilt syslog APIs were broke, so the compromise was 
 to
 use rsyslog forwarding.

 Does anyone else recall why it may have been said that Go's syslog APIs are 
 broken?

The reconnect logic is broken in all the version's of the syslog api.
The general consensus is that package is a mistake and should not be
used.


I believe there is also an issue where we couldn't format the logs the way we 
wanted to. (The prefix/timestamp are added by the package and cannot be 
configured).


I think that may have been in an older version of Go. For example:

http://paste.ubuntu.com/8158001/

will appear in syslog as:

Aug 27 13:01:36 rossak testing[3812]: hello

An example of log output streamed to all-machines.log using the syslog package:

unit-vanilla2-0[1424]: 2014-08-27 09:34:14 INFO juju.worker.uniter 
uniter.go:324 deploying charm local:win2012hvr2/vanilla2-0
unit-vanilla2-0[1424]: 2014-08-27 09:34:14 DEBUG juju.worker.uniter.charm 
manifest_deployer.go:126 preparing to deploy charm 
local:win2012hvr2/vanilla2-0
unit-vanilla2-0[1424]: 2014-08-27 09:34:14 DEBUG juju.worker.uniter.charm 
manifest_deployer.go:102 deploying charm local:win2012hvr2/vanilla2-0
unit-vanilla2-0[1424]: 2014-08-27 09:34:14 DEBUG juju.worker.uniter.filter 
filter.go:583 no new charm event

I have not looked at the reconnect part though.


Gabriel



John
=:-









Thoughts on Dense Container testing

2014-08-27 Thread John A Meinel
So I played around with manually assigning IP addresses to a machine, and
using BTRFS to make the LXC instances cheap in terms of disk space.

I had success bringing up LXC instances that I created directly, I haven't
gotten to the point where I could use Juju for the intermediate steps. See
the attached document for the steps I used to set up several addressable
containers on an instance.

However, I feel pretty good that Container Addressability would actually be
pretty straightforward to achieve with the new Networker. We need to make
APIs for requesting an Address for a new container available, but then we
can configure all of the routing stuff without too much difficulty.

Also of note: because we are using MASQUERADE to route the traffic, we
don't need to put the bridge (br0) directly onto eth0. So it depends on
whether MaaS will play nicely with routing rules: if you assign an IP
address to a container on a machine, will the routes end up sending the
traffic there? (I think they will, but we'd have to test to confirm it.)

Ideally, I'd rather do the same thing everywhere, rather than have
containers routed one way in MaaS and a different way on EC2.

It may be that in the field we need to not Masquerade, so I'm open to
feedback here.

I wrote this up a bit like how I would want to use dense containers for
scale testing, since you can then deploy actual workloads into each of
these LXCs if you wanted (and had the horsepower :).

I succeeded in putting 6 IPs on a single m3.medium and running 5 LXC
containers and was able to connect to them from another machine running
inside the VPC.

John
=:-
Steps for setting up high-density LXC machine

1) launch machine in a VPC (either bootstrap with defaultVPC or launch manually 
in VPC)
2) add additional IP addresses to the NIC
   in the Dashboard this is EC2 dashboard / Network & Security / Network 
Interfaces, select the interface, Manage Private IP Addresses, Assign new IP
   need to work out how to script this
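   One possibility for scripting it (not folded into these steps yet; the
interface id below is just a placeholder) is the AWS CLI:
   $ aws ec2 assign-private-ip-addresses \
         --network-interface-id eni-0123abcd \
         --secondary-private-ip-address-count 5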
3) on machine install lxc, btrfs and allow ip forwarding:
   $ sudo su -
# apt-get update
# apt-get install lxc btrfs-tools
# sysctl -w net.ipv4.ip_forward=1
4) Create a BTRFS block device, and mount it into /var/lib/lxc
# dd if=/dev/zero of=/var/lib/lxc-block bs=1M count=1024
# losetup /dev/loop0 /var/lib/lxc-block
# mkfs -t btrfs /dev/loop0
# mount /dev/loop0 /var/lib/lxc
5) Create the first LXC container, making sure it is set up for BTRFS backing
# lxc-create -B btrfs -n test-lxc-1 -t ubuntu-cloud
6) Before booting for the first time, configure eth0.cfg with one of the 
static IP addresses from earlier. Read /etc/resolv.conf to find the address of 
your DNS server.
# vim /var/lib/lxc/test-lxc-1/rootfs/etc/network/interfaces.d/eth0.cfg
auto eth0
iface eth0 inet static
address CONTAINER_IP
netmask 255.255.255.255
post-up ip route add HOST_IP dev eth0
post-up ip route add default via HOST_IP
dns-nameservers DNS_IP
7) Setup the host to masquerade traffic, and to have a route for all static 
addresses
# ip route add CONTAINER_IP dev lxcbr0
# iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE
8) Start the instance one time which will make sure the Ubuntu user is set up
# lxc-start -n test-lxc-1
  Login ubuntu/ubuntu
  $ sudo shutdown -h now
9) Update .ssh with an authorized_keys file so that we can use it. Perhaps step
(8) could be done better with a parameter to lxc-create? Alternatively, maybe
we can use mkdir -p, but that won't create the ubuntu user with the right
skeleton files.
cp -r ~ubuntu/.ssh/ /var/lib/lxc/test-lxc-1/rootfs/home/ubuntu
10) For each new LXC:
a) Clone the test LXC for a new LXC
# lxc-clone -s -B btrfs test-lxc-1 test-lxc-2
b) Update the CONTAINER_IP in eth0.cfg and set up a route for the new 
container
# vim /var/lib/lxc/test-lxc-2/rootfs/etc/network/interfaces.d/eth0.cfg
s/OLD_CONTAINER_IP/NEW_CONTAINER_IP/
# ip route add NEW_CONTAINER_IP dev lxcbr0
c) Start the container
# lxc-start -n test-lxc-2 -d

d) At this point, you should be able to SSH into the NEW_CONTAINER_IP
as the 'ubuntu' user, which should let you use this with manual 
registration.


If you wanted to use 'juju bootstrap' and 'juju deploy --to lxc:' with this
setup, I believe you could if you had 'default-vpc'. The key is that you would
have to setup the BTRFS loopback mount before deploying anything in LXC, and
you would have to allocate and configure the IP addresses manually. (I believe
Juju is already aware that if /var/lib/lxc is BTRFS it will create the
juju-SERIES-template container in such a way that it can be trivially cloned.)
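For example, something like this (the charm and machine number are only
placeholders):
$ juju bootstrap
$ juju add-machine          # becomes machine 1
$ juju deploy --to lxc:1 ubuntu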

Re: Thoughts on Dense Container testing

2014-08-27 Thread Kapil Thangavelu
On Wed, Aug 27, 2014 at 9:17 AM, John A Meinel john.mei...@canonical.com
wrote:

 So I played around with manually assigning IP addresses to a machine, and
 using BTRFS to make the LXC instances cheap in terms of disk space.

 I had success bringing up LXC instances that I created directly, I haven't
 gotten to the point where I could use Juju for the intermediate steps. See
 the attached document for the steps I used to set up several addressable
 containers on an instance.

 However, I feel pretty good that Container Addressability would actually
 be pretty straightforward to achieve with the new Networker. We need to
 make APIs for requesting an Address for a new container available, but then
 we can configure all of the routing stuff without too much difficulty.

 Also of note, is that because we are using MASQUERADE in order to route
 the traffic, it doesn't require putting the bridge (br0) directly onto
 eth0. So it depends if MaaS will play nicely with routing rules if you
 assign an IP address into a container on a machine, will the routes end up
 routing the traffic there (I think it will, but we'd have to test to
 confirm it).

 Ideally, I'd rather do the same thing everywhere, rather that have
 containers routed one way in MaaS and a different way on EC2.

 It may be that in the field we need to not Masquerade, so I'm open to
 feedback here.

 I wrote this up a bit like how I would want to use dense containers for
 scale testing, since you can then deploy actual workloads into each of
 these LXCs if you wanted (and had the horsepower :).

 I succeeded in putting 6 IPs on a single m3.medium and running 5 LXC
 containers and was able to connect to them from another machine running
 inside the VPC.



Thanks for exploring this, John. I'm excited about utilizing something like
this for regular scale testing on the cheap (10 instances for 1 hr on spot
markets with 200 containers per test ~ a 2k machine/unit env). FWIW, I use
Ansible to automate the provisioning and machine setup (aws/lxc/btrfs/ebs
volume for btrfs) in ec2 via
https://github.com/kapilt/juju-lxc/blob/master/ec2.yml .. There are some
other scripts in there (add.py) for provisioning the container with
userdata (i.e. automating key installation and machine setup) which can
obviate/automate several of these steps. Either EBS or instance ephemeral
disk (SSD) is preferable, I think, to a loopback dev for perf testing. Re
uniform networking handling, it still feels like we're exploring here; it's
unclear if we have the knowledge base to dictate a common mechanism yet.

cheers,

Kapil