Re: Juju chef cookbook

2014-12-18 Thread Egor Medvedev
Hello again.

 For clarity - when you say "generate manual configuration" - is this going to
 be a manual provider environment?

Yes, I mean a manual provider environment at this point.

 Where does the bootstrap server live? Is this going to be a separate
 service defined in your infrastructure, controlled by chef?

In the described use case I'm talking about a VPS in DO, where Chef is
bootstrapped and we run chef-solo with data bag support.
Juju will be installed alongside Chef and will take control of the
LXC containers. Right now I'm thinking of local containers only.

In short, I'm trying to control my server (configuration, software) with Chef,
and the LXC containers with Juju.

I should admit this case may look strange, but I like Juju because of its
dynamic service manipulation: I can manage containers and their data without
additional Chef instructions.
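
Roughly, the flow I have in mind looks like this (just a sketch - the
environment name is a placeholder and the exact provider keys are from memory,
so they may need checking against the docs):

    # ~/.juju/environments.yaml
    environments:
      do-manual:
        type: manual
        bootstrap-host: <server-ip>
        bootstrap-user: root

    juju bootstrap -e do-manual
    juju add-machine lxc:0 -e do-manual        # new local LXC container on the host
    juju deploy mediawiki --to lxc:0 -e do-manual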

Now I'm watching your post and video. Nice work.

Thank you. 

--
Best Regards,
Egor


On Wednesday 17 December 2014 at 20:07, Charles Butler wrote:

 A few questions:
 When it comes to deploying the services - what were your thoughts there? Are
 these going to be data-attributes to the LWRP where you do something like:
 
 juju_deploy "mediawiki mysql haproxy".join(",")
 
 - which satisfies the deployment - yet relations would need some more finesse,
 as relations change and evolve as charms mature, and can sometimes confuse
 juju when you aren't explicit.
 
 juju_relate("mediawiki", "mysql") would cause a failure, as it needs the
 scoped :db relation, since two relations share the same interface but yield
 different configuration options.
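 
 Under the hood that resource would just be shelling out to the CLI - roughly
 something like this (a sketch; mediawiki happens to expose two relations on
 the mysql interface, which is why the :db scope is needed):
 
     juju deploy mediawiki
     juju deploy mysql
     juju add-relation mediawiki:db mysql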
 
 
 But it sounds like you've done your research and you're on the right track.
 
 and as promised: https://www.youtube.com/watch?v=bCvl-TsxVXA 
 
 
 
 
 On Wed, Dec 17, 2014 at 10:13 AM, Egor Medvedev meth...@aylium.net wrote:
  By the way, here is my first algorithm:
  
  1. Install Juju on the server
  2. Provide LXC support
  3. Generate the manual provider configuration
  4. Create at least one container for Juju services
  5. Then create n+1 machines, as described in the data bag
  6. Add all of them to the environment and run juju deploy
  
  --
  Best Regards,
  Egor
  
  
  On Wednesday 17 December 2014 at 11:26, Egor Medvedev wrote:
  
   Hello, Charles.
   
   Thanks for your response.
   I will wait for your blog post.
   
   I'm going to play with chef and juju, and maybe it will give some 
   interesting results.
   
   Good luck to you! 
   
   --
   Best Regards,
   Egor
   
   
   On Wednesday 17 December 2014 at 00:41, Charles Butler wrote:
   
Egor,

With regards to Juju being an LWRP in the Chef ecosystem, no cookbooks
have been made thus far that expose Juju. We've done some work on the
opposite end of the spectrum, orchestrating Chef with Juju. However, your
use case certainly warrants additional exploration. As an already
established Chef shop you can build upon your hosts leveraging Juju
- but you'll be moving into more experimental territory. In addition
to just fetching Juju, you will need to do some tweaking and tuning to
get reachability into your LXC containers from outside the host. I'm
actively working on a blog post about this very thing.

I'll make sure I follow up on the list when it's completed.

All the best,

Charles

On Mon, Dec 15, 2014 at 2:54 PM, Egor Medvedev meth...@aylium.net wrote:
 Hello!
 
 I was looking for a Chef cookbook which can operate Juju using an
 HWRP or LWRP.
 Maybe someone has one? I can't find anything on GitHub or at chef.io.
 
 Anyway, I want to deploy my server applications with Chef, and some
 web applications with Juju in LXC. So I decided to write a cookbook
 that will install Juju and drive it through Chef resources, so I can
 tell it which charms to install and expose.
 
 What do you think about this use case? Is it acceptable?
 
 Thanks! 
 
 --
 Best Regards,
 Egor
 
 http://aylium.net 
 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at: 
 https://lists.ubuntu.com/mailman/listinfo/juju
 


-- 
All the best,

Charles Butler charles.but...@canonical.com - Juju Charmer
Come see the future of datacenter orchestration: http://jujucharms.com






   
   
  
 
 
 -- 
 All the best,
 
 Charles Butler charles.but...@canonical.com - Juju Charmer
 Come see the future of datacenter orchestration: http://jujucharms.com
 
 
 


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: feedback about juju after using it for a few months

2014-12-18 Thread Marco Ceppi
On Thu Dec 18 2014 at 1:00:46 AM John Meinel j...@arbash-meinel.com wrote:

 ...


 9. If you want to cancel a deployment that just started, you need to keep
 running remove-service forever. Juju will simply ignore you if it's still
 running some special bits of the charm or if you have previously asked it
 to cancel the deployment while it was setting up. No errors or other
 messages are printed. You need to actually open its log to see that it's
 still stuck in a long apt-get installation, and you have to wait until the
 right moment to run remove-service again. And if your connection is slow,
 that takes time; you'll have to babysit Juju here because it doesn't really
 control its services as I imagined. Somehow apt-get gets what it wants :-)


 You can now force-kill a machine. So you can run `juju destroy-service
 $service` then `juju terminate-machine --force #machine_number`. Just make
 sure that nothing else exists on that machine! I'll raise an issue for
 having a way to add a --force flag to destroying a service so you can just
 say "kill this with fire, now plz".
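 
 Concretely, assuming the service's units are alone on their machines, the
 sequence is roughly (the service name and machine numbers here are made up):
 
     juju destroy-service mediawiki
     juju status mediawiki              # note which machines host its units
     juju terminate-machine --force 3 4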


 I understand that, but I discovered it's way faster and less typing if I
 simply destroy-environment and bootstrap it again. If you need to force-kill
 something every time you need to kill it, then perhaps something is
 wrong?


 I agree, something is wrong with the UX here. We need to figure out (and
 would love your feedback on) what should happen here. The idea is, if a
 service experiences a hook failure, all events are halted, including the
 destroy event. So the service is marked as dying but it can't die until the
 error is resolved. There are cases where, during unit termination, you may
 wish to inspect an error. I think adding a `--force` flag to destroy-service
 would satisfy what you've outlined, where --force would ignore hook errors
 during the destruction of a service.

 Thanks,
 Marco Ceppi


 IIRC, the reason we support juju destroy-machine --force but not juju
 destroy-unit --force is that in the former case, because the machine is
 no more, Juju has ensured that cleanup of resources really has happened.
 (There are no more machines running that have software running you don't
 want.)
 The difficulty with juju destroy-unit --force is that it doesn't
 necessarily kill the machine, and thus an unclean teardown could easily
 leave the original services running (consider collocated deployments).
 juju destroy-service --force falls into a similar issue, only a bit more
 so, since some units may be on shared machines and some may be all by
 themselves.


Right, and I agree - this isn't the best fit for --force at a service or
unit level. What I would like to see instead covers the scenario "I just typed
deploy on this service and I have three units and I don't want it anymore"
or "this is a mistake", in which case destroy-service --force would execute
the destruction of the service and put Juju into a state where, when a hook
errors (or if it's already in an error state), it auto-resolves that and
continues with service destruction. Then the machine can just be reaped with
the upcoming machine reaper work and everything moves forward.
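
In the meantime, the closest manual approximation is to keep resolving
whatever hook error is blocking the dying service - a sketch, with the service
and unit names as placeholders:

    juju destroy-service mediawiki
    # if a unit gets stuck in an error state while dying:
    juju resolved mediawiki/0
    # ...repeat until the service disappears from juju status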

 That said, I feel like we're doing a little throwing the baby out with the
 bathwater. If you are in a situation where there is just one unit on each
 machine, then destroy-unit --force could be equivalent to destroy-machine
 --force, and that could chain up into destroy-service --force (if all units
 of the service are the only thing on their machines, then tear them all down,
 ignoring errors, and stop the machines).

 John
 =:-

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: vCloud Director Support

2014-12-18 Thread Vahric Muhtaryan
Also, Azure looks like it is supported. Is the Azure Pack for MSPs/ISPs also
supported?
http://www.microsoft.com/en-us/server-cloud/products/windows-azure-pack/

From:  Vahric Muhtaryan vah...@doruk.net.tr
Date:  Wednesday 17 December 2014 14:23
To:  juju@lists.ubuntu.com
Subject:  vCloud Director Support

Hello All 

MAAS and Juju look like very good products. I would like to ask: will you add
vCloud Director as an infrastructure? We are a VMware vCloud Air Network
partner and I'm thinking of integrating Juju with our platform. Are any dev or
product managers on this list?

Regards
Vahric Muhtaryan

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: vCloud Director Support

2014-12-18 Thread Kapil Thangavelu
On Wed, Dec 17, 2014 at 7:23 AM, Vahric Muhtaryan vah...@doruk.net.tr
wrote:

 Hello All

 MAAS and Juju look like very good products. I would like to ask: will you add
 vCloud Director as an infrastructure? We are a VMware vCloud Air Network
 partner and I'm thinking of integrating Juju with our platform. Are any dev
 or product managers on this list?



There are dev and product managers on this list. A VMware provider is a
substantial development and maintenance effort. It's definitely of interest,
but it's not currently in scope on the core roadmap. Contributions for it
would be welcome. VMware has been working on Go API bindings which will
make that work easier: https://github.com/vmware/govcloudair and
https://github.com/vmware/govmomi

cheers,

Kapil
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: vCloud Director Support

2014-12-18 Thread Vahric Muhtaryan
Thanks
Actually, I couldn't say I am a dev, but I write some code in Python - is that
enough to help you be part of the integration and improve the speed?
Regards
VM

From:  Alexis Bruemmer alexis.bruem...@canonical.com
Date:  Thursday 18 December 2014 19:55
To:  Kapil Thangavelu kapil.thangav...@canonical.com
Cc:  Vahric Muhtaryan vah...@doruk.net.tr, juju juju@lists.ubuntu.com
Subject:  Re: vCloud Director Support

Actually, we are currently working on a plan for VMware provider support in
Juju, but this is very recent and we have no confirmed dates or detailed
deliverables at this stage.

Vahric, I would be happy to chat with you further about your use case and
see how our current plans align.

--Alexis

On Thu, Dec 18, 2014 at 9:45 AM, Kapil Thangavelu
kapil.thangav...@canonical.com wrote:
 
 
 On Wed, Dec 17, 2014 at 7:23 AM, Vahric Muhtaryan vah...@doruk.net.tr wrote:
 Hello All 
 
 MAAS and Juju look like very good products. I would like to ask: will you add
 vCloud Director as an infrastructure? We are a VMware vCloud Air Network
 partner and I'm thinking of integrating Juju with our platform. Are any dev or
 product managers on this list?
 
 
 There are dev and product managers on this list. A VMware provider is a
 substantial development and maintenance effort. It's definitely of interest,
 but it's not currently in scope on the core roadmap. Contributions for it
 would be welcome. VMware has been working on Go API bindings which will make
 that work easier: https://github.com/vmware/govcloudair and
 https://github.com/vmware/govmomi
 
 cheers,
 
 Kapil
 
 --
 Juju mailing list
 Juju@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju
 


-- 
Alexis Bruemmer
Juju Core Manager, Canonical Ltd.
(503) 686-5018
alexis.bruem...@canonical.com


-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Detect destroying a service vs removing a unit?

2014-12-18 Thread William Reade
Yes: goal state could, I think, address this case. I'd really
appreciate more discussion on that in the "Feature request: show
running relations in juju status" thread -- I remain reasonably
certain that neither "goal" nor "active" is quite sufficient in
isolation, but would appreciate confirmation/pushback/whatever.

This case makes me think it'd need a slight extension regardless,
though, in that we don't currently tell units when they themselves are
meant to be shutting down. (Coincidentally, seeing relation-broken in
a peer relation *does* tell you that you're shutting down, but it's
not very helpful because you find out so late.)

(that is: I think we should tell units when they are dying; and also
when their service is dying. This demands both omnipresent state -- an
env var, or something -- and a hook or two to notify of the change in
either; but I'm not sure we can usefully distinguish between
scaling-down and shutting-down without knowing both, even if one of
them is communicated via goal-state)

Cheers
William



On Thu, Dec 18, 2014 at 7:32 AM, Stuart Bishop
stuart.bis...@canonical.com wrote:
 On 18 December 2014 at 12:15, John Meinel j...@arbash-meinel.com wrote:
 Stub- AFAIK there isn't something today, though William might know better.

 William, does your Active/Goal proposal address this? Having a Goal of 0
 units would be a pretty clear indication that the service is shutting down.

 Ooh... and that would be useful for plenty of other things too. For
 example, I can avoid rebalancing the replication ring until the goal
 number of units exists.

 --
 Stuart Bishop stuart.bis...@canonical.com

-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: Automatic multi-environment collection handling

2014-12-18 Thread Dimiter Naydenov

All this is great work! Thanks for the write-up as well.

I think I discovered an issue with it - take a look at this bug:
http://pad.lv/1403738. It seems machine.SetAgentVersion() should be
handled specially by the multi-env transaction runner, as only after
calling it does the upgrade actually start and the steps to add env-uuids
to state collections get executed.

On 18.12.2014 05:28, Menno Smits wrote:
 I've landed several big changes recently which automate handling
 of multi-environment concerns when accessing Juju's collections.
 These both simplify DB queries and updates and reduce the
 risk of unintended data leakage between environments. Although in
 most cases you won't even know that anything has changed, it's
 worth understanding what's been done.
 
 *Collections*
 
 MongoDB queries against collections which contain data for
 multiple environments are now automatically modified to ensure they
 return records only for the environment tied to the State being queried
 against. Queries against collections which do not contain data for
 multiple environments pass through untouched.
 
 Some examples ...
 
 machines.FindId("2") becomes machines.FindId("uuid:2").One(&doc)
 
 machines.Find(bson.D{{"series", "trusty"}}) becomes
 machines.Find(bson.D{{"series", "trusty"}, {"env-uuid", uuid}})
 
 machines.Find(bson.D{{"_id", "4"}}) becomes
 machines.Find(bson.D{{"_id", "uuid:4"}, {"env-uuid", uuid}})
 
 
 Where uuid is the environment UUID of the State instance the 
 collection was obtained from (using getCollection()).
 
 The Remove, RemoveId and RemoveAll methods on collections also
 have similar handling and the collection Count method returns only
 the number of records in the collection for a single environment.
 
 The main benefit of this is that you don't need to remember to wrap
 ids in State.docID() calls or remember to add the env-uuid field
 to queries. In fact, I recommend you leave them out to reduce noise
 from code that does DB queries.
 
 There are some limited cases where you might really need to query
 across multiple environments or don't want the automatic munging in
 place for some reason. For these scenarios you can get hold of a
 *mgo.Collection by calling State.getRawCollection(). This is
 currently only being used by a few database migration steps.
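 
 For instance, a deliberately cross-environment query might look roughly like
 this (a sketch - it assumes getRawCollection hands back a bare *mgo.Collection
 as described above; machinesC and machineDoc are the same names used in the
 transaction examples below):
 
     raw := st.getRawCollection(machinesC)
     var docs []machineDoc
     // no env-uuid condition is injected, so this matches machines from
     // every environment in the database
     err := raw.Find(bson.D{{"series", "trusty"}}).All(&docs)
     if err != nil {
         // handle the error
     }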
 
 Note that query selectors using MongoDB operators with the _id
 field will be left untouched. In these cases you need to know that
 there's a UUID prefix on the _id and handle it yourself. For
 example, to query all the machines with ids starting with "4" you
 might consider doing:
 
 machines.Find(bson.D{{"_id", bson.M{"$regex": "^4.*"}}})
 
 which is transformed to:
 
 machines.Find(bson.D{{"_id", bson.M{"$regex": "^4.*"}}, {"env-uuid", uuid}})
 
 Note how the _id selector is left alone but the env-uuid selector
 is still added. It's left up to the developer to account for the
 environment UUID in the _id regex (the regex above won't work as is).
 
 
 *Transactions*
 
 Changes have also been made for automatically modifying
 transaction operations to account for multi-environment
 collections.
 
 For example:
 
 st.runTransaction([]txn.Op{{
     C: machinesC, Id: "1", Remove: true,
 }, {
     C: machinesC, Id: "2", Insert: bson.D{{"series", "trusty"}},
 }, {
     C: machinesC, Id: "3", Insert: machineDoc{Series: "trusty"},
 }, {
     C: otherC, Id: "foo", Insert: bson.D{},
 }})
 
 automatically becomes:
 
 st.runTransaction([]txn.Op{{
     C: machinesC, Id: "uuid:1", Remove: true,
 }, {
     C: machinesC, Id: "uuid:2", Insert: bson.D{
         {"_id", "uuid:2"}, {"env-uuid", uuid}, {"series", "trusty"},
     },
 }, {
     C: machinesC, Id: "uuid:3", Insert: machineDoc{
         DocID: "uuid:3", EnvUUID: uuid, Series: "trusty",
     },
 }, {
     C: otherC, Id: "foo", Insert: bson.D{},
 }})
 
 Note how the environment UUID is prefixed onto ids for operations
 on multi-environment collections. Also see how the _id and
 env-uuid fields on documents defined using bson.D or structs (bson.M
 is supported too) are automatically populated. A panic will occur if
 you provide the environment UUID but it doesn't match what was
 expected, as this indicates a likely bug.
 
 Any document updates are made in place so that the caller sees them
 once the transaction completes. This makes it safe for the caller
 to reuse a document struct used with a transaction operation for further
 work - the struct will match what was written to the DB. Note that
 if a struct is passed by value and needs updating, a panic will
 occur. This won't normally be a problem as we tend to use pointers
 to document structs with transaction operations, and the panic is a
 helpful indication that the document provided isn't
 multi-environment safe.
 
 Note that only the Id and Insert fields of txn.Op are touched. The 
 Update and Assert fields are left alone.
 
 In some cases you may need to run a transaction without invoking
 automatic multi-environment munging. State now has rawTxnRunner()
 and runRawTransaction() methods for the rare situations where this
 is

Re: git clients vulnerability found

2014-12-18 Thread David Cheney
Fortunately all of us run an operating system with a case-sensitive
file system, right?

On Fri, Dec 19, 2014 at 8:32 AM, Horacio Duran
horacio.du...@canonical.com wrote:
 Heads up, people:
 https://github.com/blog/1938-vulnerability-announced-update-your-git-clients
 Apparently there is a vulnerability in the git client that might be exploited
 by pushing certain snippets to repos.

 --
 Juju-dev mailing list
 Juju-dev@lists.ubuntu.com
 Modify settings or unsubscribe at:
 https://lists.ubuntu.com/mailman/listinfo/juju-dev


-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: git clients vulnerability found

2014-12-18 Thread Horacio Duran
I don't know - some might run OS X (also, we have a client for Windows and OS X,
so I wouldn't rule out that someone might be using those tools now and then).

On Thu, Dec 18, 2014 at 6:39 PM, David Cheney david.che...@canonical.com
wrote:

 Fortunately all of us run an operating system with a case-sensitive
 file system, right?

 On Fri, Dec 19, 2014 at 8:32 AM, Horacio Duran
 horacio.du...@canonical.com wrote:
  Heads up, people:
 https://github.com/blog/1938-vulnerability-announced-update-your-git-clients
  Apparently there is a vulnerability in the git client that might be exploited
  by pushing certain snippets to repos.
 
  --
  Juju-dev mailing list
  Juju-dev@lists.ubuntu.com
  Modify settings or unsubscribe at:
  https://lists.ubuntu.com/mailman/listinfo/juju-dev
 

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Automatic multi-environment collection handling

2014-12-18 Thread Menno Smits
Just following up: this was fixed earlier today and the various CI
upgrade jobs are now passing. I've marked the ticket as Fix Released so
that this issue no longer blocks merges.

On 19 December 2014 at 09:40, Menno Smits menno.sm...@canonical.com wrote:

 On 19 December 2014 at 06:02, Dimiter Naydenov 
 dimiter.nayde...@canonical.com wrote:


 All this is great work! Thanks for the write-up as well.

 I think I discovered an issue with it - take a look at this bug:
 http://pad.lv/1403738. It seems machine.SetAgentVersion() should be
 handled specially by the multi-env transaction runner, as only after
 calling it does the upgrade actually start and the steps to add env-uuids
 to state collections get executed.


 Sorry - I neglected to do a manual upgrade test before pushing this
 change. Ensuring that code that runs before database migrations have
 occurred still works as it should has been a pain point for us while doing
 the multi-environment work.

 I will get this sorted. Thanks for saving me some time by doing the
 initial analysis.

 - Menno




-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Automatic multi-environment collection handling

2014-12-18 Thread Dimiter Naydenov

Thank you for the quick response!

On 19.12.2014 06:43, Menno Smits wrote:
 Just following up: this was fixed earlier today and the various
 CI upgrade jobs are now passing. I've marked the ticket as Fix
 Released so that this issue no longer blocks merges.
 
 On 19 December 2014 at 09:40, Menno Smits
 menno.sm...@canonical.com wrote:
 
 On 19 December 2014 at 06:02, Dimiter Naydenov
 dimiter.nayde...@canonical.com wrote:
 
 
 All this is great work! Thanks for the write-up as well.
 
 I think I discovered an issue with it - take a look at this bug:
 http://pad.lv/1403738. It seems machine.SetAgentVersion() should
 be handled specially by the multi-env transaction runner, as only
 after calling it does the upgrade actually start and the steps to add
 env-uuids to state collections get executed.
 
 
 Sorry - I neglected to do a manual upgrade test before pushing
 this change. Ensuring that code that runs before database
 migrations have occurred still works as it should has been a pain
 point for us while doing the multi-environment work.
 
 I will get this sorted. Thanks for saving me some time by doing
 the initial analysis.
 
 - Menno
 
 
 
 


--
Dimiter Naydenov dimiter.nayde...@canonical.com
juju-core team

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev