Re: go test github.com/juju/juju/state takes > 10 minutes to run

2016-05-17 Thread Curtis Hovey-Canonical
On Tue, May 17, 2016 at 2:14 AM, David Cheney
 wrote:
> My environment has not changed since 14.04, so I'm probably not
> running mongo 3.2.
>
> On Tue, May 17, 2016 at 4:13 PM, Cheryl Jennings
>  wrote:
>> Are you using mongo 3.2?  (see bug
>> https://bugs.launchpad.net/juju-core/+bug/1573294)
>>
>> On Mon, May 16, 2016 at 9:52 PM, David Cheney 
>> wrote:
>>>
>>> Testing this package takes 16 minutes on my machine*; it sure didn't
>>> use to take this long.
>>>
>>> What happened ?
>>>
>>> * yes, you have to raise the _10 minute_ timeout to make this test run.

I reported https://bugs.launchpad.net/juju-core/+bug/1582731 . This
issue is new in the last week.
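For anyone bitten by the footnote above in the meantime: go test enforces a 10-minute per-test-binary timeout by default, so the slow package has to be run with an explicit -timeout. A minimal sketch (the 20m figure is arbitrary headroom; the command is echoed rather than executed so the snippet runs anywhere):

```shell
# go test kills any test binary that runs longer than 10 minutes by
# default (see `go help testflag`); raise it explicitly for this package.
# Drop the echo to actually run the tests.
cmd="go test -timeout 20m github.com/juju/juju/state"
echo "$cmd"
```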

-- 
Curtis Hovey
Canonical Cloud Development and Operations
http://launchpad.net/~sinzui

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


Re: Today I submitted 5 PR's to be merged, 3 failed because mongo shat itself

2016-05-17 Thread David Cheney
100x more webscale

On Wed, May 18, 2016 at 11:02 AM, Horacio Duran
 wrote:
> For now we are trying to work around mongo issues that make the tests 100x
> slower (yes, one hundred). Once this is fixed we should start using mongo 3.2
> exclusively, since 2.4 IIRC is EOL or near it. The issue lies in the new
> storage engine, which we could skip if mmapv1 (the old one) weren't also
> nearing EOL. I am currently on the phone, but if you want more details I can
> dig up the bug with details of what I am talking about.
>
>
> On Tuesday, 17 May 2016, David Cheney  wrote:
>>
>> What's the plan for mongo 3.2 ? Will we be required to support 2.x
>> versions for the foreseeable future, or is there a possibility to make
>> it a build or run time failure if mongo < 3.2 is installed on the host
>> ?
>>
>> On Wed, May 18, 2016 at 9:01 AM, Martin Packman
>>  wrote:
>> > On 17/05/2016, Curtis Hovey-Canonical  wrote:
>> >>
>> >> The juju-mongodb2.6 package will be preferred by juju 1.25 in xenial
>> >> and, without other changes, 2.4 will be used by all other 1.25 series.
>> >
>> > This isn't yet true, there's a bug open for it:
>> >
>> > "Use juju-mongodb2.6 for 1.25 on xenial"
>> > 
>> >
>> > I had made the packaging change, but without juju code changes as well
>> > it just went and installed the old (2.4) juju-mongodb anyway when
>> > setting up a state server.
>> >
>> > Martin
>> >
>>



Re: Today I submitted 5 PR's to be merged, 3 failed because mongo shat itself

2016-05-17 Thread Horacio Duran
For now we are trying to work around mongo issues that make the tests 100x
slower (yes, one hundred). Once this is fixed we should start using mongo 3.2
exclusively, since 2.4 IIRC is EOL or near it. The issue lies in the new
storage engine, which we could skip if mmapv1 (the old one) weren't also
nearing EOL. I am currently on the phone, but if you want more details I can
dig up the bug with details of what I am talking about.

On Tuesday, 17 May 2016, David Cheney  wrote:

> What's the plan for mongo 3.2 ? Will we be required to support 2.x
> versions for the foreseeable future, or is there a possibility to make
> it a build or run time failure if mongo < 3.2 is installed on the host
> ?
>
> On Wed, May 18, 2016 at 9:01 AM, Martin Packman
> > wrote:
> > On 17/05/2016, Curtis Hovey-Canonical  > wrote:
> >>
> >> The juju-mongodb2.6 package will be preferred by juju 1.25 in xenial
> >> and, without other changes, 2.4 will be used by all other 1.25 series.
> >
> > This isn't yet true, there's a bug open for it:
> >
> > "Use juju-mongodb2.6 for 1.25 on xenial"
> > 
> >
> > I had made the packaging change, but without juju code changes as well
> > it just went and installed the old (2.4) juju-mongodb anyway when
> > setting up a state server.
> >
> > Martin
> >
>
>


Re: Today I submitted 5 PR's to be merged, 3 failed because mongo shat itself

2016-05-17 Thread David Cheney
What's the plan for mongo 3.2 ? Will we be required to support 2.x
versions for the foreseeable future, or is there a possibility to make
it a build or run time failure if mongo < 3.2 is installed on the host
?
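A run-time guard need not be complicated. A sketch of the idea in shell, with the caveat that the `mongod --version` output format ("db version vX.Y.Z") is an assumption, and nothing like this exists in juju today:

```shell
#!/bin/sh
# Refuse to proceed when the host's mongod is older than 3.2.
# Version comparison relies on GNU sort -V.
MIN_MONGO="3.2.0"

version_lt() {
    # True when $1 sorts strictly before $2 in version order.
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if command -v mongod >/dev/null 2>&1; then
    # Assumed format: "db version v3.2.x ..." on the first line.
    ver=$(mongod --version | sed -n 's/^db version v\([0-9.]*\).*/\1/p')
    if version_lt "$ver" "$MIN_MONGO"; then
        echo "error: mongod $ver installed, need >= $MIN_MONGO" >&2
        exit 1
    fi
fi
```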

On Wed, May 18, 2016 at 9:01 AM, Martin Packman
 wrote:
> On 17/05/2016, Curtis Hovey-Canonical  wrote:
>>
>> The juju-mongodb2.6 package will be preferred by juju 1.25 in xenial
>> and, without other changes, 2.4 will be used by all other 1.25 series.
>
> This isn't yet true, there's a bug open for it:
>
> "Use juju-mongodb2.6 for 1.25 on xenial"
> 
>
> I had made the packaging change, but without juju code changes as well
> it just went and installed the old (2.4) juju-mongodb anyway when
> setting up a state server.
>
> Martin
>



Re: The mean time for a CI run has risen to 33 minutes

2016-05-17 Thread David Cheney
The change to Go 1.6 will have increased the _compile_ time for the
tests by 3x, but compared to the time of running them, this shouldn't
be the main contributor.

Go 1.7 will bring significantly improved compile and link times (the
linking times are much better than in any previous version of Go).

On Wed, May 18, 2016 at 9:06 AM, Martin Packman
 wrote:
> On 16/05/2016, David Cheney  wrote:
>> This got significantly worse in the last 6 weeks. What happened ?
>
> Either the juju tests are slower, or trusty on aws is slower. It's a
> fresh cloud instance each run, and still trusty because we switched to
> xenial and the lxd tests failed due to lack of isolation. We could change
> back to xenial now that Curtis has added some lxd setup support to the
> makefile, I think, but that is unlikely to help speed at all.
>
> Martin



Re: The mean time for a CI run has risen to 33 minutes

2016-05-17 Thread Martin Packman
On 16/05/2016, David Cheney  wrote:
> This got significantly worse in the last 6 weeks. What happened ?

Either the juju tests are slower, or trusty on aws is slower. It's a
fresh cloud instance each run, and still trusty because we switched to
xenial and the lxd tests failed due to lack of isolation. We could change
back to xenial now that Curtis has added some lxd setup support to the
makefile, I think, but that is unlikely to help speed at all.

Martin



[Review Queue] ibm-java

2016-05-17 Thread Kevin Monroe
Matt and I took a look at the ibm-java charm today.

We had trouble deploying this, as it requires a 3rd party to host the
installer (something which will be a thing of the past with juju
resources!). We sorted out our issues and were able to verify that the
charm deploys and works successfully.

Working through this, we had suggestions on smoothing the rough edges and
put together the following MP:

https://code.launchpad.net/~kwmonroe/charms/trusty/ibm-java/may-2016/+merge/294984

We look forward to these changes being accepted and making ibm-java another
alternative for java relations in the juju ecosystem.

Thanks!
-Kevin Monroe
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju


Re: apache projects with interest

2016-05-17 Thread Tom Barber
Thanks Mark,

I know Antonio is going to work that internally, and we can certainly help
with getting stuff on it and trying to cajole more communities into
participating.

I think having a dedicated apache page is a good idea though, because it's
easier for other apache communities to see who's already collaborating.
Because of the way the ASF works, there isn't a central ASF portal where we
could easily post a nice-looking list of who's working on Juju stuff, so
doing it on the Canonical side makes it an easier sell for us ASF folk who
are talking to different ASF projects, trying to get some collaboration out
of them.

Tom

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstart goal, but you can always help by sponsoring the project)

On 17 May 2016 at 15:10, Mark Shuttleworth  wrote:

> On 17/05/16 02:24, Tom Barber wrote:
> > Okay, so I've been asking around, as you all know, and we're considering
> > this apache-specific Juju Charms page, so I figured it would be useful to
> > round up which communities I have spoken to who have shown definite
> > interest in collaboration.
>
> Just to loop in Alejandra, who would be able to coordinate a dedicated
> jujucharms.com/apache page, or additions to the /big-data page.
>
> Mark
>
>


Re: apache projects with interest

2016-05-17 Thread Mark Shuttleworth
On 17/05/16 02:24, Tom Barber wrote:
> Okay, so I've been asking around, as you all know, and we're considering this
> apache-specific Juju Charms page, so I figured it would be useful to round up
> which communities I have spoken to who have shown definite interest in
> collaboration.

Just to loop in Alejandra, who would be able to coordinate a dedicated
jujucharms.com/apache page, or additions to the /big-data page.

Mark




Re: Drill performance

2016-05-17 Thread Tom Barber
Yeah, Druid is on my todo list as well. Samuel introduced me to his druid
contact about charming it up and then he went quiet. Would be good to get it
into the platform so Saiku can leverage it.

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstart goal, but you can always help by sponsoring the project)

On 17 May 2016 at 14:11, Konstantinos Tsakalozos <
kos.tsakalo...@canonical.com> wrote:

> Hi Merlijn,
>
> Knowing that you are into data streaming with storm, have you looked at
> Druid (http://druid.io/druid.html)? It might be a good fit for your use
> cases.
>
> Cheers,
> Konstantinos
>
> On Tue, May 17, 2016 at 2:45 PM, Merlijn Sebrechts <
> merlijn.sebrec...@gmail.com> wrote:
>
>> Thanks Tom! We'll contact them.
>>
>>
>>
>> Kind regards
>> Merlijn Sebrechts
>>
>> 2016-05-17 11:44 GMT+02:00 Tom Barber :
>>
>>> Hey Merlijn
>>>
>>> I've not scaled up to 200GB, but we did do a 20-30GB HDFS test with
>>> adequate performance and load being spread over drill bits. The guys on
>>> the drill mailing list are pretty good at resolving performance issues,
>>> though, so you should certainly chat to them; and with backing from the
>>> new Drill startup, MapR tech, Dell and a bunch of other firms, there is a
>>> decent amount of development resource on the platform for getting stuff
>>> fixed.
>>>
>>> That said, I'm sure there are other solutions that run faster, Impala
>>> etc, also I come from an OLAP background which is why I hooked up with the
>>> Kylin guys as that would give you an alternative entry point.
>>>
>>> Another reason for drill is the data federation and non hadoop support,
>>> for example I could spin up HDFS, Mongo, and MySQL and have drill hook up
>>> to all 3 of them at the same time and do:
>>>
>>> select * from HDFS.mytable a, MONGODB.mytable b, MySQL.mytable c where
>>> a.c1 = b.c1 and b.c2 = c.c1
>>>
>>> and have it return a nice federated query, which is pretty powerful.
>>>
>>> Of course with all this tech YMMV, but personally I've had decent
>>> results with it.
>>>
>>> Tom
>>>
>>> --
>>>
>>> Director Meteorite.bi - Saiku Analytics Founder
>>> Tel: +44(0)5603641316
>>>
>>> (Thanks to the Saiku community we reached our Kickstart goal, but you can always help by sponsoring the project)
>>>
>>> On 17 May 2016 at 10:37, Merlijn Sebrechts 
>>> wrote:
>>>
 Hi Tom


 Slightly off-topic but have you ever worked with drill? We did some
 tests with a 200GB and 100MB dataset in an hdfs cluster and the performance
 we're seeing is so bad drill is unusable for us..

 Some initial debugging revealed that drill isn't able to distribute the
 workload over the cluster. The entire query runs on one server... Have you
 been able to get better performance out of it?



 Kind regards
 Merlijn


 On Tuesday 17 May 2016, Tom Barber  wrote the following:
 > Okay so I've been asking around as you all know and we're considering
 this apache specific Juju Charms page so I figured it would be useful to
 roundup which communities I have spoken to who have shown definite interest
 in collaboration.
 > We have:
 > Apache Bigtop (we all know about)
 > Apache Zeppelin (we all know about)
 > Apache Karaf
 > Apache Nutch
 > Apache OODT
 > Apache Joshua (Incubating)
 > Apache Kylin
 > I'm sure there will be more, and probably some I've just forgotten
 about or other people spoke to, but I think thats a pretty good start.
 > As me and Kevin also discussed Drill is also a pretty important one
 from a personal perspective as it offers the best (IMHO) route to getting
 SQL over a bunch of your NOSQL charms with minimal effort, which then helps
 Saiku and any other BI tooling you guys get into the platform. Its great
 having all the big data stuff, but we need ways for end users to get this
 stuff back out!
 >
 > Tom
 > --
 > Director Meteorite.bi - Saiku Analytics Founder
 > Tel: +44(0)5603641316
 > (Thanks to the Saiku community we reached our Kickstart goal, but you
 can always help by sponsoring the project)

>>>
>>>
>>
>>
>>
>
>
> --
> Konstantinos Tsakalozos
>


Re: Drill performance

2016-05-17 Thread Konstantinos Tsakalozos
Hi Merlijn,

Knowing that you are into data streaming with storm, have you looked at
Druid (http://druid.io/druid.html)? It might be a good fit for your use
cases.

Cheers,
Konstantinos

On Tue, May 17, 2016 at 2:45 PM, Merlijn Sebrechts <
merlijn.sebrec...@gmail.com> wrote:

> Thanks Tom! We'll contact them.
>
>
>
> Kind regards
> Merlijn Sebrechts
>
> 2016-05-17 11:44 GMT+02:00 Tom Barber :
>
>> Hey Merlijn
>>
>> I've not scaled up to 200GB, but we did do a 20-30GB HDFS test with
>> adequate performance and load being spread over drill bits. The guys on the
>> drill mailing list are pretty good at resolving performance issues, though,
>> so you should certainly chat to them; and with backing from the new Drill
>> startup, MapR tech, Dell and a bunch of other firms, there is a decent
>> amount of development resource on the platform for getting stuff fixed.
>>
>> That said, I'm sure there are other solutions that run faster, Impala
>> etc, also I come from an OLAP background which is why I hooked up with the
>> Kylin guys as that would give you an alternative entry point.
>>
>> Another reason for drill is the data federation and non hadoop support,
>> for example I could spin up HDFS, Mongo, and MySQL and have drill hook up
>> to all 3 of them at the same time and do:
>>
>> select * from HDFS.mytable a, MONGODB.mytable b, MySQL.mytable c where
>> a.c1 = b.c1 and b.c2 = c.c1
>>
>> and have it return a nice federated query, which is pretty powerful.
>>
>> Of course with all this tech YMMV, but personally I've had decent results
>> with it.
>>
>> Tom
>>
>> --
>>
>> Director Meteorite.bi - Saiku Analytics Founder
>> Tel: +44(0)5603641316
>>
>> (Thanks to the Saiku community we reached our Kickstart goal, but you can always help by sponsoring the project)
>>
>> On 17 May 2016 at 10:37, Merlijn Sebrechts 
>> wrote:
>>
>>> Hi Tom
>>>
>>>
>>> Slightly off-topic but have you ever worked with drill? We did some
>>> tests with a 200GB and 100MB dataset in an hdfs cluster and the performance
>>> we're seeing is so bad drill is unusable for us..
>>>
>>> Some initial debugging revealed that drill isn't able to distribute the
>>> workload over the cluster. The entire query runs on one server... Have you
>>> been able to get better performance out of it?
>>>
>>>
>>>
>>> Kind regards
>>> Merlijn
>>>
>>>
>>> On Tuesday 17 May 2016, Tom Barber  wrote the following:
>>> > Okay so I've been asking around as you all know and we're considering
>>> this apache specific Juju Charms page so I figured it would be useful to
>>> roundup which communities I have spoken to who have shown definite interest
>>> in collaboration.
>>> > We have:
>>> > Apache Bigtop (we all know about)
>>> > Apache Zeppelin (we all know about)
>>> > Apache Karaf
>>> > Apache Nutch
>>> > Apache OODT
>>> > Apache Joshua (Incubating)
>>> > Apache Kylin
>>> > I'm sure there will be more, and probably some I've just forgotten
>>> about or other people spoke to, but I think thats a pretty good start.
>>> > As me and Kevin also discussed Drill is also a pretty important one
>>> from a personal perspective as it offers the best (IMHO) route to getting
>>> SQL over a bunch of your NOSQL charms with minimal effort, which then helps
>>> Saiku and any other BI tooling you guys get into the platform. Its great
>>> having all the big data stuff, but we need ways for end users to get this
>>> stuff back out!
>>> >
>>> > Tom
>>> > --
>>> > Director Meteorite.bi - Saiku Analytics Founder
>>> > Tel: +44(0)5603641316
>>> > (Thanks to the Saiku community we reached our Kickstart goal, but you
>>> can always help by sponsoring the project)
>>>
>>
>>
>
>
>


-- 
Konstantinos Tsakalozos


Re: apache projects with interest

2016-05-17 Thread Merlijn Sebrechts
Any word from the Flink, Samza and Storm communities?

We have some work that the Storm community might be interested in. We have
created a proof-of-concept that allows you to model Storm topologies in
Juju using the Juju gui. This has the potential to lower the barrier of
entry to Storm quite a bit.

2016-05-17 11:24 GMT+02:00 Tom Barber :

> Okay, so I've been asking around, as you all know, and we're considering this
> apache-specific Juju Charms page, so I figured it would be useful to round up
> which communities I have spoken to who have shown definite interest in
> collaboration.
>
> We have:
>
> Apache Bigtop (we all know about)
> Apache Zeppelin (we all know about)
> Apache Karaf
> Apache Nutch
> Apache OODT
> Apache Joshua (Incubating)
> Apache Kylin
>
> I'm sure there will be more, and probably some I've just forgotten about
> or other people spoke to, but I think thats a pretty good start.
>
> As Kevin and I also discussed, Drill is a pretty important one from a
> personal perspective, as it offers the best (IMHO) route to getting SQL over
> a bunch of your NoSQL charms with minimal effort, which then helps Saiku
> and any other BI tooling you guys get into the platform. It's great having
> all the big data stuff, but we need ways for end users to get this stuff
> back out!
>
>
> Tom
>
> --
>
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
>
> (Thanks to the Saiku community we reached our Kickstart
> 
> goal, but you can always help by sponsoring the project
> )
>
>
>


Re: Drill performance

2016-05-17 Thread Merlijn Sebrechts
Thanks Tom! We'll contact them.



Kind regards
Merlijn Sebrechts

2016-05-17 11:44 GMT+02:00 Tom Barber :

> Hey Merlijn
>
> I've not scaled up to 200GB, but we did do a 20-30GB HDFS test with
> adequate performance and load being spread over drill bits. The guys on the
> drill mailing list are pretty good at resolving performance issues, though,
> so you should certainly chat to them; and with backing from the new Drill
> startup, MapR tech, Dell and a bunch of other firms, there is a decent
> amount of development resource on the platform for getting stuff fixed.
>
> That said, I'm sure there are other solutions that run faster, Impala etc,
> also I come from an OLAP background which is why I hooked up with the Kylin
> guys as that would give you an alternative entry point.
>
> Another reason for drill is the data federation and non hadoop support,
> for example I could spin up HDFS, Mongo, and MySQL and have drill hook up
> to all 3 of them at the same time and do:
>
> select * from HDFS.mytable a, MONGODB.mytable b, MySQL.mytable c where
> a.c1 = b.c1 and b.c2 = c.c1
>
> and have it return a nice federated query, which is pretty powerful.
>
> Of course with all this tech YMMV, but personally I've had decent results
> with it.
>
> Tom
>
> --
>
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
>
> (Thanks to the Saiku community we reached our Kickstart goal, but you can always help by sponsoring the project)
>
> On 17 May 2016 at 10:37, Merlijn Sebrechts 
> wrote:
>
>> Hi Tom
>>
>>
>> Slightly off-topic but have you ever worked with drill? We did some tests
>> with a 200GB and 100MB dataset in an hdfs cluster and the performance we're
>> seeing is so bad drill is unusable for us..
>>
>> Some initial debugging revealed that drill isn't able to distribute the
>> workload over the cluster. The entire query runs on one server... Have you
>> been able to get better performance out of it?
>>
>>
>>
>> Kind regards
>> Merlijn
>>
>>
>> On Tuesday 17 May 2016, Tom Barber  wrote the following:
>> > Okay so I've been asking around as you all know and we're considering
>> this apache specific Juju Charms page so I figured it would be useful to
>> roundup which communities I have spoken to who have shown definite interest
>> in collaboration.
>> > We have:
>> > Apache Bigtop (we all know about)
>> > Apache Zeppelin (we all know about)
>> > Apache Karaf
>> > Apache Nutch
>> > Apache OODT
>> > Apache Joshua (Incubating)
>> > Apache Kylin
>> > I'm sure there will be more, and probably some I've just forgotten
>> about or other people spoke to, but I think thats a pretty good start.
>> > As me and Kevin also discussed Drill is also a pretty important one
>> from a personal perspective as it offers the best (IMHO) route to getting
>> SQL over a bunch of your NOSQL charms with minimal effort, which then helps
>> Saiku and any other BI tooling you guys get into the platform. Its great
>> having all the big data stuff, but we need ways for end users to get this
>> stuff back out!
>> >
>> > Tom
>> > --
>> > Director Meteorite.bi - Saiku Analytics Founder
>> > Tel: +44(0)5603641316
>> > (Thanks to the Saiku community we reached our Kickstart goal, but you
>> can always help by sponsoring the project)
>>
>
>


Re: Promulgated charms (production readiness)

2016-05-17 Thread Simon Davy
On Mon, May 16, 2016 at 2:49 PM, Tim Van Steenburgh
 wrote:
> Right, but NRPE can be related to any charm too. My point was just that the
> charm doesn't need to explicitly support monitoring.

It totally does, IMO.

Process count, disk and memory usage are all important, and should be
available out of the box.

But alerts (driven by monitoring) are all about specific context.

When I'm alerted, I want as specific info as possible as to what is
wrong and hints as to why.  Generic machine monitoring provides little
context, and if that's all you had, would increase your MTTR as you go
fish.

I want detailed, application specific, early alerts that can only be
written by those with application knowledge. These belong in the
charm, and need to be written/maintained by the charm experts.

I've been banging on about this idea for a while, but in my head, it
makes sense to promote the idea of app-specific health checks (a la
snappy) into juju proper, rather than a userspace solution with
layers. Then, you *don't* need specific relation support in your charm
- you just need to write a generic set of health checks/scripts.

Then, these checks are available to run as an action (we do this
pre/post each deploy), or show via juju status, or via the GUI[1]. A
monitoring service can just relate to the charm with the default
relation[2], and get a rich app specific set of checks that it can
convert to its own format and process. No need for relations for each
specific monitoring tool you wish to support. Makes monitoring a 1st
class juju citizen.
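To make the health-check idea concrete, here is a minimal sketch of what such a generic check script could look like. The check names, the 90% disk threshold, and the `myappd` process are all invented for illustration; nothing here is an existing juju interface:

```shell
#!/bin/sh
# Each check is a small function returning zero on success; run_checks
# reports pass/fail per check in a form an action, `juju status`, or a
# monitoring charm could consume.

check_disk() {
    # Fail when root filesystem usage exceeds 90% (illustrative threshold).
    used=$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')
    [ "$used" -lt 90 ]
}

check_proc() {
    # Fail when the (hypothetical) application daemon is not running.
    pgrep -x myappd >/dev/null 2>&1
}

run_checks() {
    rc=0
    for c in "$@"; do
        if "$c"; then echo "$c: OK"; else echo "$c: FAIL"; rc=1; fi
    done
    return $rc
}

# A charm action would let a non-zero exit mark the action as failed;
# here the status is just captured.
run_checks check_disk check_proc || status=$?
```

A monitoring charm relating over one generic interface could then translate each `name: OK/FAIL` line into its own format, which is the point: one set of checks, many consumers.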

Juju could totally own this space, and it's a compelling one.
Monitoring is a mess, and needs integrating with everything all the
time. If we do 80% of that integration for our users, I think that
would play very well with operations folks. And I don't think the
tools in the DISCO[3] orchestration space can do this as effectively -
they by design do not have a central place to consolidate this kind of
integration.


[1] Want a demo that will wow a devops crowd, IMO? Deploy a full demo
system, with monitoring exposed in GUI out of the box. I've said it
before (and been laughed at :), but the GUI could be an amazing
monitoring tool. We might even use it in production ;P

[2] or even more magically, just deploy a monitoring service like
nagios unrelated in the environment, and have it speak with the
model's controller to fetch checks from all machines. Implicit
relations to all, which for monitoring is maybe what you want?

[3] Docker Inspired Slice of COmputer,
https://plus.google.com/+MarkShuttleworthCanonical/posts/W6LScydwS89


-- 
Simon



Re: apache projects with interest

2016-05-17 Thread Konstantinos Tsakalozos
Thank you for the feedback and directions. Very helpful, as it will help us
steer our focus in the upcoming months.

Much appreciated,
Konstantinos


On Tue, May 17, 2016 at 12:57 PM, Tom Barber 
wrote:

> Yeah Apache Beam would probably be quite good as well from a Big Data
> modelling language POV as it came from the lovely folks at Google.
>
> --
>
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
>
> (Thanks to the Saiku community we reached our Kickstart goal, but you can always help by sponsoring the project)
>
> On 17 May 2016 at 10:53, Kapil Thangavelu  wrote:
>
>> One more of interest for big data workloads: Apache Apex, for stream
>> analytics.
>> On Tue, May 17, 2016 at 5:38 AM Merlijn Sebrechts <
>> merlijn.sebrec...@gmail.com> wrote:
>>
>>> Hi Tom
>>>
>>>
>>> Slightly off-topic but have you ever worked with drill? We did some
>>> tests with a 200GB and 100MB dataset in an hdfs cluster and the performance
>>> we're seeing is so bad drill is unusable for us..
>>>
>>> Some initial debugging revealed that drill isn't able to distribute the
>>> workload over the cluster. The entire query runs on one server... Have you
>>> been able to get better performance out of it?
>>>
>>>
>>>
>>> Kind regards
>>> Merlijn
>>>
>>> On Tuesday 17 May 2016, Tom Barber  wrote the following:
>>> > Okay so I've been asking around as you all know and we're considering
>>> this apache specific Juju Charms page so I figured it would be useful to
>>> roundup which communities I have spoken to who have shown definite interest
>>> in collaboration.
>>> > We have:
>>> > Apache Bigtop (we all know about)
>>> > Apache Zeppelin (we all know about)
>>> > Apache Karaf
>>> > Apache Nutch
>>> > Apache OODT
>>> > Apache Joshua (Incubating)
>>> > Apache Kylin
>>> > I'm sure there will be more, and probably some I've just forgotten
>>> about or other people spoke to, but I think thats a pretty good start.
>>> > As me and Kevin also discussed Drill is also a pretty important one
>>> from a personal perspective as it offers the best (IMHO) route to getting
>>> SQL over a bunch of your NOSQL charms with minimal effort, which then helps
>>> Saiku and any other BI tooling you guys get into the platform. Its great
>>> having all the big data stuff, but we need ways for end users to get this
>>> stuff back out!
>>> >
>>> > Tom
>>> > --
>>> > Director Meteorite.bi - Saiku Analytics Founder
>>> > Tel: +44(0)5603641316
>>> > (Thanks to the Saiku community we reached our Kickstart goal, but you
>>> can always help by sponsoring the project)
>>>
>>
>
>
>


-- 
Konstantinos Tsakalozos


Juju upgrade-juju error on 1.25.0 in production

2016-05-17 Thread Darryl Weaver
Hi,

I have seen some very odd behaviour with Juju 1.25.0.
For some reason the Juju state server has gone into an upgrade mode and
reduced Juju's functionality to the bare minimum.

It is odd, because no upgrade was requested, and now, although the Juju state
server reports that Juju is upgrading, there does not appear to be any
attempt to upgrade the juju tools version that is currently installed.
Trying to use:
juju upgrade-juju --reset-previous-upgrade
Reports that an upgrade is in progress.

The output from Juju status only shows all units and machines as version
1.25.0 and does not report any newer version that it is attempting to
upgrade to.

Because all the units were spewing log lines while the Juju state server was
unavailable, we have had to manually stop all Juju daemons on all units and
the state server just to stop the log files from filling up and rotating
constantly. Unfortunately, this large log output and rotation means that the
original cause of the problem is no longer in the remaining log files to
analyse.

My question is: how does juju actually decide that it has received an
upgrade request, and is this flagged in mongodb somewhere, or in the config
files on the state server?
I'd like to check where the upgrade is flagged and preferably recover by
removing the upgrade request, so Juju can be started again on the state
server and all units can connect to it.

The alternative is to make Juju actually complete the upgrade, but I do not
know which version Juju thinks it should upgrade to, so I cannot manually
download the juju tools and recreate the links to the newer version of the
tools.  If that is the way forward, where can I check the version Juju is
requesting, and is it just a case of relinking the directory in
/var/lib/juju/tools/, or are there other steps required too?
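
On the relinking question, here is a minimal sketch of what repointing a
tools symlink looks like. The directory layout, agent name, and version
string below are placeholders, so verify against your actual
/var/lib/juju/tools contents before touching a live state server:

```shell
# relink_tools: point an agent's tools symlink at another unpacked
# tools directory. All names here are hypothetical examples.
relink_tools() {
    local tools_dir="$1" agent="$2" version="$3"
    if [ ! -d "${tools_dir}/${version}" ]; then
        echo "no unpacked tools for ${version} in ${tools_dir}" >&2
        return 1
    fi
    # -n: replace the existing symlink rather than descending into it.
    ln -sfn "${version}" "${tools_dir}/${agent}"
}

# Example (against a scratch directory, not a real install):
# relink_tools /var/lib/juju/tools machine-0 1.25.5-trusty-amd64
```

This only covers the symlink step; whatever version Juju has flagged in its
state still has to match, which is the part of the question I can't answer
from the symlink side alone.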

Thanks


Re: apache projects with interest

2016-05-17 Thread Merlijn Sebrechts
Hi Tom


Slightly off-topic, but have you ever worked with Drill? We ran some tests
with a 200GB and a 100MB dataset in an HDFS cluster, and the performance
we're seeing is so bad that Drill is unusable for us.

Some initial debugging revealed that Drill isn't able to distribute the
workload over the cluster; the entire query runs on one server. Have you
been able to get better performance out of it?



Kind regards
Merlijn

On Tuesday 17 May 2016, Tom Barber  wrote
the following:
> Okay so I've been asking around as you all know and we're considering
this apache specific Juju Charms page so I figured it would be useful to
roundup which communities I have spoken to who have shown definite interest
in collaboration.
> We have:
> Apache Bigtop (we all know about)
> Apache Zeppelin (we all know about)
> Apache Karaf
> Apache Nutch
> Apache OODT
> Apache Joshua (Incubating)
> Apache Kylin
> I'm sure there will be more, and probably some I've just forgotten about
or other people spoke to, but I think thats a pretty good start.
> As me and Kevin also discussed Drill is also a pretty important one from
a personal perspective as it offers the best (IMHO) route to getting SQL
over a bunch of your NOSQL charms with minimal effort, which then helps
Saiku and any other BI tooling you guys get into the platform. Its great
having all the big data stuff, but we need ways for end users to get this
stuff back out!
>
> Tom
> --
> Director Meteorite.bi - Saiku Analytics Founder
> Tel: +44(0)5603641316
> (Thanks to the Saiku community we reached our Kickstart goal, but you can
always help by sponsoring the project)


apache projects with interest

2016-05-17 Thread Tom Barber
Okay, so I've been asking around, as you all know, and we're considering
this Apache-specific Juju Charms page, so I figured it would be useful to
round up which communities I have spoken to that have shown definite
interest in collaboration.

We have:

Apache Bigtop (we all know about)
Apache Zeppelin (we all know about)
Apache Karaf
Apache Nutch
Apache OODT
Apache Joshua (Incubating)
Apache Kylin

I'm sure there will be more, and probably some I've just forgotten about or
that other people spoke to, but I think that's a pretty good start.

As Kevin and I also discussed, Drill is also a pretty important one from a
personal perspective, as it offers the best (IMHO) route to getting SQL over
a bunch of your NoSQL charms with minimal effort, which then helps Saiku
and any other BI tooling you guys get into the platform. It's great having
all the big data stuff, but we need ways for end users to get this stuff
back out!


Tom

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstart goal, but you can
always help by sponsoring the project)


Re: Promulgated charms (production readiness)

2016-05-17 Thread Tom Barber
I agree with a lot of that. I don't purport to be a monitoring expert, but
at the same time, as a service writer I know which ports my service runs
on, what services it provides, and how to tell if it falls over.

Similarly, as a consumer, I'd also like to TFDI and draw some lines and
have at least some functional monitoring in place for services I don't
understand, which is more than "is my node active". I agree it's not
necessarily the complete solution, but it's better than nothing, especially
on stuff I'm unfamiliar with but need to deploy.
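
For the "is the port up and listening" level of check, the probe itself is
tiny. This is only a hedged sketch of the idea, not an actual NRPE plugin;
the OK/CRITICAL wording and the 3-second timeout are made up for the
example:

```shell
# check_port: report whether anything accepts TCP connections on
# host:port, using bash's /dev/tcp redirection. Output mimics
# Nagios-style OK/CRITICAL lines.
check_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "OK: ${host}:${port} is listening"
    else
        echo "CRITICAL: ${host}:${port} is not listening"
    fi
}
```

Something like this is the gap between "the machine pings" and "the service
actually answers", which is the minimum I'd want a promulgated charm to
make checkable.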

But there is also a clear need for users to be able to tweak and deploy
their own monitoring out of band, which for me introduces an interesting
Juju design quirk: the platform is designed for users to be able to design
services quickly and "easily". If I were deploying Puppet modules, for
example, you just git clone everything you need to a modules directory,
hack it around to fit your needs and deploy. Juju doesn't follow that
paradigm; stuff stays in the charm store apart from local deployment
copies, unless you want to fork it. So you are right: there needs to be a
way of doing this that is separate from charms. But I also think there
should be a "preferred way" of rolling this stuff out, because to people
who don't fully understand how it works, rolling some custom script when
everything else is controlled by the Juju GUI seems to break away from what
Juju is good at.

So I guess in summary: I agree about out-of-band changes and extensions to
logging, but I still think charms should provide "extended logging",
because the developers understand the services they are building.

Tom

--

Director Meteorite.bi - Saiku Analytics Founder
Tel: +44(0)5603641316

(Thanks to the Saiku community we reached our Kickstart goal, but you can
always help by sponsoring the project)

On 17 May 2016 at 09:42, Samuel Cozannet 
wrote:

> Just bringing in a bit of work I've been doing with a few monitoring
> (/logging) solutions such as Zabbix, Telegraf, Fluentd...
>
> I have taken the opposite approach as what is mostly proposed here. I'm
> from a more ops background, which means my devs usually had no clue
> whatsoever of how I would manage their stuff. Also, even if they did most
> of the job, I would probably need my own ops features.
> So I had a bunch of questions for them, would deploy their stuff for them,
> add the magic ingredient, and everyone would be happy.
>
> My point is if I was a CTO somewhere using Juju right now, I wouldn't
> expect my developers to actively write monitoring or logging entry points.
> Also, I would expect an ops team to redo it anyway.
> So I would rather create a "mynamespace-basic-layer", essentially
> extending the base layer with the tooling I need (eventually even
> incorporating Config management).
> At this point I would expect my monitoring and logging **installed**
>
> Then I use a "self assessment" listing the charms installed locally
> function like :
>
> function charm::lib::self_assessment() {
> [ -z ${JUJU_CONTEXT_ID+x} ] && \
> echo 0 || \
> {
> METADATA="$(find "${JUJU_CHARM_DIR}/../.." -name "metadata.yaml")"
> for FILE in ${METADATA}
> do
> CHARM+=" $(cat "${FILE}" | grep 'name' | head -n1 | cut -f2 -d' ')"
> done
> }
> echo "${CHARM}"
> }
>
> Essentially, I'm giving my supporting charms the ability to understand the
> local environment at the unit level and adapt, even without an explicit
> relation.
>
> I store all monitoring/logging templates centrally which gives me the
> ability to update them out of band, as you would for an antivirus DB. If I
> was to store them in-charm, I'd need a charm upgrade to update them, which
> can be cumbersome, especially if that is in the application charm.
>
> In the end, I am building very intelligent supporting charms for
> monitoring and logging, that understand the target logic based on their own
> knowledge of the charm world, and adapt and evolve over time.
> Anyone and everyone can then improve the templates, so it's even more
> goodness even from people not using Juju.
>
> My first experiment with Zabbix just used self assessment, added that to
> the agent metadata, and the server would have a specific list of templates,
> react to auto-discovery of the agents by looking up the metadata and
> associate the proper templates automagically. It would also create groups
> of machines, and eventually autoscaling rules.
>
> Now I'm doing the same with fluentd (logging), storing the list of
> templates in github and the agent downloads the templates it needs at
> install time, and also for Telegraf (InfluxDB)
>
> From a user perspective, it's really "let's add monitoting", creating a
> "juju add-relation all" feature like :
>
> juju status --format json | jq '.services | keys[]' | tr -d '"' | xargs -I

Re: Promulgated charms (production readiness)

2016-05-17 Thread Samuel Cozannet
Just bringing in a bit of work I've been doing with a few monitoring
(/logging) solutions such as Zabbix, Telegraf, Fluentd...

I have taken the opposite approach to what is mostly proposed here. I'm
from a more ops background, which means my devs usually had no clue
whatsoever about how I would manage their stuff. Also, even if they did
most of the job, I would probably need my own ops features.
So I had a bunch of questions for them, would deploy their stuff for them,
add the magic ingredient, and everyone would be happy.

My point is, if I were a CTO somewhere using Juju right now, I wouldn't
expect my developers to actively write monitoring or logging entry points.
Also, I would expect an ops team to redo it anyway.
So I would rather create a "mynamespace-basic-layer", essentially extending
the base layer with the tooling I need (eventually even incorporating
config management).
At this point I would expect my monitoring and logging to be **installed**.

Then I use a "self assessment" function like the following, which lists the
charms installed locally:

function charm::lib::self_assessment() {
    # No JUJU_CONTEXT_ID in the environment means we are not running
    # inside a Juju hook context.
    if [ -z "${JUJU_CONTEXT_ID+x}" ]; then
        echo 0
        return
    fi
    local CHARM=""
    # Collect the name: field of every charm metadata.yaml on the unit.
    METADATA="$(find "${JUJU_CHARM_DIR}/../.." -name "metadata.yaml")"
    for FILE in ${METADATA}
    do
        CHARM+=" $(grep 'name' "${FILE}" | head -n1 | cut -f2 -d' ')"
    done
    echo "${CHARM}"
}
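
For anyone puzzled by the `${JUJU_CONTEXT_ID+x}` test at the top of that
function, it is bash's set-vs-unset parameter expansion. A minimal
standalone illustration (the function name and messages here are mine, not
part of the original):

```shell
# ctx_state: print whether JUJU_CONTEXT_ID is set, using the same
# ${VAR+x} test as the self-assessment function: the expansion yields
# "x" when the variable is set (even to the empty string) and nothing
# when it is unset, so [ -z "${VAR+x}" ] is true only when VAR is
# genuinely unset.
ctx_state() {
    if [ -z "${JUJU_CONTEXT_ID+x}" ]; then
        echo "not in a hook context"
    else
        echo "in a hook context"
    fi
}
```

The point of `+x` over a plain `[ -z "$VAR" ]` is that an
empty-but-assigned variable still counts as set, which matters for
environment variables the agent may export empty.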

Essentially, I'm giving my supporting charms the ability to understand the
local environment at the unit level and adapt, even without an explicit
relation.

I store all monitoring/logging templates centrally which gives me the
ability to update them out of band, as you would for an antivirus DB. If I
was to store them in-charm, I'd need a charm upgrade to update them, which
can be cumbersome, especially if that is in the application charm.

In the end, I am building very intelligent supporting charms for monitoring
and logging, that understand the target logic based on their own knowledge
of the charm world, and adapt and evolve over time.
Anyone and everyone can then improve the templates, so it's even more
goodness even from people not using Juju.

My first experiment with Zabbix just used self assessment, added that to
the agent metadata, and the server would have a specific list of templates,
react to auto-discovery of the agents by looking up the metadata and
associate the proper templates automagically. It would also create groups
of machines, and eventually autoscaling rules.

Now I'm doing the same with fluentd (logging), storing the list of
templates on GitHub so the agent downloads the templates it needs at
install time; and also for Telegraf (InfluxDB).

From a user perspective, it's really "let's add monitoring", creating a
"juju add-relation all" feature like :

juju status --format json | jq '.services | keys[]' | tr -d '"' | xargs -I
'{}' juju add-relation ntp {}

and off you go...
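
The same pipeline can be split into a reusable helper so it can be dry-run
against canned `juju status` output first. This sketch assumes the Juju 1.x
JSON schema, where deployed applications sit under a top-level `services`
key, exactly as the one-liner above does:

```shell
# services_from_status: read `juju status --format json` output on
# stdin and print one service name per line (sorted, since jq's keys
# builtin sorts).
services_from_status() {
    jq -r '.services | keys[]'
}

# Dry-run sketch: print the add-relation commands instead of running
# them; drop the echo (or pipe to sh) once the list looks right.
# juju status --format json | services_from_status \
#     | xargs -I '{}' echo juju add-relation ntp '{}'
```

Using `jq -r` also avoids the `tr -d '"'` step, since raw output drops the
quotes for you.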

Thoughts?

++
Sam

--
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Business Development - Cloud and ISV Ecosystem
Changing the Future of Cloud
Ubuntu / Canonical UK LTD / Juju

samuel.cozan...@canonical.com
mob: +33 616 702 389
skype: samnco
Twitter: @SaMnCo_23


On Mon, May 16, 2016 at 10:10 PM, Marco Ceppi 
wrote:

> I think a layer:nrpe isn't the right choice, but a layer:monitoring might
> be. Esp with the layer/interface/subordinate model now, it seems that we
> could actualize an abstract means to declare what to monitor and have
> nrpe/zabbix-agent/promethous translate that.
>
> Marco
>
>
> On Mon, May 16, 2016 at 11:19 AM Cory Johns 
> wrote:
>
>> I think this is a strong argument for creating an interface:nrpe layer to
>> make supporting this as easy as possible.  There was also discussion a long
>> time ago about creating a translation layer of sorts for supporting
>> multiple different monitoring solutions (like in the Puppet example).  I
>> think with layers and layer APIs that's now more possible than ever.
>>
>> Once we have a simplified interface, I do think that the review process
>> should strongly recommend that monitoring support be included, though I
>> don't think we'll be able to make it a hard requirement.
>>
>> On Mon, May 16, 2016 at 10:06 AM, Tom Barber 
>> wrote:
>>
>>> NRPE can be related as well, this is true. Maybe I misstated it a bit,
>>> I'll blame the jetlag ;)
>>>
>>> Put it this way, if a user is implementing a to-be promulgated charm, as
>>> a minimum (for those who expose such a thing) why not ensure the port is up
>>> and listening, not just the host ping, for those who have capability
>>> already in Nagios for a more in depth NPRE make sure its available for
>>> those who want to monitor it, procrunning as well for alarms.
>>>
>>> My point being, I guess, is considering how much Juju tries to do at an
>>> application 

Today I submitted 5 PR's to be merged, 3 failed because mongo shat itself

2016-05-17 Thread David Cheney
What is the story with mongo ? It's constantly causing CI builds to fail
because of its complete shitness.

I've heard that some people have moved to mongo 3.2, which fixes the
problem, but as CI is clearly running the old rubbish version, this isn't a
problem that can be called fixed.

What is the solution ? Will mongo 2.6 be deprecated entirely, and the
build fail if the mongo version installed is < 3.2 ?
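
A build-time guard like that is mostly a version comparison. A hedged
sketch follows; the `db version vX.Y.Z` output format of `mongod --version`
is an assumption, so check it against your release before wiring this into
anything:

```shell
# version_at_least A B: succeed when dotted version string A >= B,
# letting GNU sort -V do the numeric-aware comparison.
version_at_least() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Guard sketch: fail the build when the local mongod is older than 3.2.
# Assumes `mongod --version` prints a "db version vX.Y.Z" line.
# mongo_version="$(mongod --version | sed -n 's/^db version v//p' | head -n1)"
# version_at_least "${mongo_version}" "3.2" \
#     || { echo "mongod >= 3.2 required, found ${mongo_version}" >&2; exit 1; }
```

Whether the project wants a hard failure or just a loud warning is the
policy question this thread is really about; the mechanism itself is cheap.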

Thanks

Dave

-- 
Juju-dev mailing list
Juju-dev@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju-dev


The mean time for a CI run has risen to 33 minutes

2016-05-17 Thread David Cheney
This got significantly worse in the last 6 weeks. What happened ?


Re: go test github.com/juju/juju/state takes > 10 minutes to run

2016-05-17 Thread David Cheney
My environment has not changed since 14.04, so I'm probably not
running mongo 3.2.

On Tue, May 17, 2016 at 4:13 PM, Cheryl Jennings
 wrote:
> Are you using mongo 3.2?  (see bug
> https://bugs.launchpad.net/juju-core/+bug/1573294)
>
> On Mon, May 16, 2016 at 9:52 PM, David Cheney 
> wrote:
>>
>> Testing this package takes 16 minutes on my machine*; it sure didn't
>> use to take this long.
>>
>> What happened ?
>>
>> * yes, you have to raise the _10 minute_ timeout to make this test run.
>>
>> --
>> Juju-dev mailing list
>> Juju-dev@lists.ubuntu.com
>> Modify settings or unsubscribe at:
>> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>
>



Re: go test github.com/juju/juju/state takes > 10 minutes to run

2016-05-17 Thread Cheryl Jennings
Are you using mongo 3.2?  (see bug
https://bugs.launchpad.net/juju-core/+bug/1573294)

On Mon, May 16, 2016 at 9:52 PM, David Cheney 
wrote:

> Testing this package takes 16 minutes on my machine*; it sure didn't
> use to take this long.
>
> What happened ?
>
> * yes, you have to raise the _10 minute_ timeout to make this test run.
>
> --
> Juju-dev mailing list
> Juju-dev@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju-dev
>