Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-10 Thread Jeremy Stanley
On 2014-04-09 10:49:20 -0700 (-0700), Clint Byrum wrote:
 Excerpts from Ruslan Kamaldinov's message of 2014-04-09 10:24:48 -0700:
  But, other concerns were expressed in the past. Let me quote
  Jeremy Stanley (from https://review.openstack.org/#/c/66884/):
   This will need to be maintained in Ubuntu (and backported to
   12.04 in Ubuntu Cloud Archive or if necessary a PPA managed by
   the same package maintenance team taking care of it in later
   Ubuntu releases). We don't install test requirements
   system-wide on our long-running test slaves unless we can be
   assured of security support from the Linux distribution
   vendor.
[...]

Keep in mind that at the time I said that, we had not completed our
transition to all single-use Jenkins slaves managed by nodepool, so
my analysis of the risk has shifted slightly. I'm not opposed
to official OpenStack projects needing arbitrary files cached
locally on our nodepool images so that they can make use of them on
demand without incurring download-related stability penalties (we
already do this for a lot of the packages and other bits DevStack
needs, for example test images). I'm also not worried by cached
packages on the images which are used by some jobs for official
projects without being installed by default. We've now got the
ability for some jobs to opt out of sudo restrictions, and so they
could in theory install these cached packages immediately prior to
running their own tests. I mainly just want to make sure we're not
preinstalling packages we can't trust unconditionally on all test
systems.
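The opt-in flow described here can be sketched in a few lines; everything below (the cache path, the package names, the helper function) is a hypothetical illustration, not the real infra tooling:

```python
# Sketch of the opt-in install step a sudo-enabled job could run against
# packages pre-cached on the nodepool image. The cache directory and
# package names are hypothetical; nothing here reflects the real infra
# layout.
import os

CACHE_DIR = "/opt/cache/debs"  # assumed image-side cache location


def install_command(packages):
    """Build the dpkg invocation a job would run to install pre-cached
    .deb files instead of downloading them at test time."""
    debs = [os.path.join(CACHE_DIR, p + ".deb") for p in packages]
    return ["sudo", "dpkg", "-i"] + debs


# A job needing HBase would run something like this immediately before
# its own tests:
cmd = install_command(["hbase", "zookeeper"])
```

The point is that the network-dependent step (populating the cache) happens once at image-build time, while the job itself only touches local files.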

 This is a huge philosophical question for OpenStack in general. Do
 we want to recommend things that we won't, ourselves, use in our
 infrastructure?
[...]
 Basically the support model of the distro isn't compatible with
 the support model of these databases.
[...]

This is where most of my current concerns with the situation are.
OpenStack is something run on servers, which operators are almost
always going to want under some sort of stable package management
from a distribution with a trust chain and security support. I think
we should be testing things in the sorts of ways we expect them to
be deployed in production by operators, whenever possible.

The current development communities around these databases don't
sound like they've yet reached the maturity where they're turning
out stable, long-term-supported and modularized software which would
be commonly found in those sorts of environments (nor would I feel
comfortable recommending them as a production solution until that
changes). This is not meant in a disparaging way--it's common that
projects have a high rate of churn early on as larger design
decisions are made, APIs are still being fleshed out, solutions are
tried and deemed untenable, et cetera, and forward momentum trumps
deployability/stability/maintainability.

And to some degree, yes, this is the pot calling the kettle black
since OpenStack itself can't seem to muster enough developers
interested in keeping stable release branches maintained and
testable for more than ~9 months before we're forced to EOL them due
to neglect. The irony is not lost on me. ;)
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-09 Thread Ruslan Kamaldinov
On Tue, Apr 8, 2014 at 8:42 PM, Sean Dague s...@dague.net wrote:
 I think it's important to understand what we mean by "stable" in the
 gate. It means that the end point is 99.% available, and that its
 up-or-down status is largely under our control.
[...]

A natural solution for this would be a local-to-infra package mirror for
HBase, Ceilometer, Mongo and all the dependencies not present in upstream
Ubuntu. It seems straightforward from the technical point of view, and it'll
help keep the gate invulnerable to outages in third-party mirrors. Of course,
someone has to sign up to create scripts for that mirror and support it in
the future.
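As an illustration of what such a mirror implies for jobs, here is a minimal sketch of the URL rewriting involved; the mirror host name and layout are invented for the example:

```python
# Illustrative sketch of the URL rewriting an infra-local mirror would
# imply: jobs resolve upstream package URLs against the local mirror
# instead of the vendor's host. The mirror base URL is made up for the
# example; only the Hortonworks upstream URL comes from this thread.
from urllib.parse import urlparse

MIRROR_BASE = "http://mirror.example.openstack.org"  # hypothetical


def mirrored_url(upstream_url):
    """Map an upstream package URL onto the local mirror, keyed by the
    upstream host so different vendors don't collide."""
    parts = urlparse(upstream_url)
    return "{}/{}{}".format(MIRROR_BASE, parts.netloc, parts.path)


hbase = mirrored_url(
    "http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x/hbase.deb")
```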

But, other concerns were expressed in the past. Let me quote Jeremy Stanley
(from https://review.openstack.org/#/c/66884/):
 This will need to be maintained in Ubuntu (and backported to 12.04 in Ubuntu
 Cloud Archive or if necessary a PPA managed by the same package maintenance
 team taking care of it in later Ubuntu releases). We don't install test
 requirements system-wide on our long-running test slaves unless we can be
 assured of security support from the Linux distribution vendor.

There is no easy workaround here. Traditionally this kind of software is
installed from vendor-supported mirrors and distributions, and those vendors
are the ones who maintain and provide security updates for Hadoop/HBase
packages. In the case of Ceilometer, I think having real tests on real
databases is more important than the requirement for the packages to have
security support from a Linux distribution.

Thanks,
Ruslan



Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-09 Thread Clint Byrum
Excerpts from Ruslan Kamaldinov's message of 2014-04-09 10:24:48 -0700:
[...]
 
 A natural solution for this would be a local-to-infra package mirror for
 HBase, Ceilometer, Mongo and all the dependencies not present in upstream
 Ubuntu. It seems straightforward from the technical point of view, and it'll
 help keep the gate invulnerable to outages in third-party mirrors. Of course,
 someone has to sign up to create scripts for that mirror and support it in
 the future.
 
 But, other concerns were expressed in the past. Let me quote Jeremy Stanley
 (from https://review.openstack.org/#/c/66884/):
  This will need to be maintained in Ubuntu (and backported to 12.04 in Ubuntu
  Cloud Archive or if necessary a PPA managed by the same package maintenance
  team taking care of it in later Ubuntu releases). We don't install test
  requirements system-wide on our long-running test slaves unless we can be
  assured of security support from the Linux distribution vendor.
 
 There is no easy workaround here. Traditionally this kind of software is
 installed from vendor-supported mirrors and distributions, and those vendors
 are the ones who maintain and provide security updates for Hadoop/HBase
 packages. In the case of Ceilometer, I think having real tests on real
 databases is more important than the requirement for the packages to have
 security support from a Linux distribution.

This is a huge philosophical question for OpenStack in general. Do we
want to recommend things that we won't, ourselves, use in our
infrastructure?

I think for the most part we've taken a middle-of-the-road approach
where we make sure the default backends and drivers are things that
_are_ supported in distros, and are things we're able to use. We also
let in the crazy-sauce backend drivers for those who are willing to run
third-party testing for them.

So I think what is needed here is for MagnetoDB to have at least one
backend that _is_ supported in a distro, and legally friendly to OpenStack
users. Unfortunately:

* HBase - not in any distro I could find (actually removed from Debian)
* Cassandra - not in any distro
* MongoDB - let's not have that license discussion again

Now, this is no simple matter. When I attempted to package Cassandra for
Ubuntu three years ago, there was zero interest upstream in supporting it
without embedding certain Java libraries. The attempts at extracting the
embedded libraries resulted in failed tests, patches, and an endless
series of "why are you doing this?" type questions from upstream.
Basically, the support model of the distro isn't compatible with the
support model of these databases.

What I think would have to happen is that infra would need to be
willing to reach out and have a direct relationship with upstream for
Cassandra and HBase, and we would need to be willing to ask OpenStack
users to do the same. Otherwise, I don't think MagnetoDB could ever be
integrated with either of them as the default driver.



Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-08 Thread Nadya Privalova
Hi,
Yep, it would be great to have HBase installed in the gate for Ceilometer.
Right now we use a self-written mocked HBase to test functionality, but the
HBase backend is becoming more complex and it's getting really hard to add
'new features' to the mocked HBase. Hortonworks is the main and largest
contributor in the Hadoop ecosystem, so I think their repos are very stable.
But actually both variants are acceptable to me.
I'd like to note that Ceilometer doesn't currently work with Cassandra, but
there are several blueprints about it:
https://blueprints.launchpad.net/ceilometer/+spec/cassandra-driver and a
MagnetoDB-related one,
https://blueprints.launchpad.net/ceilometer/+spec/support-magnetodb . So
Cassandra will be important for Ceilometer in the gate too.
Besides, I'd like to note that all these NoSQL solutions are very likely to
be used in production in the future (compared with SQL, I mean), and I think
all of us are interested in testing the things that will be used in real
life.
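To illustrate why a hand-rolled mock gets harder to maintain as the backend grows, here is a purely hypothetical sketch of the kind of in-memory fake table such tests rely on (this is not the actual Ceilometer test code):

```python
# Purely illustrative sketch of a hand-rolled HBase mock: an in-memory
# table supporting put() and scan(). It shows why each new backend
# feature (filters, column families, versions, ...) must be
# re-implemented by hand in the mock.
class FakeHBaseTable:
    def __init__(self):
        self._rows = {}  # row key -> {column: value}

    def put(self, row, columns):
        self._rows.setdefault(row, {}).update(columns)

    def scan(self, row_prefix=None):
        # Real HBase also offers server-side filters, timestamps and
        # cell versions -- each one is extra work to fake faithfully.
        for key in sorted(self._rows):
            if row_prefix is None or key.startswith(row_prefix):
                yield key, dict(self._rows[key])


table = FakeHBaseTable()
table.put("meter-001", {"f:counter_name": "cpu_util"})
table.put("resource-abc", {"f:project_id": "demo"})
rows = list(table.scan(row_prefix="meter-"))
```

Testing against a real HBase in the gate would remove the need to keep such a fake in sync with the real backend's behavior.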

Looking forward to the infra and devstack teams' input.

Thanks,
Nadya


On Tue, Apr 8, 2014 at 7:18 PM, Ilya Sviridov isviri...@mirantis.com wrote:

 Hello infra and devstack,


 I would like to start a thread about adding NoSQL database support to
 devstack for development and gating purposes.

 Currently the MagnetoDB project needs HBase and Cassandra for running
 tempest tests.

 We have implemented Cassandra as part of MagnetoDB devstack integration (
 https://github.com/stackforge/magnetodb/tree/master/contrib/devstack) and
 started working on HBase now (
 https://blueprints.launchpad.net/magnetodb/+spec/devstack-add-hbase).

 On the other hand, HBase and Cassandra are supported as database backends in
 Ceilometer, and it could be useful for development and gating to have them
 in devstack.

 So it looks like a common task for both projects, one that will eventually
 be integrated into devstack, so I'm suggesting we start that discussion in
 order to push ahead with it.

 Cassandra and HBase are both Java applications, so they come with a JDK as a
 dependency. We have verified that we can use the OpenJDK available in the
 Debian repos.

 The databases themselves are distributed in two ways:

 - as Debian packages built and hosted by the software vendors
  HBase: deb http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x HDP main
  Cassandra: deb http://debian.datastax.com/community stable main
 - as tar.gz archives hosted on the Apache download mirrors
  HBase: http://www.apache.org/dyn/closer.cgi/hbase/
  Cassandra: http://www.apache.org/dyn/closer.cgi/cassandra/

 The distributions provided by the Apache Foundation look more reliable, but
 I have heard that third-party sources may not be stable enough to be
 introduced as dependencies in devstack gating.
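For illustration, the two vendor repositories listed above can be rendered as apt sources.list entries; the helper function is hypothetical, but the two rendered lines match the repo entries quoted in this message:

```python
# Hypothetical helper a devstack plugin might use to render apt
# sources.list entries for the vendor repos named in this thread.
def apt_source_line(url, distribution, *components):
    """Render a one-line sources.list entry:
    'deb <url> <distribution> <component...>'."""
    return "deb {} {} {}".format(url, distribution, " ".join(components))


hbase = apt_source_line(
    "http://public-repo-1.hortonworks.com/HDP/ubuntu12/2.x", "HDP", "main")
cassandra = apt_source_line(
    "http://debian.datastax.com/community", "stable", "main")
```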

 I have registered a BP in the devstack project about adding HBase
 (https://blueprints.launchpad.net/devstack/+spec/add-hbase-to-devstack) and
 we have started working on it.

 Please share your thoughts about it to help make it real.
 Thank you.


 Have a nice day,
 Ilya Sviridov
 isviridov @ FreeNode





Re: [openstack-dev] [OpenStack-Infra][Ceilometer][MagnetoDB] HBase database in devstack

2014-04-08 Thread Sean Dague
I think it's important to understand what we mean by "stable" in the
gate. It means that the end point is 99.% available, and that its
up-or-down status is largely under our control.

Things that are not stable by this definition which we've moved away
from for the gate:
 * github.com - one of the reasons for git.openstack.org
 * pypi.python.org - one of the reasons for our own pypi mirror
 * upstream distro mirrors (we use cloud-specific mirrors, which even
then still fail sometimes, more often than we'd like)

Fedora.org is not stable by this measure either: downloading an ISO from
fedora.org fails 5% of the time in the gate.

I'm sure the Hortonworks folks are good folks, but by our standards of
reliability, no one stacks up. An outage on their end means that any
project which gates on it will be blocked from merging any code until
it's addressed. If Ceilometer wants to take that risk in their check
queue (and be potentially blocked), that might be one thing, and we
could talk about that. But we definitely can't co-gate and block all of
OpenStack because of a Hortonworks outage (which will happen, especially
if we download packages from them 600-1000 times a day).
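A back-of-the-envelope calculation makes the point concrete. The 600-1000 downloads/day range is from this message; the 99.9% per-download success rate is a hypothetical (generous) figure:

```python
# Illustration of the gate-volume argument: even a very reliable mirror
# produces failures daily at gate download volumes. The 99.9% success
# rate is a made-up, generous assumption; 600-1000 downloads/day is the
# range given in the message above.
def expected_failures(downloads_per_day, per_download_success):
    """Expected number of failed downloads per day."""
    return downloads_per_day * (1.0 - per_download_success)


def p_at_least_one_failure(downloads_per_day, per_download_success):
    """Probability that at least one download fails in a day."""
    return 1.0 - per_download_success ** downloads_per_day


low = expected_failures(600, 0.999)    # ~0.6 failed downloads/day
high = expected_failures(1000, 0.999)  # ~1 failed download/day
```

Even a mirror far more reliable than the 5% ISO failure rate mentioned above would still block gating jobs regularly at that volume.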

-Sean

On 04/08/2014 12:14 PM, Nadya Privalova wrote:
[...]


-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com