[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-04-25 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
+ User Stories
+ --
+ 
+ Marcos has been tasked with setting up a new Hadoop compute cluster to
+ support log analysis of his company's website activity; he is able to
+ quickly and easily deploy Apache Hadoop on his company's internal cloud
+ using Juju with packages that come directly from the Canonical partner
+ archive (as mandated by his company's IT policy).
+ 
+ - Implemented - charms for hadoop, zookeeper, hbase and hive available
+ in the Juju charm store, albeit packages come from a PPA rather than
+ partner for this release.
+ 
+ Natalie wants to set up a Hadoop cluster; she's not able to use Juju and
+ a public cloud due to the existing infrastructure policies and design in
+ place at her organisation, but she is able to deploy the required
+ packages directly from the Canonical partner archive.
+ 
+ - Implemented - packages work outside of charms, albeit packages come
+ from a PPA rather than partner for this release.
+ 
+ --
+ 
  Status 20120411
  
  Hadoop 1.0.2 uploaded to dev archive.
  
  Charms for hbase, hadoop and hive written for Ubuntu precise.
  
  Status 20120306
  
  Testing packages available for hadoop, hbase, pig and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/testing
  
  sudo add-apt-repository ppa:hadoop-ubuntu/testing
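  As a rough sketch of how the testing PPA could be consumed (assuming the
  binary package names match the source names listed above):
  
  sudo add-apt-repository ppa:hadoop-ubuntu/testing
  sudo apt-get update
  # package names assumed to match the source names listed above
  sudo apt-get install hadoop hbase pig hadoop-zookeeper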
  
  Juju charms available for hadoop, hbase and zookeeper:
  
  lp:~charmers/charms/precise/zookeeper/trunk
  lp:~charmers/charms/precise/hadoop/trunk
  lp:~charmers/charms/precise/hbase/trunk
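  A minimal deployment sketch with these charms (assumes a bootstrapped Juju
  environment; the add-relation line is illustrative only, as the actual
  relation names depend on each charm's metadata.yaml):
  
  juju bootstrap
  juju deploy zookeeper
  juju deploy hadoop
  juju deploy hbase
  # illustrative only - check each charm's metadata.yaml for the real interfaces
  # juju add-relation hbase zookeeper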
  
  Note that by default the zookeeper charm will use the zookeeper packages
  from the main archive, not the PPA - see the charm config.yaml file for
  details on how to use the PPAs with these charms.
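  For example (a sketch only - the option name used here is hypothetical; the
  real one is defined in the charm's config.yaml):
  
  # 'source' is a hypothetical option name - see config.yaml for the real one
  juju set zookeeper source=ppa:hadoop-ubuntu/testing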
  
  Dev PPA also now hosting backports of all packages for Ubuntu 11.10.
  
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-1.0.2
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
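  As an illustration of how the debconf options noted above (namenode,
  jobtracker and hdfs_dir) could be preseeded non-interactively - the template
  names below are hypothetical, the real ones are defined by the package's
  debconf templates:
  
  # hypothetical template names - check the package's debconf templates
  echo "hadoop hadoop/namenode string namenode.example.com" | sudo debconf-set-selections
  echo "hadoop hadoop/jobtracker string jobtracker.example.com" | sudo debconf-set-selections
  echo "hadoop hadoop/hdfs_dir string /srv/hadoop/hdfs" | sudo debconf-set-selections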
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu 
(generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration 
option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
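  A rough sketch of the kind of JAVA_HOME guessing bigtop-utils does (not the
  actual bigtop script - just an illustration of scanning the common Ubuntu
  JVM locations and picking the first match):
  
  for candidate in /usr/lib/jvm/java-6-openjdk* /usr/lib/jvm/java-7-openjdk* \
                   /usr/lib/jvm/default-java; do
      if [ -d "$candidate" ]; then
          export JAVA_HOME="$candidate"
          break
      fi
  done
  echo "JAVA_HOME=$JAVA_HOME"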
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper 

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-04-11 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
+ Status 20120411
+ 
+ Hadoop 1.0.2 uploaded to dev archive.
+ 
+ Charms for hbase, hadoop and hive written for Ubuntu precise.
+ 
  Status 20120306
  
  Testing packages available for hadoop, hbase, pig and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/testing
  
  sudo add-apt-repository ppa:hadoop-ubuntu/testing
  
  Juju charms available for hadoop, hbase and zookeeper:
  
  lp:~charmers/charms/precise/zookeeper/trunk
  lp:~charmers/charms/precise/hadoop/trunk
  lp:~charmers/charms/precise/hbase/trunk
  
  Note that by default the zookeeper charm will use the zookeeper packages
  from the main archive, not the PPA - see the charm config.yaml file for
  details on how to use the PPAs with these charms.
  
  Dev PPA also now hosting backports of all packages for Ubuntu 11.10.
  
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
- hadoop-1.0.1
+ hadoop-1.0.2
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): POSTPONED
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
  Juju charm for hcatalog (will depend on the hcatalog package): POSTPONED
- [negronjl] Update existing hadoop charm for HDP packaging (will add option to 
select between the existing hadoop and the new hdp-hadoop package): TODO
- Support iterating packaging as charms are developed (5): INPROGRESS
- Test deployment and packaging: INPROGRESS
+ Support 

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-03-26 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120306
  
  Testing packages available for hadoop, hbase, pig and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/testing
  
  sudo add-apt-repository ppa:hadoop-ubuntu/testing
  
  Juju charms available for hadoop, hbase and zookeeper:
  
  lp:~charmers/charms/precise/zookeeper/trunk
  lp:~charmers/charms/precise/hadoop/trunk
  lp:~charmers/charms/precise/hbase/trunk
  
  Note that by default the zookeeper charm will use the zookeeper packages
  from the main archive, not the PPA - see the charm config.yaml file for
  details on how to use the PPAs with these charms.
  
  Dev PPA also now hosting backports of all packages for Ubuntu 11.10.
  
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-1.0.1
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
- Package hcatalog for partner (3): TODO
+ Package hcatalog for partner (3): POSTPONED
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
- Juju charm for hcatalog (will depend on the hcatalog package): TODO
+ Juju charm for hcatalog (will depend on the hcatalog package): POSTPONED
  [negronjl] Update existing hadoop charm for HDP packaging (will add option to 
select between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): INPROGRESS
  Test deployment and packaging: INPROGRESS


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-03-07 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120306
  
  Testing packages available for hadoop, hbase, pig and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/testing
  
  sudo add-apt-repository ppa:hadoop-ubuntu/testing
  
  Juju charms available for hadoop, hbase and zookeeper:
  
  lp:~charmers/charms/precise/zookeeper/trunk
  lp:~charmers/charms/precise/hadoop/trunk
  lp:~charmers/charms/precise/hbase/trunk
  
  Note that by default the zookeeper charm will use the zookeeper packages
  from the main archive, not the PPA - see the charm config.yaml file for
  details on how to use the PPAs with these charms.
  
  Dev PPA also now hosting backports of all packages for Ubuntu 11.10.
  
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-1.0.1
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
- [james-page] Juju charm for Hadoop: INPROGRESS
- [james-page] Juju charm for HBase (will depend on the hbase package): 
INPROGRESS
- [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
- [negronjl] Juju charm for pig (will depend on the pig package): TODO
- [negronjl] Juju charm for hive (will depend on the hive package): TODO
+ Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Update existing hadoop charm for HDP packaging (will add option to 

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-03-06 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
- hadoop-1.0.0
+ hadoop-1.0.1
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
- 
- Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
- A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
  [james-page] Juju charm for Hadoop: INPROGRESS
  [james-page] Juju charm for HBase (will depend on the hbase package): 
INPROGRESS
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update existing hadoop charm for HDP packaging (will add option to 
select between the existing hadoop and the new hdp-hadoop package): TODO
- Support iterating packaging as charms are developed (5): TODO
- Test deployment and packaging: TODO
- Provide support for charming work and review packaging installs: TODO
+ Support iterating packaging as charms are developed (5): INPROGRESS
+ Test deployment and packaging: INPROGRESS


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-03-06 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
+ Status 20120306
+ 
+ Testing packages available for hadoop, hbase, pig and hadoop-zookeeper
+ in http://launchpad.net/~hadoop-ubuntu/+archive/testing
+ 
+ sudo add-apt-repository ppa:hadoop-ubuntu/testing
+ 
+ Juju charms available for hadoop, hbase and zookeeper:
+ 
+ lp:~charmers/charms/precise/zookeeper/trunk
+ lp:~charmers/charms/precise/hadoop/trunk
+ lp:~charmers/charms/precise/hbase/trunk
+ 
+ Note that by default the zookeeper charm will use the zookeeper packages
+ from the main archive, not the PPA - see the charm config.yaml file for
+ details on how to use the PPAs with these charms.
+ 
+ Dev PPA also now hosting backports of all packages for Ubuntu 11.10.
+ 
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-1.0.1
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
  [james-page] Juju charm for Hadoop: INPROGRESS
  [james-page] Juju charm for HBase (will depend on the hbase package): 
INPROGRESS
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update existing hadoop charm for HDP packaging (will add option to 
select between the existing hadoop and the new hdp-hadoop package): 

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-28 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-1.0.0
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
- [james-page] Juju charm for Hadoop HDFS: INPROGRESS
- [james-page] Juju charm for Hadoop MapReduce: TODO
+ [james-page] Juju charm for Hadoop: INPROGRESS
  [james-page] Juju charm for HBase (will depend on the hbase package): 
INPROGRESS
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update existing hadoop charm for HDP packaging (will add option to 
select between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-24 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-1.0.0
  hcatalog-0.2.0
  pig-0.9.1
  hbase-0.92.0
  hive-0.8.1
  zookeeper-3.4.3
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
+ [james-page] Juju charm for Hadoop HDFS: INPROGRESS
+ [james-page] Juju charm for Hadoop MapReduce: TODO
+ [james-page] Juju charm for HBase (will depend on the hbase package): 
INPROGRESS
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
- [james-page] Juju charm for hbase (will depend on the hbase package): 
INPROGRESS
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
- [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
+ [negronjl] Update existing hadoop charm for HDP packaging (will add option to 
select between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-22 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
- hadoop-0.20.205.0
+ hadoop-1.0.0
  hcatalog-0.2.0
- pig-0.9.0 (0.9.1)
- hbase-0.90.4
- hive-0.7.1 (? 0.8.1 is out and should be compatible with hcatalog 0.2.0)
- zookeeper-3.3.2 (3.3.4)
+ pig-0.9.1
+ hbase-0.92.0
+ hive-0.8.1
+ zookeeper-3.4.3
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
  [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
- [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
+ [james-page] Juju charm for hbase (will depend on the hbase package): 
INPROGRESS
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-21 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0 (0.9.1)
  hbase-0.90.4
  hive-0.7.1 (? 0.8.1 is out and should be compatible with hcatalog 0.2.0)
  zookeeper-3.3.2 (3.3.4)
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: POSTPONED
- [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
INPROGRESS
+ [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
DONE
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-14 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
  in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0 (0.9.1)
  hbase-0.90.4
  hive-0.7.1 (? 0.8.1 is out and should be compatible with hcatalog 0.2.0)
  zookeeper-3.3.2 (3.3.4)
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
  Are these the jar files inside the hortonworks/apache source (e.g.
  ./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
  includes all of the compiled Java etc, so we should not build from source -
  just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
  specific package for this (bigtop-utils) which tries to guess - might be
  worth re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu
  (generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration
  option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
  so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
  affecting not only hadoop but all of the other parts (hive, pig, etc.) as
  well. This can really complicate the packaging/charming, so we may not have
  time to get it all done in this cycle. Just my thoughts.
  MultiArch native libraries in hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in
  somehow (maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
  produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
- Partner Archive upload and review: TODO
- [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
+ Partner Archive upload and review: POSTPONED
+ [james-page] Juju charm for zookeeper (will depend on the zookeeper package): 
INPROGRESS
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-01 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20120125
  
  Dev packages available for hadoop, hive, pig and hbase in
  http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
- pig-0.9.0
+ pig-0.9.0 (0.9.1)
  hbase-0.90.4
- hive-0.7.1
- # the script I got from Hortonworks labels this as hive-0.7.1+ which includes 
fixes to hive. - Yes - we might need to look at that - we should be able to 
overlay the fixes if need be without rebuilding the entire project
- zookeeper-3.3.2
+ hive-0.7.1 (? 0.8.1 is out and should be compatible with hcatalog 0.2.0)
+ zookeeper-3.3.2 (3.3.4)
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
- Needs confirmation (conflicts with archive):
+ Packages that target partner CANNOT have the same name as packages in
+ the main archive so zookeeper will be:
  
- zookeeper
+ hadoop-zookeeper
+ 
+ This will prevent any conflicts with the zookeeper package in the main
archive.
+ 
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
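  
  Following up the debconf point above, a minimal sketch of preseeding those
values non-interactively before install, e.g. from a charm hook (the template
names hadoop/namenode, hadoop/jobtracker and hadoop/hdfs_dir are illustrative
assumptions, not the confirmed template names):
  
  # template names and values below are illustrative only
  echo 'hadoop hadoop/namenode string namenode.internal:8020' | sudo debconf-set-selections
  echo 'hadoop hadoop/jobtracker string jobtracker.internal:8021' | sudo debconf-set-selections
  echo 'hadoop hadoop/hdfs_dir string /var/lib/hadoop/hdfs' | sudo debconf-set-selections
  sudo DEBIAN_FRONTEND=noninteractive apt-get install -y hadoop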
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle java for Ubuntu 
(generates Debian packaging from upstream binary distro).
  Kerberos security - probably actually needs to be a charm configuration 
option to secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
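  
  On the JAVA_HOME challenge above, a hypothetical sketch of the style of
detection bigtop-utils performs (not the actual bigtop-utils code; the
candidate JVM paths are assumptions):
  
  if [ -z "$JAVA_HOME" ]; then
    # candidate paths are illustrative only
    for candidate in /usr/lib/jvm/java-6-openjdk* /usr/lib/jvm/java-7-openjdk* /usr/lib/jvm/default-java; do
      if [ -x "$candidate/bin/java" ]; then
        JAVA_HOME="$candidate"
        export JAVA_HOME
        break
      fi
    done
  fi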
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
- Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
- [james-page] Package zookeeper for partner (2): TODO
+ Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
+ [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-02-01 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20122501
  
- Dev packages available for hadoop, hive, pig and hbase in
- http://launchpad.net/~hadoop-ubuntu/+archive/dev
+ Dev packages available for hadoop, hive, pig, hbase and hadoop-zookeeper
+ in http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0 (0.9.1)
  hbase-0.90.4
  hive-0.7.1 (? 0.8.1 is out and should be compatible with hcatalog 0.2.0)
  zookeeper-3.3.2 (3.3.4)
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Packages that target partner CANNOT have the same name as packages in
  the main archive so zookeeper will be:
  
  hadoop-zookeeper
  
  This will prevent any conflicts with the zookeeper package in the main
  archive.
- 
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): DONE
  [james-page] Package zookeeper for partner (2): DONE
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-27 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20122501
  
  Dev packages available for hadoop, hive, pig and hbase in
  http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1
  # the script I got from Hortonworks labels this as hive-0.7.1+ which includes 
fixes to hive. - Yes - we might need to look at that - we should be able to 
overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
+ [james-page] Package zookeeper for partner (2): TODO
  [james-page] Package hadoop for partner (2): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
- [james-page] Package zookeeper for partner (2): INPROGRESS
  Package hcatalog for partner (3): TODO
  [negronjl] Package pig for partner (2): DONE
  [james-page] Package hbase for partner (2): DONE
  [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-26 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Status 20122501
  
  Dev packages available for hadoop, hive, pig and hbase in
  http://launchpad.net/~hadoop-ubuntu/+archive/dev
  
  sudo add-apt-repository ppa:hadoop-ubuntu/dev
  
  
  
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1
  # the script I got from Hortonworks labels this as hive-0.7.1+ which includes 
fixes to hive. - Yes - we might need to look at that - we should be able to 
overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
- [james-page] Package hadoop for partner (5): DONE
- [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
- Package zookeeper for partner (3): TODO
- Package hcatalog for partner (10): TODO
- [negronjl] Package pig for partner (3): DONE
- [james-page] Package hbase for partner (3): INPROGRESS
- [negronjl] Package hive for partner (3): DONE
+ [james-page] Package hadoop for partner (2): DONE
+ [james-page] Rebuild native component during package rebuild for hadoop (3): 
DONE
+ [james-page] Package zookeeper for partner (2): INPROGRESS
+ Package hcatalog for partner (3): TODO
+ [negronjl] Package pig for partner (2): DONE
+ [james-page] Package hbase for partner (2): DONE
+ [negronjl] Package hive for partner (2): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-25 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
- hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
+ hive-0.7.1
+ # the script I got from Hortonworks labels this as hive-0.7.1+ which includes 
fixes to hive. - Yes - we might need to look at that - we should be able to 
overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
+ 
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): DONE
  [james-page] Package hbase for partner (3): TODO
  [negronjl] Package hive for partner (3): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-25 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1
  # the script I got from Hortonworks labels this as hive-0.7.1+ which includes 
fixes to hive. - Yes - we might need to look at that - we should be able to 
overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): DONE
- [james-page] Package hbase for partner (3): TODO
+ [james-page] Package hbase for partner (3): INPROGRESS
  [negronjl] Package hive for partner (3): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-25 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
+ Status 20122501
+ 
+ Dev packages available for hadoop, hive, pig and hbase in
+ http://launchpad.net/~hadoop-ubuntu/+archive/dev
+ 
+ sudo add-apt-repository ppa:hadoop-ubuntu/dev
+ 
+ 
+ 
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1
  # the script I got from Hortonworks labels this as hive-0.7.1+ which includes 
fixes to hive. - Yes - we might need to look at that - we should be able to 
overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): DONE
  [james-page] Package hbase for partner (3): INPROGRESS
  [negronjl] Package hive for partner (3): DONE
  [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-24 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): DONE
  [james-page] Package hbase for partner (3): TODO
  [negronjl] Package hive for partner (3): DONE
+ [james-page] Work out solution for JAVA_HOME detection/override (1): DONE
  Partner Archive upload and review: TODO
- Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-23 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): DONE
  [james-page] Package hbase for partner (3): TODO
  [negronjl] Package hive for partner (3): DONE
+ Partner Archive upload and review: TODO
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-19 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): INPROGRESS
- Rebuild native component during package rebuild for hadoop (5): TODO
+ [james-page] Rebuild native component during package rebuild for hadoop (5): 
INPROGRESS
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): INPROGRESS
  Package hbase for partner (3): TODO
  [negronjl] Package hive for partner (3): INPROGRESS
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-19 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
- [james-page] Package hadoop for partner (5): INPROGRESS
- [james-page] Rebuild native component during package rebuild for hadoop (5): 
INPROGRESS
+ [james-page] Package hadoop for partner (5): DONE
+ [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): INPROGRESS
  Package hbase for partner (3): TODO
  [negronjl] Package hive for partner (3): INPROGRESS
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-19 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...  
Are these the jar files inside the hortonworks/apache source (e.g. 
./hive-0.7.1+/lib/javaewah-0.3.jar?
  A: Most of the Apache projects ship a binary distribution tarball which
includes all of the compiled Java, so we should not build from source - just
re-use these jars.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7? JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as recommended approach to re-packaging Oracle Java for Ubuntu
(generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q: For Kerberos, do we have the necessary dependencies to get this done? If
so, I can do the charm work.
  Q: Re: Kerberos. We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done this cycle. Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - should, not must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector? and maybe mysql?
  No binary distribution - only source, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): INPROGRESS
- Package hbase for partner (3): TODO
+ [james-page] Package hbase for partner (3): TODO
  [negronjl] Package hive for partner (3): INPROGRESS
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add an option to
select between the existing hadoop and the new hdp-hadoop package - see the
juju set sketch after this list): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO
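
  A hedged sketch of how the hadoop charm's package-selection option might be
exercised once it exists (the option name package-source and the value
hdp-hadoop are assumptions, not the charm's confirmed interface):

  juju deploy hadoop hadoop-master
  juju set hadoop-master package-source=hdp-hadoop   # hypothetical option name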

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop



[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-19 Thread Juan L. Negron
Blueprint changed by Juan L. Negron:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
Are these the jar files inside the hortonworks/apache source (e.g.
./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
includes all of the compiled Java etc, so we should not build from source -
just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7?  JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as the recommended approach to re-packaging Oracle Java for
Ubuntu (generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q:  For Kerberos, do we have the necessary dependencies to get this done?  If
so, I can do the charm work.
  Q:  Re: Kerberos.  We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done in this cycle.  Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - a should, not a must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector (and maybe mysql)?
  No binary distribution - source only, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): DONE
  [james-page] Rebuild native component during package rebuild for hadoop (5): 
DONE
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
- [negronjl] Package pig for partner (3): INPROGRESS
+ [negronjl] Package pig for partner (3): DONE
  [james-page] Package hbase for partner (3): TODO
- [negronjl] Package hive for partner (3): INPROGRESS
+ [negronjl] Package hive for partner (3): DONE
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop



[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-17 Thread James Page
Blueprint changed by James Page:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
- hdp-hadoop
- hdp-hcatalog
- hdp-pig
- hdp-hbase
- hdp-hive
- hdp-zookeeper
- * The hdp prefix comes from Hortonworks terminology: Hortonworks Data 
Platform (though they are obviously reinforcing connectedness to hadoop itself.)
+ hadoop
+ hcatalog
+ pig
+ hbase
+ hive
+ 
+ Needs confirmation (conflicts with archive):
+ 
+ zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
Are these the jar files inside the hortonworks/apache source (e.g.
./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
includes all of the compiled Java etc, so we should not build from source -
just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7?  JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as the recommended approach to re-packaging Oracle Java for
Ubuntu (generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q:  For Kerberos, do we have the necessary dependencies to get this done?  If
so, I can do the charm work.
  Q:  Re: Kerberos.  We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done in this cycle.  Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - a should, not a must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector (and maybe mysql)?
  No binary distribution - source only, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): INPROGRESS
  Rebuild native component during package rebuild for hadoop (5): TODO
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): INPROGRESS
  Package hbase for partner (3): TODO
  Package hive for partner (3): TODO
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop



[Blueprint servercloud-p-hdp-hadoop] Ubuntu Server - Hadoop

2012-01-17 Thread Juan L. Negron
Blueprint changed by Juan L. Negron:

Whiteboard changed:
  Target versions of Hadoop components:
  
  hadoop-0.20.205.0
  hcatalog-0.2.0
  pig-0.9.0
  hbase-0.90.4
  hive-0.7.1 # the script I got from Hortonworks labels this as hive-0.7.1+ 
which includes fixes to hive. - Yes - we might need to look at that - we should 
be able to overlay the fixes if need be without rebuilding the entire project
  zookeeper-3.3.2
  Q: Their website indicates they will use ambari as something 
devops/juju-like. Clearly we want to use juju but have they made any ambari 
assumptions?
  A:  No idea - one to ask Matt
  
  Package naming:
  
  hadoop
  hcatalog
  pig
  hbase
  hive
  
  Needs confirmation (conflicts with archive):
  
  zookeeper
  
  Assumptions:
  
  Packaging will use bigtop as a base.
   - reuse of control file structure will help support bigtop packages in juju 
charms as and when they support precise.
  Java will NOT be rebuilt during the package build process.
  Native libraries will be rebuilt during the package build process.
  Q: What does this mean for the hive embedded in hcatalog (via 
hcatalog-0.2.0/hive/external)? I'm guessing we just use the hdp-hive package we 
create.
  Patches may be required for native build components.
  Debconf configuration in packages is useful for Juju charms and should be 
applied where appropriate.
  --- Debconf currently in use:  namenode, jobtracker and hdfs_dir
  Binary distributions will be used from upstream (no source build by default).
  Q: Exactly what does this mean in this context? Binary distributions of ...
Are these the jar files inside the hortonworks/apache source (e.g.
./hive-0.7.1+/lib/javaewah-0.3.jar)?
  A: Most of the apache projects ship a binary distribution tarball which
includes all of the compiled Java etc, so we should not build from source -
just re-use these jars etc.
  
  Challenges:
  
  hcatalog not packaged in bigtop trunk (need to check branches)
  JAVA_HOME and use of openjdk-6/7?  JAVA_HOME detection - bigtop uses a
specific package for this (bigtop-utils) which tries to guess - might be worth
re-using.
  java-package as the recommended approach to re-packaging Oracle Java for
Ubuntu (generates Debian packaging from the upstream binary distro).
  Kerberos security - probably needs to be a charm configuration option to
secure a cluster.
  Q:  For Kerberos, do we have the necessary dependencies to get this done?  If
so, I can do the charm work.
  Q:  Re: Kerberos.  We may need to really look into this as it will end up
affecting not only hadoop but all of the other parts (hive, pig, etc.) as well.
This can really complicate the packaging/charming too, so we may not have time
to get it all done in this cycle.  Just my thoughts.
  MultiArch native libraries in the hdp-hadoop package - a should, not a must.
  Hive contains patches for hcatalog support - we need to work these in somehow
(maybe an overlay to the binary re-distribution).
  
  hcatalog:
  External dependency on mysql-connector (and maybe mysql)?
  No binary distribution - source only, so it will need to be built locally to
produce a release tarball.
  
  Work items:
  Resolve conflict between archive zookeeper and partner zookeeper (1): TODO
  [james-page] Package hadoop for partner (5): INPROGRESS
  Rebuild native component during package rebuild for hadoop (5): TODO
  Package zookeeper for partner (3): TODO
  Package hcatalog for partner (10): TODO
  [negronjl] Package pig for partner (3): INPROGRESS
  Package hbase for partner (3): TODO
- Package hive for partner (3): TODO
+ [negronjl] Package hive for partner (3): INPROGRESS
  Work out solution for JAVA_HOME detection/override (1): TODO
  [negronjl] Juju charm for zookeeper (will depend on the zookeeper package): 
TODO
  [negronjl] Juju charm for hcatalog (will depend on the hcatalog package): TODO
  [negronjl] Juju charm for pig (will depend on the pig package): TODO
  [negronjl] Juju charm for hbase (will depend on the hbase package): TODO
  [negronjl] Juju charm for hive (will depend on the hive package): TODO
  [negronjl] Update hadoop charm for HDP packaging (will add option to select 
between the existing hadoop and the new hdp-hadoop package): TODO
  Support iterating packaging as charms are developed (5): TODO
  Test deployment and packaging: TODO
  Provide support for charming work and review packaging installs: TODO

-- 
Ubuntu Server - Hadoop
https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-hdp-hadoop
