On 02/03/2013 08:34 AM, Rohit Yadav wrote:
On Sun, Feb 3, 2013 at 12:54 PM, Hugo Trippaers
<htrippa...@schubergphilis.com> wrote:
Heya all,
As David already mentioned, we met up at the Build-A-Cloud-Day in Ghent with Wido,
Noa and a few other folks. During this day (with lots of nice talks) we had a
chance to sit down and discuss packaging, something that Noa, Wido and I had been
planning to do for a long time. Some other people joined the discussion and,
with the help of the available blackboard (the event was held in a school
;-) ), we managed to sync our ideas on packaging the 4.1 release.
With our ideas synced we thought it was time to bring them to the list and ask
for feedback from the community. We are also using our time at FOSDEM
to sync up with some of the people from the distros and see what they think of
the new ideas; it would still be nice to have CloudStack packaged up and
shipped with some of the distributions out there.
So the main goal of redoing packaging is getting rid of ant and waf
completely. A secondary goal is to create a reference set of packages which in
themselves should be enough to get anyone going with CloudStack, while depending
as little as possible on the underlying distro. Real distro-specific stuff should
be handled by packagers from those distros. We just aim to provide a reference
implementation.
Our goal is to have a reference implementation that will install on the
following list of operating systems: CentOS 6.3, Ubuntu 12.04 and Fedora 18.
It will probably install and run on a lot more, but this is the
set that we will test against (I'm using a Jenkins system at the office to
automatically build and install, and these images will be used for the tests).
Next we will remove as many system dependencies as possible: we will use
maven to gather the dependencies and make sure that they are packaged and
shipped with the RPMs. This makes for slightly bigger packages, but it reduces the
overhead of having to check each operating system and removes the risk of version
mismatches with the jar files present on the distro.
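To give an idea of how that could look (just a sketch, not the final spec file
setup; the module name and output directory are examples), the
maven-dependency-plugin can collect a module's runtime jars into one directory,
which the RPM then installs into that package's lib directory:

    # Sketch: collect the runtime jars of one module so the RPM can ship
    # them in the package's lib/ directory (module and paths are examples).
    mvn -pl usage -am package dependency:copy-dependencies \
        -DincludeScope=runtime \
        -DoutputDirectory=target/dependencies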
We also intend to change the name of the packages to cloudstack to make it
perfectly clear what somebody is installing. This will also affect the location
of several files like configuration files and scripts, but we plan to include
symlinks for backwards compatibility. The feasibility of this will obviously be
tested in the packaging pipeline my colleagues are building for me.
Awesome!
The planned packages for now are cloudstack-management, cloudstack-agent,
cloudstack-common, cloudstack-cli, cloudstack-docs, cloudstack-awsapi and
cloudstack-usage. You might already have seen these names in some of the
checkins.
Alright, we will also have to implement upgrade paths and
s/cloud-/cloudstack-/g in a lot of scripts.
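Roughly something like this for the rename (just a sketch; the directory and the
exact file list still have to be worked out, and some hits will need a manual check):

    # Rough sketch: rewrite the old cloud-* names to cloudstack-* in the
    # packaging scripts; review the diff afterwards for false positives.
    grep -rl 'cloud-' packaging/ | xargs sed -i 's/cloud-/cloudstack-/g'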
I think the best way to demonstrate this is by showing code, instead of a lengthy
email ;-) All packages will have a directory in /usr/share/cloudstack-%{name}
Why not /usr/share/cloudstack/%{name}? cloudstack-cli would have to be
installed like any other python app, in /usr/*path to python 2.6 or
2.7 dir*/site-packages/cloudmonkey, or will there be some other form
of installation?
Because of how distros do things. Packages should not share a directory
like /usr/share/cloudstack, so they each need their own directory for their files.
Otherwise you can get "directory not empty" issues when removing a package.
and the main jar will be located there; any dependencies will be located in
the lib directory beneath that location, with the exception of the management
package, which will be installed as an exploded web application archive in the
same directory. Scripts will be located in /usr/share/cloudstack-common/scripts
and symlinks will be made to the previous locations for backwards compatibility.
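To make that a bit more concrete, roughly the layout we have in mind (the file
names and the old path are illustrative, not final):

    /usr/share/cloudstack-usage/cloud-usage.jar         # main jar of the package
    /usr/share/cloudstack-usage/lib/                     # its bundled dependency jars
    /usr/share/cloudstack-management/webapps/client/     # exploded webapp (management only)
    /usr/share/cloudstack-common/scripts/                # shared scripts

    # backwards compatibility, e.g. from a %post scriptlet (old path is an example):
    ln -sf /usr/share/cloudstack-common/scripts /usr/share/cloud/scripts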
+1 I want that
Why not put everything under /usr/share/java/cloudstack/* and, using
recursive dir traversal, create a classpath reusable in all scripts?
Then you'll be adding a lot to the classpath which you don't actually need.
So there will indeed be some redundant storage on systems, but the
amount of data is small.
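The idea is that each start script only puts its own package's jars on the
classpath, something like this (paths and names are illustrative):

    # Sketch: build the classpath from one package's own lib/ directory
    # instead of recursively walking a shared /usr/share/java/cloudstack.
    CLASSPATH=/usr/share/cloudstack-usage/cloud-usage.jar
    for jar in /usr/share/cloudstack-usage/lib/*.jar; do
        CLASSPATH="$CLASSPATH:$jar"
    done
    export CLASSPATH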
Towards 4.2 we want to make more changes, but we can't do everything at
once.
Now it's just setting up a machine (VM) with 4.0 in it. Try the upgrade,
learn from it, revert the VM to 4.0 and try again until it works.
Snapshots ftw!
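For example, with a KVM/libvirt test VM (the domain name is made up; any
hypervisor with snapshots works):

    # Snapshot the 4.0 VM before each upgrade attempt, then roll back and retry.
    virsh snapshot-create-as cs40-test pre-upgrade
    # ... try the 4.1 package upgrade inside the VM ...
    virsh snapshot-revert cs40-test pre-upgrade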
Wido
I think these are the highlights of what we intend to do for the 4.1 release. We
have a lot of plans for subsequent releases and for how to get us into the distros,
but for now we thought it prudent to focus on getting packages for the 4.1
release as soon as possible and to focus on other improvements later.
Awesome, keep us posted.
Regards.
@Wido, @Noa, @David did I miss anything important?
Cheers,
Hugo