On Fri, Sep 24, 2010 at 9:08 AM, Alex Boisvert <[email protected]> wrote:
> Hi Donald,

It is Peter :)

> 2) adding download_from to an Artifact;  this could be useful if only
> one/some of many artifacts come from a different repo and we don't want to
> pay the latency tax by querying this repo for all artifacts.

Precisely the use case. Several of the dependencies I am working with
are located in only one repository, so some of the projects I have
written with buildr have ~14 repository definitions.
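To make that concrete, here is a minimal sketch of what I have in mind
(the URLs and artifact spec are made up, and download_from is the
proposed method, not something buildr has today):

# Today, every remote repository is queried for every artifact:
repositories.remote << 'http://repo1.maven.org/maven2'
repositories.remote << 'http://example.com/one-off-repo'  # exists for one dependency
# ...and a dozen more like it

# Proposed: scope the one-off repository to the artifact that needs it
artifact('com.example:rare-dep:jar:1.0').download_from('http://example.com/one-off-repo')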

> As for mirror_to, it seems to be largely duplicated by repositories.remote or
> download_from (assuming we add it).  Given the download information is
> typically in the same buildfile, I'm not sure when this would be useful.

The main thing it is useful for is automating mirroring: we tend to
have a local web server acting as a repository that holds all the
artifacts mirrored from the internet. I agree it is probably not as
useful as the other features, given that some people manage their
repositories using tools like Nexus.
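As a sketch of that workflow (mirror_to is hypothetical and the URLs
are illustrative):

# Fetch the artifact from its usual remote repository, then publish the
# same files to the local web server so later builds resolve against the mirror.
artifact('org.example:lib:jar:2.1').mirror_to('http://intranet.example.com/repo')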

> I'm not clear on the value of uploading to different repositories.  People
> who need artifacts replicated typically set up repositories such that they
> mirror (a subset of or all of) each other.  Granted, it doesn't add much
> complexity but it doesn't seem like a widely needed feature.  (People
> reading this, feel free to jump in if you think it's useful.)

While there is demand for it in my workplace, I could imagine that it
is not widely useful ... then again, it is not much more complex ;)

> Stepping back a little bit, I'm wondering if adding metadata to artifacts is
> a good approach.  The alternative is to place artifacts in arrays or hashes
> and manage these as sets, e.g.,
>
> public_artifacts = [ list, of, artifacts, to, publish, to, public, repos ]

While this is possible, I tend to store all dependencies in the
build.yaml file, since that format is much more amenable to machine
reading and processing. So if there were a way to easily define groups
of artifacts in this file then I could definitely be convinced to use
this approach.
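For example, build.yaml might grow a groups section alongside the
artifacts section I already use to name dependencies like :mydep; the
layout below is purely a sketch of what I would find convenient:

artifacts:
  mydep:    com.example:mydep:jar:1.0
  otherdep: com.example:otherdep:jar:2.0
groups:
  public: [mydep, otherdep]

The buildfile could then resolve a group by name, e.g. via the
settings API (again, just a sketch):

public_artifacts = Buildr.settings.build['groups']['public'].map { |n| artifact(n.to_sym) }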

> # This seemed to be another of your use-case
> task :replicate_artifacts do
>  artifacts.each do |a|
>    a.download :repository => 'http://bigco.com/repo'  # not available today
>    a.upload :url => 'http://example.com/repo', :username => 'foo'
>  end
> end

I could imagine an approach like this being useful for packages,
though I am not so sure about artifacts if they are managed in
build.yaml. My instinct would be to add these methods to the artifact
base class or mixin (IIRC ActsAsArtifact) so you could do something
like the following:

# download_from and upload_to are the proposed additions
artifact(:mydep).download_from('http://example.com/internal')

define 'foo' do
  compile.with :mydep
  package(:jar).upload_to('http://example.com/repo')
end

> Have you considered this approach?  I'd be curious to hear if/why you think
> using metadata is a better way to go.

I would be reasonably happy with that approach if you could define
groups of artifacts in build.yaml.

I guess the main reason I was looking at metadata attributes is that
there is a lot of other information I want to store against an
artifact so that I could automate other parts of the build process.

e.g. store a versioning policy and the last non-snapshot version. That
way you could guess the next version based on whether the current code
is binary compatible with the last release, ensure that the artifact
compiles and tests against the last non-snapshot release of all its
dependencies, and make sure the package adheres to the versioning
policy (i.e. that the version does the correct things under OSGi). By
keeping this information in build.yaml it is easy to write a release
plugin that does all the magic required to automate this and then
updates build.yaml after a release occurs.
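A sketch of what that metadata might look like in build.yaml (every
key below is hypothetical; none of this exists today):

artifacts:
  mydep:
    spec: com.example:mydep:jar:1.1-SNAPSHOT
    versioning_policy: osgi   # rules the next version number must follow
    last_release: 1.0         # last non-snapshot version that was published

A release plugin could read these keys, pick the next version, run the
compatibility checks, and write the updated last_release back after
publishing.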

-- 
Cheers,

Peter Donald
