Re: distributed/centralized META.yml data

2007-10-30 Thread Andy Lester
> The trouble with ad-hoc is just that it tends
> to *never* get formalized (i.e. never gets centrally documented,
> becomes discoverable, appears in books, etc.)

The trouble with planned solutions is that they tend to *never* get
implemented.  Lots of talking, lots of ideas, precious little
implementation.

I'm not afraid of iterative solutions.  It doesn't bother me to think
that I might implement something today, just to get it out there, and
then have to revert it.

> Don't worry about what anybody else is going to do. The best way to
> predict the future is to invent it.
>     --Alan Kay

I think what's missing most of the time, certainly in discussions like
this, is the "invent" part where a usable solution winds up actually existing.

xoa

-- 
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance


Re: distributed/centralized META.yml data

2007-10-30 Thread Ovid
----- Original Message -----
 From: Andy Lester [EMAIL PROTECTED]
 To: Eric Wilhelm [EMAIL PROTECTED]
 Cc: module-authors@perl.org
 Sent: Tuesday, October 30, 2007 2:39:01 PM
 Subject: Re: distributed/centralized META.yml data
 
>> The trouble with ad-hoc is just that it tends
>> to *never* get formalized (i.e. never gets centrally documented,
>> becomes discoverable, appears in books, etc.)
>
> The trouble with planned solutions is that they tend to *never* get
> implemented.  Lots of talking, lots of ideas, precious little
> implementation.

I agree completely with this.  I see it all the time and it's a huge 
frustration.
 
> I'm not afraid of iterative solutions.  It doesn't bother me to think
> that I might implement something today, just to get it out there, and
> then have to revert it.

That being said, iterative solutions are fine on a personal project or when
working on code for a company.  They can be far more problematic when they're
planned for a huge, real-world user base.  If the first solution starts out
good enough, lots of people buy into it and remain stuck on a suboptimal path.
Anyone remember how COBOL was supposed to be an intermediate step, with a
proper language to be released later on?  Grace Hopper reportedly said that
she would have done things much differently if she had known the next version
was never coming out.

(I could also make snarky comments about the crap we call SQL and how the 
buy-in to this suboptimal solution has effectively destroyed a lot of real 
database work, but I won't :)

Frankly, I'd suggest waiting a while before pulling the trigger; if nothing
manifests (which I'm guessing it won't), go ahead.

Cheers,
Ovid
--
Buy the book  - http://www.oreilly.com/catalog/perlhks/
Perl and CGI  - http://users.easystreet.com/ovid/cgi_course/
Personal blog - http://publius-ovidius.livejournal.com/
Tech blog - http://use.perl.org/~Ovid/journal/




Re: distributed/centralized META.yml data

2007-10-30 Thread Andy Lester


On Oct 30, 2007, at 9:54 AM, Ovid wrote:

> Frankly, I'd suggest waiting a while before pulling the trigger; if
> nothing manifests (which I'm guessing it won't), go ahead.



The risk is infinitesimal.  I put some fields in a META.yml, and ask  
Graham to see if he can support them.  If a better solution comes  
along, then we change to it.


I'm baffled by the fear we have of changing things, of making a first  
stab at something for fear that we might have to change it again.


--
Andy Lester = [EMAIL PROTECTED] = www.petdance.com = AIM:petdance






Re: distributed/centralized META.yml data

2007-10-30 Thread Eric Wilhelm
# from Andy Lester
# on Tuesday 30 October 2007 07:39:

>> The trouble with ad-hoc is just that it tends
>> to *never* get formalized (i.e. never gets centrally documented,
>> becomes discoverable, appears in books, etc.)
>
> The trouble with planned solutions is that they tend to *never* get
> implemented.

By ad-hoc, I was referring mostly to the distributed nature of the
META.yml living in the tarball, as opposed to Aristotle's pondering of a
centralized system.  It's about coordination schemes, formalization,
and discoverability -- not iterative development.

Note that the META.yml *fields* are primarily not ad-hoc.  This is yet 
another set of pros/cons/caveats in itself.

But, I'm also thinking about the scattered metadata in various 
*.perl.org sites.  I think the progress vector for making that metadata 
more usefully organized and accessible probably involves META.yml in 
some way.

--Eric
-- 
software:  a hypothetical exercise which happens to compile.
---
http://scratchcomputing.com
---


Re: distributed/centralized META.yml data

2007-10-29 Thread Eric Wilhelm
# from A. Pagaltzis
# on Monday 29 October 2007 15:11:

> Clearly this info should live somewhere and search.cpan should
> use it, but META.yml is the wrong place. It belongs somewhere
> unversioned.

+1, and see-also: some kind of common API between all of the meta-ish 
foo.perl.org sites.

  http://www.nntp.perl.org/group/perl.module.build/2007/07/msg778.html

I think META.yml can play a part in that, particularly in fostering 
distributed pioneering.  The trouble with ad-hoc is just that it tends 
to *never* get formalized (i.e. never gets centrally documented, 
becomes discoverable, appears in books, etc.)

Of course, the trouble with centralization is that it can resist, 
discourage, or stifle change.  Plus, it is typically subject to 
the wisdom and latency of committees.

> The concern is “distance of metadata” I guess – it shouldn’t be
> too onerous for automatic tools working against the FTP, such as
> CPAN.pm, to get at this data, even though it lives outside the
> distribution.

It seems like something more along the lines of web services plus sync
would be better suited to distributed implementation.  For example,
meta.perl.org could be queried anonymously and edited by the author,
but auto-filled (or maybe even overwritten) by META.yml.
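
To make the query side concrete, here's a rough sketch against a purely
hypothetical meta.perl.org endpoint that returns YAML keyed by dist name
(the URL scheme, the service, and the dist name are invented for
illustration):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use LWP::Simple qw(get);
  use YAML qw(Load);

  # Anonymous, read-only query against a central metadata service.
  # No such service exists yet; the URL is made up.
  my $dist = shift || 'Foo-Bar';
  my $yaml = get("http://meta.perl.org/dist/$dist.yml")
      or die "no central metadata for $dist\n";
  my $meta = Load($yaml);

  print "$dist bugtracker: $meta->{resources}{bugtracker}\n"
      if $meta->{resources} && $meta->{resources}{bugtracker};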

The nice thing about just sticking META.yml in your distro is that it can
be supported by shipped tools (e.g. Module::Build, Module::Starter,
etc.)  This gives a nice low barrier to entry, and doesn't require as
much opt-in or active engagement as, say, editing something in a web
form.  Also, it comes with the tarball and is therefore not subject to
network failure, it mirrors well, etc. -- all of those nifty qualities
have to be traded off to get external updatability, especially if
your solution is not built into the centralized mirroring scheme (i.e.
PAUSE.)
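
For instance, Module::Build already lets an author fold extra keys into
the generated META.yml without hand-editing it -- something like this in
Build.PL (the resources values here are just placeholders):

  use strict;
  use warnings;
  use Module::Build;

  # Extra metadata ends up in the generated META.yml alongside the
  # standard fields; nothing new to install on the author's side.
  my $build = Module::Build->new(
      module_name => 'Foo::Bar',
      license     => 'perl',
      meta_merge  => {
          resources => {
              homepage   => 'http://example.org/foo-bar',
              bugtracker => 'http://rt.cpan.org/Dist/Display.html?Name=Foo-Bar',
          },
      },
  );
  $build->create_build_script;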

Unfortunately, supporting multiple info sources (META.yml, plus a 
web-editable database somewhere and/or additional inputs such as 
cpanforum, etc) probably means attaching a version to the data and 
deciding which overrides which.  Typically, the data source which 
doesn't require the author to know about external interfaces is the 
easiest one to get rolling -- i.e. META.yml.
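
A minimal sketch of the kind of precedence logic I mean, assuming every
source carries some sort of last-modified timestamp (the structure and
field names here are made up):

  use strict;
  use warnings;

  # Naive last-writer-wins merge across metadata sources.  Each source
  # is a hashref of fields plus an epoch timestamp; later timestamps
  # override earlier ones field-by-field.  Purely illustrative.
  sub merge_meta {
      my @sources = sort { $a->{timestamp} <=> $b->{timestamp} } @_;
      my %merged;
      for my $source (@sources) {
          %merged = ( %merged, %{ $source->{fields} } );
      }
      return \%merged;
  }

  my $merged = merge_meta(
      { timestamp => 1193700000, fields => { abstract => 'from META.yml' } },
      { timestamp => 1193790000, fields => { abstract => 'edited on the web' } },
  );
  print "$merged->{abstract}\n";   # the newer, web-edited value wins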

What if the tarball is newer than the last modification to the online 
data?  Do fields from META.yml still get overridden by the online data?  
Should the meta.perl.org service try to extract/update data from 
META.yml?  (Maybe just upon sign-up/request from the author?)

Perhaps META.yml could explicitly delegate a URL as the definitive
metadata source?  (Meaning, probably, that values for any META.yml
fields are superseded if they appear in the online query result.)
Given a machine-discoverable web API, multiple implementations could
co-exist.
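
A single extra field might be enough to bootstrap that; the field name
and the fetch-and-override behavior below are invented for the sake of
argument:

  use strict;
  use warnings;
  use YAML qw(LoadFile Load);
  use LWP::Simple qw(get);

  # If the shipped META.yml names a delegate URL (the 'x-meta-url' field
  # is hypothetical), fetch it and let its values supersede the shipped
  # ones, field by field.
  my $meta = LoadFile('META.yml');
  if ( my $url = $meta->{'x-meta-url'} ) {
      if ( my $yaml = get($url) ) {
          my $remote = Load($yaml);
          $meta = { %$meta, %$remote };    # online fields win
      }
  }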

And there's also the consideration that some data should/could be 
per-author rather than per-dist:

  http://www.nntp.perl.org/group/perl.qa/2007/03/msg8050.html

--Eric
-- 
Don't worry about what anybody else is going to do. The best way to
predict the future is to invent it.
--Alan Kay
---
http://scratchcomputing.com
---