Dana Hudes <[EMAIL PROTECTED]> writes:

> On Tue, 4 Jan 2005, Scott W Gifford wrote:
>
>> A private emailer wrote:
>> 
>> [...]
>> 
>> > Even better isn't all this on a USPS server? Whatever tool you use
>> > to grab their server database, include it and do that as part of the
>> > build process or perhaps offer it as an option, the alternative
>> > being to go to the USPS server every time.
>> 
>> Three things I don't like about that.

[...]

>> Third, if you cache CPAN modules for installation to many machines,
>> this will bypass the cache.
>> 
>
> the alternative is stale data. The USPS server is authoritative.
[...]
> I guess it comes down to how often the ZIP codes change.
[...]

The file we have now is 1999 US Census data, so it seems likely we'll
get a new one with each census, every 10 years.  I think it's
unlikely the data format (a dBase file with a Word document describing
the fields) will be the same in 2009, and I don't think there's any
guarantee about what the URL will be when the new data is published;
the current one is:

    http://www.census.gov/geo/www/tiger/zip1999.html

So I don't think we have any real options as far as keeping data
up-to-date automatically.  If the data were published in a
standardized place in a standardized format, I'd be more inclined to
agree about the evils of stale data.
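
For what it's worth, grabbing and reading that file isn't much code.
Here's a rough sketch, assuming the XBase module from CPAN and a
made-up direct URL and filename for the .dbf (the page above only
links to it):

    #!/usr/bin/perl
    # Rough sketch of a fetch-and-dump tool for the census zip data.
    # The URL and filename below are hypothetical, for illustration only.
    use strict;
    use warnings;
    use LWP::Simple qw(getstore is_success);
    use XBase;

    my $url  = 'http://www.census.gov/geo/www/tiger/zipnov99.dbf';  # hypothetical
    my $file = 'zipnov99.dbf';

    my $status = getstore($url, $file);
    die "download failed: $status\n" unless is_success($status);

    # Read the dBase file and dump its records as CSV.
    my $table = XBase->new($file) or die XBase->errstr;
    print join(',', $table->field_names), "\n";
    for my $i (0 .. $table->last_record) {
        my ($deleted, @fields) = $table->get_record($i);
        print join(',', @fields), "\n" unless $deleted;
    }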

> You can always offer a tool in the scripts/ directory of your code
> distro to build the database from the Internet. That addresses your
> cache issue.

I was thinking of somebody using something like CPAN::Mini to create a
local cache of CPAN modules.  I thought there was a CPAN::Cache module
that somehow downloaded only one copy for a group of machines in the
same place to save bandwidth, but maybe that was just a rather dull
dream.  In any event, using a script to download the data doesn't
help with caching in either of those circumstances, although of
course it would benefit from a Web cache.
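
To illustrate, the kind of local mirror I mean can be set up with
CPAN::Mini along these lines (the local path here is just an
example):

    use strict;
    use warnings;
    use CPAN::Mini;

    # Pull down (or refresh) a minimal local mirror of CPAN.
    CPAN::Mini->update_mirror(
        remote => 'http://www.cpan.org/',
        local  => '/var/cache/minicpan/',
        trace  => 1,
    );

The machines then point their CPAN urllist at
file:///var/cache/minicpan/ and install from there without touching
the network -- which is exactly the step a download from the USPS or
census servers at install time would bypass.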

----ScottG.
