On Fri, Jul 31, 2009 at 8:15 PM, John Adams <j...@twitter.com> wrote:

> On Jul 31, 2009, at 4:04 PM, Andrew Badera wrote:
>
>> but why not go with 128 bit decimal/floating point precision datatypes to
>> begin with, and never have this issue? if anyone says "overhead" I'm gonna
>> whack 'em like a popup weasel. in this day and age of CPU cycles and RAM,
>> you might as well go big or go home.
>
> Because none of us will be alive in the year 58,821.
>
> -john
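
A rough back-of-envelope on where a number like that comes from, assuming a
purely hypothetical allocation rate of 5 million IDs per second (not a figure
Twitter has published):

    -- years until a signed 64-bit ID space is exhausted at an assumed
    -- 5,000,000 IDs/sec; 31,557,600 = seconds in a 365.25-day year
    SELECT 9223372036854775807.0
         / (5000000.0 * 31557600.0) AS years_to_exhaust;
    -- comes out around 58,000 years -- same ballpark as "the year 58,821"
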
But that makes a lot of assumptions. What if, say, Twitter starts integrating
laconi.ca or something else drastically different along those lines?

While I'm not a fan of inefficiency, if minor, minor, minor inefficiency
guarantees peace of mind, then why the heck not?

And the same argument could apply to 64, if you want to look at it that way.
Why not start at 64 instead of 16? Everything I model in SQL Server these days
is ID'd with both a numeric(18,0) and a uniqueidentifier -- covering my bases,
giving me peace of mind, and reducing volatility down the line.
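
Concretely, something along these lines -- table and column names are just for
illustration:

    -- purely illustrative sketch of the both-IDs pattern
    CREATE TABLE dbo.Widget (
        WidgetId   numeric(18, 0)   IDENTITY(1, 1) NOT NULL PRIMARY KEY,  -- surrogate key
        WidgetGuid uniqueidentifier NOT NULL DEFAULT NEWID(),             -- GUID alongside it
        Name       nvarchar(100)    NOT NULL
    );

The numeric key stays cheap for day-to-day joins; the GUID is there if rows
ever have to merge or move between systems.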

--ab
