Hi!

I was thinking about this from a conversation I had with JimW today on IRC.

Let's leave it alone for the time being. We already send it over the wire as a string. Once we can get UDT loadable, let's move it out to that.

Getting TIMESTAMP and DATETIME with sub-second resolution is more important.

Dropping CHAR is sounding good... JimW got me thinking about how we might be able to just go with STRING and BYTE, and skip all other types.

Cheers,
        -Brian

On Jul 22, 2008, at 1:58 PM, Monty Taylor wrote:

Brian Aker wrote:
Hi!

On Jul 22, 2008, at 9:09 AM, Sheeri K. Cabral wrote:

I'd thought DECIMAL had already been, well, decimated by Brian -- he'd
sent a message about throwing it away on Jul 15th....

The code around the old implementation is completely gone. The question is whether we should
return to the previous version of DECIMAL.

I think we should not go back to the previous version. The only reason
to have DECIMAL at all is to have arbitrary-precision numbers that you
can do math on. If we're going to slurp it around as a string, then I
think we should just kill it and have people store their self-encoded
decimal strings in arbitrary binary columns. With decent pluggable
UDFs, there is no reason someone couldn't write a UDF that performs
math on a chunk of bytes that they have encoded themselves using some
decimal encoding lib.

To us it's bytes.

So, I guess that brings me down on the side of just getting rid of it
altogether.

Monty

--
_______________________________________________________
Brian "Krow" Aker, brian at tangent.org
Seattle, Washington
http://krow.net/                     <-- Me
http://tangent.org/                <-- Software
_______________________________________________________
You can't grep a dead tree.




_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help   : https://help.launchpad.net/ListHelp