On Thu, 4 Apr 2019 11:21:41 -0400 Joshua Wise <joshuathomasw...@gmail.com> wrote:
> > On the other hand, what table has a floating point number in its
> > key?
> >
> > How do you even express the value of such a key for an exact
> > match?
>
> Well I imagine it can be very useful for range queries. Imagine
> Julian dates, coordinate points, rankings, etc.

Julian dates are integers. The tm structure is all integers, too.

I suppose you could store lat/lon as floating point. It's exactly the
kind of data that calls out for a tm-like structure, though, because
officially there are 60 minutes in a degree, and 60 seconds in a
minute. Just as with time, the governing authorities use a non-decimal
notation; decimal fractions of a degree are a mere computational
convenience. And, again, it's not part of the key.

In financial analysis, range queries over large datasets are common.
If it's not a range of dates, it's a range of
returns/price/earnings/capitalization over time. Yet Microsoft SQL
Server never suggested we use anything other than IEEE to store the
data.

Perhaps that's because, more often than not, floating-point data are
manipulated as part of the query. If you're joining the table to
itself to select price changes over time to compute, say, variance,
the absolute magnitude of the data are uninteresting. You find the
stocks by date, subtract the prices, and compute the variance, in IEEE
format, of course, because that's what the CPU supports. Then you sort
and filter the top quintile, or whatever. In such a case, the overhead
of floating-point conversion would be significant: twice for every
row, overhead that is nonexistent today.

I'm skeptical of the claimed advantage. The downside is clear. If the
advantage can be shown, its use would be specialized.

OTOH, a complete BCD implementation would be ... interesting.

--jkl

_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
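[As an aside, the 60-minutes-per-degree, 60-seconds-per-minute arithmetic referred to above can be sketched in a few lines of Python; the helper names here are illustrative, not from any library:]

```python
def deg_to_dms(dd):
    """Split a decimal-degree value into (degrees, minutes, seconds).

    The sign is carried on the degrees component; 60 minutes per
    degree and 60 seconds per minute, as the official notation has it.
    """
    sign = -1 if dd < 0 else 1
    dd = abs(dd)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = (dd - degrees - minutes / 60) * 3600
    return sign * degrees, minutes, seconds


def dms_to_deg(degrees, minutes, seconds):
    """Recombine D/M/S into a decimal degree, the computational convenience."""
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60 + seconds / 3600)
```

A round trip such as `dms_to_deg(*deg_to_dms(12.5))` gives back 12.5, and `deg_to_dms(12.5)` is (12, 30, 0.0).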
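[For the curious, the self-join-and-subtract pattern described above can be sketched against SQLite itself with Python's sqlite3 module. The prices table, its columns, and the sample rows are invented for illustration; the point is that the subtraction happens inside the query and the variance on the host, both in IEEE doubles, with no format conversion anywhere:]

```python
import sqlite3
import statistics

# Hypothetical schema: one closing price per ticker per day.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (ticker TEXT, day INTEGER, close REAL)")
con.executemany(
    "INSERT INTO prices VALUES (?, ?, ?)",
    [("XYZ", 1, 10.0), ("XYZ", 2, 10.5), ("XYZ", 3, 10.2), ("XYZ", 4, 10.8)],
)

# Self-join on consecutive days: each result row is the next day's
# close minus this day's close, computed inside the query engine.
changes = [row[0] for row in con.execute(
    """SELECT b.close - a.close
         FROM prices AS a JOIN prices AS b
           ON a.ticker = b.ticker AND b.day = a.day + 1
        WHERE a.ticker = 'XYZ'
        ORDER BY a.day"""
)]

# The variance is then computed in IEEE double precision on the host.
var = statistics.pvariance(changes)
```

With the four sample rows, `changes` holds three day-over-day differences (approximately 0.5, -0.3, 0.6).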