On 4/17/15 1:10 PM, Ray Cote wrote:
(Not an IEEE floating point expert, but...) I've learned the hard way to
never rely on comparing two floating point numbers for equality -- and
that's what you are doing if you join on them as primary keys. If you
must use the underlying numeric data for
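To make the hazard concrete, here is a generic Python sketch (not from the thread): two arithmetic paths to "the same" value need not produce identical doubles, and an exact-equality join on double keys performs precisely that bit-for-bit comparison.

```python
import math

# Classic illustration of why exact equality on doubles is fragile:
# 0.1 + 0.2 and 0.3 are not the same IEEE 754 double.
a = 0.1 + 0.2
b = 0.3
print(a == b)   # False: the two doubles differ in the last bits
print(a - b)    # a tiny but nonzero difference (~5.5e-17)

# A SQL equi-join on double-precision keys does this same exact
# comparison, so rows can silently fail to match.
# A tolerance-based comparison succeeds where equality fails:
print(math.isclose(a, b))  # True
```

This is why a surrogate integer key, rather than the doubles themselves, is the safer join column.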
On Fri, Apr 17, 2015 at 11:56 AM, David G. Johnston
david.g.johns...@gmail.com wrote:
On Fri, Apr 17, 2015 at 8:45 AM, Melvin Davidson melvin6...@gmail.com
wrote:
On Fri, Apr 17, 2015 at 11:34 AM, Kynn Jones kyn...@gmail.com wrote:
One consideration that is complicating the choice of primary key
is wanting to have the ability to store chunks of the data
table (not the
I have some data in the form of a matrix of doubles (~2 million
rows, ~400 columns) that I would like to store in a Pg table,
along with the associated table of metadata (same number of rows,
~30 columns, almost all text). This is large enough to make
working with it from flat files unwieldy.
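One common way to get a matrix of that size into Pg is to stream it as a delimited file and bulk-load it with COPY. A minimal Python sketch (file and table names here are invented for illustration) that prepends an integer row id, so the two tables share an integer join key rather than joining on the doubles:

```python
import csv
import io

def matrix_to_tsv(rows, out):
    """Write matrix rows as TSV with a leading integer row_id column.

    The row_id gives the data table and the metadata table a shared
    surrogate key, so the double values themselves never serve as keys.
    """
    w = csv.writer(out, delimiter="\t", lineterminator="\n")
    for row_id, row in enumerate(rows, start=1):
        w.writerow([row_id, *row])

# Tiny demonstration with a 2x2 matrix:
buf = io.StringIO()
matrix_to_tsv([[0.5, 1.25], [2.0, 3.5]], buf)
print(buf.getvalue())
# Then load in psql with something like:
#   \copy data_matrix FROM 'matrix.tsv'
```

For 2 million rows this streams in one pass and COPY is far faster than row-at-a-time INSERTs.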
First, please ALWAYS include the version and O/S, even with basic questions.
I'm not sure what you mean by doubles. Do you mean bigint data type, or do
you mean use two columns for a primary key? Either way it's pretty simple.
If you mean a bigint, then probably best to use serial data type,
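A serial surrogate key along those lines might look like the following sketch; all table and column names are invented for illustration, and the column lists are abbreviated:

```sql
-- serial generates the surrogate integer key; the metadata table
-- references it instead of joining on any double value.
CREATE TABLE data_matrix (
    row_id  serial PRIMARY KEY,
    c1      double precision,
    c2      double precision
    -- ... up to ~400 double precision columns
);

CREATE TABLE metadata (
    row_id  integer PRIMARY KEY REFERENCES data_matrix (row_id),
    label   text
    -- ... ~30 mostly-text columns
);
```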
On Fri, Apr 17, 2015 at 10:34 AM, Kynn Jones kyn...@gmail.com wrote:
On Apr 17, 2015 8:35 AM, Kynn Jones kyn...@gmail.com wrote:
(The only reason for wanting to transfer this data to a Pg table
is the hope that it will be easier to work with it by using SQL
800 million 8-byte numbers doesn't seem totally unreasonable for
Python/R/Matlab, if you have a lot of