saved
me this work?
2. Can I submit my scripts and have the msvc tools reviewed/updated?
Thanks,
Dave Huber
This electronic mail message is intended exclusively for the individual(s) or
entity to which it is addressed. This message, together with any attachment
0023 0004
0023
I can make some assumptions to interpret 0004 0023 as a 32-bit
length followed by length bytes (length = 4, data = 0x23 = 35), but I have no
clue how to interpret the leading 20 bytes of this data.
I appreciate any help I can get.
Thanks,
Dave Huber
need to occur, the file needs to be read sequentially
anyway.
Any amount of help would be gladly accepted, even if it's pointing me to
another thread or somewhere in the manual. Thanks,
Dave Huber
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] bulk inserts
On Mon, Sep 28, 2009 at 10:38:05AM -0500, Dave Huber wrote:
Using COPY is out of the question as the file is not formatted for
that and since other operations need to occur, the file needs to be
read sequentially anyway
All I have to say is wow! COPY works sooo much faster than the iterative method
I was using. Even after having to read the entire binary file and reformat the
data into the binary format that postgres needs it is an order of magnitude
faster than using a prepared INSERT. At least that's what my
I am inserting 250 rows of data (~2kbytes/row) every 5 seconds into a table
(the primary key is a big serial). I need to be able to limit the size of the
table to prevent filling up the disk. Is there a way to setup the table to do
this automatically or do I have to periodically figure out how
A colleague gave me the following query to run:
DELETE FROM data_log_20msec_table WHERE (log_id IN (SELECT log_id FROM
data_log_20msec_table ORDER BY log_id DESC OFFSET 1000))
log_id is the primary key (big serial)
data_log is the table described below
This query keeps the most recent 1000 rows (everything past OFFSET 1000 is
deleted). I'll look into this partitioned table bit.
Thanks,
Dave
-----Original Message-----
From: John R Pierce [mailto:pie...@hogranch.com]
Sent: Wednesday, October 07, 2009 12:01 PM
To: Dave Huber
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] automated row deletion
Dave Huber wrote:
A colleague gave me
Is it possible to execute a CREATE OR REPLACE FUNCTION with another function or
even have a function modify itself?
Dave
Surely, there are valid cases of having a function create a function.
Suppose (just off the top of my head), you create a helper function
that generates triggers on a table for record archiving.
My application is for archiving. I'm using partitioned tables (each 10
records) to keep a
Does anybody have a snippet where they use PQgetCopyData? I must be calling it
wrong as it keeps crashing my program. I've attached my code below. I am writing
this for a Code Interface Node in LabVIEW.
Thanks,
Dave
MgErr CINRun(LStrHandle conninfo, LStrHandle copystr, TD1Hdl resultValues) {
Where is it blowing up?
I'm sorry, I wasn't clear. It bombs on the PQgetCopyData call. If I comment out
the entire while loop, the program runs fine. If I simply comment out the
contents of the while loop...kablooey!
Dave
Tom,
Thanks for the help. Setting buffer to a char * fixed the crashing problem.
Now, I have a different issue. The result from PQgetCopyData is always 1 for
every row of data returned. Does this not work for return data WITH BINARY?
If I issue the same copy command to a file instead of STDOUT