Hi Eugene,
Now I need to import the patch into the database and produce another file: for
each row in the source file,
- if the given series value already exists in the database, return ID:series;
- otherwise, insert a new row, generate a new ID, and return ID:series.
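That lookup-or-insert step could be sketched like this (illustrative Python,
using an in-memory sqlite3 database in place of PostgreSQL; the `series`
table and `name` column are assumptions, not from the original mail):

```python
import sqlite3

def get_or_create_series_id(conn, series):
    """Return the ID for `series`, inserting a new row if it is unseen."""
    cur = conn.execute("SELECT id FROM series WHERE name = ?", (series,))
    row = cur.fetchone()
    if row is not None:
        return row[0]
    cur = conn.execute("INSERT INTO series (name) VALUES (?)", (series,))
    return cur.lastrowid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE series (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
for name in ["alpha", "beta", "alpha"]:
    print(f"{get_or_create_series_id(conn, name)}:{name}")
# prints 1:alpha, 2:beta, 1:alpha
```

In PostgreSQL proper this could be collapsed into a single statement with
INSERT ... ON CONFLICT ... RETURNING; the SELECT-then-INSERT form above also
needs extra care (a unique constraint plus retry) if several loaders run
concurrently.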
I think
On Apr 17, 2015 8:35 AM, Kynn Jones kyn...@gmail.com wrote:
(The only reason for wanting to transfer this data to a Pg table
is the hope that it will be easier to work with it by using SQL
800 million 8-byte numbers doesn't seem totally unreasonable for
python/R/Matlab, if you have a lot of
On Tue, Jul 5, 2016 at 3:28 PM, David G. Johnston
<david.g.johns...@gmail.com> wrote:
> On Tue, Jul 5, 2016 at 5:37 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Paul Jungwirth <p...@illuminatedcomputing.com> writes:
>> > The problem is this (tried on 9.3 and 9.
On Tue, Jul 5, 2016 at 10:17 PM, Paul A Jungwirth
<p...@illuminatedcomputing.com> wrote:
> db=> create type inetrange;
Here is a follow-up question for creating inet ranges. Is there any
way to prevent someone from doing this?:
db=> select inetrange('1.2.3.4',
'2001:0db8:
On Mon, Jun 26, 2017 at 12:47 PM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:
> On 06/26/2017 12:03 PM, Paul Jungwirth wrote:
>> Perhaps
>> you should see what is line 85 when you do `\sf words_skip_game` (rather
>> than line 85 in your own source code).
I'm working on a problem where partitioning seems to be the right
approach, but we would need a lot of partitions (say 10k or 100k).
Everywhere I read that after ~100 child tables you experience
problems. I have a few questions about that:
1. Is it true that the only disadvantage to 10k children
> It's going to suck big-time :-(.
Ha ha that's what I thought, but thank you for confirming. :-)
> We ended up keeping
> the time series data outside the DB; I doubt the conclusion would be
> different today.
Interesting. That seems a little radical to me, but I'll consider it
more seriously
I'm considering a table structure where I'd be continuously appending
to long arrays of floats (10 million elements or more). Keeping the
data in arrays gives me much faster SELECT performance vs keeping it
in millions of rows.
But since these arrays keep growing, I'm wondering about the UPDATE
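A rough model of the trade-off (illustrative Python, not PostgreSQL
internals): reading one packed array scans a single contiguous buffer, while
the row form touches an object per element; but because an UPDATE writes a
new row version, appending to a big array value rewrites the whole value
each time, so n appends cost O(n^2) bytes written.

```python
from array import array

# One big packed float8 column value...
arr = array("d", range(1000))
# ...versus one (id, value) row per element.
rows = [(i, float(v)) for i, v in enumerate(arr)]

# Reading the packed form scans one contiguous buffer:
total_packed = sum(arr)
# Reading the row form walks a separate object per element:
total_rows = sum(v for _, v in rows)
assert total_packed == total_rows

# The catch for UPDATE: each append rewrites all n existing 8-byte elements,
# so 1000 appends rewrite 8 * (1 + 2 + ... + 1000) bytes in total.
bytes_rewritten = sum(8 * n for n in range(1, 1001))
```

This is only an accounting sketch; the real costs in PostgreSQL also involve
TOAST compression and row-versioning overhead.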
The docs say that a Datum can be 4 bytes or 8 depending on the machine:
https://www.postgresql.org/docs/9.5/static/sql-createtype.html
Is a Datum always 8 bytes for 64-bit architectures?
And if so, can my C extension skip a loop like this when compiling
there, and just do a memcpy (or even a
On Fri, Sep 22, 2017 at 8:05 PM, Pavel Stehule wrote:
> yes, it is 8 bytes on 64-bit.
Thanks!
> I don't think so it is good idea to write 64bit only extensions.
I agree, but how about this?:
if (FLOAT8PASSBYVAL) {
    datums = (Datum *) floats;  /* float8 fits in a Datum: reuse the buffer */
} else {
    /* pass-by-reference build: convert element by element
       (nfloats and the datums allocation assumed to exist elsewhere) */
    for (int i = 0; i < nfloats; i++)
        datums[i] = Float8GetDatum(floats[i]);
}
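What FLOAT8PASSBYVAL buys can be illustrated outside C: on a 64-bit build a
float8's bit pattern fits in a Datum unchanged, so the "conversion" is a
reinterpretation of the same bytes rather than a copy through a pointer
(illustrative Python, with a 64-bit unsigned word standing in for a Datum):

```python
import struct

floats = [1.5, -2.25, 3.0]

# Pack the doubles into raw bytes, then view those same bytes as 64-bit
# words -- the moral equivalent of casting float8* to Datum* in the
# pass-by-value branch above.
raw = struct.pack("<3d", *floats)
datums = struct.unpack("<3Q", raw)

# Reinterpreting back recovers the doubles bit-for-bit.
recovered = struct.unpack("<3d", struct.pack("<3Q", *datums))
assert list(recovered) == floats
```

On a pass-by-reference build there is no such shortcut, which is why the
per-element Float8GetDatum loop has to stay for portability.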
On Fri, Sep 22, 2017 at 7:52 PM, Paul A Jungwirth
<p...@illuminatedcomputing.com> wrote:
> Is a Datum always 8 bytes for 64-bit architectures?
Never mind, I found this in `pg_config.h`:
/* float8, int8, and related values are passed by value if 'true', by
reference
On Sat, Sep 23, 2017 at 9:40 AM, Tom Lane wrote:
> I wonder whether you're using up-to-date Postgres headers (ones
> where Float8GetDatum is a static inline function).
I'm building against 9.6.3 on both machines. I'm not doing anything
special to change the compilation
On Fri, Sep 22, 2017 at 8:38 PM, Tom Lane wrote:
> "Premature optimization is the root of all evil". Do you have good reason
> to think that it's worth your time to write unsafe/unportable code? Do
> you know that your compiler doesn't turn Float8GetDatum into a no-op
>
On Wed, Oct 18, 2017 at 8:05 AM, Andrus wrote:
> pg_dump.exe -b -f b.backup -Fc -h -U admin -p 5432 mydb
>
> causes error
>
> pg_dump: too many command-line arguments (first is "-p")
Don't you need a hostname after -h? I think right now pg_dump thinks
your hostname is "-U",