Hello all,
I'm creating a data queue on top of Postgres, and I'm wondering if I've made
an incorrect assumption about isolation or synchronization or some similar
issue.
Every item in the queue is given a unique ID from a sequence.
CREATE TABLE data_queue
(
sequence_num BIGINT PRIMARY KEY,
sender_key B
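A minimal sketch of the pattern described above, with hypothetical names filling in the truncated parts (the sequence, the sender_key type, and a payload column are guesses), plus the kind of reader query such a queue usually implies; the last comment notes the sequence/isolation interaction most worth checking here.
CREATE SEQUENCE data_queue_seq;                                       -- hypothetical sequence name
CREATE TABLE data_queue
(
    sequence_num BIGINT PRIMARY KEY DEFAULT nextval('data_queue_seq'),
    sender_key   BIGINT NOT NULL,                                     -- type guessed; the original is cut off
    payload      BYTEA                                                -- hypothetical payload column
);
-- A reader that remembers the highest sequence_num it has processed so far:
SELECT * FROM data_queue WHERE sequence_num > $1 ORDER BY sequence_num;  -- $1 = highest sequence_num already seen
-- Caveat: nextval() is not transactional, so a transaction holding a lower
-- sequence_num can commit after one holding a higher value has already been
-- read; a reader that only tracks "highest seen" can silently skip that row.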
I've been googling, but haven't found a good answer to what I should do if I
want to store time series in Postgres.
My current solution is to store serialized (compressed) blobs of data.
(So, for example, one day's worth of 1-minute samples (~1440 samples) is stored
as one row in a bytea, plus meta data
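A minimal sketch of the layout just described, one row per series per day holding the packed samples; the table and column names are hypothetical.
CREATE TABLE ts_blob
(
    series_id BIGINT NOT NULL,   -- hypothetical identifier for the series
    day       DATE   NOT NULL,
    samples   BYTEA  NOT NULL,   -- ~1440 one-minute samples, serialized/compressed by the client
    PRIMARY KEY (series_id, day)
);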
I found this link:
http://postgresql.1045698.n5.nabble.com/BUG-5507-missing-chunk-number-0-for-toast-value-X-in-pg-toast-X-td2126674.html
and I am also experiencing the same issue... More anecdotal evidence that this
is a bug:
We recently (a couple of days ago) completely recreated the DB, and sta
Looks like the recommended settings are using the virtio interface,
cache=none, and raw partitions (not qcow2).
Anyone else run into any problems with kvm or virtio?
We currently have a setup using qcow2, virtio, and the default cache
settings, and experienced some data corruption (not preceded by
I'm noticing some interesting behavior around timestamp and extract epoch,
and it appears that I'm getting a timezone applied somewhere.
Specifically, if I do:
select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME
ZONE ); == 1264924800
select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME ZONE );
Looks like a quick search says I need to specify the timezone...
On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres
wrote:
>
> I'm noticing some interesting behavior around timestamp and extract epoch,
> and it appears that I'm getting a timezone applied somewhere.
>
No... still confused.
I assume it's storing everything in UTC. Did I need to specify a timezone
when I inserted?
On Thu, Mar 17, 2011 at 11:24 AM, bubba postgres
wrote:
> Looks like a quick search says I need to specify the timezone...
>
>
> On Thu, Mar 17, 2011 at 11:22 A
OK, got it.
select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME
ZONE at time zone 'utc' );
On Thu, Mar 17, 2011 at 11:32 AM, bubba postgres
wrote:
> No... still confused.
> I assume it's storing everything in UTC. Did I need to speci
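For reference, a small sketch contrasting the two forms from this thread. The bare-timestamp form has not behaved identically across PostgreSQL releases (the 8-hour offset seen above is consistent with the session TimeZone being applied), while the AT TIME ZONE 'utc' form settled on here is unambiguous.
SELECT extract(epoch FROM '2010-01-31 00:00:00'::timestamp);                     -- result can depend on server version / session TimeZone
SELECT extract(epoch FROM '2010-01-31 00:00:00'::timestamp AT TIME ZONE 'UTC');  -- 1264896000, i.e. midnight UTC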
Is this the correct behavior? It seems like if I specify the UTC offset it
should be 0, not 16. It seems to be the opposite behavior from extract
epoch.
select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as defhour,
extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'PST'
...'2010-01-01 00:00:00' at time zone
'utc' ) at time zone 'utc' ) does not...
Can anyone explain this?
On Thu, Mar 17, 2011 at 5:05 PM, bubba postgres wrote:
> Is this the correct behavior? It seems like if I specify the UTC offset it
> should be 0, not 16. It seems to
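A sketch of what is going on, assuming a Pacific session TimeZone (that part is an assumption). AT TIME ZONE converts in two directions: applied to a bare timestamp it yields a timestamptz (the value is read as wall-clock time in the named zone), and extract(hour ...) on a timestamptz reports the hour in the session's zone; applying AT TIME ZONE a second time converts back to a bare timestamp in the named zone, which is why the doubled form in the follow-up gives 0 again.
SET timezone = 'PST8PDT';                                                                           -- assumed session setting
SELECT extract(hour FROM TIMESTAMP '2010-01-01 00:00:00');                                          -- 0
SELECT extract(hour FROM TIMESTAMP '2010-01-01 00:00:00' AT TIME ZONE 'UTC');                       -- 16 (rendered in the Pacific session zone)
SELECT extract(hour FROM (TIMESTAMP '2010-01-01 00:00:00' AT TIME ZONE 'UTC') AT TIME ZONE 'UTC');  -- 0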
Hello!
In my current application I am sending a lot of data to/from the DB with
JDBC, specifically arrays of Double (or even Double[][]).
Alas, my current implementation converts everything into a string
representation, which blows my memory out of the water and drops my
transaction rate way
After trying out the JDBC4 driver in DBCP, I see that
Connection.createArray(...) still just creates a big string under the
covers. Is that the expected behavior? Am I doing it wrong?
On Thu, Mar 31, 2011 at 8:04 PM, bubba postgres wrote:
> Hello!
> In my current application I am sending
In this discussion there was a lot of talk of transport compression in
Postgres (also specifically wondering about JDBC as well); did anything
ever come of that discussion?
http://postgresql.1045698.n5.nabble.com/Compression-on-SSL-links-td2261205.html
I would just like to get some clarification from the list on how to do a
pg_dump on the slave in the face of "canceling statement due to conflict
with recovery".
The following links seem to indicate that if I start an idle transaction on
the master I should be able to do the pg_dump, but I tried th
...was more to it than that.
On Thu, May 12, 2011 at 5:08 PM, Andrew Sullivan wrote:
> On Thu, May 12, 2011 at 11:26:38AM -0700, bubba postgres wrote:
> > I would just like to get some clarification from the list on how to do a
> > pg_dump on the slave in the face of "canc
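For what it's worth, the standby-side knobs usually discussed for this problem are the recovery-conflict delay settings; a hedged sketch (check the docs for your exact version) of what they look like and how to inspect them from the standby:
-- In postgresql.conf on the standby, raising these lets long-running queries
-- (such as a pg_dump) win over recovery for longer; -1 means wait indefinitely:
--     max_standby_streaming_delay = -1
--     max_standby_archive_delay   = -1
-- (9.1 and later also offer hot_standby_feedback to reduce conflicts at the source.)
SHOW max_standby_streaming_delay;
SHOW max_standby_archive_delay;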
So, what are the gotchas around manipulating arrays in stored procs?
It seems reasonable that array_cat etc. would cause the creation of a new
array, but does mutating an existing array also create a copy?
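As a concrete point of reference, a minimal PL/pgSQL sketch (hypothetical function name) of the two styles in question. In both cases each statement produces a new array value; whether the whole array gets physically copied on every iteration depends on the server version, which is why building large arrays element by element can degrade to roughly O(n^2).
CREATE OR REPLACE FUNCTION build_array(n integer) RETURNS float8[] AS $$
DECLARE
    result float8[] := '{}';
BEGIN
    FOR i IN 1..n LOOP
        -- growing by concatenation:
        result := array_cat(result, ARRAY[i::float8]);
        -- "mutating" an element looks in-place but is also an assignment
        -- of a (possibly copied) array value:
        -- result[i] := i::float8;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;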
I have an overloaded DB and I see several IDLE connections that are using
significant CPU (not "idle in transaction").
Why would an idle process be eating so much CPU? Or is it not actually idle?
Here is an example from pg_top:
last pid: 11821; load avg: 6.11, 6.32, 7.64; up 1+21:05:31
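One way to cross-check what pg_top reports is pg_stat_activity; a minimal sketch, noting that the column names differ by release (newer servers use pid/state/query, older ones procpid/current_query):
SELECT pid, state, backend_start, query
FROM pg_stat_activity
ORDER BY backend_start;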
Are there any optimizations around check constraints such that they will not
be evaluated if constituent columns are not updated?
Regards,
-JD
This is the reverse of what I thought I would find.
In short, my check constraint extracts the epoch from a start timestamp
and an end timestamp to get the number of seconds difference.
It then uses this number to check the array_upper() of an array to make sure
it's the proper size.
The SQL
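Since the SQL itself is cut off above, here is a minimal sketch of the kind of constraint being described, with hypothetical table and column names and assuming the 1-minute sample spacing mentioned in the earlier time-series message:
CREATE TABLE sample_blob
(
    start_time timestamp NOT NULL,
    end_time   timestamp NOT NULL,
    samples    float8[]  NOT NULL,
    -- the array must hold exactly one sample per minute between the two timestamps
    CHECK ( array_upper(samples, 1) =
            ((extract(epoch FROM end_time) - extract(epoch FROM start_time)) / 60)::int )
);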
After some changes on my DB I notice that where I used to have a large
pg_toast_X table, I now have a large pg_largeobject table.
I can't find an explanation of the difference between the two; could someone
enlighten me?
No takers?
Some background: I've changed my TOAST storage type from EXTENDED to MAIN.
On Thu, Jul 28, 2011 at 10:50 AM, bubba postgres
wrote:
>
> After some changes on my DB I notice that where I used to have a large
> pg_toast_X table, I now have a large pg_largeobject table.
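For context, the storage-type change described above is done per column; a sketch with hypothetical names (MAIN allows compression but avoids out-of-line TOAST storage except as a last resort):
ALTER TABLE my_table ALTER COLUMN samples SET STORAGE MAIN;   -- hypothetical table/column names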