Hello,
Your constraint column is of type 'timestamp with time zone' and the
query uses 'date'. The comparison between them is an operator
based on a stable (non-immutable) function, so constraint exclusion
doesn't work.
SELECT o.oprname, o.oprcode, p.provolatile
FROM pg_operator o JOIN pg_proc p ON p.oid = o.oprcode
WHERE o.oprleft = 'timestamptz'::regtype
  AND o.oprright = 'date'::regtype;
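A minimal sketch of the workaround, with assumed table and column names (they are not from the original post): compare the timestamptz column against timestamptz constants instead of a date value, so the planner sees an immutable comparison it can use for exclusion.

```sql
-- Does NOT exclude partitions: casting the column to date
-- (or comparing against a date-typed value) uses a stable operator.
SELECT * FROM measurements WHERE created_at::date = '2015-01-19';

-- Can exclude partitions: a plain timestamptz range comparison.
SELECT * FROM measurements
WHERE created_at >= '2015-01-19 00:00+00'::timestamptz
  AND created_at <  '2015-01-20 00:00+00'::timestamptz;
```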
I am trying to get some slow query information and the results from my
query are truncated at 2047 characters. Some of my queries are very long
so they get truncated in the select. Is there a way around this?
Here is my query.
WITH query_stats AS (
SELECT
query::text,
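The ~2047-character cap the poster is hitting comes from track_activity_query_size, which limits how much query text the server keeps (on 9.3 this also caps what pg_stat_statements can show). A hedged sketch of how to check and raise it:

```sql
-- Check the current cap on stored query text:
SHOW track_activity_query_size;

-- To raise it, set in postgresql.conf and restart the server, e.g.:
--   track_activity_query_size = 16384
-- (On 9.4+, pg_stat_statements stores full query texts externally,
-- so this limit no longer truncates them.)
```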
Hi,
19 Jan 2015 06:58:21 -0500, François Beausoleil franc...@teksol.info wrote in
cc2fd572-320a-4225-b98c-48f209925...@teksol.info
On 2015-01-18 at 20:58, James Sewell james.sew...@lisasoft.com wrote:
Hello,
I am using partitioning with around 100 sub-tables. Each sub-table is
Thanks for that tip. I'll check it out.
On Tue, Jan 20, 2015 at 3:12 PM, Melvin Davidson melvin6...@gmail.com
wrote:
Since you are on 9.3, you might want to consider using PgBadger as a
better way to get information about slow queries.
http://sourceforge.net/projects/pgbadger/
On Mon, Jan
On 1/19/2015 4:58 PM, Robert DiFalco wrote:
Hometowns get selected and possibly inserted in unpredictable ways
even from multiple concurrent sessions. The only way I could figure
out how to solve it was to force each INSERT hometowns to be in its
own transaction.
So you fixed it - good. In
On Mon, Jan 19, 2015 at 2:37 PM, Tim Uckun timuc...@gmail.com wrote:
I am trying to get some slow query information and the results from my query
are truncated at 2047 characters. Some of my queries are very long so they
get truncated in the select. Is there a way around this?
Yes. This is
Ok thanks. I am still on 9.3 so I'll adjust that setting.
On Tue, Jan 20, 2015 at 12:15 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
On Mon, Jan 19, 2015 at 2:37 PM, Tim Uckun timuc...@gmail.com wrote:
I am trying to get some slow query information and the results from my
query
Since you are on 9.3, you might want to consider using PgBadger as a better
way to get information about slow queries.
http://sourceforge.net/projects/pgbadger/
On Mon, Jan 19, 2015 at 6:41 PM, Tim Uckun timuc...@gmail.com wrote:
Ok thanks. I am still on 9.3 so I'll adjust that setting.
On
Yeah, definitely not staying resident once read, although the machine does
have 256GB of memory so some will persist in the OS cache.
Actually this brings up another question: if I have a unique integer index
of 2 GB, what percentage would I expect to read for a value that was higher
or lower than
Hi,
At Mon, 19 Jan 2015 00:14:55 -0800, John R Pierce pie...@hogranch.com wrote
in 54bcbcff.5040...@hogranch.com
On 1/18/2015 11:59 PM, James Sewell wrote:
Actually this brings up another question, if I have a unique integer
index of 2 GB what percentage would I expect to
On 1/18/2015 11:59 PM, James Sewell wrote:
Actually this brings up another question, if I have a unique integer
index of 2 GB what percentage would I expect to read for
a value that was higher or lower than all index values?
A couple of 8K blocks. It's a b-tree.
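To see this in practice, a sketch (table and column names are assumptions): an index probe for a key beyond the stored range only descends one root-to-leaf path, regardless of index size.

```sql
-- Observe how few buffers an out-of-range index probe touches.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM big_table WHERE id > 2000000000;
-- The "Buffers: shared hit=..." line should report only a handful of
-- 8K pages (root, internal, leaf) when no rows qualify.
```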
--
john r pierce
This is really a theoretical/anecdotal question, as I'm not at a scale yet
where this would measurable. I want to investigate while this is fresh in my
mind...
I recall reading that unless a row has columns that are TOASTed, an `UPDATE` is
essentially an `INSERT + DELETE`, with the previous
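The INSERT + DELETE behavior is observable directly: each UPDATE writes a new row version at a new physical location, and the old version lingers until VACUUM. A minimal sketch:

```sql
-- ctid is the row version's physical location; it changes on UPDATE.
CREATE TEMP TABLE mvcc_demo (id int, val text);
INSERT INTO mvcc_demo VALUES (1, 'a');
SELECT ctid, xmin FROM mvcc_demo;
UPDATE mvcc_demo SET val = 'b' WHERE id = 1;
SELECT ctid, xmin FROM mvcc_demo;  -- new ctid and xmin: a fresh row version
```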
Hometowns get selected and possibly inserted in unpredictable ways even
from multiple concurrent sessions. The only way I could figure out how to
solve it was to force each INSERT hometowns to be in its own transaction.
On Mon, Jan 19, 2015 at 1:56 PM, Robert DiFalco robert.difa...@gmail.com
I don't think an advisory lock would remove the deadlock.
On Sun, Jan 18, 2015 at 10:33 PM, Roxanne Reid-Bennett r...@tara-lu.com
wrote:
On 1/16/2015 2:41 AM, Jim Nasby wrote:
On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
try this: (if you still get deadlocks, uncomment the advisory
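A sketch of the advisory-lock pattern being discussed, serializing the select-or-insert on a per-name lock so concurrent sessions cannot deadlock on it. Table and column names are assumptions, not the poster's actual schema (pre-9.5, so no INSERT ... ON CONFLICT):

```sql
BEGIN;
-- Transaction-scoped advisory lock keyed on the hometown name:
SELECT pg_advisory_xact_lock(hashtext('hometown:' || 'Springfield'));
INSERT INTO hometowns (name)
SELECT 'Springfield'
WHERE NOT EXISTS (SELECT 1 FROM hometowns WHERE name = 'Springfield');
SELECT id FROM hometowns WHERE name = 'Springfield';
COMMIT;  -- lock released automatically at transaction end
```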
Hi
I'm pretty sure PostgreSQL can handle this.
But since you asked with a theoretical background,
it's probably worthwhile to look at column stores (like [1]).
-S.
[1] http://citusdata.github.io/cstore_fdw/
2015-01-19 22:47 GMT+01:00 Jonathan Vanasco postg...@2xlp.com:
This is really a
I don't understand having both a UUID and a time stamp in your PK? The first is by
definition unique and the second might be.
Sent from my iPhone
On Jan 19, 2015, at 6:12 AM, Spiros Ioannou siv...@inaccess.com wrote:
Hello group,
we have a timeseries table, and we tried to partition it by month
Hello group,
we have a timeseries table, and we tried to partition it by month (with
pg_partman). It seems the query planner always reads all tables regardless
of WHERE, except when WHERE is equality.
*the parent table:*
ifms_db=# \dS measurement_events
Table
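For context on why only equality prunes here: constraint exclusion works when the planner can prove a child's CHECK constraint false from the WHERE clause using immutable operators. A hedged sketch of what each monthly child needs (names are assumptions; pg_partman generates similar constraints):

```sql
CREATE TABLE measurement_events_p2015_01 (
  CHECK (event_time >= '2015-01-01 00:00+00'::timestamptz
     AND event_time <  '2015-02-01 00:00+00'::timestamptz)
) INHERITS (measurement_events);

-- constraint_exclusion must be 'partition' (the default) or 'on':
SHOW constraint_exclusion;
```

If the WHERE clause compares the timestamptz column against a date-typed value, the comparison is only stable, not immutable, and the planner scans every child.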
On 4 January 2015 at 02:52, John Casey john.ca...@innovisors.com wrote:
I'm still experiencing similar problems. I'm not certain what parameter you
are referring to when you say 'ehost'. Otherwise, I did want to clarify a
couple of things. I have tried several combinations, each one fails in
Sorry,
Yeah, scanning the children and reading the highest/lowest values for each of them
would make it faster than the first single query.
Mmm... no. It is no different from querying the parent table. Please
ignore the silly thing.
--
Kyotaro Horiguchi
On 2015-01-18 at 20:58, James Sewell james.sew...@lisasoft.com wrote:
Hello,
I am using partitioning with around 100 sub-tables. Each sub-table is around
11GB and partitioned on the 'id' column. I have an index on the id column on
each sub-table.
Is it possible to get a query like
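On 9.3, an aggregate like max(id) over the parent does not use the per-child indexes; a common workaround is to aggregate per child and combine the results, letting each child use its own index. A sketch with assumed child table names:

```sql
SELECT max(m) FROM (
  SELECT max(id) AS m FROM parent_p001
  UNION ALL
  SELECT max(id) AS m FROM parent_p002
  -- ... one branch per child ...
) per_child;
```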
Robert DiFalco robert.difa...@gmail.com wrote:
I have several tables that I use for logging and real-time stats. These are
not
critical and since they are a bottleneck I want transactions against them to
always be asynchronous. Is there a way to specify this at a table level or do
I
have
Andreas, I think UNLOGGED would be something different, but I'm not totally
clear. However, it seems to me that an unlogged table would simply
disappear (be truncated) after a server crash. That means instead of maybe
losing a record or two I could lose a ton of records. But maybe my
I have several tables that I use for logging and real-time stats. These are
not critical and since they are a bottleneck I want transactions against
them to always be asynchronous. Is there a way to specify this at a table
level or do I have to make sure to call set synchronous_commit='off' every
On Mon, Jan 19, 2015 at 10:03 AM, Robert DiFalco robert.difa...@gmail.com
wrote:
I have several tables that I use for logging and real-time stats. These
are not critical and since they are a bottleneck I want transactions
against them to always be asynchronous. Is there a way to specify this
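synchronous_commit cannot be set per table, but it can be set per role, per session, or per transaction, which covers the "always asynchronous for these writers" case. A sketch (the role name is an assumption):

```sql
ALTER ROLE stats_writer SET synchronous_commit = off;  -- all of this role's sessions
SET synchronous_commit = off;                          -- current session only
BEGIN;
SET LOCAL synchronous_commit = off;                    -- this transaction only
-- ... inserts into the logging/stats tables ...
COMMIT;
```

Unlike UNLOGGED tables, asynchronous commit risks losing only the last few moments of acknowledged transactions after a crash, not the whole table.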