rictions.
That doesn't seem to be your problem, at least not yet.
regards, tom lane
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
checksum complaint,
not a version-number complaint if started against a 9.4.x pg_control.
regards, tom lane
Thomas Kellerer <spam_ea...@gmx.net> writes:
> Tom Lane schrieb am 15.12.2016 um 16:20:
>> Hmm. It might shed some light if you put "echo" in front of that
>> to see what gets printed:
>> $ echo pg_dump -d postgres -t "\"Statuses\""
> https://www.postgresql.org/docs/9.5/static/storage-toast.html
> "The TOAST management code is triggered only when a row value to be stored
> in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The
> TOAST code will compress and/or move field values out-of-line until the
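A small sketch of how pg_column_size can expose this behavior (table name and values are illustrative, not from the thread):

```sql
-- pg_column_size reports the stored (possibly compressed) size of a
-- datum, while octet_length reports its raw size.
CREATE TABLE toast_probe (d text);
INSERT INTO toast_probe VALUES (repeat('a', 3000));  -- wide and highly compressible
INSERT INTO toast_probe VALUES ('short value');      -- well under the threshold
SELECT octet_length(d) AS raw_bytes,
       pg_column_size(d) AS stored_bytes
FROM toast_probe;
-- For the repeated string, stored_bytes should come out far smaller
-- than raw_bytes; the short value is stored as-is.
```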
2016-12-15 16:23 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 12/15/2016 07:17 AM, Tom DalPozzo wrote:
> Hi,
> it's not clear to me when tuple data (max 1000 bytes total) get
> compressed on disk and when not.
> I tried with pg_column_size to find
s were found
Hmm. It might shed some light if you put "echo" in front of that
to see what gets printed:
$ echo pg_dump -d postgres -t "\"Statuses\""
pg_dump -d postgres -t "Statuses"
regards, tom lane
Hi,
it's not clear to me when tuple data (max 1000 bytes total) get compressed
on disk and when not.
I tried with pg_column_size to find the tuple's size but I get ambiguous
results.
It seems to work, but sometimes the tuples seem compressed and sometimes not.
I tried both with constant data and
pg_typeof(-1::uint4), 1::uint4, pg_typeof(1::uint4);
:: binds tighter than minus, so you would need to write these like
"(-1)::uint4" to get the behavior you're expecting. See
https://www.postgresql.org/docs/9.5/static/sql-syntax-lexical.html#SQL-PRECEDENCE
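The precedence rule can be checked with a built-in integer type (uint4 comes from an extension, so plain int is used here for illustration):

```sql
-- :: binds tighter than unary minus:
SELECT -1::int;    -- parsed as -(1::int): cast first, then negate
SELECT (-1)::int;  -- parentheses make the cast apply to -1 itself
-- Both yield -1 for a signed int, but for an unsigned type like uint4
-- the first form tries to negate an unsigned value, hence the surprise.
```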
rega
Thomas Kellerer <spam_ea...@gmx.net> writes:
> Tom Lane schrieb am 13.12.2016 um 18:02:
>> These cases work for me. Maybe your shell is doing something weird
>> with the quotes?
> Hmm, that's the default bash from CentOS 6 (don't know the exact version)
I'm using bash fr
Thanks for the report!
regards, tom lane
ork for me. Maybe your shell is doing something weird
with the quotes?
regards, tom lane
int)), j(y) as (values (2::int)) select
> max(x) from (select x from i union all select y from j) b;
> ERROR: could not find plan for CTE "i"
Yup, sure looks like a bug to me, especially since it seems to work as
expected before 9.5. No idea offhand what broke it.
f-line values incurs no
TOAST costs if none of the out-of-line values change.
I don't remember offhand what corner cases might exist to prompt the
weasel wording "normally". Maybe that just reflects the possibility
that one of the newly updated values would need toasting.
> and I can assume some others ) to properly read your messages.
>
> If you want to discourage people replying to you, keep doing the two above.
>
> On Sat, Dec 10, 2016 at 3:25 PM, Tom DalPozzo <t.dalpo...@gmail.com>
> wrote:
> > you're right, VACUUM FULL recovered the spa
2016-12-10 18:33 GMT+01:00 Francisco Olarte <fola...@peoplecall.com>:
> Tom:
>
> On Sat, Dec 10, 2016 at 6:01 PM, Tom DalPozzo <t.dalpo...@gmail.com>
> wrote:
> > As for crash proof, I meant that once my client app is told that her
> update
> > request
2016-12-10 18:10 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>:
>
> > On Dec 10, 2016, at 10:01 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:
> >
> > 2016-12-10 16:36 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>:
> >
> > > On Dec 10, 2016, a
2016-12-10 15:41 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 12/10/2016 04:21 AM, Tom DalPozzo wrote:
>
>> Hi,
>> my release is 9.5.4.
>> I took a look at it. I guessed that counting could be slow because it
>> needs to read everything and
2016-12-10 16:36 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>:
>
> > On Dec 10, 2016, at 7:27 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:
> >
> > Hi,
> > I'd like to do that! But my DB must be crash proof! Very high
> reliability is a must.
&g
Hi,
I'd like to do that! But my DB must be crash proof! Very high reliability
is a must.
I also use sync replication.
Regards
Pupillo
2016-12-10 16:04 GMT+01:00 Rob Sargent <robjsarg...@gmail.com>:
>
> > On Dec 10, 2016, at 6:25 AM, Tom DalPozzo <t.dalpo...@gmail.com&
million updates per day, hence (50 million * 100
bytes * 2 fields updated) 10 GB net per day.
I'm afraid it's not possible, according to my results.
Regards
Pupillo
2016-12-10 13:38 GMT+01:00 Francisco Olarte <fola...@peoplecall.com>:
> Hi Tom
>
> On Sat, Dec 10, 2016
an index, but I'm reading
all the rows).
Regards
Pupillo
2016-12-09 17:16 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 12/09/2016 08:03 AM, Tom DalPozzo wrote:
> > Hi,
> > I did two tests:
> > TEST 1
> > 1 I created a table ("Table&quo
Hi,
I've a table ('stato') with an indexed bigint ('Id') and 5 bytea fields
('d0','d1',...,'d4').
I populated the table with 1 rows; each d.. field initialized with 20
bytes.
Reported table size is 1.5MB. OK.
Now, for 1000 times, I update 2000 different rows each time, changing the d0
field
ver.
(I'm a bit suspicious that this might be related to commit 2b3a8b20c,
which went into 9.3.6, but it's probably premature to blame that.)
regards, tom lane
Hi,
I did two tests:
TEST 1
1 I created a table ("Table") with two fields, one ("Id") is a bigint and
the other ("Data") is a bytea. Also created an index on Id.
2 Populated the table with 1 rows, in which the bigint is incremental
and bytea is 1000 bytes long.
3 Executed SELECT COUNT(*) FROM
array_value = Eg[i]
> END LOOP;
> But Eg[i] is assigning null to array_value
I think you want
array_value = Eg[i][array_lower(Eg, 2):array_upper(Eg, 2)]
As of 9.6 you could use the shorthand
array_value = Eg[i][:]
regards, tom lan
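A minimal sketch of the difference between plain subscripting and a slice on a 2-D array (array name and values are illustrative):

```sql
DO $$
DECLARE
  eg int[] := ARRAY[[1,2,3],[4,5,6]];
  row_slice int[];
BEGIN
  -- Plain eg[i] on a 2-D array yields NULL: a single subscript is
  -- not enough to select an element, and it is not a slice either.
  RAISE NOTICE 'eg[1] = %', eg[1];
  -- A slice expression keeps the row as a (still 2-D) array:
  row_slice := eg[1:1][array_lower(eg, 2):array_upper(eg, 2)];
  RAISE NOTICE 'slice = %', row_slice;   -- {{1,2,3}}
END $$;
```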
to connect using a socket.
Also, you can use -h /path/to/socket/dir to specify connecting
using a socket file in a specific directory.
regards, tom lane
so I can't offer any detailed help, but
I believe that's possible.
regards, tom lane
bout which OOM API is preferred
or deprecated. It's probably just cosmetic anyway, so I'd say ignore it.
9.5 and up have a better design for this, in which the behavior is
determined on the fly rather than being hard-wired at build time.
regards, tom lane
even more baffling
> behavior if I were using an xterm: with a blank PAGER your output
> would disappear only if the select exceeded a certain number of
> lines...
Yeah, that was exactly the behavior I was seeing before fixing it
(the fix is pushed btw).
regards, to
Hi,
I tried both ways: they're OK.
Also, multiple VALUES in one INSERT actually performs better.
Thanks again
Pupillo
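For the record, the batched form looks like this (a sketch; the table and the bytea values are illustrative, mirroring the bytea field discussed in the thread):

```sql
-- One INSERT carrying several rows, inside a single transaction,
-- avoids per-statement and per-commit overhead.
CREATE TABLE demo (id bigint, d0 bytea);
BEGIN;
INSERT INTO demo (id, d0) VALUES
  (1, '\x0011223344'),
  (2, '\x5566778899'),
  (3, '\xaabbccddee');
COMMIT;
```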
2016-12-06 19:49 GMT+01:00 Tom Lane <t...@sss.pgh.pa.us>:
> [ please keep the list cc'd ]
>
> Tom DalPozzo <t.dalpo...@gmail.com> writes:
>
doing that; see
https://wiki.postgresql.org/wiki/Systemd
or the longer discussion at
https://www.postgresql.org/message-id/flat/57828C31.5060409%40gmail.com
or a couple of other discussions you can find by searching the PG mail
archives for "systemd semaphores".
regard
irrelevant here, because it
can easily be shown that psql doesn't behave nicely if PAGER is set to
empty when it does try to use the pager.
regards, tom lane
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 261652a..9915731 100644
*** a/doc/s
e enough that you need to write/read
it in chunks rather than all at once, the large-object APIs are what
you want.
regards, tom lane
ER is empty or all white space
then we should not try to use it as a shell command; we can either
treat the case as "pager off" or as "use default pager". Everything
else we can leave to the invoked shell to complain about.
Comments?
regards, tom lan
[ please keep the list cc'd ]
Tom DalPozzo <t.dalpo...@gmail.com> writes:
> To be honest, I didn't know or I forgot about multiple VALUES in one
> command! Thanks for reminding!
> As for the PQexecParams, should I specify something in const Oid
> *paramTypes parameter? Or just
Tom DalPozzo <t.dalpo...@gmail.com> writes:
> I've a table in which a field is BYTEA, as I need to store around 200 raw
> bytes in this field.
> I need to perform many many INSERT starting from a common C array and, in
> order to get good performance, I want to do many of them
Hi,
I've a table in which a field is BYTEA, as I need to store around 200 raw
bytes in this field.
I need to perform many many INSERT starting from a common C array and, in
order to get good performance, I want to do many of them in a single BEGIN
COMMIT block.
What is the best choice from
e remove from the popen call, but if it's not
unreasonably hairy we should do it.
regards, tom lane
Hi,
I did: pkill -x postgres
so it should send SIGTERM.
Regards
Pupillo
tivity).
Regards
2016-12-05 20:02 GMT+01:00 Jeff Janes <jeff.ja...@gmail.com>:
> On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:
>
>> Hi,
>> I've two tables, t1 and t2, both with one bigint id indexed field and one
>> 256 char data fi
n* everything works.
Ah! So, most likely, there is something wrong with the local installation
of "more", or whatever the environment variable PAGER is set to. If you
say "more somefile", does it behave reasonably? Check "echo $PAGER"
as well.
--- I don't remember
at the moment what are the user-visible cases where this happens.)
You can find probably more than you want to know about deadlock handling
in src/backend/storage/lmgr/README.
regards, tom lane
von...@2ndquadrant.com>:
> On Fri, 2016-12-02 at 13:45 -0800, Adrian Klaver wrote:
> >
> > On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> > >
> > >
> > > Hi,
> > > I've two tables, t1 and t2, both with one bigint id indexed field
> > > and
&g
e was an index matching one
but not the other. Right now we're very stupid and only consider x,y,
but if there were room to consider more than one set of target pathkeys
it would be fairly simple to make that better.
regards, tom lane
ed around that
> limit using arrays or multiple tables).
Yeah, this is a bug, but fortunately the fix is pretty trivial. See
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da05d0ebc637a84ba41a172b32552557ebad199f
regards, tom lane
o confirm
which ones behave normally and which don't. Of course, the other two
would have to be told --host=/tmp to talk to the handbuilt server.
regards, tom lane
statement parameter so
that the WHERE condition never succeeds. You could set log_statement=all
and look in the postmaster log to see what the server thinks it's getting.
regards, tom lane
rompt change to match?
regards, tom lane
Hmm ... a different take on that is that maybe psql is crashing
because it's linking to an ABI-incompatible libpq. You should try
"ldd" on the psql executable and see if it's resolving the libpq
dependency to the copy you intended.
regards, tom lane
Hi,
I've two tables, t1 and t2, both with one bigint id indexed field and one
256 char data field; t1 has always got 1 row, while t2 is increasing as
explained in the following.
My libpq client continuously updates one row in t1 (every time targeting a
different row) and inserts a new row in
t through to_tsquery(),
though likely that would just have a different set of failure modes
with queries where you do wish stemming would occur.
The problem with "no" seems to be the same.
regards, tom lane
compatibility to worry about.
regards, tom lane
te
what behavior of your application is causing that and take steps to
mitigate it.
regards, tom lane
George <pinkisntw...@gmail.com> writes:
> On Thu, Dec 1, 2016 at 6:58 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> What
>> do you get for
>> select * from pg_stats where tablename = 'wg3ppbm_userpartner';
>> and likewise for wg3ppbm_partner?
> It is a w
to duplicate that behavior, without success. What PG version is
that, exactly? Have you vacuumed and/or analyzed those two tables? What
do you get for
select * from pg_stats where tablename = 'wg3ppbm_userpartner';
and likewise for wg3ppbm_partner?
regards, tom lane
staddr option was added specifically so there is a way
to make connections without relying on DNS.
regards, tom lane
r the
> function is installed on. An example:
Personally, I'd try to convert everything to new style, and put a shim
function into pre-9.6 deployments only. This seems to work:
create function to_regclass(text) returns regclass
language sql as 'select to_regclass($1::cstring)';
I'm working on a patch for that). Whether the OP's particular
query is being hit by that is impossible to tell, though, since there
isn't any actual RLS usage in the doubtless-oversimplified example.
regards, tom lane
o percent of a table's rows for an indexscan
on it to be faster than a seqscan.
I think however that the "half" may be a default estimate occasioned
by the other tables being empty and therefore not having any statistics.
Another rule of thumb is that the plans you get for tiny tables hav
regards, tom lane
thers.
There are other scenarios where file access would occur, of course, but
this seems like a plausible explanation.
regards, tom lane
t; blocked by process 2234.
> Process 2234: INSERT INTO table1 ( id , unique_id )VALUES( '1', '4')
> Process 4111: REINDEX TABLE table1
Offhand I would not expect those two commands to deadlock in isolation.
Are they parts of larger transactions that take additional locks?
nything was said about changes that
cause breakage of extensions. But we try hard to avoid such changes
in minor releases, particularly in long-stable branches.
regards, tom lane
e parameter.
Don't know how hard it might be to arm-wrestle ActiveRecord into doing
it like that :-(
regards, tom lane
I'm not sure what you're going to do if it doesn't,
though, since you still haven't got enough data to do an INSERT.
regards, tom lane
ch leads me to wonder if the parallel calls are likely to be fighting
over inserting/updating the same row in the group_history partition
tables. Or are you certain that they should be hitting different rows?
regards, tom lane
query string includes
> parameters instead of values.
It looks like Poul is hoping to use a parameter as a table name,
which doesn't work. Parameters can only represent values (constants).
regards, tom lane
Poul Kristensen <bcc5...@gmail.com> writes:
> When I use this in my code I get
> "undefined reference to `PQexecParms'
> when compiling.
IIRC, it's PQexecParams not PQexecParms
regards, tom lane
a quick test with a non-polymorphic definition
just to see, but I doubt it's a promising avenue.
regards, tom lane
rgot to ANALYZE, or the random sample came up quite a bit different).
And I'm a little suspicious that these tests weren't all done with the
same work_mem setting.
regards, tom lane
rror catcher. In the case at hand, the given INSERT request fails due
to not-null constraints that are unrelated to what the ON CONFLICT clause
tests for.
regards, tom lane
alternatives here ---
you could just ignore the errors relating to plpgsql, but if you have
C-language functions that you need to migrate, there is no way to
install those without superuser privileges.
regards, tom lane
umpall -g" to be helpful.
regards, tom lane
causes lots of comparisons for common values.
regards, tom lane
finitions to not assume
that the underlying tables have primary keys. It looks like in
view_temp_export_geo_recherche_extra_sites_projets you need to add
c.official_language_id to the GROUP BY, and similarly in
view_temp_export_geo_recherche_offtrad_sites.
regards, tom lane
localhost --clean -d db1 "D:\temp.dump"
> fails.
[ squint... ] That does look like it ought to work. I wonder if postgis
is doing something weird? But again, it's impossible to debug this at
this level of detail. If you can make a test case I'd be happy to look
into it.
m the dump, it might still be the case that A can't be restored
before C).
If you think neither of those cases apply, please provide a self-contained
test case.
regards, tom lane
current_setting(). In a libpq client it's also directly available
from PQparameterStatus().
regards, tom lane
ks.)
BTW, it still looks like the performance is being significantly hurt by
inadequate work_mem.
regards, tom lane
That WILL break your table.
regards, tom lane
s c limit 10;
or even
with dummy as (select * from pg_class c limit 10)
insert into test select * from dummy;
regards, tom lane
Jeff Janes <jeff.ja...@gmail.com> writes:
> On Thu, Nov 10, 2016 at 7:11 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> If you are using that contrib module, and it's capturing this operator
>> reference, that would probably explain the bad estimate. You could
>
ator(pg_catalog.@>)"),
or you could upgrade to 9.6 (don't forget to do ALTER EXTENSION ... UPDATE
afterwards).
regards, tom lane
f the extra
LockRows nodes. It's not something we'd think of back-patching, though.
regards, tom lane
ation logic until 9.6.
regards, tom lane
n
up-to-date stats. I speculate that you need to ANALYZE this table.
If there are a lot of distinct possible values in the arrays, increasing
the statistics target for the column might be needed.
regards, tom lane
way with all that as long as your application
isn't doing anything that makes it matter critically which semantics
get applied while the changeover is being made.
But test on a scratch database ...
regards, tom lane
2016 at 9:21 PM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:
> > I know that, but with neither database activity nor a checkpoint, it doesn't
> > force anything. The fact is that there are checkpoints being executed
> every
> > checkpoint_timeout, and I don't understand why as
-11-07 13:14 GMT+01:00 amul sul <sula...@gmail.com>:
> On Mon, Nov 7, 2016 at 4:20 PM, Tom DalPozzo <t.dalpo...@gmail.com> wrote:
> > Hi,
> > I have:
> > checkpoint_timeout = 2min
> > wal_level = archive
> > archive_mode=on
> > archive_timeout = 30
Hi,
I have:
checkpoint_timeout = 2min
wal_level = archive
archive_mode=on
archive_timeout = 30
With NO database activity, I see the WAL being modified every 2min (and,
consequently, one WAL file archived every 2min too).
Is it right? I read: "If no WAL has been written since the previous
bservacao, (select regexp_matches(observacao, '\d\d/\d\d/\d\d\d\d'))
from ...
As of v10 there will be a less confusing solution: use regexp_match()
instead.
regards, tom lane
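The two forms side by side (the string literal and column alias are illustrative):

```sql
-- Pre-v10: regexp_matches returns a set, so wrap it in a sub-select
-- when it appears in a SELECT list alongside other columns:
SELECT s, (SELECT regexp_matches(s, '\d\d/\d\d/\d\d\d\d'))
FROM (VALUES ('paid on 07/11/2016')) AS t(s);
-- v10 and later: regexp_match returns a single text[] (or NULL),
-- with no set-returning confusion:
SELECT regexp_match('paid on 07/11/2016', '\d\d/\d\d/\d\d\d\d');
```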
I don't think plpgsql coped
with that very well before 9.1.
regards, tom lane
7AM +0100, Tom DalPozzo wrote:
> > What am I missing?
>
> David already explained, but you might want to read also:
> https://www.depesz.com/2011/07/14/write-ahead-log-
> understanding-postgresql-conf-checkpoint_segments-
> checkpoint_timeout-checkpoint_warning/
>
> depesz
Hi,
I found, in the pg_xlog dir, several WAL segment files with old modification
timestamps but with names greater than those of more recent files.
Ex.:
000100C6 modified today
000100DE modified yesterday
I thought it could not be possible.
I'm doing some tests
etter if it performed the join before the
> policy check.
Join cases with RLS aren't optimized very well at the moment. There's
work afoot to improve this - see
https://www.postgresql.org/message-id/flat/8185.1477432701%40sss.pgh.pa.us
- but it won't be in production before v10.
identifier. I've no idea what DBMS they're using (but suspect Oracle). My
> web search used ineffective terms so I could not find the answer there.
>What does U1 represent?
Unique constraint, perhaps? I'm just guessing.
regards, tom lane
erformance, but have started a thread about it on -hackers:
https://www.postgresql.org/message-id/6315.1477677885%40sss.pgh.pa.us
regards, tom lane
representation. We've not seen enough info
about your requirements to suggest just how, though.
regards, tom lane
Steve Clark <steve.cl...@netwolves.com> writes:
> On 10/28/2016 09:48 AM, Tom Lane wrote:
>> Retrying might be a usable band-aid, but really this is an application
>> logic error. The code that is trying to do "lock table t_unit in
>> exclusive mode" mu
This still reads the
"games" CTE twice; but since you're evaluating that CTE elsewhere in the
query, I think the additional follower node isn't worth trying to get
rid of.
regards, tom lane