This is documented in section 8.1.2 of the manual
(https://www.postgresql.org/docs/9.6/static/datatype-numeric.html):
NUMERIC rounds ties away from zero, while the IEEE 754 based data types
(FLOAT, DOUBLE PRECISION) round ties to the nearest even number.
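A quick way to see the difference in psql (a sketch; the behaviour for float8 depends on the platform's IEEE 754 rounding, as described above):

```sql
-- NUMERIC rounds ties away from zero; float8 rounds ties to nearest even
SELECT round(0.5::numeric) AS num_half,   -- 1
       round(2.5::numeric) AS num_2half,  -- 3
       round(0.5::float8)  AS flt_half,   -- 0
       round(2.5::float8)  AS flt_2half;  -- 2
```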
On Thu, Jun 1, 2017 at 6:26 PM, Louis Battuello
Hi,
I have a psql script relying on variables passed in on the command line
with -v.
Is there any way to assign a default value in case the -v option is
forgotten?
Here is an example with pgtap:
select diag('should be printed only in verbose mode') where :testverbose;
This can be called with
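One way to supply a default when -v is omitted (a sketch; the \if and :{?var} syntax requires psql 10 or later, so this would not work on the 9.6-era psql discussed here):

```sql
-- default :testverbose to false unless it was passed with -v testverbose=...
\if :{?testverbose}
\else
  \set testverbose false
\endif
select diag('should be printed only in verbose mode') where :testverbose;
```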
ALTER TABLE command the n14_4 column is shown
with 12 places after the dot. If I just update atttypmod, it's still only 4
places.
Why is that so? I checked ctid. The ALTER TABLE version does not actually
update the tuple.
On Tue, Jan 24, 2017 at 11:48 AM, Albe Laurenz <laurenz.a...@wien.gv.at
Hi,
we have a large table and want to change the type of one column
from NUMERIC(14,4) to NUMERIC(24,12). If the new type is just NUMERIC
without any boundaries, the operation is fast. If (24,12) is specified, it
takes ages.
I think it takes so long because the database wants to check that all
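The two variants side by side (table and column names are made up for illustration):

```sql
-- dropping the typmod entirely is a catalog-only change, hence fast
ALTER TABLE big_tbl ALTER COLUMN amount TYPE numeric;

-- specifying (24,12) makes the server verify/rewrite every row, hence slow
ALTER TABLE big_tbl ALTER COLUMN amount TYPE numeric(24,12);
```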
This hex string decodes to something sensible:
$ perl -le 'print pack "H*", shift'
246c69626469722f757466385f616e645f69736f383835395f31
$libdir/utf8_and_iso8859_1
Maybe it rings a bell.
On Thu, Jan 5, 2017 at 7:57 PM, BRUSSER Michael
wrote:
> I see this with
Hi,
is there a Perl module that can speak the streaming replication
protocol? Can DBD::Pg do that somehow?
I think I could just pipe from pg_recvlogical. But would be cool to have it
directly in Perl.
Thanks,
Torsten
I use this:
create extension pageinspect;
SELECT count(*) AS pages_read
FROM (
SELECT c.oid::regclass::text AS rel,
f.fork,
ser.i AS blocknr,
page_header(get_raw_page(c.oid::regclass::text,
f.fork,
Did you try DISTINCT ON?
postgres=# table x;
 id | qid | uid
----+-----+-----
  1 |  25 |   1
  2 |  25 |   1
  3 |  25 |   1
  4 |  26 |   1
  5 |  26 |   1
  6 |  27 |   1
  7 |  27 |   1
  8 |  25 |   2
  9 |  25 |   2
 10 |  25 |   2
 11 |  26 |   2
 12 |  26 |   2
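Assuming the goal is one row per (qid, uid) combination, a DISTINCT ON sketch:

```sql
SELECT DISTINCT ON (qid, uid) id, qid, uid
FROM x
ORDER BY qid, uid, id;   -- keeps the smallest id per (qid, uid)
```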
On Wed, Dec 14, 2016 at 5:59 AM, Michael Paquier <michael.paqu...@gmail.com>
wrote:
> On Wed, Dec 14, 2016 at 5:00 AM, Torsten Förtsch <tfoertsch...@gmail.com>
> wrote:
> > one of the major enhancements in 96 is skipping completely frozen pages
> in
> > vacu
Hi,
one of the major enhancements in 9.6 is skipping completely frozen pages in
vacuum freeze. I assume that requires a special bit on the page.
If I upgrade from 9.3 using pg_upgrade, i.e. re-using the data files, can
it still do that? Or do I have to recreate the table?
Thanks,
Torsten
Thanks Tom
On Tue, Dec 13, 2016 at 7:22 PM, Tom Lane wrote:
> I wrote:
> > Yup, sure looks like a bug to me, especially since it seems to work as
> > expected before 9.5. No idea offhand what broke it.
>
> The answer is, I broke it, through some ill-advised neatnik-ism :-(,
Thanks for confirming.
Here are a few more examples that also work:
with i(x) as (values (1::int)) select x from (select x from i union all
select 3::int) b order by x desc limit 1;
with i(x) as (values (1::int)) select max(x) from (select x from i union
select 3::int) b;
It also works with
Hi,
this is a stock PGDG 9.6:
postgres=# with i(x) as (values (1::int)), j(y) as (values (2::int)) select
x from (select x from i union all select y from j) b;
 x
---
 1
 2
(2 rows)
postgres=# with i(x) as (values (1::int)), j(y) as (values (2::int)) select
max(x) from (select x from i union all
On Mon, Dec 12, 2016 at 12:37 PM, Albe Laurenz <laurenz.a...@wien.gv.at>
wrote:
> Torsten Förtsch wrote:
> > if I do something like this:
> >
> > BEGIN;
> > UPDATE tbl SET data='something' WHERE pkey='selector';
> > UPDATE tbl SET data=NULL WHERE pke
Hi,
if I do something like this:
BEGIN;
UPDATE tbl SET data='something' WHERE pkey='selector';
UPDATE tbl SET data=NULL WHERE pkey='selector';
COMMIT;
Given 'selector' actually exists, I get a separate WAL entry for each of
the updates. My question is, does the first update actually hit the
On Sun, Dec 11, 2016 at 12:10 AM, Andres Freund <and...@anarazel.de> wrote:
> On 2016-12-10 22:34:02 +0100, Torsten Förtsch wrote:
> > I am working on a logical decoding output plugin. I am only interested in
> > the changes in one particular table.
> >
> > One
Hi,
I am working on a logical decoding output plugin. I am only interested in
the changes in one particular table.
One way to check for the table would be to compare class_form->relname
and get_namespace_name(class_form->relnamespace). But I'd much prefer to
just compare the OID of the table.
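The OID can be looked up once in SQL (table name hypothetical) and then compared in the plugin against the relation's OID:

```sql
SELECT 'public.my_table'::regclass::oid;
```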
Think I found it. classid 1262 is pg_database and I seem to remember that
NOTIFY takes that lock. I dropped pg_notify from my function and got
immediately >3500 tx/sec.
On Wed, Dec 7, 2016 at 11:31 AM, Torsten Förtsch <tfoertsch...@gmail.com>
wrote:
> On Wed, Dec 7, 2016 at 11:21
On Wed, Dec 7, 2016 at 11:21 AM, Torsten Förtsch <tfoertsch...@gmail.com>
wrote:
> Hi,
>
> I need to tune my database for a high update rate of a single small table.
> A little simplified it looks like this:
>
> CREATE TABLE temp_agg(
> topic TEXT PRIMARY KEY,
Hi,
I need to tune my database for a high update rate of a single small table.
A little simplified it looks like this:
CREATE TABLE temp_agg(
topic TEXT PRIMARY KEY,
tstmp TIMESTAMP,
cnt BIGINT,
sum NUMERIC
)
The table has 500 rows.
A transaction looks simplified like this:
1) select
Hi,
I am in the process of reviewing our configs for a number of 9.3 databases
and found a replica with hot_standby_feedback=on. I remember when we set it
long ago we were fighting cancelled queries. I also remember that it never
really worked for us. In the end we set up 2 replicas, one suitable
On Tue, Nov 22, 2016 at 10:34 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:
> On 11/22/2016 12:51 PM, Torsten Förtsch wrote:
>
>>
>> Now, I have a database with very low write activity. Archive_command is
>> called about once per hour to archive one segment.
Hi,
I am a bit confused about min_wal_size and max_wal_size. Previously, there
was this formula to estimate the max number of WAL segment files in
pg_xlog/ (https://www.postgresql.org/docs/9.4/static/wal-configuration.html
):
(2 + checkpoint_completion_target) * checkpoint_segments + 1 or
On 30/11/15 22:50, Steve Crawford wrote:
> Do you have any clients connected that are idle in transaction?
No, the backend database is a streaming replica. Also, as I said, the
backend process is in "idle" state not "idle in transaction".
backend_start| 2015-11-30 18:58:30.639962+00
Hi,
I am not sure if this is the right place to ask this question. If not,
please point me to it.
I am trying out the new pgbouncer (latest git). "SHOW SERVERS" is
telling me 2 connections in "active" state. Both show
connect_time=2015-11-30 18:58:30. Request_time is 19:01:35 and 20:56:36.
Both
On 15/10/15 14:32, Ramesh T wrote:
> select position('-' in '123-987-123')
>  position
> ----------
>         4
> But I want second occurrence,
>  position
> ----------
>         8
>
> plz any help..?
For instance:
# select char_length(substring('123-987-123' from '^[^-]*-[^-]*-'));
 char_length
-------------
           8
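An equivalent without regexes, using strpos arithmetic (a sketch):

```sql
-- position of the second '-' = position of the first
-- plus the position of the next '-' in the remainder
SELECT strpos('123-987-123', '-')
       + strpos(substr('123-987-123', strpos('123-987-123', '-') + 1), '-')
       AS second_dash;   -- 8
```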
On 29/07/15 21:13, CS DBA wrote:
The documentation for pg_stat_activity lists this column:
    backend_xmin | xid | The current backend's xmin horizon.
Can someone point me to a better understanding of "xmin horizon"?
https://momjian.us/main/writings/pgsql/mvcc.pdf
you can find this talk
Hi,
we have a complex structure of streaming replication (PG 9.3) like:
master -+- replica1
        |
        +- replica2 -+- replica21
                     |
                     +- replica22 -- replica221
Now I want to retire master and make replica2 the new master:
+-- replica1
On 18/05/15 13:44, Sachin Srivastava wrote:
But currently my pg_xlog size is 60 GB and there are 3740 WAL file in
this folder and in Last week this was 400 GB(pg_xlog folder) and WAL
file were approx. 3. Due to archiving pg_xlog folder size is
decreasing now but it’s taking one week to
Hi,
given a query like this:
select *
from account a
cross join lateral (
select rate
from exchange
where target='USD'
and source=a.currency
order by date desc
limit 1) e
where a.id=19
for update;
If I understand the documentation
On 25/02/15 07:22, David G Johnston wrote:
I'm doubting you intended to join a bunch of commas using the field value as
the delimiter... methinks you got the argument order reversed for
string_agg.
OMG, I am so stupid. Thanks.
--
Sent via pgsql-general mailing list
On 25/02/15 07:34, David G Johnston wrote:
Torsten Förtsch wrote
Is there anything similar for JSON scalars?
IDK, but have you tried ::text?
yes. Here is the difference
select * from (values (('{"a":"b"}'::json -> 'a')::text),
                      ('{"a":"b"}'::json -> 'a')) t;
 column1
---------
 "b"
Hi,
I think I found a json related bug in 9.3.
Given this query:
select *
from json_array_elements('[{"s":[{"x":1},{"x":2},{"x":5}]},
                           {"s":[{"x":3},{"x":4},{"x":6}]}]')
     t(el)
cross join lateral (
    select syms.sym -> 'x' as x
    from json_array_elements(t.el -> 's')
Hi,
I was looking for a PG user group around KL. I know there is one in
Singapore. As it happens, Chris Travers, the PG contact for Malaysia is
a friend of mine. So, I asked him. He wasn't aware of one either.
However, he very much appreciated the idea of founding one. I know there
are lots of PG
On 31/10/14 09:38, Magnus Hagander wrote:
I think in this particular case, much of the blame can be placed on
everybody being at pgconf.eu last week, and struggling under
backlogs. But as Stephen says, the backlog will eventually get worked
through, and the moderation happen.
I see. No
Hi,
I don't know if this is the right mailing list for my question but I
have already emailed pla...@postgresql.org without any response.
What is the correct way to get a blog listed on planet postgres?
The documented way does not work. I registered my blog on Monday. It
still is in Not
On 28/10/14 19:37, Joe Van Dyk wrote:
On Mon, Oct 27, 2014 at 6:22 AM, Emanuel Calvo
<emanuel.ca...@2ndquadrant.com>
wrote:
On 23/10/14 at 17:40, Joe Van Dyk wrote:
Hi,
I have a master and a slave database.
I've got
Hi,
I noticed a strange behaviour regarding pg_stat_replication in 9.3. If
called from psql using the \watch command, I see all my replicas. From
time to time one of them drops out and reconnects in a short period of
time, typically ~30 sec.
If I use the same select in plpgsql like:
FOR r in
On 14/09/14 16:24, Andy Colson wrote:
I wonder if it's a transaction thing? Maybe \watch is using a
transaction for each (or isn't using transactions at all), whereas the
plpgsql is one long transaction?
Also if one of your replicas is far away, it doesn't really surprise me
that it might
On 14/09/14 18:55, Tom Lane wrote:
Are you watching the state in a loop inside a single plpgsql function?
If so, I wonder whether the problem is that the plpgsql function's
snapshot isn't changing. From memory, marking the function VOLATILE
would help if that's the issue.
The function is
On 21/07/14 16:17, Tom Lane wrote:
db=# select page_header(get_raw_page(2836::oid::regclass::text, 'fsm',
1));
ERROR: block number 1 is out of range for relation pg_toast_1255
db=# select pg_relation_size(2836::oid::regclass, 'fsm');
 pg_relation_size
------------------
On 22/07/14 16:58, Tom Lane wrote:
Doh. I looked right at this code in get_raw_page yesterday:
if (blkno >= RelationGetNumberOfBlocks(rel))
    elog(ERROR, "block number %u is out of range for relation \"%s\"",
         blkno, RelationGetRelationName(rel));
Hi,
we are using 9.3 with data checksums enabled. Now I am looking for a way
to check if all database blocks are still intact. First I tried
pg_filedump. In many cases it simply ignored tampered data blocks. It is
probably not made for this task.
Then I remembered about the pageinspect
On 20/07/14 17:35, Tom Lane wrote:
Torsten Förtsch <torsten.foert...@gmx.net> writes:
Then I remembered about the pageinspect extension. The following select
is a bit too verbose but it seems to do the job for everything except
fsm files.
SELECT
On 20/07/14 16:02, Andrew Sullivan wrote:
Then I could also use it in production. But currently I
need it only to verify a backup.
If you need to verify a backup, why isn't pg_dump acceptable? Or is
it that you are somehow trying to prove that what you have on the
target (backup) machine
Hi,
our developers use an ORM tool which generates updates that write all
columns of a table. In most cases, however, very few columns actually
change. So, those updates mostly write the same value that already is
there in the column.
Now, if there is an index on such columns, does Postgres
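One built-in mitigation worth knowing (the table name is made up): the suppress_redundant_updates_trigger function skips UPDATEs that would not change the row at all, so they never reach the heap or the indexes.

```sql
-- skip UPDATEs where the new row is byte-identical to the old one
CREATE TRIGGER z_skip_redundant
BEFORE UPDATE ON my_tbl
FOR EACH ROW
EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```

Note this only helps when *all* written columns are unchanged; updates that change even one column still write the whole new row version.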
Hi,
when a row is updated a RowExclusiveLock is obtained on the updated row
and on every related index.
Is the order in which these locks are acquired defined in some way?
Thanks,
Torsten
Hi,
time and again I need to build indexes. If they are big, that generates
a lot of WAL data that needs to be replicated to streaming replication
slaves. Usually these slaves don't lag behind noticeably. So, the
application often reads from them. Well, unless I build indexes and,
thus, create a
On 22/05/14 21:05, Jeff Janes wrote:
time and again I need to build indexes. If they are big, that
generates
a lot of WAL data that needs to be replicated to streaming
replication
slaves. Usually these slaves don't lag behind noticeably. So, the
On 01/05/14 19:50, Seb wrote:
Hello,
I've been looking for a way to write a table into multiple files, and am
wondering if there are some clever suggestions. Say we have a table
that is too large (several Gb) to write to a file that can be used for
further analyses in other languages. The
Hi,
we have the ROW type and we have arrays. We also can create arrays of
rows like:
select array_agg(r)
from (values (1::int, 'today'::timestamp, 'a'::text),
(2, 'yesterday', 'b')) r(a,b,c);
array_agg
On 30/04/14 20:19, David G Johnston wrote:
ISTM that you have to CREATE TYPE ... as appropriate then
... tb ( col_alias type_created_above[] )
There is only so much you can do with anonymous types (which is what the ROW
construct creates; ROW is not a type but an expression anchor - like
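Following that suggestion, a sketch of the named-type route (type and table names made up):

```sql
CREATE TYPE abc AS (a int, b timestamp, c text);
CREATE TABLE tb (col abc[]);

-- aggregate the rows into an array of the named composite type
INSERT INTO tb
SELECT array_agg(ROW(a, b, c)::abc)
FROM (VALUES (1, 'today'::timestamp, 'a'::text),
             (2, 'yesterday', 'b')) r(a, b, c);
```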
Hi,
I think I can find out the transaction ids of concurrent transactions
older than the current one by:
select * from txid_snapshot_xip(txid_current_snapshot())
union
select * from txid_snapshot_xmax(txid_current_snapshot());
Now, I want to map these transaction ids to backend process
On 25/04/14 13:26, Torsten Förtsch wrote:
I think I can find out the transaction ids of concurrent transactions
older than the current one by:
select * from txid_snapshot_xip(txid_current_snapshot())
union
select * from txid_snapshot_xmax(txid_current_snapshot());
Now, I want
Hi,
I got this plan:
Limit (cost=0.00..1.12 rows=1 width=0)
  ->  Seq Scan on fmb (cost=0.00..6964734.35 rows=6237993 width=0)
Filter: ...
The table has ~80,000,000 rows. So, the filter, according to the plan,
filters out 90% of the rows. Although the cost for the first row to
come
(depending where the filter is taken into account). In my case the
startup cost for the limit node should be somewhere between 25 and
30.
Torsten
2014-04-22 14:10 GMT+02:00 Torsten Förtsch <torsten.foert...@gmx.net>:
Hi,
I got this plan
On 22/04/14 16:39, Albe Laurenz wrote:
Could you run EXPLAIN ANALYZE for the query with enable_seqscan
on and off? I'd be curious
a) if the index can be used
b) if it can be used, if that is actually cheaper
c) how the planner estimates compare with reality.
Using the index:
Limit
On 22/04/14 16:45, Tom Lane wrote:
No. The model is that startup cost is what's expended before the scan can
start, and then the run cost (total_cost - startup_cost) is expended while
scanning. Applying a filter increases the run cost and also reduces the
number of rows returned, but that's
Hi,
just out of curiosity, what's the difference between streaming and
catchup state in pg_stat_replication.
According to the documentation this field is "Current WAL sender state".
But that does not tell me anything.
Thanks,
Torsten
On 21/04/14 13:18, Michael Paquier wrote:
When a standby connects for the first time to a primary, it is not yet
synchronized, this is the catchup phase. Once the lag between the
standby and the master is reduced to zero for the first time,
replication state changes to streaming.
Thanks.
I
On 20/04/14 03:02, Sergey Konoplev wrote:
Thanks for your reply.
an index can be INVALID (pg_index.indisvalid=false).
I want to temporarily disable an index so that it won't be used to
access data but will still be updated.
Can I simply set pg_index.indisvalid=false and later turn it true
On 20/04/14 12:08, Thomas Kellerer wrote:
Example:
btree (fmb_id, action_type)
btree (fmb_id)
[...]
Btw: in the above example the second one is definitely not needed.
Any query that uses the second one (single column) can also use the
first one.
I know.
But the single column
Hi,
an index can be INVALID (pg_index.indisvalid=false).
I want to temporarily disable an index so that it won't be used to
access data but will still be updated.
Can I simply set pg_index.indisvalid=false and later turn it true again?
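For what it's worth, the catalog hack in question looks like this (unsupported; a sketch to try only on a throwaway cluster, index name made up):

```sql
-- hide the index from the planner but keep it maintained
UPDATE pg_index SET indisvalid = false
WHERE indexrelid = 'my_index'::regclass;

-- and later, re-enable it
UPDATE pg_index SET indisvalid = true
WHERE indexrelid = 'my_index'::regclass;
```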
Thanks,
Torsten
Hi,
currently, ALTER TABLE VALIDATE CONSTRAINT for foreign key constraints
acquires an AccessExclusiveLock on the referencing table.
Why?
If the constraint is in place but not validated (ADD CONSTRAINT ... NOT
VALID) it already prevents new modifications from violating the constraint.
The code
On 13/04/14 13:34, Vik Fearing wrote:
Actually, it is implemented now.
http://www.postgresql.org/message-id/e1wwovd-0004ts...@gemulon.postgresql.org
It'll be in 9.4.
That's good news.
So, I could validate a FK constraint this way:
UPDATE pg_constraint
SET convalidated = NOT EXISTS(
On 04/04/14 17:14, Kohler Manuel (ID SIS) wrote:
Here is the same query with fdw:
db=# EXPLAIN VERBOSE select ds.code, count(*), sum(dsf.size_in_bytes) as
raw_size,pg_size_pretty(sum(dsf.size_in_bytes)) as size from
data_set_files_fdw dsf, data_sets_fdw ds where dsf.parent_id is null and
On 03/04/14 15:34, Leonardo M. Ramé wrote:
Hi, in one of our systems, we added a kind of pagination feature, that
shows N records of Total records.
To do this, we added a count(*) over() as Total field in our queries
in replacement of doing two queries, one for fetching the records, and
On 03/04/14 17:44, Leonardo M. Ramé wrote:
Nice! Do you know if this will work on 8.4?
no way
Hi,
I have a volatile function that returns multiple rows. It may also
return nothing. Now, I want to write an SQL statement that calls this
function until it returns an empty result set and returns all the rows.
So, in principle I want to:
WITH RECURSIVE
t AS (
SELECT * FROM xx()
UNION ALL
On 17/03/14 21:42, Merlin Moncure wrote:
I can do it in plpgsql. But that would mean to accumulate the complete
result in memory first, right? I need to avoid that.
I would test that assumption. This is better handled in loop IMO.
LOOP
RETURN QUERY SELECT * FROM xx();
IF NOT found
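A filled-in version of that loop (a sketch; xx_rowtype stands in for the function's actual return type):

```sql
CREATE OR REPLACE FUNCTION call_until_empty()
RETURNS SETOF xx_rowtype
LANGUAGE plpgsql AS $$
BEGIN
  LOOP
    -- stream each batch out as it is produced
    RETURN QUERY SELECT * FROM xx();
    EXIT WHEN NOT FOUND;   -- RETURN QUERY sets FOUND
  END LOOP;
END
$$;
```

Rows are appended to the result set per iteration, but whether that avoids accumulating the whole result in memory is the assumption being tested here.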
Hi,
I have a query that involves an external sort:
->  Sort (cost=13662680.01..13850498.48 rows=75127389 width=16)
    (actual time=980098.397..1021411.862 rows=74181544 loops=1)
Sort Key: (ROW(account_id, (purchase_time)::date))
Sort Method: external merge Disk:
On 11/03/14 14:36, Tom Lane wrote:
Perhaps you fat-fingered the SET somehow?
I just repeated it:
# select * from pg_settings where name='work_mem';
-[ RECORD 1 ]
name | work_mem
setting| 52428800
unit | kB
...
# explain (analyze,buffers)
select
On 11/03/14 16:03, Tom Lane wrote:
[ thinks for awhile... ] Oh, I know what's happening: your sort is so
large that it's being constrained by the MaxAllocSize limit on the tuple
pointer array. This has been fixed in HEAD, but it's not yet in any
shipping release. According to the log entry
On 22/02/14 03:21, Torsten Förtsch wrote:
2014-02-21 05:17:10 UTC PANIC: heap2_redo: unknown op code 32
2014-02-21 05:17:10 UTC CONTEXT: xlog redo UNKNOWN
2014-02-21 05:17:11 UTC LOG: startup process (PID 1060) was terminated
by signal 6: Aborted
2014-02-21 05:17:11 UTC LOG
Hi,
one of our streaming replicas died with
2014-02-21 05:17:10 UTC PANIC: heap2_redo: unknown op code 32
2014-02-21 05:17:10 UTC CONTEXT: xlog redo UNKNOWN
2014-02-21 05:17:11 UTC LOG: startup process (PID 1060) was terminated
by signal 6: Aborted
2014-02-21 05:17:11 UTC LOG: terminating
On 21/02/14 09:17, Torsten Förtsch wrote:
one of our streaming replicas died with
2014-02-21 05:17:10 UTC PANIC: heap2_redo: unknown op code 32
2014-02-21 05:17:10 UTC CONTEXT: xlog redo UNKNOWN
2014-02-21 05:17:11 UTC LOG: startup process (PID 1060) was terminated
by signal 6: Aborted
Hi,
we decreased wal_keep_segments quite a lot. What is the supposed way to
get rid of the now superfluous files in pg_xlog?
Thanks,
Torsten
On 06/02/14 06:46, Torsten Förtsch wrote:
we decreased wal_keep_segments quite a lot. What is the supposed way to
get rid of the now superfluous files in pg_xlog?
Nothing special. The database did it for me.
Sorry for the noise.
Torsten
On 23/01/14 14:46, Bill Moran wrote:
Some quickie background: I'm on a project to migrate a fairly large
database from MySQL to PostgreSQL (~2T). As a result of a number of
factors, I have to do it in one shot and I have a limited time window
in which things can be down while I switch it
Hi,
I am asking because ...
I have a table with
relpages | 19164
reltuples | 194775
pg_relation_size / 8192 yields the same number as relpages. So, there is
no need to scale reltuples. Relcardinality is therefore 194775.
Statistics target is the default, 100. So, I assume each of the 100
On 13/01/14 17:50, Mike Broers wrote:
Is there a built-in method of scanning the server to check for
corruption or will I have to wait for a corrupt object to be accessed to
see the log entry?
This presentation:
http://www.youtube.com/watch?v=TovZ0lb16-Q
suggests pg_filedump.
Torsten
Hi,
assuming I have a query that computes a running sum like this:
select id, buy_price, sum(buy_price) over (order by id) sum
from fmb
where 202300 <= id and id <= 202400
order by id;
Do I need the ORDER BY clause at the end? Or does the ORDER BY in the
window function already define the
Hi,
can the VACUUM FREEZE be skipped if
a) txid_current < vacuum_freeze_min_age, or if
b) txid_current < vacuum_freeze_table_age, or if
c) txid_current < autovacuum_freeze_max_age and no manual VACUUM has
been done?
Or is the operation perhaps only necessary for tables matching
select n.nspname,
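One way to see which tables are closest to needing a freeze is to order them by the age of relfrozenxid (a sketch):

```sql
-- tables with the oldest frozen-xid horizon first
SELECT c.oid::regclass AS tbl,
       age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY age(c.relfrozenxid) DESC
LIMIT 20;
```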
On 22/11/13 11:57, Albe Laurenz wrote:
Don't use synchronous replication if you have a high transaction
rate and a noticable network latency between the sites.
Wait for the next bugfix release, since a nasty bug has just
been discovered.
Can you please explain or provide a pointer for more
Hi,
we have a table partitioned by time. Each month goes into a separate
child table. Primary key in each table is (underlying, ts). The
resulting index is perfect for ordering like in the query below. Each
child table has a constraint like:
CHECK(ts >= '2011-1-1' and ts < '2011-1-1'::DATE +
On 13/11/13 13:49, Gabriel Sánchez Martínez wrote:
My question is, why does it then try to fetch one row from every other
index? Can that be avoided without a lower bound on ts?
If you don't set a lower bound, since every other table has dates below
2013-05-01, they have to be scanned too.
On 13/11/13 20:21, Jeff Janes wrote:
The planner uses the check constraints to reason about the relation
between each partition separately and the query, not between the
different partitions. So while it may be possible to know that all rows
in 2013_4 must be greater than all in 2013_3, it
On 21/10/13 20:46, Tom Lane wrote:
Torsten Förtsch <torsten.foert...@gmx.net> writes:
I noticed that with synchronous replication I often see locks like this:
[ AccessExclusiveLock on database 0 ]
You did not say what PG version you're talking about, but if it's recent
It's
Hi,
I noticed that with synchronous replication I often see locks like this:
-[ RECORD 10 ]-+--------
locktype       | object
database       | 0
relation       |
page           |
tuple          |
virtualxid     |
transactionid  |
classid        |
On 12/10/13 20:08, Scott Ribe wrote:
select * from test where tz >= start and tz <= end and colb like '%foobar%'
I think you can use an index only for wildcard expressions that are
anchored at the beginning. So,
select * from test where tz >= start and tz <= end
and colb like 'foobar%'
can
Hi,
assuming a constraint is added to a table as NOT VALID. Now I know it IS
valid. Can I simply declare it as valid by
update pg_constraint
set convalidated='t'
where conrelid=(select c.oid
from pg_class c
join pg_namespace n on (n.oid=c.relnamespace)
Hi,
I want to add a new column named sell_time to a table which already has
a boolean field named is_sold. Then I want to create a new check constraint:
ALTER TABLE ...
CHECK(is_sold AND sell_time IS NOT NULL OR
NOT is_sold AND sell_time IS NULL)
NOT VALID;
The constraint is added as NOT
Hi,
is it possible to turn on checksums in an existing database? Or do I
have to dump/initdb/restore?
Thanks,
Torsten