FYI
http://freebsd.1045724.n5.nabble.com/Postgresql-related-memory-question-td5759467.html
On 2012-11-07 10:28, Achilleas Mantzios wrote:
On Wed 07 Nov 2012 09:42:47 Frank Broniewski wrote:
Hey, this is really cool. I tried the script right away, and there's a
line from the output that caught
Hi all:
I have a question about the visibility of the explain plan.
First, I was inserting data into a table, using: [ insert into
ptest select * from test02; ]
The test02 table has 10,000,000 records, and ptest is a parent table
with two distribution child tables ---
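The setup described above can be sketched as follows; the DDL is not shown in the original message, so the column definitions and the inheritance-style partitioning (the usual approach in the 9.x era) are assumptions:

```sql
-- Hypothetical schema: a parent table with two child tables
-- partitioned by a modulo of the key (names and columns assumed).
CREATE TABLE ptest (id integer, payload text);
CREATE TABLE ptest_child0 (CHECK (id % 2 = 0)) INHERITS (ptest);
CREATE TABLE ptest_child1 (CHECK (id % 2 = 1)) INHERITS (ptest);

-- Bulk-load from the source table; with inheritance partitioning,
-- a trigger or rule on ptest would route rows to the children.
INSERT INTO ptest SELECT * FROM test02;
```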
Thanx for the link.
I just think it would be a good idea, instead of posting links to this
list, to include a dense but detailed summary of the situation on your
machine, and to give as much data as possible.
In short, you might do a quantitative compilation of this thread, and present
I've a bunch of databases that must not be dropped under any circumstances.
I was wondering if it is possible to revoke "drop database"
permission for all users, so that even a superuser who wishes
to drop a database would first need to "grant drop database".
Edson Richter wrote:
I've a bunch of databases that must not be dropped under any circumstances.
I was wondering if it is possible to revoke drop database permission
for all users, so that even a superuser who wishes to drop a database
would first need to grant drop database.
I know
高健 wrote:
I have a question about the visibility of the explain plan.
First, I was inserting data into a table, using: [ insert into
ptest select * from test02; ]
The test02 table has 10,000,000 records, and ptest is a parent table
with two distribution child
Bruce Momjian wrote:
On Fri, Nov 9, 2012 at 10:50:42AM +0800, Craig Ringer wrote:
Not sure that's enough of an improvement. Really, these tests should
pass whatever the optimizer parameters are set to; instead of checking
the exact order of the result set, they should check that the
I have a PGSQL 9.0.1 database which is on the back-end of an app I was
stress testing last night. This morning, when I try to run psql, I get:
psql: FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
I believe something in the app is failing to release
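When this error appears, one way to see where the lock table space is going is to inspect pg_locks from a session that can still connect; the config change itself requires a restart. A minimal sketch, assuming a stock postgresql.conf:

```sql
-- Count granted locks per backend to spot the hoarder
-- (requires a working connection, e.g. as superuser).
SELECT pid, count(*) AS locks_held
FROM pg_locks
WHERE granted
GROUP BY pid
ORDER BY locks_held DESC;

-- If you genuinely need more lock slots, raise the limit in
-- postgresql.conf and restart (the default is 64):
-- max_locks_per_transaction = 256
```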
Hi all.
A tablespace is created successfully and so is a test table in it:
# CREATE TABLESPACE ts_name LOCATION '/path/to/tablespace/location';
# SET default_tablespace=ts_name;
# CREATE TABLE ts_test_table ();
From the command line:
$ cd /path/to/tablespace/location
$ ls
PG_9.1_201105231
$ cd
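Rather than poking around the tablespace directory by hand, the server can be asked directly where the test table physically lives; a sketch, using the table name from the example above:

```sql
-- Path of the table's data file, relative to the data directory;
-- for a table in a tablespace it points under pg_tblspc/.
SELECT pg_relation_filepath('ts_test_table');

-- Which tablespace does the table belong to?
SELECT spcname
FROM pg_tablespace t
JOIN pg_class c ON c.reltablespace = t.oid
WHERE c.relname = 'ts_test_table';
```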
Thalis Kalfigkopoulos wrote:
A tablespace is created successfully and so is a test table in it:
# CREATE TABLESPACE ts_name LOCATION '/path/to/tablespace/location';
# SET default_tablespace=ts_name;
# CREATE TABLE ts_test_table ();
From the command line:
$ cd /path/to/tablespace/location
$
Hello there,
Once in awhile, as I am trying to run various versions of the Postgres DB
engine I get a message on startup indicating that my control file is not up
to snuff. Last time it happened with Postgres 9.1 on OpenIndiana 11.
So my questions are as follows:
1) Why does that generally
On Fri, Nov 9, 2012 at 12:29 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
Thalis Kalfigkopoulos wrote:
A tablespace is created successfully and so is a test table in it:
# CREATE TABLESPACE ts_name LOCATION '/path/to/tablespace/location';
# SET default_tablespace=ts_name;
# CREATE
On Fri, Nov 9, 2012 at 8:43 AM, Eliot Gable
egable+pgsql-gene...@gmail.com wrote:
I have a PGSQL 9.0.1 database which is on the back-end of an app I was
stress testing last night. This morning, when I try to run psql, I get:
psql: FATAL: out of shared memory
HINT: You might need to increase
The most likely possibility is that you have a transaction being left
open and accumulating locks. Of course, you have to rule out the
possibility that you simply need to increase max_locks_per_transaction:
if you have a lot of tables, it might be reasonable to have to raise it
above the stock config.
We
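The left-open transactions mentioned above can be found from pg_stat_activity; a sketch using the 9.0/9.1 column names (in 9.2+ the columns are pid/query and the condition is state = 'idle in transaction'):

```sql
-- Sessions sitting idle inside an open transaction are the usual
-- culprits for accumulating locks; the oldest xact_start first.
SELECT procpid, datname, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```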
Hi Everyone,
I have a view made up of a local query unioned with a view comprised of a
dblink query.
If the dblink query cannot establish a connection, I get the could not connect
to server error and the whole view fails.
Setting fail_on_error to false does not work in this situation.
Is
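A common workaround is to wrap the remote query in a PL/pgSQL function that traps the connection failure and returns an empty set, so the UNION view still succeeds when the remote server is down. A sketch; the connection string, remote table, and column list are assumptions:

```sql
-- Hypothetical wrapper: returns remote rows, or nothing if the
-- remote server is unreachable, instead of failing the whole view.
CREATE OR REPLACE FUNCTION remote_rows()
RETURNS TABLE (id integer, val text) AS $$
BEGIN
    RETURN QUERY
        SELECT * FROM dblink('host=remotehost dbname=mydb',
                             'SELECT id, val FROM remote_table')
               AS t(id integer, val text);
EXCEPTION WHEN OTHERS THEN
    RAISE NOTICE 'dblink failed: %', SQLERRM;
    RETURN;  -- empty result set
END;
$$ LANGUAGE plpgsql;

-- The view then unions the local query with the wrapper:
-- CREATE VIEW combined AS
--     SELECT id, val FROM local_table
--     UNION ALL SELECT * FROM remote_rows();
```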
Dear Everybody!
Can somebody show me an example of a phantom read in Repeatable Read mode
(in 9.1, with the new, serializable kind of isolation levels)?
The documentation says it is possible:
Repeatable read: dirty read not possible; nonrepeatable read not possible;
phantom read possible.
But I don't understand why, or how.
C1, C2 are clients.
Every of
Durumdara wrote:
Can somebody show me an example for phantom read in Repeatable
Read mode (in 9.1, new, serialized kind of isolation level)?
The repeatable read isolation level did not change in 9.1. Phantom
reads were not possible in repeatable read transactions either before
or after 9.1.
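The point above can be checked directly with two psql sessions; a sketch, with the table name assumed:

```sql
-- Session C1                          -- Session C2
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t;  -- snapshot taken at the first query
                                       INSERT INTO t VALUES (42);
                                       -- autocommitted immediately
SELECT count(*) FROM t;  -- same count as before: no phantom,
                         -- C1 still sees its original snapshot
COMMIT;
SELECT count(*) FROM t;  -- now the new row is visible
```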
Another process comes along and processes records which are being inserted
into the database. It pulls up to 10 records from a table, processes them,
and moves those records into a processed table. The processing of the
records is rather complex. To facilitate the processing, 6 temporary
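The pull-and-move step described above can be done atomically with a writable CTE (available since 9.1); the table names and batch size below are assumptions:

```sql
-- Atomically claim up to 10 rows and move them to the processed
-- table in one statement (hypothetical table names).
WITH batch AS (
    DELETE FROM pending_records
    WHERE id IN (
        SELECT id FROM pending_records
        ORDER BY id
        LIMIT 10
        FOR UPDATE
    )
    RETURNING *
)
INSERT INTO processed_records SELECT * FROM batch;
```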
On Fri, Nov 9, 2012 at 9:21 PM, George Weaver gwea...@shaw.ca wrote:
Hi Everyone,
I have a view made up of a local query unioned with a view comprised of a
dblink query.
If the dblink query cannot establish a connection, I get the could not
connect to server error and the whole view
Durumdara wrote:
What need to do in C1 to read C2's phantom records?
After posting a reply I remembered another recent thread which was
more-or-less on the same topic, but with a different slant. Perhaps
you would also find that useful:
Thanks, this is exactly what I was
looking for.
One more question: will changing this attribute have any side effects?
Thanks,
Edson Carlos Ericksson Richter
On 2012-11-09 12:42:37 +0100, Albe Laurenz wrote:
Edson Richter wrote:
I've a bunch of databases that cannot be dropped in any case.
I was wondering if it is possible to revoke drop database permission
for all users, in order that
even superuser, if he wishes to drop a database, he will
Andres Freund and...@anarazel.de writes:
On 2012-11-09 12:42:37 +0100, Albe Laurenz wrote:
You cannot play it over permissions, but what about the following trick:
postgres=# CREATE DATABASE persist;
CREATE DATABASE
postgres=# UPDATE pg_database SET datistemplate=TRUE WHERE
On 09/11/2012 20:18, Tom Lane wrote:
Andres Freund and...@anarazel.de writes:
On 2012-11-09 12:42:37 +0100, Albe Laurenz wrote:
You cannot play it over permissions, but what about the following trick:
postgres=# CREATE DATABASE persist;
CREATE DATABASE
postgres=# UPDATE pg_database SET
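Completing the trick quoted above: marking a database as a template prevents DROP DATABASE, even for superusers, until the flag is cleared again. A sketch (the catalog update requires superuser; the database name comes from the example, and the WHERE clause is my reconstruction of the truncated line):

```sql
-- Template databases cannot be dropped.
UPDATE pg_database SET datistemplate = TRUE
WHERE datname = 'persist';

-- DROP DATABASE persist;
-- ERROR:  cannot drop a template database

-- To allow dropping again, clear the flag first:
-- UPDATE pg_database SET datistemplate = FALSE
--  WHERE datname = 'persist';
```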
As I've spent a considerable amount of time trying to sort this out, I'm
posting it for the benefit of other users.
I've experienced persistent, ongoing issues with autovacuum in a mixed
read/write environment with midrange hardware (16 core Xeon, 128 GB RAM,
200 GB SATA3 6 Gb SSDs for disk I/O
On 11/09/2012 04:28 PM, Lists wrote:
As I've spent a considerable amount of time trying to sort this out,
I'm posting it for the benefit of other users.
Having missed the earlier conversation, a couple of comments:
I've experienced persistent, ongoing issues with autovacuum in a mixed
read/write
Hi.
I've been struggling with understanding all the necessary pieces for
streaming replication. So I put down the pieces as I did understand them,
and would appreciate if you guys could point out any of the stuff I
understood or have done wrong.
The set up is pgpool + streaming replication + hot
On Fri, Nov 9, 2012 at 5:28 PM, Lists li...@benjamindsmith.com wrote:
As I've spent a considerable amount of time trying to sort this out, I'm
posting it for the benefit of other users.
SNIP
D) concurrent use of pg_dump;
Not usually a problem, unless it's overloading your IO subsystem.
On Sat, Nov 10, 2012 at 12:26 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
Don't do that. Defaults are good for ensuring that PostgreSQL will start on
the widest reasonable variety of systems. They are *terrible* for
performance and are certainly wrong for the system you describe.
Scott --
...
To tune autovacuum with 50 databases, start by dropping nap time to
something much lower, like 10s. Then if you need to, drop cost delay
until you get to 0. If you get to 0 and it's still not hitting your
IO too hard, but not keeping up, then increase cost limit. If you get
to
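The tuning sequence described above corresponds to these postgresql.conf settings; the values are illustrative starting points, not recommendations for any particular machine:

```
# With ~50 databases, wake the autovacuum launcher more often:
autovacuum_naptime = 10s             # default 1min

# Then reduce the sleep between cost-limited work chunks:
autovacuum_vacuum_cost_delay = 0     # default 20ms

# If still not keeping up at zero delay, allow more work per round:
autovacuum_vacuum_cost_limit = 2000  # default -1 (falls back to
                                     # vacuum_cost_limit, 200)
```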