Shigeru Hanada shigeru.han...@gmail.com
7:48 AM (5 hours ago)
to Eliot, pgsql-general
Yan Chunlu wrote:
recently I have found several tables that have rows with exactly the same pkey; here is the definition:
diggcontent_data_account_pkey PRIMARY KEY, btree (thing_id, key)
the data is like this:
159292 | funnypics_link_point | 41
| num
159292 | funnypics_link_point |
On 17 November 2011 06:19, Yan Chunlu springri...@gmail.com wrote:
recently I have found several tables that have rows with exactly the same pkey; here is the definition:
diggcontent_data_account_pkey PRIMARY KEY, btree (thing_id, key)
the data is like this:
159292 | funnypics_link_point | 41
Thank you Tom John.
In this case, there are no updates/deletes - only inserts. For now, I have
set per-table autovacuum rules in order to minimize the frequency of
vacuums while ensuring the statistics are updated frequently with analyze:
Table autovacuum settings: VACUUM base threshold = 5
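Per-table rules like the one above are set with storage parameters on 8.4 and later; a minimal sketch, assuming a hypothetical insert-only table named big_log_table (the thread does not name the actual table):

```sql
-- Hypothetical table name; values mirror the idea above: vacuum rarely,
-- analyze often, since rows are only ever inserted.
ALTER TABLE big_log_table SET (
    autovacuum_vacuum_threshold = 5,
    autovacuum_vacuum_scale_factor = 0.5,    -- effectively defers vacuum
    autovacuum_analyze_scale_factor = 0.01   -- keeps planner statistics fresh
);
```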
Hi Alban,
Thanks for the reply.
1) I'm using PostgreSQL 8.1, so I can't use the RETURNING clause!
2) The function I gave was just to illustrate my understanding! Thanks for spotting the
error though.
Regards,
Siva.
-Original Message-
From: Alban Hertroys [mailto:haram...@gmail.com]
Sent:
Em 17-11-2011 03:19, Yan Chunlu escreveu:
recently I have found several tables that have rows with exactly the same pkey; here is the definition:
diggcontent_data_account_pkey PRIMARY KEY, btree (thing_id, key)
the data is like this:
159292 | funnypics_link_point | 41
srs, need help.
I have several applications accessing my databases. My customers are
divided into separate databases within my pg905 cluster; there are at least 60 to 90
clients per cluster, a heavy volume of writes (inserts,
updates, and deletes), and an average of 50 to 100 simultaneous connections
On Nov 17, 2011 1:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
John R Pierce pie...@hogranch.com writes:
On 11/16/11 4:24 PM, Jason Buberel wrote:
Just wondering if there is ever a reason to vacuum a very large table
(> 1B rows) whose rows are never deleted.
no updates
I am using pgpool's replication feature; it does copy pg_xlog from one
server to another. Could that be the cause of the problem?
thanks for the help!
On Thu, Nov 17, 2011 at 5:38 PM, Edson Richter rich...@simkorp.com.brwrote:
Em 17-11-2011 03:19, Yan Chunlu escreveu:
recently I have
seems they are identical:
159292 | funnypicscn_link_karma |
159292 | funnypicscn_link_karma |
On Thu, Nov 17, 2011 at 4:07 PM, Szymon Guz mabew...@gmail.com wrote:
On 17 November 2011 06:19, Yan Chunlu springri...@gmail.com wrote:
recently I have found several tables that have exactly the
Hi,
I'm on the verge of upgrading a server (Fedora 8 ehehe) running postgresql 8.3
It also has postgis 1.3 installed.
Thinking of using pgadmin3 to perform the backup and then restore it after
I've upgraded the server to fedora 15/16 and thus upgrading postgresql to 9.0.
I seem to remember
On Wed, Nov 16, 2011 at 07:02:11PM -0500, Tom Lane wrote:
I'd try looking to see which row in pg_proc has the latest xmin.
Unfortunately you can't ORDER BY xmin ...
order by age(xmin) ?
Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
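depesz's one-liner above can be written out as a query; a sketch (for recent transactions age(xmin) is smallest, so ascending order surfaces the newest rows first):

```sql
-- Rows in pg_proc ordered from most recently written to oldest.
SELECT proname, xmin, age(xmin)
FROM pg_proc
ORDER BY age(xmin)
LIMIT 10;
```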
On Thu, Nov 17, 2011 at 01:19:30PM +0800, Yan Chunlu wrote:
recently I have found several tables that have rows with exactly the same pkey; here is the definition:
diggcontent_data_account_pkey PRIMARY KEY, btree (thing_id, key)
please check:
select thing_id, key, count(*) from diggcontent_data_account group
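The truncated query above presumably continues along these lines (a sketch; with a valid primary key on (thing_id, key) it should return no rows):

```sql
SELECT thing_id, key, count(*)
FROM diggcontent_data_account
GROUP BY thing_id, key
HAVING count(*) > 1;   -- any result means the unique index is not being enforced
```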
I have a lot of entries like this in the log file
2011-11-17 02:02:46 PYST LOG: checkpoints are occurring too frequently (13
seconds apart)
2011-11-17 02:02:46 PYST HINT: Consider increasing the configuration
parameter checkpoint_segments.
No, the checkpoint parameters in postgresql.conf are:
increase your checkpoint segments
--
GJ
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi Anibal,
On Thu, 17 Nov 2011 09:48:10 -0300, Anibal David Acosta
a...@devshock.com wrote:
What should be a correct value for checkpoint_segments to avoid
excessive checkpoint events?
There is no golden rule or value that fits all scenarios. Usually 32 is
a good value to start with,
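As a sketch of that starting point (illustrative values, not a one-size-fits-all recommendation), the relevant postgresql.conf settings are:

```
checkpoint_segments = 32            # default is 3; raise to space checkpoints out
checkpoint_completion_target = 0.9  # spread checkpoint I/O over the interval
checkpoint_warning = 30s            # emits the "occurring too frequently" hint
```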
Thanks!
-Mensaje original-
De: Gabriele Bartolini [mailto:gabriele.bartol...@2ndquadrant.it]
Enviado el: jueves, 17 de noviembre de 2011 10:14 a.m.
Para: Anibal David Acosta
CC: pgsql-general@postgresql.org
Asunto: Re: [GENERAL] checkpoints are occurring too frequently
Hi Anibal,
I ran into a rather unusual problem today where Postgres brought down a
database to avoid transaction wraparound in a situation where it doesn't appear
that it should have.
The error in the log is explicit enough...
Nov 16 04:00:03 SRP1 postgres[58101]: [1-1] FATAL: database is not
Hello,
I have two servers with battery backed power supplies (USV, i.e. UPS). So it is unlikely
that both will crash at the same time.
Will synchronous replication work with fsync=off?
That means we will commit to system cache, but not to disk. Data will not
survive a system crash but the second system
Zitat von Siva Palanisamy siv...@hcl.com:
Hi Alban,
Thanks for the reply.
1) I'm using PostgreSQL 8.1, so I can't use the RETURNING clause!
You should upgrade ASAP! 8.1 is out of its support lifetime.
Regards, Andreas
What if power supply goes ?
What if someone trips on the cable, and both servers go ?
Em 17-11-2011 09:21, Yan Chunlu escreveu:
I am using pgpool's replication feature; it does copy pg_xlog from one
server to another. Could that be the cause of the problem?
I did not mean that this IS your problem, I just gave you a tip
regarding a problem I had in the past, that eventually
Craig Ringer ring...@ringerc.id.au writes:
On Nov 17, 2011 1:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
If it's purely an insert-only table, such as a logging table, then in
principle you only need periodic ANALYZEs and not any VACUUMs.
Won't a VACUUM FREEZE (or autovac equivalent) be
On Thu, Nov 17, 2011 at 7:52 AM, Schubert, Joerg jschub...@cebacus.de wrote:
Hello,
I have two servers with battery backed power supply (USV). So it is
unlikely, that both will crash at the same time.
Will synchronous replication work with fsync=off?
That means we will commit to system
On Thu, Nov 17, 2011 at 9:07 AM, Jaime Casanova ja...@2ndquadrant.com wrote:
On Thu, Nov 17, 2011 at 7:52 AM, Schubert, Joerg jschub...@cebacus.de wrote:
Hello,
I have two servers with battery backed power supply (USV). So it is
unlikely, that both will crash at the same time.
Will
On 17 Listopad 2011, 17:07, Jaime Casanova wrote:
On Thu, Nov 17, 2011 at 7:52 AM, Schubert, Joerg jschub...@cebacus.de
wrote:
Hello,
I have two servers with battery backed power supply (USV). So it is
unlikely, that both will crash at the same time.
Will synchronous replication work with
I am in need of a tool or method to see each/every SQL query that hits
the PostgreSQL database. By query I mean the query in SQL syntax with
all the parameters passed.
What I want to do is:
1) see the query
2) Determine how long the query takes to execute
3) Possibly log both of
On Thu, Nov 17, 2011 at 09:29:11AM -0700, J.V. wrote:
I am in need of a tool or method to see each/every SQL query that
hits the PostgreSQL database. By query I mean the query in SQL
syntax with all the parameters passed.
What I want to do is:
1) see the query
2) Determine how
On 17 Listopad 2011, 17:32, hubert depesz lubaczewski wrote:
On Thu, Nov 17, 2011 at 09:29:11AM -0700, J.V. wrote:
I am in need of a tool or method to see each/every SQL query that
hits the PostgreSQL database. By query I mean the query in SQL
syntax with all the parameters passed.
What I
I'm writing a custom C function and one of the things it needs to do is
to be configured from the SQL-land, per user session (different users
have different configurations in different sessions).
I have found (and have used) the SET SESSION command and the
current_setting() function for use with
On Thu, Nov 17, 2011 at 11:46 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 17 Listopad 2011, 17:32, hubert depesz lubaczewski wrote:
On Thu, Nov 17, 2011 at 09:29:11AM -0700, J.V. wrote:
I am in need of a tool or method to see each/every SQL query that
hits the PostgreSQL database. By query I
Hi All,
I'm using PostgreSQL 8.1.4. I have 3 tables: one being the core (table1), the others
dependents (table2, table3). I inserted 7 records into table1 and the
appropriate related records into the other 2 tables. As I'd used CASCADE, I was
able to delete the related records using DELETE FROM table1;
Ivan Voras ivo...@freebsd.org writes:
Ideally, the C module would create its own custom variable class,
named e.g. module, then define some setting, e.g. module.setting.
The users would then execute an SQL command such as SET SESSION
module.setting='something', and the module would need to
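On the 8.x/9.0-9.1 series this pattern required declaring the class in postgresql.conf; a sketch, with the name module.setting taken from the message above:

```sql
-- postgresql.conf must contain: custom_variable_classes = 'module'
-- (that setting was removed in 9.2, where any dotted name is accepted directly)
SET SESSION module.setting = 'something';
SELECT current_setting('module.setting');            -- reads it back
SELECT set_config('module.setting', 'other', false); -- function form; false = session scope
```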
Hi All,
I'm using PostgreSQL 8.1.4. I have 3 tables: one being the core (table1), the others
dependents (table2, table3). I inserted 7 records into table1 and the
appropriate related records into the other 2 tables. As I'd used CASCADE, I was
able to delete the related records using DELETE FROM table1;
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Siva Palanisamy
Sent: Thursday, November 17, 2011 1:04 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Please recommend me the best bulk-delete option
Hi All,
I'm
Hi.
On 17 Listopad 2011, 19:03, Siva Palanisamy wrote:
Hi All,
I'm using PostgreSQL 8.1.4. I've 3 tables: one being the core (table1),
That's a bit old - update to 8.1.23 (or to a newer version, if possible).
others are dependents (table2,table3). I inserted 7 records in table1
and
On 17 Listopad 2011, 19:26, David Johnston wrote:
Anyway, if you execute the three TRUNCATEs in the proper order, thus
avoiding any kind of cascade, you should get maximum performance possible
on your UNSUPPORTED VERSION of PostgreSQL.
AFAIK cascade with TRUNCATE means 'truncate the depending
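Spelled out with the thread's table names, the proper order means truncating the referencing tables before the referenced one; a sketch (note that 8.1's TRUNCATE may still refuse a table that is the target of a foreign key, in which case DELETE or temporarily dropping the constraints is the fallback):

```sql
-- 8.1: one table per statement, dependents first
TRUNCATE table3;
TRUNCATE table2;
TRUNCATE table1;
-- 8.2 and later accept the whole dependency group at once:
-- TRUNCATE table1, table2, table3;   -- or: TRUNCATE table1 CASCADE;
```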
Hi All.
I am in the middle of a process to get all my data into utf8. As it's
not all converted yet, my database encoding is SQL_ASCII.
I am getting external apps fixed up to write utf8 to the database, and
so far so good. But, I ran across some stuff that needs a one time
convert, and
On 11/17/11 2:34 AM, Emanuel Araújo wrote:
Based on my scenario, can anyone help me?
how can we help you? you didn't ask any questions (other than the above
metaquestion, which is unanswerable)
I might note in passing... a connection pool will only help if your
applications are written
This query is taking much longer on 9.1 than it did on 8.4. Why is it
using a seq scan?
=> explain verbose SELECT status,EXISTS(SELECT 1 FROM eventlog e WHERE
e.uid = ml.uid AND e.jobid = ml.jobid AND type = 4),EXISTS(SELECT 1 FROM
eventlog e WHERE e.uid = ml.uid AND e.jobid = ml.jobid AND type =
On Nov 17, 2011, at 14:24, Joseph Shraibman wrote:
This query is taking much longer on 9.1 than it did on 8.4. Why is it
using a seq scan?
Without seeing the table definition (including indexes) as well as the output
of EXPLAIN for 8.4, it's kind of hard to say.
Does this formulation of
On 11/17/2011 03:30 PM, Michael Glaesemann wrote:
On Nov 17, 2011, at 14:24, Joseph Shraibman wrote:
This query is taking much longer on 9.1 than it did on 8.4. Why is it
using a seq scan?
Without seeing the table definition (including indexes) as well as the output
of EXPLAIN for
How is this accomplished?
Is it possible to log queries to a table with additional information?
1) num rows returned (if a select)
2) time to complete the query
3) other info?
How is enabling this actually done?
On 11/17/2011 9:32 AM, hubert depesz lubaczewski wrote:
On Thu, Nov 17, 2011 at
What is a GUC and how do I use it?
On 11/17/2011 9:46 AM, Tomas Vondra wrote:
On 17 Listopad 2011, 17:32, hubert depesz lubaczewski wrote:
On Thu, Nov 17, 2011 at 09:29:11AM -0700, J.V. wrote:
I am in need of a tool or method to see each/every SQL query that
hits the PostgreSQL database. By
On Thu, Nov 17, 2011 at 2:59 AM, Raghavendra
raghavendra@enterprisedb.com wrote:
Shigeru Hanada shigeru.han...@gmail.com
7:48 AM (5 hours ago)
to Eliot, pgsql-general
On Thu, 17 Nov 2011 14:32:22 -0700
J.V. jvsr...@gmail.com wrote:
How is this accomplished?
The best way that I know of is to use pgFouine.
The documentation for pgFouine should get you started.
HTH,
Bill
On Thu, Nov 17, 2011 at 4:32 PM, J.V. jvsr...@gmail.com wrote:
How is this accomplished?
Is it possible to log queries to a table with additional information?
1) num rows returned (if a select)
This isn't logged
2) time to complete the query
This is logged
3) other info?
Take a
On 17 Listopad 2011, 22:34, J.V. wrote:
What is a GUC and how do I use it?
It just means there's a config option log_min_duration_statement that you
can set in postgresql.conf. Set it e.g. to 100, reload the configuration
(e.g. by restarting the server or sending HUP signal to the process) and
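The setting being described is just a postgresql.conf line; for example:

```
# Log every statement that runs for 100 ms or longer, together with its
# duration; 0 logs everything, -1 (the default) disables the feature.
log_min_duration_statement = 100
```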
Hi !
When attempting to start Postgres 9.1.1 with hba conf for local
connections on Windows, I get an error.
e.g. I tried adding the following line to pg_hba.conf
local   all   user1   trust
and I get:
pg_ctl -D pgsql\data -w start
waiting for server to
On Thursday, November 17, 2011 3:41:22 pm deepak wrote:
Hi !
When attempting to start Postgres 9.1.1 with hba conf for local
connections on Windows, I get an error.
e.g. I tried adding the following line to pg_hba.conf
local   all   user1   trust
and
On 17/11/2011 23:41, deepak wrote:
Hi !
When attempting to start Postgres 9.1.1 with hba conf for local
connections on Windows, I get an error.
e.g. I tried adding the following line to pg_hba.conf
local   all   user1   trust
and I get:
pg_ctl -D
On 17 November 2011 19:02, Tom Lane t...@sss.pgh.pa.us wrote:
Ivan Voras ivo...@freebsd.org writes:
Ideally, the C module would create its own custom variable class,
named e.g. module, then define some setting, e.g. module.setting.
The users would then execute an SQL command such as SET
Ivan Voras ivo...@freebsd.org writes:
Is there any way to make _PG_init() called earlier, e.g. as soon as
the session is established or at database connection time, something
like that?
Preload the library --- see shared/local_preload_libraries configuration
settings.
On 18 November 2011 01:20, Tom Lane t...@sss.pgh.pa.us wrote:
Ivan Voras ivo...@freebsd.org writes:
Is there any way to make _PG_init() called earlier, e.g. as soon as
the session is established or at database connection time, something
like that?
Preload the library --- see
On Thursday, November 17, 2011 3:41:22 pm deepak wrote:
Hi !
Although it is not clear what options I have to use while building/configuring.
This same configuration used to work with Postgres 9.0.3, though.
Any thoughts?
Error in my previous post: the setting should be host, not localhost.
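Since the Windows builds in question reject 'local' (Unix-domain socket) lines, the equivalent rule uses a host entry over loopback; a sketch:

```
# TYPE  DATABASE  USER   ADDRESS        METHOD
host    all       user1  127.0.0.1/32   trust
```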
On Mon, Nov 14, 2011 at 1:45 PM, Venkat Balaji venkat.bal...@verse.in wrote:
Question: what can I do to rsync only the new additions in every table
starting 00:00:01 until 23:59:59 for each day?
A table level replication (like Slony) should help here.
Slony needs more than one physical
Hi. I have a massive traffic website.
I keep getting FATAL: Sorry, too many clients already problems.
It's a Quad core machine with dual servers, 4 SCSI disks with RAID 10,
with RAM of 8GB.
Server is Nginx backed by Apache for the php.
Postgresql just has to do about 1000 SELECTs a minute, and
On 11/17/11 4:44 PM, Phoenix Kiula wrote:
I keep getting FATAL: Sorry, too many clients already problems.
It's a Quad core machine with dual servers, 4 SCSI disks with RAID 10,
with RAM of 8GB.
Server is Nginx backed by Apache for the php.
Postgresql just has to do about 1000 SELECTs a
I've performed a very similar upgrade including postgis upgrade at the same
time, we used the following command examples ... also put some simple scripting
together to dump multiple databases in parallel as downtime was critical:
Dump database data: pg_dump -Fc database --compress=1
On 11/17/2011 04:44 PM, Phoenix Kiula wrote:
Hi. I have a massive traffic website.
Massive = what, exactly?
I keep getting FATAL: Sorry, too many clients already problems.
It's a Quad core machine with dual servers, 4 SCSI disks with RAID 10,
with RAM of 8GB.
Database only? Or is it also
Hi, there's a pretty good wiki page about tuning PostgreSQL databases:
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
On 18 Listopad 2011, 1:44, Phoenix Kiula wrote:
Hi. I have a massive traffic website.
I keep getting FATAL: Sorry, too many clients already problems.
That has
I need to assemble a complete data dictionary for project documentation and
other purposes and I was wondering about the pros and cons of using the
pg_catalog metadata. But I hesitate to poke around in here because I don't know
why it's kept so out of sight and not much documented. But it seems
On Nov 17, 2011, at 22:17, Bill Thoen bth...@gisnet.com wrote:
I need to assemble a complete data dictionary for project documentation and
other purposes and I was wondering about the pros and cons of using the
pg_catalog metadata. But I hesitate to poke around in here because I don't
know
On 11/17/11 7:17 PM, Bill Thoen wrote:
I need to assemble a complete data dictionary for project documentation and
other purposes and I was wondering about the pros and cons of using the
pg_catalog metadata. But I hesitate to poke around in here because I don't know
why it's kept so out of
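For a data dictionary, the SQL-standard information_schema views (which are built on top of pg_catalog) are the more stable interface; a sketch that lists every user column with its type:

```sql
SELECT table_schema, table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name, ordinal_position;
```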
[PostgreSQL 8.3.9]
I have a query, as follows
SELECT DISTINCT ON(category) category
FROM gdb_books
WHERE category LIKE 'Fiction%'
GROUP BY category
The (partial) result is this:
...
# Fiction - General (A)
# Fiction - General - Anthologies
# Fiction - General (B)
# Fiction - General (C)
#
On Nov 17, 2011, at 7:14 PM, Good Day Books wrote:
Does anyone have an explanation why this is not so; are the special
characters (parenthesis, hyphen) just ignored? If so, is there a way to
force ORDER BY to include the special characters in the sort?
The query as shown doesn't actually
On Fri, Nov 18, 2011 at 12:14:35PM +0900, Good Day Books wrote:
[PostgreSQL 8.3.9]
I have a query, as follows
SELECT DISTINCT ON(category) category
FROM gdb_books
WHERE category LIKE 'Fiction%'
GROUP BY category
Does anyone have an explanation why this is not so; are the special
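The order shown is characteristic of a linguistic locale (e.g. en_US), which gives punctuation low weight; forcing a byte-order comparison restores the expected grouping. Two sketches, since the poster is on 8.3 (which lacks the COLLATE clause added in 9.1):

```sql
-- 9.1 and later: per-expression collation
SELECT DISTINCT category FROM gdb_books
WHERE category LIKE 'Fiction%'
ORDER BY category COLLATE "C";

-- 8.3 workaround: the text_pattern_ops operator ~<~ compares byte-by-byte
SELECT DISTINCT category FROM gdb_books
WHERE category LIKE 'Fiction%'
ORDER BY category USING ~<~;
```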
On Sun, Nov 13, 2011 at 7:01 AM, Craig Ringer ring...@ringerc.id.au wrote:
On Nov 13, 2011 7:39 PM, Phoenix Kiula
Searching google leads to complex things like incremental WAL and
whatnot, or talks of stuff like pgcluster. I'm hoping there's a more
straightforward core solution without
On Fri, Nov 18, 2011 at 6:08 AM, Phoenix Kiula phoenix.ki...@gmail.comwrote:
On Mon, Nov 14, 2011 at 1:45 PM, Venkat Balaji venkat.bal...@verse.in
wrote:
Question: what can I do to rsync only the new additions in every table
starting 00:00:01 until 23:59:59 for each day?
A table level
Hi,
I am using PostgreSQL 9.0. It is installed on Windows XP. I am trying to take
a backup of the database using pg_dump. But each time I give a command
pg_dump pgdb > backup.sql
I am prompted for a password and I provided the database password. After this,
I got an error as follows..
On 11/17/11 11:01 PM, mamatha_kagathi_c...@dell.com wrote:
I am using PostgreSQL 9.0. It is installed on Windows XP. I am trying
to take a backup of the database using pg_dump. But each time I give a
command
pg_dump pgdb > backup.sql
I am prompted for a password and I provided the database
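For an unattended dump the password prompt is usually avoided with the PGPASSWORD environment variable or a pgpass file (%APPDATA%\postgresql\pgpass.conf on Windows); a sketch with illustrative names:

```shell
# Illustrative user/database names; prefer a pgpass file over PGPASSWORD
# in anything security-sensitive, since environment variables can leak.
export PGPASSWORD=secret
pg_dump -U postgres -f backup.sql pgdb
```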