On Thu, Feb 12, 2015 at 3:27 AM, hailong Li wrote:
> Hi, dear pgsql-hackers
Please have a look at
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems
This is the wrong mailing list for this sort of question, and your
report is pretty unclear, so it's hard to tell what might have gone
wrong.
Hi, dear pgsql-hackers
*1. environment*
*DB Master*
$ cat /etc/issue
CentOS release 6.5 (Final)
Kernel \r on an \m
$ uname -av
Linux l-x1.xx.cnx 3.14.29-3.centos6.x86_64 #1 SMP Tue Jan 20 17:48:32
CST 2015 x86_64 x86_64 x86_64 GNU/Linux
$ psql -U postgres
psql (9.3.5)
Type "help" for help
On 29 July 2012 16:39, Peter Geoghegan wrote:
> Many of you will be aware that the behaviour of commit_delay was
> recently changed. Now, the delay only occurs within the group commit
> leader backend, and not within each and every backend committing a
> transaction:
I've moved this to the pgsql-performance list.
Peter,
For some reason I didn't receive the beginning of this thread. Can you
resend it to me, or (better) post it to the pgsql-performance mailing list?
I have a linux system where I can test both on regular disk and on SSD.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
--
> From: Peter Geoghegan [mailto:pe...@2ndquadrant.com]
> Sent: Wednesday, August 01, 2012 8:49 PM
On 1 August 2012 15:14, Amit Kapila wrote:
>> I shall look into this aspect also (setting commit_delay based on raw sync).
>> You also suggest if you want to run the test with different configuration.
On 1 August 2012 15:14, Amit Kapila wrote:
> I shall look into this aspect also (setting commit_delay based on raw sync).
> You also suggest if you want to run the test with different configuration.
Well, I was specifically interested in testing whether half of raw sync
time was a widely useful setting.
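The half-of-raw-sync heuristic under discussion is simple arithmetic; a minimal sketch, assuming you have measured your disk's sustained fsync rate yourself (for instance with pg_test_fsync). The numbers are illustrative, not recommendations:

```python
def suggested_commit_delay(fsyncs_per_second):
    """Half of one raw fsync's duration, in microseconds (commit_delay's unit)."""
    raw_sync_us = 1_000_000 / fsyncs_per_second
    return raw_sync_us / 2

# Illustrative: a disk sustaining 500 fsyncs/sec takes ~2000 us per sync,
# so the heuristic suggests commit_delay = 1000.
delay = suggested_commit_delay(500)
```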
> From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Peter Geoghegan
> Sent: Sunday, July 29, 2012 9:09 PM
> I made what may turn out to be a useful observation during the
> development of the patch, which was that for both the tpc-b.sql and
> insert
Many of you will be aware that the behaviour of commit_delay was
recently changed. Now, the delay only occurs within the group commit
leader backend, and not within each and every backend committing a
transaction:
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f11e8be3e812cdbbc139c1
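With the new semantics only the group-commit leader sleeps, so a nonzero delay is much cheaper than it used to be. A hedged postgresql.conf sketch of the two settings involved (the values are illustrative, not recommendations):

```
commit_delay = 1000      # microseconds the WAL-flush leader waits for more committers; 0 (default) disables
commit_siblings = 5      # only delay when at least this many other transactions are open
```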
du li wrote:
Dear hackers,
I'm working on a Windows application in C# and use Npgsql to connect to a Postgres DB. I'm eager to learn how to make a single setup file that includes both the Windows application and the Postgres DB. My development environment is Visual Studio 2003 and .NET Framework 1.1.
I don't know if ther
Thomas F. O'Connell wrote:
Does auto_vacuum vacuum the system tables?
Yes
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Does auto_vacuum vacuum the system tables?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Feb 16, 2005, at 5:42 PM, Matthew T. O'Connor wrote:
Tom Lane wrote:
[EMAIL PROTECTED] writes:
Matthew T. O'Connor wrote:
> Tom Lane wrote:
>
> >[EMAIL PROTECTED] writes:
> >
> >
> >>Maybe I'm missing something, but shouldn't the prospect of data loss (even
> >>in the presence of admin ignorance) be something that should be
> >>unacceptable? Certainly within the realm "normal PostgreSQL"
Russell Smith wrote:
On Fri, 18 Feb 2005 04:38 pm, Kevin Brown wrote:
Tom Lane wrote:
No, the entire point of this discussion is to whup the DBA upside the
head with a big enough cluestick to get him to install autovacuum.
Once autovacuum is default, it won't matter anymore.
I have a
On Thursday 17 February 2005 07:47, [EMAIL PROTECTED] wrote:
> > Gaetano Mendola <[EMAIL PROTECTED]> writes:
> >> We do ~4000 txn/minute so in 6 months you are screwed up...
> >
> > Sure, but if you ran without vacuuming for 6 months, wouldn't you notice
> > the huge slowdowns from all those dead tuples before that?
On Fri, 18 Feb 2005 08:53 pm, Jürgen Cappel wrote:
> Just wondering after this discussion:
>
> Is transaction wraparound limited to a database or to an installation?
> i.e. can heavy traffic in one db affect another db in the same installation?
>
XID's are global to the pg cluster, or installation.
On Fri, 18 Feb 2005 04:38 pm, Kevin Brown wrote:
> Tom Lane wrote:
> > Gaetano Mendola <[EMAIL PROTECTED]> writes:
> > > BTW, why not do an automatic vacuum instead of a shutdown? At least the
> > > DB does not stop working until someone studies what the problem is and
> > > how to solve it.
> >
> > No,
Just wondering after this discussion:
Is transaction wraparound limited to a database or to an installation?
i.e. can heavy traffic in one db affect another db in the same installation?
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
> > BTW, why not do an automatic vacuum instead of a shutdown? At least the
> > DB does not stop working until someone studies what the problem is and
> > how to solve it.
>
> No, the entire point of this discussion is to whup the DBA upside the head
Greg Stark wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>
>>We do ~4000 txn/minute so in 6 months you are screwed up...
>
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
> huge slowdowns from all those dead tuples before that?
>
In my applications, yes.
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>> We do ~4000 txn/minute so in 6 months you are screwed up...
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice
> the huge slowdowns from all those dead tuples before that?
>
>
I would think that only applies to databases
And most databases get a mix of updates and selects. I would expect it would
be pretty hard to go that long with any significant level of update activity
and no vacuums and not notice the performance problems from the dead tuples.
I think the people who've managed to shoot themselves in the foot this way
are those who decided to "optimize" their cron jobs to only vacuum their
user tables, and forgot about the system catalogs.
On 17 Feb 2005, Greg Stark wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
> > We do ~4000 txn/minute so in 6 months you are screwed up...
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
> huge slowdowns from all those dead tuples before that?
Most people
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> We do ~4000 txn/minute so in 6 months you are screwed up...
Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
huge slowdowns from all those dead tuples before that?
--
greg
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> BTW, why not do an automatic vacuum instead of a shutdown? At least the
> DB does not stop working until someone studies what the problem is and
> how to solve it.
No, the entire point of this discussion is to whup the DBA upside the
head with a big enough cluestick to get him to install autovacuum.
Tom Lane wrote:
> Bruno Wolff III <[EMAIL PROTECTED]> writes:
>
>>I don't think there is much point in making it configurable. If they knew
>>to do that they would most likely know to vacuum as well.
>
>
> Agreed.
>
>
>>However, 100K out of 1G seems too small. Just to get wrap around there
>>must be a pretty high transaction rate.
Stephan Szabo wrote:
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>
>>>Once autovacuum gets to the point where it's used by default, this
>>>particular failure mode should be a thing of the past, but in the
>>>meantime I'm not going to panic about it.
>>
>>I don't know how to say this without sounding like a jerk,
Greg Stark wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>
>
>>Christopher Kings-Lynne wrote:
>>
>>
>>>I wonder if I should point out that we just had 3 people suffering XID
>>>wraparound failure in 2 days in the IRC channel...
>>
>>I have had half a dozen new customers in the last six months that have
Tom Lane wrote:
[EMAIL PROTECTED] writes:
Maybe I'm missing something, but shouldn't the prospect of data loss (even
in the presence of admin ignorance) be something that should be
unacceptable? Certainly within the realm "normal PostgreSQL" operation.
Once autovacuum gets to the point where it's used by default, this
particular failure mode should be a thing of the past, but in the
meantime I'm not going to panic about it.
I think the people who've managed to shoot themselves in the foot this
way are those who decided to "optimize" their cron jobs to only vacuum
their user tables, and forgot about the system catalogs. So it's
probably more of a case of "a little knowledge is a dangerous thing"
than never having heard of vacuuming at all.
Greg Stark <[EMAIL PROTECTED]> writes:
> How are so many people doing so many transactions so soon after installing?
> To hit wraparound you have to do a billion transactions? ("With a `B'") That
> takes real work. If you did 1,000 txn/minute for every minute of every day it
> would still take a couple of years.
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Christopher Kings-Lynne wrote:
>
> > I wonder if I should point out that we just had 3 people suffering XID
> > wraparound failure in 2 days in the IRC channel...
>
> I have had half a dozen new customers in the last six months that have
> had the
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I don't think there is much point in making it configurable. If they knew
> to do that they would most likely know to vacuum as well.
Agreed.
> However, 100K out of 1G seems too small. Just to get wrap around there
> must be a pretty high transaction rate.
Stephan Szabo <[EMAIL PROTECTED]> writes:
> All in all, I figure that odds are very high that if someone isn't
> vacuuming in the rest of the transaction id space, either the transaction
> rate is high enough that 100,000 warning may not be enough or they aren't
> going to pay attention anyway and
Tom Lane wrote:
Maybe
(a) within 200,000 transactions of wrap, every transaction start
delivers a WARNING message;
(b) within 100,000 transactions, forced shutdown as above.
This seems sound enough, but if the DBA and/or SA can't be bothered
reading the docs where this topic features quite prominently...
Stephan Szabo wrote:
On Wed, 16 Feb 2005, Tom Lane wrote:
Stephan Szabo <[EMAIL PROTECTED]> writes:
(a) within 200,000 transactions of wrap, every transaction start
delivers a WARNING message;
(b) within 100,000 transactions, forced shutdown as above.
This seems reasonable, although perhaps the forced shutdown
On Wed, Feb 16, 2005 at 09:38:31 -0800,
Stephan Szabo <[EMAIL PROTECTED]> wrote:
> On Wed, 16 Feb 2005, Tom Lane wrote:
>
> > (a) within 200,000 transactions of wrap, every transaction start
> > delivers a WARNING message;
> >
> > (b) within 100,000 transactions, forced shutdown as above.
>
> T
On Wed, 16 Feb 2005, Tom Lane wrote:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > Right, but since the how to resolve it currently involves executing a
> > query, simply stopping dead won't allow you to resolve it. Also, if we
> > stop at the exact wraparound point, can we run into problems actually
> > trying to do the vacuum if that's still needed?
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Right, but since the how to resolve it currently involves executing a
> query, simply stopping dead won't allow you to resolve it. Also, if we
> stop at the exact wraparound point, can we run into problems actually
> trying to do the vacuum if that's still needed?
>
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> > On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>> >
>> >> >
>> >> > Once autovacuum gets to the point where it's used by default, this
>> >> > particular failure mode should be a thing of the past, but in the
>> >> > meantime I'm not going to panic about it.
> Stephan Szabo <[EMAIL PROTECTED]> writes:
>> Right, but since the how to resolve it currently involves executing a
>> query, simply stopping dead won't allow you to resolve it. Also, if we
>> stop at the exact wraparound point, can we run into problems actually
>> trying to do the vacuum if that's still needed?
>
> On Wed, 16 Feb 2005, Joshua D. Drake wrote:
>
>>
>> >Do you have a useful suggestion about how to fix it? "Stop working" is
>> >handwaving and merely basically saying, "one of you people should do
>> >something about this" is not a solution to the problem, it's not even
>> an
>> >approach towards a solution to the problem.
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
> > On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
> >
> >> >
> >> > Once autovacuum gets to the point where it's used by default, this
> >> > particular failure mode should be a thing of the past, but in the
> >> > meantime I'm not going to panic about it.
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> >
>> > Once autovacuum gets to the point where it's used by default, this
>> > particular failure mode should be a thing of the past, but in the
>> > meantime I'm not going to panic about it.
>>
>> I don't know how to say this without sounding like a jerk,
On Wed, 16 Feb 2005, Joshua D. Drake wrote:
>
> >Do you have a useful suggestion about how to fix it? "Stop working" is
> >handwaving and merely basically saying, "one of you people should do
> >something about this" is not a solution to the problem, it's not even an
> >approach towards a solution to the problem.
Christopher Kings-Lynne wrote:
At this point we have a known critical bug. Usually the PostgreSQL community
is all over critical bugs. Why is this any different?
It sounds to me that people are just annoyed that users don't RTFM.
Get over it. Most won't. If users RTFM more often, it would put most
support companies out of business.
Do you have a useful suggestion about how to fix it? "Stop working" is
handwaving and merely basically saying, "one of you people should do
something about this" is not a solution to the problem, it's not even an
approach towards a solution to the problem.
I believe that the ability for Postgr
At this point we have a known critical bug. Usually the PostgreSQL community
is all over critical bugs. Why is this any different?
It sounds to me that people are just annoyed that users don't RTFM. Get
over it. Most won't. If users RTFM more often, it would put most support
companies out of business.
in the foot. We've seen several instances of people blowing away
pg_xlog and pg_clog, for example, because they "don't need log files".
Or how about failing to keep adequate backups? That's a sure way for an
ignorant admin to lose data too.
There is a difference between actively doing somet
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
> >
> > Once autovacuum gets to the point where it's used by default, this
> > particular failure mode should be a thing of the past, but in the
> > meantime I'm not going to panic about it.
>
> I don't know how to say this without sounding like a jerk,
> [EMAIL PROTECTED] writes:
>> Maybe I'm missing something, but shouldn't the prospect of data loss
>> (even
>> in the presence of admin ignorance) be something that should be
>> unacceptable? Certainly within the realm "normal PostgreSQL" operation.
>
> [ shrug... ] The DBA will always be able to find a way to shoot himself
> in the foot.
[EMAIL PROTECTED] writes:
> Maybe I'm missing something, but shouldn't the prospect of data loss (even
in the presence of admin ignorance) be something that should be
> unacceptable? Certainly within the realm "normal PostgreSQL" operation.
[ shrug... ] The DBA will always be able to find a way to shoot himself in the foot.
>> The checkpointer is entirely incapable of either detecting the problem
>> (it doesn't have enough infrastructure to examine pg_database in a
>> reasonable way) or preventing backends from doing anything if it did
>> know there was a problem.
>
> Well, I guess I meant 'some regularly running process'...
The checkpointer is entirely incapable of either detecting the problem
(it doesn't have enough infrastructure to examine pg_database in a
reasonable way) or preventing backends from doing anything if it did
know there was a problem.
Well, I guess I meant 'some regularly running process'...
I think
> Not being able to issue new transactions *is* data loss --- how are you
> going to get the system out of that state?
Yes, but I also would prefer the server to say something like "The database is
full, please vacuum." - the same as when the hard disk is full and you try
to record something on it.
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> This might seem like a stupid question, but since this is a massive data
> loss potential in PostgreSQL, what's so hard about having the
> checkpointer or something check the transaction counter when it runs and
> either issue a db-wide vacuum
>> I think you're pretty well screwed as far as getting it *all* back goes,
>> but you could use pg_resetxlog to back up the NextXID counter enough to
>> make your tables and databases reappear (and thereby lose the effects of
>> however many recent transactions you back up over).
>>
>> Once you've found a NextXID setting you like, I'd suggest an immediate
>> pg_dumpall/initdb/reload to make sure you have a consistent set of data.
It must be possible to create a tool based on the PostgreSQL sources that
can read all the tuples in a database and dump them to a file stream. All
the data remains in the file until overwritten with data after a vacuum.
It *should* be doable.
If the data in the table is worth anything, then it
I think you're pretty well screwed as far as getting it *all* back goes,
but you could use pg_resetxlog to back up the NextXID counter enough to
make your tables and databases reappear (and thereby lose the effects of
however many recent transactions you back up over).
Once you've found a NextXID setting you like, I'd suggest an immediate
pg_dumpall/initdb/reload to make sure you have a consistent set of data.
> Once you've found a NextXID setting you like, I'd suggest an immediate
> pg_dumpall/initdb/reload to make sure you have a consistent set of data.
> Don't VACUUM, or indeed modify the DB at all, until you have gotten a
> satisfactory dump.
>
> Then put in a cron job to do periodic vacuuming ;-)
"Kouber Saparev" <[EMAIL PROTECTED]> writes:
> After asking the guys in the [EMAIL PROTECTED] channel they told
> me that the reason is the "Transaction ID wraparound", because I have never
> run VACUUM on the whole database.
> So they proposed to ask here for help. I have stopped the server, but
Hi folks,
I ran into big trouble - it seems that my DB is lost.
"select * from pg_database" gives me 0 rows, but I still can connect to
databases with \c and even select from tables there, although they're also
not visible with \dt.
After asking the guys in the [EMAIL PROTECTED] channel they told me that
the reason is the "Transaction ID wraparound".
Hello,
My name is Rong Xie. I am a student at TU-Munich.
I have a question about PostgreSQL and Linux.
For example, for IBM DB2 I can write a test.sql file:
--test.sql
connect to database1;
set schema xie;
select * from table1;
insert into table1 values('rong','xie',22);
select * from table1;
terminate
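For comparison, a PostgreSQL/psql equivalent of the DB2 script above might look like this; psql uses backslash meta-commands in place of connect/terminate, and this sketch reuses the question's own names (database1, xie, table1):

```sql
-- test.sql, run with: psql -f test.sql
\c database1
SET search_path TO xie;
SELECT * FROM table1;
INSERT INTO table1 VALUES ('rong', 'xie', 22);
SELECT * FROM table1;
\q
```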
my cgi program is test.cgi:
###
require "./connectdb.pl";
&connectdatabase();
$query = "select count(*) from messages";
$sth = $dbh->prepare($query);
$sth->execute();
$count = $sth->fetchrow_array();
print "Content-type: text/html\n\n";
print <<"TAG";
The count is $count.
TAG
exit 0;
#