> We use Zabbix.
There's a Zabbix template for PostgreSQL called "pg_monz".
http://pg-monz.github.io/pg_monz/index-en.html
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
benchmarking result, at least
for me. I've never seen better results with HT on, except on POWER.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
connect to PostgreSQL directly. Note that there is no problem if some
sessions connect via pgpool-II while others connect directly to
PostgreSQL.
ion will be turned off.
If you have further questions, you should subscribe and post the
question there:
http://www.pgpool.net/mailman/listinfo/pgpool-general
> this is my pool
d related macros
* in postgres.h. Many datatypes assume that any allocatable size can
* be represented in a varlena header.
IMO the SQL string size limit is totally different from
TOAST. Shouldn't we have a different limit for SQL strings?
(MAX_CONVERSION_GROWTH is different story, of course)
> On Tue, Jun 12, 2012 at 1:54 AM, Tatsuo Ishii wrote:
>>> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes wrote:
>>>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii wrote:
>>>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
> On Sun, Jun 10, 2012 at 4:47 PM, Jeff Janes wrote:
>> On Wed, May 30, 2012 at 2:06 AM, Tatsuo Ishii wrote:
>>>> Yeah, Jeff's experiments indicated that the remaining bottleneck is lock
>>>> management in the server. What I fixed so far on the pg_dump side
> On Thu, May 31, 2012 at 10:45 AM, Tatsuo Ishii wrote:
>> Just for record, I rerun the test again with my single-LOCK patch, and
>> now total runtime of pg_dump is 113 minutes.
>> 188 minutes(9.0)->125 minutes(git master)->113 minutes(with my patch).
>>
the test again with my single-LOCK patch, and
now total runtime of pg_dump is 113 minutes:
188 minutes (9.0) -> 125 minutes (git master) -> 113 minutes (with my patch).
I'm glad to see 40% time savings at this point.
time in pg_dump is spent on SELECTs
against system catalogs. The fix will be to turn many SELECTs into a
single SELECT, probably using a big IN clause on table OIDs.
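To sketch that idea (this is just an illustration, not pg_dump's actual code; the helper name and query shape are made up, though pg_class, oid, and relname are real catalog identifiers):

```python
def batched_catalog_query(table_oids):
    """Build one catalog query covering many tables, instead of
    issuing one query per table (hypothetical sketch)."""
    in_list = ", ".join(str(oid) for oid in table_oids)
    return ("SELECT oid, relname FROM pg_class "
            "WHERE oid IN (%s)" % in_list)

# One round trip covers all tables:
# batched_catalog_query([16384, 16390])
# -> "SELECT oid, relname FROM pg_class WHERE oid IN (16384, 16390)"
```

The point is that N round trips to the server collapse into one, so the per-query overhead is paid once rather than once per table.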
k 13 minutes in total, now
> it only takes 3 seconds. Comments?
Shall I commit to master and all supported branches?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
--
Sent via pgsql-performance mailing list (pgsql-performance@pos
if psql is taking
> AccessShareLock on lots of tables, there's still a problem.
Ok, I modified the part of pg_dump where a tremendous number of LOCK
TABLE statements are issued. I replaced them with a single LOCK TABLE
naming multiple tables. With 100k tables, the LOCK statements took 13
minutes in total; now it only takes 3 seconds.
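A rough sketch of that batching (a hypothetical helper, not the actual pg_dump patch; LOCK TABLE with a comma-separated table list is standard PostgreSQL syntax):

```python
def batched_lock_statement(tables):
    """Lock many tables in one statement rather than issuing one
    LOCK TABLE per table (hypothetical sketch)."""
    quoted = ", ".join('"%s"' % t for t in tables)
    return "LOCK TABLE %s IN ACCESS SHARE MODE" % quoted

# batched_lock_statement(["t1", "t2"])
# -> 'LOCK TABLE "t1", "t2" IN ACCESS SHARE MODE'
```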
performance of pg_dump
might be highlighted more in the future.
P.S. You need to increase max_locks_per_transaction before running
pg_dump (I raised it to 640 in my case).
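For sizing, the PostgreSQL docs give the shared lock table capacity as roughly max_locks_per_transaction * (max_connections + max_prepared_transactions); a tiny sketch of that arithmetic (the helper name is mine):

```python
def lock_table_capacity(max_locks_per_transaction, max_connections,
                        max_prepared_transactions=0):
    """Approximate number of objects lockable at once, per the
    sizing formula in the PostgreSQL documentation."""
    return max_locks_per_transaction * (
        max_connections + max_prepared_transactions)

# With the 640 mentioned above and, say, 100 connections:
# lock_table_capacity(640, 100) -> 64000 lockable objects
```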
gsql-committers/2012-03/msg00230.php
I'm wondering if these fixes (or today's commit) cover the case of a
database with ~100 thousand tables and indexes. One of my customers
has had trouble with pg_dump on such a database; it takes over 10
hours.
only the
nullable side of the join. These changes provide additional query
optimization possibilities.
lAdmin.
> (Have tried to post to http://pgsqlpgpool.blogspot.com - with no success)
It's my personal blog :-) Please post to the pgpool-general mailing list.
You can subscribe to it at:
http://lists.pgfoundry.org/mailman/listinfo/pgpool-general
> *) function reads (only) from tables, or is an immutable function in
> most senses but influenced from the GUC (or any other out-of-scope
> thing): STABLE
If the above is correct, it seems I can say that STABLE functions
should never modify the database either.
> *) all other cases: VOLATILE
From: Rajesh Kumar Mallah
Subject: Re: [PERFORM] Pooling in Core WAS: Need help in performance tuning.
Date: Mon, 19 Jul 2010 08:06:09 +0530
Message-ID:
> Thanks for the thought but it (-C) does not work.
Still you need:
pgbench's -c <= (pool_size + reserve_pool_size)
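As a sketch of that constraint (the parameter names follow the pooler settings mentioned above; the helper itself is hypothetical):

```python
def clients_fit_pool(pgbench_clients, pool_size, reserve_pool_size=0):
    """Check that pgbench's -c does not exceed the pooler's total
    connection slots (pool_size + reserve_pool_size)."""
    return pgbench_clients <= pool_size + reserve_pool_size

# clients_fit_pool(32, 32)     -> True  (fits exactly)
# clients_fit_pool(40, 32, 5)  -> False (3 clients too many)
```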
very much like a real-world application which does not use
connection pooling. You will be surprised how large the PostgreSQL
connection overhead is.
> -with-pgsql-libdir = PostgreSQL dir/lib/
>
> But still could not resolve the issue.
What are the exact error messages? What kind of platform are you
using? What pgpool-II version?
> Perhaps we should make that change only in HEAD, not in the
> back branches, or crank it up only to 512 in the back branches?
I'm a little bit worried about cranking up PG_SYSLOG_LIMIT in the back
branches. Cranking it up will definitely change the syslog message
text style and might confuse
r you.
As a member of the pgpool development team, I am always looking for
real-world pgpool examples that could be made public. Can you please
tell me more details about your pgpool usage, if possible?
> > 3. is there a way to somehow log what happened to the
> > Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > > Interesting. We (some Japanese companies including SRA OSS,
> > > Inc. Japan) did some PG scalability testing using a Unisys's big 16
> > > (physical) CPU machine and found PG scales up to 8 CPUs.
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > Interesting. We (some Japanese companies including SRA OSS,
> > Inc. Japan) did some PG scalability testing using a Unisys's big 16
> > (physical) CPU machine and found PG scales up to 8 CPUs. However
> > beyon
is available at the
moment. Please use some automatic translation services)
Evaluation environment was:
PostgreSQL 8.1.2
OSDL DBT-1 2.1
Miracle Linux 4.0
Unisys ES700, Xeon 2.8GHz CPU x 16, Mem 16GB (HT off)
---(end of broadcast)---
TIP 6: explain analyze is your friend
d idea (though of course only for the "standard" scripts).
> Tatsuo, what do you think?
That would be annoying, since almost every user would get that kind of
warning. What about improving the README?
at's the code relationship between pgPool and pgCluster, if
> any?
PGCluster consists of three kinds of servers: a "load balance server",
a "cluster server" (a modified PostgreSQL backend), and a "replication
server". I believe some of the code of pgpool
> > > Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > > > Probably VACUUM works well for small to medium size tables, but not
> > > > for huge ones. I'm considering about to implement "on the spot
> > > > salvaging dead tuples".
>
ng back and forth to the master -- as long as the
> > connection-recycling
> > timeout were set higher than the pgPool switch-off period.
>
> no. you can only count on web-server-process==connection, but not
> web-user==connection, unless you can guarantee that the same user
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > Probably VACUUM works well for small to medium size tables, but not
> > for huge ones. I'm considering about to implement "on the spot
> > salvaging dead tuples".
>
> That's impossible on i
Depends on your connection pooling software, I suppose. Most connection
> pooling software only returns connections to the pool after a user has been
> inactive for some period ... generally more than 3 seconds. So connection
> continuity could be trusted.
Not sure what you mean by "most connection pooling software", but I'm
sure that pgpool behaves differently.
> The alternative is, of course, that pgPool direct all explicit transactions
> to
> the master ... which is a good idea anyway. So you could do:
>
> BEGIN;
> SELECT some_update_function();
> COMMIT;
Yes. pgpool has already done this in load balancing. Expanding this
fo
does not help. It just makes the
execution time of VACUUM longer, which means more and more dead tuples
are created while updating.
Probably VACUUM works well for small to medium size tables, but not
for huge ones. I'm considering implementing "on the spot
salvaging of dead tuples".
monitor "update switching" by
*connection* not by *table*" means. In your example:
> (1) 00:00 User A updates "My Profile"
> (2) 00:01 "My Profile" UPDATE finishes executing.
> (3) 00:02 User A sees "My Profile" re-displayed
> (6) 00:04
at period. Then
> it switches back to "pool" mode where the slaves may be used.
Can I ask a question?
Suppose table A gets updated on the master at time 00:00. Until 00:03
pgpool needs to send all queries regarding A to the master only. My
question is, how can pgpool know whether a query is related to A?
lony-I. I.e.
1) pgpool does the load balancing and sends a query to both Slony-I's
slave and master if the query is a SELECT.
2) pgpool sends a query only to the master if the query is other than
a SELECT.
The remaining problem is that Slony-I is not a synchronous replication
solution. Thus you need to prepare th
FSM size: 100 relations + 1600 pages = 19 kB shared memory.
In this case 1808 page slots is the minimum FSM size. Of course this
number will change depending on the frequency of VACUUM, so you need
to leave some headroom in the FSM settings.
> Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> > First, it's not a particular problem with pgpool. As far as I know any
> > connection pool solution has exactly the same problem. Second, it's
> > easy to fix if PostgreSQL provides a functionality such as:
has exactly the same problem. Second, it's
easy to fix if PostgreSQL provides a functionality such as "drop all
temporary tables if any". I think we should implement it if we agree
that connection pooling should be implemented outside the PostgreSQL
engine itself. I think the core team agrees w
get around them, you get the idea.
I don't know exactly what you are referring to in the above URL when
you talk about "potential pitfalls of pooling". Please explain
more.
ng
> forked to make up for it.
>
> So I raised it to a higher value (256) and it immediately segfaulted and
> dropped the core. So not sure exactly how to proceed, since I rather
> need the thing to fork additional servers as load hits and not the other
> way around.
What version of pgpool did you test? I know that a certain version
(actually 2.0.2) had such a problem. Can you try again with the
latest version of pgpool? (It's 2.0.6.)
                                            4330.290294  99.4%  94.1%
with pgpool (replication mode)              4297.614996  98.6%  87.6%
with pgpool (replication with strict mode)  4270.223136  98.0%  81.5%