I can extract a tarball of 3.0-STABLE from CVS for you. Would that help? I do not recommend using an alpha version for a production system.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
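For anyone who wants to pull the branch themselves, a checkout from anonymous CVS would look roughly like the following. This is only a sketch: the pgFoundry repository path and the V3_0_STABLE branch tag used here are assumptions, so check them against the pgpool-II project page before relying on them.

    # Log in to the anonymous CVS server (repository path is an assumption)
    cvs -d :pserver:[email protected]:/cvsroot/pgpool login
    # Check out the 3.0 stable branch; the tag name V3_0_STABLE is a guess
    cvs -d :pserver:[email protected]:/cvsroot/pgpool checkout -r V3_0_STABLE pgpool-II

After checkout, the tree builds with the usual configure/make steps, though a CVS tree may need configure regenerated with autoreconf first.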
> Hi Tatsuo,
>
> I am running 3.0.3-2 at the moment. That could be an option; however,
> I do not have experience with CVS. Do you anticipate that there will
> be another stable package released in the 3.0.x series? If not, is it
> worth trying 3.1 alpha2? This is for a production cluster that we
> hope to go live with in the next few days.
>
> Thanks,
>
> Jonathan
>
>
> On 5/17/2011 5:23 AM, Tatsuo Ishii wrote:
>> I'm not sure which pgpool-II version you are using, but if you are
>> using 3.0.x, you might want to try the latest pgpool-II-3.0-STABLE
>> tree from the source repository.
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
>>
>>> Hi everyone,
>>>
>>> I'm looking for help (and hopefully a solution) with an error I am
>>> getting while using pgpool-II in master/slave + load balancing mode
>>> on top of PostgreSQL 9 in streaming replication mode. In general
>>> pgpool-II is working great, but as noted in the subject, every so
>>> often we get portal errors back from pgpool-II that result in
>>> internal server errors. When our application is connected directly
>>> to PostgreSQL 9, these errors do not occur. Digging deeper, these
>>> errors seem to occur commonly in pgpool-II while in load balance
>>> mode, but only sometimes are they reported back to DBI as an error.
>>> Here is an excerpt from pgpool-II's debug log:
>>>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9083: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9080: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9082: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9085: pool_add_sent_message: portal "" already exists
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9085: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9078: pool_add_sent_message: portal "" already exists
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9051: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9078: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9080: pool_add_sent_message: portal "" already exists
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 9080: Execute: portal name<>
>>> May 13 21:13:22 ip-10-119-14-100 pgpool: 2011-05-13 21:13:22 DEBUG: pid 8910: pool_add_sent_message: portal "" already exists
>>>
>>> On occasion pgpool-II logs this error at the LOG level in addition
>>> to the DEBUG level:
>>>
>>> May 13 20:39:12 web2 pgpool: 2011-05-13 20:39:12 LOG: pid 8107: pool_send_and_wait: Error or notice message from backend: : DB node id: 0 backend pid: 8157 statement: P message: portal "" does not exist
>>>
>>> And when that happens our application receives the error too (and
>>> generates an internal server error).
>>>
>>> This is the error as logged by apache2, which would have originated
>>> from DBI and DBD::Pg:
>>>
>>> [Fri May 13 20:39:15 2011] -e: DBD::Pg::st execute failed: ERROR: portal "" does not exist at /home/www/directory_to_module/a_module.pm line 514.
>>>
>>> What I'm looking for is insight into what this error means, and help
>>> determining whether this is user error in our configuration or usage
>>> of pgpool-II, or a potential bug in pgpool-II. I'd like to note, in
>>> case it helps, that when load balancing is turned off, all portal
>>> errors go away. However, we need load balancing: our application has
>>> reached a point where its peak traffic demands better database
>>> performance, and it continues to grow.
>>>
>>> I could post our pgpool.conf, but it's nothing special; if it would
>>> help, let me know and I'll send it out.
>>>
>>> Thanks for any help or insight!
>>>
>>> -Jonathan
>>>
>>> --
>>> Jonathan Regeimbal
>>> The Richard Group
>>> w: 703.584.5808
>>> c: 540.907.5116
>>> e: [email protected]
> _______________________________________________
> Pgpool-general mailing list
> [email protected]
> http://pgfoundry.org/mailman/listinfo/pgpool-general
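For reference, the master/slave + load balancing setup described in the thread corresponds roughly to pgpool.conf settings like the ones below. This is an illustrative sketch only, not the poster's actual configuration, and the values shown are assumptions.

    # Run on top of PostgreSQL streaming replication (no pgpool-level replication)
    replication_mode = false
    master_slave_mode = true
    master_slave_sub_mode = 'stream'
    # Distribute read-only queries across the backends; turning this off
    # is what made the portal errors disappear in the report above
    load_balance_mode = true

Turning load_balance_mode off sends all queries to the primary, which matches the poster's observation that the portal errors stop when load balancing is disabled.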
