Re: [HACKERS] [pgsql-advocacy] Not 7.5, but 8.0 ?

2003-11-19 Thread Nick Fankhauser
> > > Least interesting to many users perhaps, but lots of them seem to think
> > > that it's important for expanding our userbase:
> > > http://www.postgresql.org/survey.php?View=1&SurveyID=9

> > That does not say that better entertainment will attract new
> > viewers, just that the existing viewers think that.

Perhaps more compelling is this survey, which shows that 21% of the users
are actually on the win32/cygwin platform now and hence are not enjoying the
performance or ease of installation that the other 79% of us get.

http://www.postgresql.org/survey.php?View=1&SurveyID=11

-Nick






Re: [JDBC] [HACKERS] PostgreSQL JDBC and sub-select

2002-11-10 Thread Nick Fankhauser

> > You could set up query logging in the backend and see what the offending
> > query is. It may still be something you did (a missing or extra
> > something somewhere).
> >
> How ?

These settings have worked for me in a similar situation: (pulled from the
admin list archives)


My goal was to get all of the SQL statements from a JDBC front end logged
to the postgres.log file as they are executed (and not to syslog). Adding
the following to my postgresql.conf did the job:

syslog = 0               # log to stderr only, not via syslog
silent_mode = off        # don't detach and discard stderr output
debug_print_query = on   # write each query to the server log
debug_pretty_print = on  # indent debug output (parse/plan dumps)

I'm not sure if the pretty print option does anything for the SQL, but it
didn't hurt.


The results appear in /var/log/postgresql.log on the Debian Linux
distribution; I'm not sure of the location in others.
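
In case it helps to confirm that the logging is working, here is a minimal
JDBC sketch (the connection URL, database, user, password, and class name
are all made up - substitute your own). With debug_print_query = on, the
SELECT below should show up verbatim in the server log:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LogCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details - substitute your own.
        Class.forName("org.postgresql.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "secret");
        Statement stmt = conn.createStatement();
        // With debug_print_query = on, the backend logs this statement
        // when it executes it.
        ResultSet rs = stmt.executeQuery("SELECT version()");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}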

-Nick





Re: [HACKERS] Script to compute random page cost

2002-09-09 Thread Nick Fankhauser

Hi again-

I bounced these numbers off of Ray Ontko here at our shop, and he pointed
out that random_page_cost is measured in multiples of a sequential page
fetch. It seems almost impossible that a random fetch would be less
expensive than a sequential fetch, yet we all seem to be getting results
below 1. I can't see anything obviously wrong with the script, but something
very odd is going on.
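
For what it's worth, here is a toy timing sketch (in Java; this is not
Bruce's script, and the file argument, page size, and read count are
arbitrary choices of mine) of the kind of comparison involved. Caching
effects are one way a ratio near or even below 1 can appear: if the pages
being read are already in the OS cache, the "random" pass says little about
real disk behaviour.

import java.io.RandomAccessFile;
import java.util.Random;

public class PageCostToy {
    static final int PAGE = 8192;   // PostgreSQL page size
    static final int READS = 2000;  // pages fetched in each pass

    public static void main(String[] args) throws Exception {
        // Expects a file comfortably larger than READS * PAGE bytes.
        RandomAccessFile f = new RandomAccessFile(args[0], "r");
        long pages = f.length() / PAGE;
        byte[] buf = new byte[PAGE];

        // Sequential pass: read READS consecutive pages from the start.
        long t0 = System.currentTimeMillis();
        f.seek(0);
        for (int i = 0; i < READS; i++) {
            f.readFully(buf);
        }
        long seqMs = System.currentTimeMillis() - t0;

        // Random pass: read READS pages at random offsets. If these pages
        // are already in the OS cache (e.g. from an earlier run), the reads
        // are served from memory and can look cheaper than cold sequential
        // reads, dragging the ratio toward or below 1.
        Random r = new Random();
        long t1 = System.currentTimeMillis();
        for (int i = 0; i < READS; i++) {
            f.seek((long) (r.nextDouble() * pages) * PAGE);
            f.readFully(buf);
        }
        long randMs = System.currentTimeMillis() - t1;
        f.close();

        // Guard against a zero denominator on a fully cached file.
        System.out.println("random/sequential = "
                + (double) randMs / Math.max(seqMs, 1));
    }
}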

-Nick

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Nick Fankhauser
> Sent: Monday, September 09, 2002 11:25 AM
> To: Bruce Momjian; PostgreSQL-development
> Cc: Ray Ontko
> Subject: Re: [HACKERS] Script to compute random page cost
>
>
> Bruce-
>
> With the change in the script that I mentioned to you off-list (which I
> believe just pointed it at our "real world" data), I got the following
> results with 6 successive runs on each of our two development platforms:
>
> (We're running PostgreSQL 7.2.1 on Debian Linux, kernel 2.4)
>
> System 1:
> 1.2 GHz Athlon processor, 512 MB RAM, database on an IDE hard drive
> random_page_cost = 0.857143
> random_page_cost = 0.809524
> random_page_cost = 0.809524
> random_page_cost = 0.809524
> random_page_cost = 0.857143
> random_page_cost = 0.884615
>
> System 2:
> Dual 1.2 GHz Athlon MP processors, SMP enabled, 1 GB RAM, database on
> Ultra SCSI RAID 5 with a hardware controller.
> random_page_cost = 0.894737
> random_page_cost = 0.842105
> random_page_cost = 0.894737
> random_page_cost = 0.894737
> random_page_cost = 0.842105
> random_page_cost = 0.894737
>
>
> I was surprised that the SCSI RAID drive is generally slower than IDE, but
> the values are in line with the results that others have been getting.
>
> -Nick
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED]] On Behalf Of Bruce Momjian
> > Sent: Monday, September 09, 2002 1:14 AM
> > To: PostgreSQL-development
> > Subject: Re: [HACKERS] Script to compute random page cost
> >
> >
> >
> > OK, turns out that the loop for sequential scan ran fewer times and was
> > skewing the numbers.  I have a new version at:
> >
> > ftp://candle.pha.pa.us/pub/postgresql/randcost
> >
> > I get _much_ lower numbers now for random_page_cost.
>
>





Re: [HACKERS] Script to compute random page cost

2002-09-09 Thread Nick Fankhauser

Bruce-

With the change in the script that I mentioned to you off-list (which I
believe just pointed it at our "real world" data), I got the following
results with 6 successive runs on each of our two development platforms:

(We're running PostgreSQL 7.2.1 on Debian Linux, kernel 2.4)

System 1:
1.2 GHz Athlon processor, 512 MB RAM, database on an IDE hard drive
random_page_cost = 0.857143
random_page_cost = 0.809524
random_page_cost = 0.809524
random_page_cost = 0.809524
random_page_cost = 0.857143
random_page_cost = 0.884615

System 2:
Dual 1.2 GHz Athlon MP processors, SMP enabled, 1 GB RAM, database on
Ultra SCSI RAID 5 with a hardware controller.
random_page_cost = 0.894737
random_page_cost = 0.842105
random_page_cost = 0.894737
random_page_cost = 0.894737
random_page_cost = 0.842105
random_page_cost = 0.894737


I was surprised that the SCSI RAID drive is generally slower than IDE, but
the values are in line with the results that others have been getting.
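
For reference, if one of these measurements were trusted it would go into
postgresql.conf as an ordinary setting, along the lines of the hypothetical
entry below. The 7.2 default is 4, and since values below 1 look suspicious,
I wouldn't actually plug these numbers in as-is:

random_page_cost = 4    # default; replace with a trusted measured value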

-Nick

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Bruce Momjian
> Sent: Monday, September 09, 2002 1:14 AM
> To: PostgreSQL-development
> Subject: Re: [HACKERS] Script to compute random page cost
>
>
>
> OK, turns out that the loop for sequential scan ran fewer times and was
> skewing the numbers.  I have a new version at:
>
>   ftp://candle.pha.pa.us/pub/postgresql/randcost
>
> I get _much_ lower numbers now for random_page_cost.

