Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2006-01-12 Thread Anjan Kumar. A.



Through googling, I found that a normal disk has an external data transfer rate of
around 40 MB/s, whereas main memory has a data transfer rate ranging from 1.6 GB/s
to 2.8 GB/s.

Does the 40 MB/s figure include seek and rotational latency?

As we can see, the ratio between the main memory and disk data transfer rates is
around 50. If we multiply all the cpu_* parameters by 50, the resulting
values will be:


   random_page_cost = 1;
   cpu_tuple_cost = 0.5;
   cpu_index_tuple_cost = 0.05;
   cpu_operator_cost = 0.0125;


Would this be a suitable approach? We request all of you to give
comments/suggestions on these calculations. Thank you.
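
For anyone who wants to try these numbers before editing the compiled-in defaults,
a minimal sketch (the table and query are hypothetical, purely for illustration) is
to set them per session and compare the resulting plans:

    -- Illustrative only: trial the proposed values in the current session
    SET random_page_cost = 1;
    SET cpu_tuple_cost = 0.5;
    SET cpu_index_tuple_cost = 0.05;
    SET cpu_operator_cost = 0.0125;
    -- then compare the chosen plan against one produced under the default settings
    EXPLAIN SELECT * FROM items WHERE id < 1000;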

On Sun, 11 Dec 2005, Tom Lane wrote:


[ trimming cc list to something sane ]

Anjan Kumar. A. [EMAIL PROTECTED] writes:
In a Main Memory Database (MMDB), the entire database on disk is loaded into
main memory during initial startup of the system.  Thereafter, all references
are made to the database in main memory.  When the system is going to shut
down, we write the database in main memory back to disk.  Here, for the sake
of recovery, we write log records to disk during transaction execution.


Don't you get 99.9% of this for free with Postgres' normal behavior?
Just increase shared_buffers.

  Can anyone tell me the modifications that need to be incorporated into
PostgreSQL so that it considers only processing costs during
optimization of the query?


Assuming that a page fetch costs zero is wrong even in an all-in-memory
environment.  So I don't see any reason you can't maintain the
convention that a page fetch costs 1.0 unit, and just adjust the other
cost parameters in the light of a different idea about what that
actually means.

Will it be sufficient if we change the default values of the above parameters
in src/include/optimizer/cost.h and
src/backend/utils/misc/postgresql.conf.sample as follows:



 random_page_cost = 4;
 cpu_tuple_cost = 2;
 cpu_index_tuple_cost = 0.2;
 cpu_operator_cost = 0.05;


You'd want random_page_cost = 1 since there is presumably no penalty for
random access in this context.  Also, I think you'd want
cpu_operator_cost a lot higher than that (maybe you dropped a decimal
place?  You scaled the others up by 200 but this one only by 20).

It's entirely possible that the ratios of the cpu_xxx_cost values
aren't very good and will need work.  In the past we've never had
occasion to study them very carefully, since they were only marginal
contributions anyway.

regards, tom lane

--
Regards.

Anjan Kumar A.
MTech2,  Comp Sci.,
www.cse.iitb.ac.in/~anjankumar
__
Bradley's Bromide:
If computers get too powerful, we can organize
them into a committee -- that will do them in.



Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-12 Thread Anjan Kumar. A.




Default values of various parameters in PostgreSQL:

#random_page_cost = 4           # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01          # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#cpu_operator_cost = 0.0025     # (same)
#effective_cache_size = 1000    # typically 8KB each


Since sequential access is not significantly faster than random access in an
MMDB, random_page_cost will be approximately the same as the sequential page
fetch cost.

If we set both the sequential page fetch cost and random_page_cost to 1, then we
need to increase the various cpu_* parameters by multiplying their default values
by an appropriate scaling factor.  Now, we need to determine this scaling factor.


Through googling, I found that a normal disk has an external data transfer rate of
around 40 MB/s, whereas main memory has a data transfer rate ranging from 1.6 GB/s
to 2.8 GB/s.

As we can see, the ratio between the main memory and disk data transfer rates is
around 50. If we multiply all the cpu_* parameters by 50, the resulting values
will be:

random_page_cost = 1;
cpu_tuple_cost = 0.5;
cpu_index_tuple_cost = 0.05;
cpu_operator_cost = 0.0125;


Would this be a suitable approach? We request all of you to give
comments/suggestions on these calculations. Thank you.

On Sun, 11 Dec 2005, Tom Lane wrote:


[ trimming cc list to something sane ]

Anjan Kumar. A. [EMAIL PROTECTED] writes:

In a Main Memory Database (MMDB), the entire database on disk is loaded into
the main memory during initial startup of the system.  Thereafter, all references
are made to the database in main memory.  When the system is going to shut down,
we write the database in main memory back to disk.  Here, for the sake of
recovery, we write log records to disk during transaction execution.


Don't you get 99.9% of this for free with Postgres' normal behavior?
Just increase shared_buffers.


  Can anyone tell me the modifications that need to be incorporated into PostgreSQL
 so that it considers only processing costs during optimization of the query?


Assuming that a page fetch costs zero is wrong even in an all-in-memory
environment.  So I don't see any reason you can't maintain the
convention that a page fetch costs 1.0 unit, and just adjust the other
cost parameters in the light of a different idea about what that
actually means.


Will it be sufficient if we change the default values of the above parameters in
src/include/optimizer/cost.h and
src/backend/utils/misc/postgresql.conf.sample as follows:



 random_page_cost = 4;
 cpu_tuple_cost = 2;
 cpu_index_tuple_cost = 0.2;
 cpu_operator_cost = 0.05;


You'd want random_page_cost = 1 since there is presumably no penalty for
random access in this context.  Also, I think you'd want
cpu_operator_cost a lot higher than that (maybe you dropped a decimal
place?  You scaled the others up by 200 but this one only by 20).

It's entirely possible that the ratios of the cpu_xxx_cost values
aren't very good and will need work.  In the past we've never had
occasion to study them very carefully, since they were only marginal
contributions anyway.

regards, tom lane



--
Regards.

Anjan Kumar A.
MTech2,  Comp Sci.,
www.cse.iitb.ac.in/~anjankumar
__
A woman physician has made the statement that smoking is neither
physically defective nor morally degrading, and that nicotine, even
when indulged to in excess, is less harmful than excessive petting.
-- Purdue Exponent, Jan 16, 1925



Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-12 Thread Jim C. Nasby
On Mon, Dec 12, 2005 at 06:39:42PM +0530, Anjan Kumar. A. wrote:
 Through googling, I found that a normal disk has an external data transfer rate
 of around 40 MB/s, whereas main memory has a data transfer rate ranging from
 1.6 GB/s to 2.8 GB/s.

I think 40MB/s is a burst speed. You should do some testing to verify.

In any case, PostgreSQL doesn't come close to the theoretical maximum
disk bandwidth even on a sequential scan. There's been discussion about
this on various lists in the past. For a single drive, expect something
more in the range of 4-6MB/s (depending on the drive).
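
One rough way to verify the real figure (a sketch only; the table name and size
are hypothetical) is to time a full sequential scan of a table of known size and
divide the size by the elapsed time:

    -- Illustrative only: time a sequential scan over a hypothetical ~1 GB table
    EXPLAIN ANALYZE SELECT count(*) FROM bigtable;
    -- Dividing the table's on-disk size by the reported actual time gives an
    -- effective MB/s figure; run it on a cold cache, otherwise the number
    -- reflects memory rather than disk bandwidth.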

More important than throughput, though, is latency. Because the latency
of memory is much closer to 0 (it's not truly 0 due to L1/L2 caching),
you can serve concurrent requests a lot faster.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-12 Thread Josh Berkus
Anjan,

 But, in PostgreSQL all costs are scaled relative to a page fetch. If we
 make both the sequential page fetch cost and random_page_cost 1, then we
 need to increase the various cpu_* parameters by multiplying the default
 values by an appropriate scaling factor.  Now, we need to determine this
 scaling factor.

I see, so you're saying that because the real cost of a page fetch has 
decreased, the CPU_* costs should increase proportionally because relative to 
the real costs of a page fetch they should be higher?  That makes a sort of 
sense.

The problem that you're going to run into is that currently we have no 
particular reason to believe that the various cpu_* costs are more than 
very approximately correct as rules of thumb.  So I think you'd be a lot 
better off trying to come up with some means of computing the real CPU costs 
of each operation, rather than trying to calculate a multiple of numbers 
which may be wrong in the first place.
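
As a sketch of what measuring the real CPU costs might look like (the table and
predicate are hypothetical), one could time a CPU-bound, fully cached query and
work back to a per-tuple figure:

    -- Illustrative only: a CPU-bound scan over a hypothetical, fully cached table
    EXPLAIN ANALYZE SELECT count(*) FROM t WHERE some_col % 7 = 0;
    -- Dividing the actual time by the number of rows scanned gives a rough
    -- per-tuple CPU time to set against cpu_tuple_cost + cpu_operator_cost.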

I know that someone on this list was working on a tool to digest EXPLAIN 
ANALYZE results and run statistics on them.   Can't remember who, though.

Also, I'm still curious about how you're handling shared_buffers, work_mem and 
maintenance_work_mem.  You didn't answer last time.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



[HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Anjan Kumar. A.



I'm working on a project whose implementation deals with PostgreSQL. A brief 
description of the project is given below.

 Project Description:
 
   In a Main Memory Database (MMDB), the entire database on disk is loaded into 
the main memory during initial startup of the system.  Thereafter, all references 
are made to the database in main memory.  When the system is going to shut down, 
we write the database in main memory back to disk.  Here, for the sake of 
recovery, we write log records to disk during transaction execution.

   We want to implement an MMDB by modifying PostgreSQL. We implemented our own 
Main Memory File System to store the primary copy of the database in main 
memory, and modified PostgreSQL to access the data in the Main Memory File 
System.

   Now, in our implementation, disk access is completely avoided during normal 
transaction execution. So, we need to modify the query optimizer of PostgreSQL 
so that it won't consider disk-related costs when calculating query costs. 
The query optimizer should try to minimize the processing cost. The criteria for 
cost can be taken as the number of tuples that have to be read from or written 
to main memory, the number of comparisons, and so on.


 Can anyone tell me the modifications that need to be incorporated into PostgreSQL 
so that it considers only processing costs during optimization of the query?

In PostgreSQL, path costs are measured in units of disk page fetches: one 
sequential page fetch has cost 1.0. I think the following parameters are used 
in calculating the cost of a query path (a simplified sketch of how they 
combine is given after the list):

#random_page_cost = 4           # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01          # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#cpu_operator_cost = 0.0025     # (same)
#effective_cache_size = 1000    # typically 8KB each
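
For orientation, the planner's sequential-scan cost is roughly of the following
form (a simplified sketch, not the exact expression in
src/backend/optimizer/path/costsize.c):

    cost(seqscan) ~  (pages in table)  * 1.0              -- sequential page fetches
                   + (tuples in table) * cpu_tuple_cost
                   + (tuples * quals)  * cpu_operator_cost

so making the page-fetch term irrelevant amounts to making the cpu_* terms large
relative to the 1.0 page unit, which is the effect of the scaling discussed
elsewhere in this thread.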


In our case we are reading pages from the Main Memory File System, not from disk. 
Will it be sufficient if we change the default values of the above parameters in 
src/include/optimizer/cost.h and 
src/backend/utils/misc/postgresql.conf.sample as follows:

random_page_cost = 4;
cpu_tuple_cost = 2;
cpu_index_tuple_cost = 0.2;
cpu_operator_cost = 0.05;


Please help us in this regard. I request all of you to give 
comments/suggestions on this. Waiting for your kind help.



--
Thanks.

Anjan Kumar A.
MTech2,  Comp Sci.,
www.cse.iitb.ac.in/~anjankumar
__
May's Law:
The quality of correlation is inversly proportional to the density
of control.  (The fewer the data points, the smoother the curves.)



Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Josh Berkus
Anjan,

 In our case we are reading pages from the Main Memory File System, not from
 disk. Will it be sufficient if we change the default values of the above
 parameters in src/include/optimizer/cost.h and
 src/backend/utils/misc/postgresql.conf.sample as follows:

  random_page_cost = 4;

This should be dramatically lowered.  It's supposed to represent the ratio of 
seek-fetches to seq scans on disk.  Since there's no disk, it should be a 
flat 1.0.   However, we are aware that there are flaws in our calculations 
involving random_page_cost, such that the actual number for a system where 
there is no disk cost would be lower than 1.0.   Your research will hopefully 
help us find these flaws.

  cpu_tuple_cost = 2;
  cpu_index_tuple_cost = 0.2;
  cpu_operator_cost = 0.05;

I don't see why you're increasing the various cpu_* costs.  CPU costs would be 
unaffected by the database being in memory.   In general, I lower these by a 
divisor based on the cpu speed; for example, on a dual-opteron system I lower 
the defaults by /6.   However, that's completely unrelated to using an MMDB.
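
As a sketch of that practice (the values below are only the defaults divided by
roughly 6, for illustration, not a recommendation):

    -- Illustrative only: cpu_* defaults lowered ~6x for a fast CPU
    SET cpu_tuple_cost = 0.0017;
    SET cpu_index_tuple_cost = 0.00017;
    SET cpu_operator_cost = 0.0004;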

So, other than random_page_cost, I don't know of other existing GUCs that 
would be directly related to using a disk/not using a disk.  How are you 
handling shared memory and work memory?

I look forward to hearing more about your test!

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [DOCS] [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Tom Lane
Josh Berkus josh@agliodbs.com writes:
 I don't see why you're increasing the various cpu_* costs.

You missed the point Josh --- these numbers are relative to the cost of
a page fetch, so if page fetch is measured in microseconds instead of
milliseconds, then you *do* want to bump the CPU costs up.

regards, tom lane



Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Anjan Kumar. A.



Since sequential access is not significantly faster than random access in an 
MMDB, random_page_cost will be approximately the same as the sequential page fetch cost.

As everything is present in main memory, we need to give approximately the same 
cost to main-memory reads/writes and CPU-related operations.


But, in PostgreSQL all costs are scaled relative to a page fetch. If we make both 
the sequential page fetch cost and random_page_cost 1, then we need to 
increase the various cpu_* parameters by multiplying their default values by an 
appropriate scaling factor.  Now, we need to determine this scaling factor.


Still, I want to confirm whether this approach is the correct one.

On Sun, 11 Dec 2005, Josh Berkus wrote:


Anjan,


In our case we are reading pages from the Main Memory File System, not from
disk. Will it be sufficient if we change the default values of the above
parameters in src/include/optimizer/cost.h and
src/backend/utils/misc/postgresql.conf.sample as follows:

 random_page_cost = 4;


This should be dramatically lowered.  It's supposed to represent the ratio of
seek-fetches to seq scans on disk.  Since there's no disk, it should be a
flat 1.0.   However, we are aware that there are flaws in our calculations
involving random_page_cost, such that the actual number for a system where
there is no disk cost would be lower than 1.0.   Your research will hopefully
help us find these flaws.


 cpu_tuple_cost = 2;
 cpu_index_tuple_cost = 0.2;
 cpu_operator_cost = 0.05;


I don't see why you're increasing the various cpu_* costs.  CPU costs would be
unaffected by the database being in memory.   In general, I lower these by a
divisor based on the cpu speed; for example, on a dual-opteron system I lower
the defaults by /6.   However, that's completely unrelated to using an MMDB.

So, other than random_page_cost, I don't know of other existing GUCs that
would be directly related to using a disk/not using a disk.  How are you
handling shared memory and work memory?

I look forward to hearing more about your test!




--
Regards.

Anjan Kumar A.
MTech2,  Comp Sci.,
www.cse.iitb.ac.in/~anjankumar
__
Do not handicap your children by making their lives easy.
-- Robert Heinlein



Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Carlos Moreno


Hi,

I'm very new to this list -- I've been using and advocating PostgreSQL for
no less than 4 or 5 years now, and have participated in some of the other
mailing lists, but never on this one.

My question is (short version):  how would one go about adding a new
(built-in) function to PostgreSQL?

Long-ish version:

I know the answer in theory -- one goes through the source code, finds
out how it all works, and modifies or adds code to add or fix whatever
feature we want.

I guess my point in here would be rather a feature request -- except
that I'd find it pretty exciting to implement it myself, and then propose
the new feature by volunteering the implementation that I already wrote
(seems like the spirit of open-source communities, right?) -- then of
course, it would be subject to consensus, whether or not the feature
makes sense and the implementation is good enough.

I'm interested in adding additional hash functions -- PG supports, as part
of the built-in SQL functions, MD5 hashing.  So, for instance, I can simply
type, at a psql console, the following:

select md5('abc');

My feature request (which again, I'd like to implement it myself) would
be the ability to do:

select sha1('xyz'), sha256('etc');

(At least these two -- maybe for completeness it would be good to have
sha224, 384, and 512, but I guess SHA1 and SHA-256 would be a very good
and sound starting point)

So, can you offer some advice or pointers on how to go about that?

I started by doing a search for the string md5 through all the source
code -- the problem is, md5 shows up in many many many places (it is
part of the authentication protocol, among other things), so I got a
little bit lost searching through it all.

I wonder if you have some documents specifically aimed at providing
advice and documentation for prospective developers (or for people
that want to tweak the source code to fix, tune up, or add functionality);
I guess that would be great for me in this case.

Thanks!

Carlos
--




Adding functions to postgresql (Not Re: [HACKERS] Please Help: PostgreSQL Query Optimizer)

2005-12-11 Thread Hannu Krosing
On a fine day, Sunday, 2005-12-11 at 17:55, Carlos Moreno wrote:
 Hi,
 
 I'm very new to this list -- I've been using and advocating PostgreSQL for
 no less than 4 or 5 years now, and have participated in some of the other
 mailing lists, but never on this one.
 
 My question is (short version):  how would one go about adding a new
 (built-in) function to PostgreSQL?

Ask your question as a separate post, not as an answer to another
thread :)

 I'm interested in adding additional hash functions -- PG supports, as part
 of the built-in SQL functions, MD5 hashing.  So, for instance, I can simply
 type, at a psql console, the following:
 
 select md5('abc');
 
 My feature request (which again, I'd like to implement it myself) would
 be the ability to do:
 
 select sha1('xyz'), sha256('etc');
 
 (At least these two -- maybe for completeness it would be good to have
 sha224, 384, and 512, but I guess SHA1 and SHA-256 would be a very good
 and sound starting point)

Take a look at the pgcrypto contrib module (in src/contrib/pgcrypto )

Not only does it show how to add functions, but it also provides many of
the ones you need.
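
For example, once pgcrypto is installed into a database (a sketch only; in this
era that means running the module's SQL install script), its generic digest()
function already covers SHA-1 and, depending on the build, SHA-256:

    -- Illustrative only: hex-encoded digests via contrib/pgcrypto
    SELECT encode(digest('xyz', 'sha1'), 'hex');
    SELECT encode(digest('etc', 'sha256'), 'hex');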


Hannu





Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Andrew Dunstan


Actually, there is probably comparatively little to gain from making it 
a builtin. And SHA1 is already there in the pgcrypto contrib module. 
Presumably if we wanted a builtin we would start from that code base.


cheers

andrew

Carlos Moreno wrote:



Hi,

I'm very new to this list -- I've been using and advocating PostgreSQL for
no less than 4 or 5 years now, and have participated in some of the other
mailing lists, but never on this one.

My question is (short version):  how would one go about adding a new
(built-in) function to PostgreSQL?

Long-ish version:

I know the answer in theory -- one goes through the source code, finds
out how it all works, and modifies or adds code to add or fix whatever
feature we want.

I guess my point in here would be rather a feature request -- except
that I'd find it pretty exciting to implement it myself, and then propose
the new feature by volunteering the implementation that I already wrote
(seems like the spirit of open-source communities, right?) -- then of
course, it would be subject to consensus, whether or not the feature
makes sense and the implementation is good enough.

I'm interested in adding additional hash functions -- PG supports, as part
of the built-in SQL functions, MD5 hashing.  So, for instance, I can simply
type, at a psql console, the following:

select md5('abc');

My feature request (which again, I'd like to implement it myself) would
be the ability to do:

select sha1('xyz'), sha256('etc');

(At least these two -- maybe for completeness it would be good to have
sha224, 384, and 512, but I guess SHA1 and SHA-256 would be a very good
and sound starting point)

So, can you offer some advice or pointers on how to go about that?

I started by doing a search for the string md5 through all the source
code -- the problem is, md5 shows up in many many many places (it is
part of the authentication protocol, among other things), so I got a
little bit lost searching through it all.

I wonder if you have some documents specifically aimed at providing
advice and documentation for prospective developers (or for people
that want to tweak the source code to fix, tune up, or add functionality);
I guess that would be great for me in this case.






Re: [HACKERS] Please Help: PostgreSQL Query Optimizer

2005-12-11 Thread Tino Wildenhain
On Sunday, 11.12.2005, at 17:55 -0500, Carlos Moreno wrote:
...
 I'm interested in adding additional hash functions -- PG supports, as part
 of the built-in SQL functions, MD5 hashing.  So, for instance, I can simply
 type, at a psql console, the following:
 
 select md5('abc');
 
 My feature request (which again, I'd like to implement it myself) would
 be the ability to do:
 
 select sha1('xyz'), sha256('etc');
 
 (At least these two -- maybe for completeness it would be good to have
 sha224, 384, and 512, but I guess SHA1 and SHA-256 would be a very good
 and sound starting point)
 
 So, can you offer some advice or pointers on how to go about that?

You might want to check out contrib/pgcrypto

More often than not, if you want something, it's
already done ;)

Not sure if this will ever be included in the core,
since not many people need these advanced hash functions.

HTH
Tino Wildenhain

