Re: [HACKERS] pg_tables bug?

2015-12-19 Thread Gaetano Mendola
On Sat, Dec 19, 2015, 01:50 Andrew Dunstan <and...@dunslane.net> wrote:

>
>
>
>
> On 12/18/2015 05:18 PM, Gaetano Mendola wrote:
> > From the documentation for "CREATE DATABASE name WITH TABLESPACE =
> > tablespace_name":
> >
> > tablespace_name
> > The name of the tablespace that will be associated with the new
> > database, or DEFAULT to
> > use the template database's tablespace. This tablespace will be the
> > default tablespace used
> > for objects created in this database. See CREATE TABLESPACE for more
> > information.
> >
> > I'm sure that my tables are created in the tablespace, but it is not
> > reported in pg_tables, in pg_dump, or by \d.
>
> 1. Please don't top-post on the PostgreSQL lists. See
> <http://idallen.com/topposting.html>
>
> 2. The system is working as designed and as documented - see the
> comments in the docs on pg_tables. If nothing is shown for the table's
> tablespace then it will be in the default tablespace for the database.
> That's what you're seeing. You appear to be assuming incorrectly that it
> means that the table will be in the system's default tablespace.
>

I replied using an incorrectly configured client, sorry about that; I'm not
new to this list. I understand now how it works. With many databases and many
tablespaces this behaviour is a nightmare, so I will write my own view to fix
the annoyance.

>


Re: [HACKERS] pg_tables bug?

2015-12-18 Thread Gaetano Mendola
From the documentation for "CREATE DATABASE name WITH TABLESPACE =
tablespace_name":

tablespace_name
The name of the tablespace that will be associated with the new database,
or DEFAULT to
use the template database's tablespace. This tablespace will be the default
tablespace used
for objects created in this database. See CREATE TABLESPACE for more
information.

I'm sure that my tables are created in the tablespace, but it is not
reported in pg_tables, in pg_dump, or by \d.

Look at this:

kalman@kalman-VirtualBox:~$ mkdir tablespace_XXX
kalman@kalman-VirtualBox:~$ sudo chown postgres.postgres tablespace_XXX
kalman@kalman-VirtualBox:~$ psql template1
psql (9.4.5)
Type "help" for help.

template1=# create tablespace XXX LOCATION '/home/kalman/tablespace_XXX';
CREATE TABLESPACE
template1=# create database db_test with tablespace = XXX;
CREATE DATABASE
template1=# \q

kalman@kalman-VirtualBox:~$ psql db_test
psql (9.4.5)
Type "help" for help.

db_test=# create table t_test ( a integer, b numeric);
CREATE TABLE
db_test=# \d+ t_test
                         Table "public.t_test"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 a      | integer |           | plain   |              |
 b      | numeric |           | main    |              |

db_test=# select * from pg_tables where tablename = 't_test';
 schemaname | tablename | tableowner | tablespace | hasindexes | hasrules | hastriggers
------------+-----------+------------+------------+------------+----------+-------------
 public     | t_test    | kalman     |            | f          | f        | f
(1 row)

db_test=# select oid from pg_database where datname = 'db_test';
  oid
-------
 80335
(1 row)

db_test=# select relfilenode from pg_class where relname = 't_test';
 relfilenode
-------------
       80336
(1 row)

Unfortunately, contrary to what Postgres shows me, the table t_test is in
/home/kalman/tablespace_:

root@kalman-VirtualBox:~# file
/home/kalman/tablespace_XXX/PG_9.4_201409291/80335/80336
/home/kalman/tablespace_XXX/PG_9.4_201409291/80335/80336: empty

As you can see, the CREATE DATABASE documentation is honored, but the system
fails to report the right tablespace location for that table.


Regards





On Thu, 17 Dec 2015 at 15:36 Tom Lane <t...@sss.pgh.pa.us> wrote:

> Gaetano Mendola <mend...@gmail.com> writes:
> > I'm playing around with tablespaces (PostgreSQL 9.4) and I found what I
> > believe is a bug in pg_tables.
> > Basically, if you create a database in a tablespace X and then you
> > create a table in the database, the table is created correctly in
> > tablespace X (I checked on the filesystem). However, if you do a select
> > on pg_tables, the tablespace column for that table is empty and, even
> > worse, if you dump the DB there is no mention of the database or table
> > being in that tablespace.
> > Even \d doesn't report that the table is in tablespace X.
>
> An empty entry in that column means that the table is in the default
> tablespace for the database.  Which it sounds like is what you have
> here.  I think it's operating as designed, though you might quibble
> with the decision that showing default tablespaces explicitly would
> have been clutter.
>
> regards, tom lane
>


Re: [HACKERS] pg_tables bug?

2015-12-18 Thread Gaetano Mendola
On Thu, 17 Dec 2015 at 15:36 Tom Lane <t...@sss.pgh.pa.us> wrote:

> Gaetano Mendola <mend...@gmail.com> writes:
> > I'm playing around with tablespaces (PostgreSQL 9.4) and I found what I
> > believe is a bug in pg_tables.
> > Basically, if you create a database in a tablespace X and then you
> > create a table in the database, the table is created correctly in
> > tablespace X (I checked on the filesystem). However, if you do a select
> > on pg_tables, the tablespace column for that table is empty and, even
> > worse, if you dump the DB there is no mention of the database or table
> > being in that tablespace.
> > Even \d doesn't report that the table is in tablespace X.
>
> An empty entry in that column means that the table is in the default
> tablespace for the database.  Which it sounds like is what you have
> here.  I think it's operating as designed, though you might quibble
> with the decision that showing default tablespaces explicitly would
> have been clutter.
>

Now it's clear, thank you.


[HACKERS] pg_tables bug?

2015-12-16 Thread Gaetano Mendola
I'm playing around with tablespaces (PostgreSQL 9.4) and I found what I
believe is a bug in pg_tables.
Basically, if you create a database in a tablespace X and then you create
a table in the database, the table is created correctly in tablespace X
(I checked on the filesystem). However, if you do a select on pg_tables,
the tablespace column for that table is empty and, even worse, if you dump
the DB there is no mention of the database or table being in that
tablespace.
Even \d doesn't report that the table is in tablespace X.

Regards


Re: [HACKERS] useless assignment pointer argument

2015-05-28 Thread Gaetano Mendola
If the compiler is good, the assignment is indeed elided; that's not what I
meant to point out.

On Thu, 28 May 2015 at 22:17 Andres Freund <and...@anarazel.de> wrote:

 On 2015-05-28 20:14:33 +, Gaetano Mendola wrote:
  src/backend/commands/explain.c:1692
  src/backend/commands/explain.c:1874
  src/backend/commands/explain.c:1986
 
  there is the following assignment:
 
 ancestors = list_delete_first(ancestors);
 
  but it has no effect at all, since ancestors is a function parameter and
  is not used again after the assignment itself.

 So? It costs little to nothing, and it'll make it much less likely that
 a stale version of 'ancestors' is used when the code is expanded.

 Greetings,

 Andres Freund



[HACKERS] useless assignment pointer argument

2015-05-28 Thread Gaetano Mendola
Hi,
in the following spots:

src/backend/commands/explain.c:1692
src/backend/commands/explain.c:1874
src/backend/commands/explain.c:1986

there is the following assignment:

   ancestors = list_delete_first(ancestors);

but it has no effect at all, since ancestors is a function parameter and
is not used again after the assignment itself.


Re: [HACKERS] Possible pointer dereference

2015-05-28 Thread Gaetano Mendola
While at it: the assert(cnfa != NULL && cnfa->nstates != 0); at
src/backend/regex/rege_dfa.c:282
is issued too late; indeed, at lines 278 and 279 cnfa has already been
dereferenced.

The same goes for assert(t != NULL) in src/backend/regex/regexec.c:821,
which is also issued way too late.




On Thu, 28 May 2015 at 15:59 Tom Lane t...@sss.pgh.pa.us wrote:

 Robert Haas robertmh...@gmail.com writes:
  On Wed, May 27, 2015 at 8:57 PM, Haribabu Kommi
  kommi.harib...@gmail.com wrote:
  Correcting it the following way will solve the problem:
  return ts ? (*ts != 0) : false; instead of return *ts != 0;
  Attached is a patch for it.

  If the only caller always passes a valid pointer, there's no point in
  adding this check.  We have many functions in our source base that
  assume that the caller will pass a valid pointer, and changing them
  all would make the code bigger, harder to read, and possibly slower,
  without any real benefit.

 Well, we should either install something like Haribabu's patch, or else
 remove the existing tests in the function that allow ts to be NULL.
 And the function's API contract comment needs to be clarified in either
 case; the real bug here is lack of a specification.

 I don't particularly have an opinion on whether it's valuable to allow
 this function to be called without receiving a timestamp back.  Perhaps
 the authors of the patch can comment on that.

 regards, tom lane



[HACKERS] Possible pointer dereference

2015-05-27 Thread Gaetano Mendola
I'm playing with a static analyzer and it is reporting some real errors in
the PostgreSQL code base, like the following one:

src/backend/access/transam/commit_ts.c
    return *ts != 0;   // line 321

but a few lines up (line 315) ts is checked for NULL, so either the NULL
check is not needed, or *ts can lead to a NULL pointer dereference.
The same happens a few lines later, at lines 333 and 339.

Regards
Gaetano


[HACKERS] Suspicious check (src/backend/access/gin/gindatapage.c)

2014-09-11 Thread Gaetano Mendola
At line 650 I can read:

    if ((leaf->lsize - segsize) - (leaf->lsize - segsize) < BLCKSZ / 4)
        break;

I believe one of the two should be leaf->rsize.


-- 
cpp-today.blogspot.com


[HACKERS] C++ compiler

2013-06-15 Thread Gaetano Mendola
I have read Peter Eisentraut's blog entry about moving to C++, and I fully
agree with what he wrote.

Is there any interest or work in progress in making the entire PostgreSQL
code base compilable by a C++ compiler?

Regards
Gaetano Mendola

-- 
cpp-today.blogspot.com


[HACKERS] hackers newsgroup hacked ?

2012-10-04 Thread Gaetano Mendola
Reading this mailing list via newsgroup (news.postgresql.org, port 119),
I can see that the last legitimate message is from 29 August; since then
only Russian posts are present.

Regards
Gaetano Mendola

-- 
cpp-today.blogspot.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] GPU and Database

2012-09-02 Thread Gaetano Mendola

Maybe some of you are interested in this:

ADBIS workshop on GPUs in Databases

http://gid2012.cs.put.poznan.pl/


Regards
Gaetano Mendola




[HACKERS] elog and MemoryContextSwitchTo

2012-02-17 Thread Gaetano Mendola

Hi all,

Is the following code well formed?

oldContext = MemoryContextSwitchTo(newContext);

if (something_bad) {
  elog(ERROR, ...);
}
...
MemoryContextSwitchTo(oldContext);


or do I have to restore the oldContext before issuing the elog?



Regards
Gaetano Mendola




Re: [HACKERS] CUDA Sorting

2012-02-15 Thread Gaetano Mendola

On 13/02/2012 19:48, Greg Stark wrote:

I don't think we should be looking at either CUDA or OpenCL directly.
We should be looking for a generic library that can target either and
is well maintained and actively developed. Any GPU code we write
ourselves would rapidly be overtaken by changes in the hardware and
innovations in parallel algorithms. If we find a library that provides
a sorting api and adapt our code to use it then we'll get the benefits
of any new hardware feature as the library adds support for them.



I think one option is to make the sort function pluggable with a shared
library/dll. I see several benefits from this:

 - It could be in the interest of the hardware vendor to provide the
most powerful sort implementation (I'm sure, for example, that the TBB
sort implementation is faster than pg_sort).

 - It lets people play with it without being deeply involved in pg
development and related stuff.

 - It relieves the postgres core group of the choice about the right
language/tool/implementation to use.

 - It also allows people not willing (or not able, for that matter) to
upgrade the postgres engine to instead change the sort function upon a
hardware upgrade.

Of course, if this happens, the postgres engine has to run some sort of
sanity check (that the function, for example, actually sorts) before
trusting the plugged sort.

The engine could even have multiple sort implementations available and
use the most proficient one (imagine some sorts performing better on a
certain value range or on a certain element size).


Regards
Gaetano Mendola





Re: [HACKERS] CUDA Sorting

2012-02-15 Thread Gaetano Mendola

On 15/02/2012 23:11, Peter Geoghegan wrote:

On 15 February 2012 20:00, Gaetano Mendola <mend...@gmail.com> wrote:

On 13/02/2012 19:48, Greg Stark wrote:


I don't think we should be looking at either CUDA or OpenCL directly.
We should be looking for a generic library that can target either and
is well maintained and actively developed. Any GPU code we write
ourselves would rapidly be overtaken by changes in the hardware and
innovations in parallel algorithms. If we find a library that provides
a sorting api and adapt our code to use it then we'll get the benefits
of any new hardware feature as the library adds support for them.



I think one option is to make the sort function pluggable with a shared
library/dll. I see several benefits from this:

  - It could be in the interest of the hardware vendor to provide the most
powerful sort implementation (I'm sure for example that TBB sort
implementation is faster that pg_sort)

  - It can permit people to play with it without being deep involved in pg
development and stuffs.


Sorry, but I find it really hard to believe that the non-availability
of pluggable sorting is what's holding people back here. Some vanguard
needs to go and prove the idea by building a rough prototype before we
can even really comment on what an API should look like. For example,
I am given to understand that GPUs generally sort using radix sort -
resolving the impedance mismatch that prevents someone from using a
non-comparison based sort sure sounds like a lot of work for an
entirely speculative reward.


AFAIK the Thrust library uses radix sort if the keys you are sorting are
POD data comparable with a < operator; otherwise it does a
comparison-based sort using the operator provided.

http://docs.thrust.googlecode.com/hg/modules.html

I'm not saying that the non-availability of pluggable sort completely
holds people back; I'm saying that it would simplify the process now
and in the future. Of course, that's my opinion.


Someone who cannot understand tuplesort, which is not all that
complicated, has no business trying to build GPU sorting into
Postgres.


That sounds a bit harsh. I'm one of those, indeed; I haven't looked into
the details, not having enough time for it. At work we do GPU computing
(not the sort type of stuff), and given that I'm a Postgres enthusiast I
asked myself: my server is able to sort around 500 million integers per
second; if Postgres were able to do that as well, it would be very nice.

What can I say? Sorry for my thoughts.


I had a patch committed a few hours ago that almost included the
capability of assigning an alternative sorting function, but only one
with the exact same signature as my variant of qsort_arg. pg_qsort
isn't used to sort tuples at all, by the way.


Then I was looking in the wrong direction. Thank you for pointing that out.


Threading building blocks is not going to form the basis of any novel
sorting implementation, because comparators in general are not thread
safe, and it isn't available on all the platforms we support, and
because of how longjmp interacts with C++ stack unwinding and so on
and so on. Now, you could introduce some kind of parallelism into
sorting integers and floats, but that's an awful lot of work for a
marginal reward.


TBB was just an example that came to mind.
What do you mean by "you could introduce some kind of parallelism"?
As far as I know, any algorithm based on divide and conquer can be
parallelized.

Regards
Gaetano Mendola






Re: [HACKERS] CUDA Sorting

2012-02-14 Thread Gaetano Mendola

On 13/02/2012 08:26, Greg Smith wrote:

On 02/11/2012 08:14 PM, Gaetano Mendola wrote:

The trend is to have server capable of running CUDA providing GPU via
external hardware (PCI Express interface with PCI Express switches),
look for example at PowerEdge C410x PCIe Expansion Chassis from DELL.


The C410X adds 16 PCIe slots to a server, housed inside a separate 3U
enclosure. That's a completely sensible purchase if your goal is to
build a computing cluster, where a lot of work is handed off to a set of
GPUs. I think that's even less likely to be a cost-effective option for
a database server. Adding a single dedicated GPU installed in a server
to accelerate sorting is something that might be justifiable, based on
your benchmarks. This is a much more expensive option than that though.
Details at http://www.dell.com/us/enterprise/p/poweredge-c410x/pd for
anyone who wants to see just how big this external box is.


I did some experiments timing the sort done with CUDA and the sort
done with pg_qsort:

                      CUDA       pg_qsort
33 million integers:  ~ 900 ms   ~ 6000 ms
 1 million integers:  ~  21 ms   ~  162 ms
      100k integers:  ~   2 ms   ~   13 ms

The CUDA time already includes the copy operations (host->device,
device->host).
As GPU I was using a C2050, and the CPU doing the pg_qsort was an
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz.


That's really interesting, and the X5650 is by no means a slow CPU. So
this benchmark is providing a lot of CPU power yet still seeing over a
6X speedup in sort times. It sounds like the PCI Express bus has gotten
fast enough that the time to hand data over and get it back again can
easily be justified for medium to large sized sorts.

It would be helpful to take this patch and confirm whether it scales
when using in parallel. Easiest way to do that would be to use the
pgbench -f feature, which allows running an arbitrary number of some
query at once. Seeing whether this acceleration continued to hold as the
number of clients increases is a useful data point.

Is it possible for you to break down where the time is being spent? For
example, how much of this time is consumed in the GPU itself, compared
to time spent transferring data between CPU and GPU? I'm also curious
where the bottleneck is at with this approach. If it's the speed of the
PCI-E bus for smaller data sets, adding more GPUs may never be
practical. If the bus can handle quite a few of these at once before it
saturates, it might be possible to overload a single GPU. That seems
like it would be really hard to reach for database sorting though; I
can't really justify my gut feel for that being true.


There you go (times are in ms):

Size     H->D     SORT     D->H     TOTAL
64       0.209824 0.479392 0.013856 0.703072
128      0.098144 0.41744  0.01312  0.528704
256      0.096832 0.420352 0.013696 0.53088
512      0.097568 0.3952   0.014464 0.507232
1024     0.09872  0.396608 0.014624 0.509952
2048     0.101344 0.56224  0.016896 0.68048
4096     0.106176 0.562976 0.02016  0.689312
8192     0.116512 0.571264 0.02672  0.714496
16384    0.136096 0.587584 0.040192 0.763872
32768    0.179296 0.658112 0.066304 0.903712
65536    0.212352 0.84816  0.118016 1.178528
131072   0.317056 1.1465   0.22784  1.691396
262144   0.529376 1.82237  0.42512  2.776866
524288   0.724032 2.39834  0.64576  3.768132
1048576  1.11162  3.51978  1.12176  5.75316
2097152  1.95939  5.93434  2.06992  9.96365
4194304  3.76192  10.6011  4.10614  18.46916
8388608  7.16845  19.9245  7.93741  35.03036
16777216 13.8693  38.7413  15.4073  68.0179
33554432 27.3017  75.6418  30.6646  133.6081
67108864 54.2171  151.192  60.327   265.7361

pg_sort

64           0.01
128          0.01
256          0.021000
512          0.128000
1024         0.092000
2048         0.196000
4096         0.415000
8192         0.883000
16384        1.881000
32768        3.96
65536        8.432000
131072      17.951000
262144      37.14
524288      78.32
1048576    163.276000
2097152    339.118000
4194304    693.223000
8388608   1423.142000
16777216  2891.218000
33554432  5910.851000
67108864 11980.93

As you can notice, the times with CUDA are lower than the timings I
reported in my previous post, because back then the server was doing
something else in the meanwhile; I have repeated those benchmarks with
the server completely unused.

And this is the speedup, computed as pg_sort/cuda:

64        0.0142232943
128       0.018914175
256       0.039556962
512       0.2070058671
1024      0.1804091365
2048      0.2880319774
4096      0.6078524674
8192      1.2372357578
16384     2.4637635625
32768     4.4106972133
65536     7.1742037525
131072   10.5090706139
262144   13.3719091955
524288   20.5834084369
1048576  28.2516043357
2097152  33.9618513296
4194304  37.5247168794
8388608  40.5135716561
16777216 42.4743633661
33554432 44.2394809896
67108864 45.1499777411



 I've never seen a PostgreSQL server capable of running CUDA, and I
 don't expect that to change.

That sounds like:

I think there is a world market

Re: [HACKERS] CUDA Sorting

2012-02-13 Thread Gaetano Mendola
On Feb 13, 2012 11:39 a.m., Kohei KaiGai kai...@kaigai.gr.jp wrote:

 2012/2/13 Greg Smith g...@2ndquadrant.com:
  On 02/11/2012 08:14 PM, Gaetano Mendola wrote:
 
  The trend is to have server capable of running CUDA providing GPU via
  external hardware (PCI Express interface with PCI Express switches),
look
  for example at PowerEdge C410x PCIe Expansion Chassis from DELL.
 
 
  The C410X adds 16 PCIe slots to a server, housed inside a separate 3U
  enclosure.  That's a completely sensible purchase if your goal is to
build a
  computing cluster, where a lot of work is handed off to a set of GPUs.
 I
  think that's even less likely to be a cost-effective option for a
database
  server.  Adding a single dedicated GPU installed in a server to
accelerate
  sorting is something that might be justifiable, based on your
benchmarks.
   This is a much more expensive option than that though.  Details at
  http://www.dell.com/us/enterprise/p/poweredge-c410x/pd for anyone who
wants
  to see just how big this external box is.
 
 
  I did some experimenst timing the sort done with CUDA and the sort done
  with pg_qsort:
CUDA  pg_qsort
  33Milion integers:   ~ 900 ms,  ~ 6000 ms
  1Milion integers:~  21 ms,  ~  162 ms
  100k integers:   ~   2 ms,  ~   13 ms
  CUDA time has already in the copy operations (host-device,
device-host).
  As GPU I was using a C2050, and the CPU doing the pg_qsort was a
Intel(R)
  Xeon(R) CPU X5650  @ 2.67GHz
 
 
  That's really interesting, and the X5650 is by no means a slow CPU.  So
this
  benchmark is providing a lot of CPU power yet still seeing over a 6X
speedup
  in sort times.  It sounds like the PCI Express bus has gotten fast
enough
  that the time to hand data over and get it back again can easily be
  justified for medium to large sized sorts.
 
  It would be helpful to take this patch and confirm whether it scales
when
  using in parallel.  Easiest way to do that would be to use the pgbench
-f
  feature, which allows running an arbitrary number of some query at once.
   Seeing whether this acceleration continued to hold as the number of
clients
  increases is a useful data point.
 
  Is it possible for you to break down where the time is being spent?  For
  example, how much of this time is consumed in the GPU itself, compared
to
  time spent transferring data between CPU and GPU?  I'm also curious
where
  the bottleneck is at with this approach.  If it's the speed of the
PCI-E bus
  for smaller data sets, adding more GPUs may never be practical.  If the
bus
  can handle quite a few of these at once before it saturates, it might be
  possible to overload a single GPU.  That seems like it would be really
hard
  to reach for database sorting though; I can't really defend justify my
gut
  feel for that being true though.
 
 
   I've never seen a PostgreSQL server capable of running CUDA, and I
   don't expect that to change.
 
  That sounds like:
 
  I think there is a world market for maybe five computers.
  - IBM Chairman Thomas Watson, 1943
 
 
  Yes, and 640K will be enough for everyone, ha ha.  (Having said the
640K
  thing is flat out denied by Gates, BTW, and no one has come up with
proof
  otherwise).
 
  I think you've made an interesting case for this sort of acceleration
now
  being useful for systems doing what's typically considered a data
warehouse
  task.  I regularly see servers waiting for far more than 13M integers to
  sort.  And I am seeing a clear trend toward providing more PCI-E slots
in
  servers now.  Dell's R810 is the most popular single server model my
  customers have deployed in the last year, and it has 5 X8 slots in it.
 It's
  rare all 5 of those are filled.  As long as a dedicated GPU works fine
when
  dropped to X8 speeds, I know a fair number of systems where one of those
  could be added now.
 
  There's another data point in your favor I didn't notice before your
last
  e-mail.  Amazon has a Cluster GPU Quadruple Extra Large node type that
  runs with NVIDIA Tesla hardware.  That means the installed base of
people
  who could consider CUDA is higher than I expected.  To demonstrate how
much
  that costs, to provision a GPU enabled reserved instance from Amazon
for one
  year costs $2410 at Light Utilization, giving a system with 22GB of
RAM
  and 1.69GB of storage.  (I find the reserved prices easier to compare
with
  dedicated hardware than the hourly ones)  That's halfway between the
  High-Memory Double Extra Large Instance (34GB RAM/850GB disk) at $1100
and
  the High-Memory Quadruple Extra Large Instance (64GB RAM/1690GB disk) at
  $2200.  If someone could prove sorting was a bottleneck on their server,
  that isn't an unreasonable option to consider on a cloud-based database
  deployment.
 
  I still think that an approach based on OpenCL is more likely to be
suitable
  for PostgreSQL, which was part of why I gave CUDA low odds here.  The
points
  in favor of OpenCL are:
 
  -Since you last posted

Re: [HACKERS] CUDA Sorting

2012-02-13 Thread Gaetano Mendola
On Feb 13, 2012 7:49 p.m., Greg Stark <st...@mit.edu> wrote:

 I don't think we should be looking at either CUDA or OpenCL directly.
 We should be looking for a generic library that can target either and
 is well maintained and actively developed. Any GPU code we write
 ourselves would rapidly be overtaken by changes in the hardware and
 innovations in parallel algorithms. If we find a library that provides
 a sorting api and adapt our code to use it then we'll get the benefits
 of any new hardware feature as the library adds support for them.

To sort integers I used the Nvidia Thrust library.


Re: [HACKERS] CUDA Sorting

2012-02-12 Thread Gaetano Mendola

On 12/02/2012 13:13, Oleg Bartunov wrote:

I'm wondering if CUDA will win in geometry operations, for example,
testing point @ complex_polygon



I'm not sure if the algorithm you mentioned can be implemented in terms
of vector algebra, blas, etc.

There are plenty of geometry operations implemented in CUDA out there; my
field of CUDA application is not this one, so I'm not that deep into it.

However, I can point you to the official NVIDIA NPP library, which
provides vector algebra algorithms, and some geometry algorithms as well.

http://developer.download.nvidia.com/compute/DevZone/docs/html/CUDALibraries/doc/NPP_Library.pdf

(take a look around page 620).

Regards
Gaetano Mendola



Oleg
On Sun, 12 Feb 2012, Gaetano Mendola wrote:


On 19/09/2011 16:36, Greg Smith wrote:

On 09/19/2011 10:12 AM, Greg Stark wrote:

With the GPU I'm curious to see how well
it handles multiple processes contending for resources, it might be a
flashy feature that gets lots of attention but might not really be
very useful in practice. But it would be very interesting to see.


The main problem here is that the sort of hardware commonly used for
production database servers doesn't have any serious enough GPU to
support CUDA/OpenCL available. The very clear trend now is that all
systems other than gaming ones ship with motherboard graphics chipsets
more than powerful enough for any task but that. I just checked the 5
most popular configurations of server I see my customers deploy
PostgreSQL onto (a mix of Dell and HP units), and you don't get a
serious GPU from any of them.

Intel's next generation Ivy Bridge chipset, expected for the spring of
2012, is going to add support for OpenCL to the built-in motherboard
GPU. We may eventually see that trickle into the server hardware side of
things too.



The trend is to have server capable of running CUDA providing GPU via
external hardware (PCI Express interface with PCI Express switches),
look for example at PowerEdge C410x PCIe Expansion Chassis from DELL.

I did some experiments timing the sort done with CUDA and the sort
done with pg_qsort:

                        CUDA       pg_qsort
33 million integers:  ~ 900 ms,   ~ 6000 ms
 1 million integers:  ~  21 ms,   ~  162 ms
      100k integers:  ~   2 ms,   ~   13 ms

The CUDA time already includes the copy operations (host-device,
device-host).

As GPU I was using a C2050, and the CPU doing the pg_qsort was an
Intel(R) Xeon(R) CPU X5650 @ 2.67GHz.

Copy operations and kernel runs (the sort for instance) can run in
parallel, so while you are sorting one batch of data, you can copy the
next batch in parallel.

As you can see the boost is not negligible.

The next Nvidia hardware (Kepler family) is PCI Express 3 ready, so
expect the bottleneck of the device-host-device copies to have less
impact in the near future.

I strongly believe there is room to provide modern database engines
with a way to offload sorts to the GPU.


I've never seen a PostgreSQL server capable of running CUDA, and I
don't expect that to change.


That sounds like:

I think there is a world market for maybe five computers.
- IBM Chairman Thomas Watson, 1943

Regards
Gaetano Mendola





Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CUDA Sorting

2012-02-11 Thread Gaetano Mendola

On 19/09/2011 21:41, PostgreSQL - Hans-Jürgen Schönig wrote:


On Sep 19, 2011, at 5:16 PM, Tom Lane wrote:


Greg Stark <st...@mit.edu>  writes:

That said, to help in the case I described you would have to implement
the tapesort algorithm on the GPU as well.


I think the real problem would be that we are seldom sorting just the
key values.  If you have to push the tuples through the GPU too, your
savings are going to go up in smoke pretty quickly …




I would argue along a similar line.
To make GPU code fast it has to be pretty much tailored to do exactly one thing
- otherwise you have no chance to get anywhere close to card bandwidth.
If you look at two similar GPU codes which seem to do the same thing you
might easily see that one is 10 times faster than the other - for bloody reasons such as
memory alignment, memory transaction size or whatever.
This opens a bit of a problem: PostgreSQL sorting is so generic and so flexible
that I would be really surprised if somebody could come up with a solution
which really comes close to what the GPU can do.
it would definitely be interesting to see a prototype, however.


The Nvidia Thrust library provides the same sorting flexibility as
postgres does.


// generate 32M random numbers on the host
thrust::host_vector<int> h_vec(32 << 20);
thrust::generate(h_vec.begin(), h_vec.end(), rand);

// transfer data to the device
thrust::device_vector<int> d_vec = h_vec;

// sort data on the device (846M keys per second on GeForce GTX 480)
thrust::sort(d_vec.begin(), d_vec.end());

// transfer data back to host
thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());


As you can see, the type to be sorted is a template parameter, and
thrust::sort also has a version that takes the comparator to
use.

So compared with pg_qsort, thrust::sort gives you the same flexibility.

http://docs.thrust.googlecode.com/hg/group__sorting.html

Regards
Gaetano Mendola













Re: [HACKERS] CUDA Sorting

2012-02-11 Thread Gaetano Mendola

On 19/09/2011 16:36, Greg Smith wrote:

On 09/19/2011 10:12 AM, Greg Stark wrote:

With the GPU I'm curious to see how well
it handles multiple processes contending for resources, it might be a
flashy feature that gets lots of attention but might not really be
very useful in practice. But it would be very interesting to see.


The main problem here is that the sort of hardware commonly used for
production database servers doesn't have any serious enough GPU to
support CUDA/OpenCL available. The very clear trend now is that all
systems other than gaming ones ship with motherboard graphics chipsets
more than powerful enough for any task but that. I just checked the 5
most popular configurations of server I see my customers deploy
PostgreSQL onto (a mix of Dell and HP units), and you don't get a
serious GPU from any of them.

Intel's next generation Ivy Bridge chipset, expected for the spring of
2012, is going to add support for OpenCL to the built-in motherboard
GPU. We may eventually see that trickle into the server hardware side of
things too.



The trend is to have server capable of running CUDA providing GPU via 
external hardware (PCI Express interface with PCI Express switches), 
look for example at PowerEdge C410x PCIe Expansion Chassis from DELL.


I did some experiments timing the sort done with CUDA and the sort done
with pg_qsort:

                        CUDA       pg_qsort
33 million integers:  ~ 900 ms,   ~ 6000 ms
 1 million integers:  ~  21 ms,   ~  162 ms
      100k integers:  ~   2 ms,   ~   13 ms

The CUDA time already includes the copy operations (host-device, device-host).

As GPU I was using a C2050, and the CPU doing the pg_qsort was an
Intel(R) Xeon(R) CPU X5650  @ 2.67GHz.


Copy operations and kernel runs (the sort for instance) can run in 
parallel, so while you are sorting a batch of data, you can copy the 
next batch in parallel.


As you can see the boost is not negligible.

The next Nvidia hardware (Kepler family) is PCI Express 3 ready, so expect
the bottleneck of the device-host-device copies to have less impact
in the near future.


I strongly believe there is room to provide modern database engines
with a way to offload sorts to the GPU.

 I've never seen a PostgreSQL server capable of running CUDA, and I
 don't expect that to change.

That sounds like:

I think there is a world market for maybe five computers.
- IBM Chairman Thomas Watson, 1943

Regards
Gaetano Mendola




[HACKERS] bitfield and gcc

2012-02-10 Thread Gaetano Mendola
I wonder if somewhere in the Postgres source we are relying on GCC's
correct behaviour regarding the read-modify-write of bitfields in
structures.

Take a read at this https://lwn.net/Articles/478657/

sorry if this was already mentioned.


Regards
Gaetano Mendola



[HACKERS] Base64 decode/encode performance

2008-09-10 Thread Gaetano Mendola
Hi,
I have been experimenting with some base64 encoding/decoding implementations.

I found that the one at http://code.google.com/p/stringencoders is the best,
obtaining a 1.3x speedup vs. the postgres one.

Do you think it is worth submitting a patch that replaces the postgres base64
implementation with this one?


Regards
Gaetano Mendola



Re: [HACKERS] Not valid dump [8.2.9, 8.3.1]

2008-06-21 Thread Gaetano Mendola
On Fri, Jun 20, 2008 at 4:37 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Gaetano Mendola [EMAIL PROTECTED] writes:
  we have faced lately dumps not valid, the bug can be replicated using a
 8.2.9 or
  a 8.3.1 server.

  These are the steps to create the database that will generate a not valid
 dump:

 This is a bug in your function: it will not work if the search path
 doesn't contain the public schema.  You'd be best advised to make it
 qualify the reference to t_public explicitly.


Yes, that's the way we are fixing it. Still, it leaves a bitter taste
that one can create a working database instance that doesn't generate a
valid dump.

(Of course you realize that referencing any table at all in an
 immutable function is probably a mortal sin...)


Yes Tom, I know; in our case that table is a lookup table: no one updates,
deletes, or inserts data in it, so from my point of view it is as if I had
declared a static array inside the function declaration.

-- 
cpp-today.blogspot.com


[HACKERS] Not valid dump [8.2.9, 8.3.1]

2008-06-20 Thread Gaetano Mendola
Hi all,
lately we have been getting invalid dumps; the bug can be replicated using
an 8.2.9 or an 8.3.1 server.

These are the steps to create the database that will generate an invalid dump:

---
CREATE TABLE t_public (
a integer
);

CREATE OR REPLACE FUNCTION sp_public ( )
RETURNS INTEGER AS'
BEGIN
   PERFORM * FROM t_public LIMIT 1;
   RETURN 0;
END;
' LANGUAGE 'plpgsql'
IMMUTABLE;

CREATE SCHEMA my_schema;
CREATE TABLE my_schema.table_ref(x integer primary key);
CREATE TABLE my_schema.table_test(x integer references my_schema.table_ref (x));

CREATE INDEX idx ON my_schema.table_test (x) WHERE x = sp_public();


Briefly: in the public schema we have a function that uses a table.
In another schema we have a table with a foreign key to another table, and a
partial index that uses the function in the public schema.

The function is immutable because in our case the table being used inside the
function is a lookup table (read-only).

When the dump is restored the index idx is created but the foreign key is not.
This is the error we obtain during the restore:

psql:test.dump:143: ERROR:  relation "t_public" does not exist
CONTEXT:  SQL statement "SELECT  * FROM t_public LIMIT 1"
PL/pgSQL function "sp_public" line 2 at perform
SQL statement "SELECT fk.x FROM ONLY my_schema.table_test fk LEFT
OUTER JOIN ONLY my_schema.table_ref pk ON (pk.x=fk.x) WHERE pk.x
IS NULL AND (fk.x IS NOT NULL)"

Regards








Re: [HACKERS] new field content lost

2008-04-18 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 since long time I have implemented a materialized view, today I had to add a
 new field and I faced the following (I believe) bug.
 The bug can be replicated on a 8.2.7
 
 Cached plan for the function's UPDATE.  Should work okay in 8.3 ...
 

It does.

Regards
Gaetano Mendola



[HACKERS] new field content lost

2008-04-17 Thread Gaetano Mendola
Hi all,
I implemented a materialized view a long time ago; today I had to add a
new field and I ran into the following (I believe) bug.

The bug can be replicated on a 8.2.7


-- SETUP
create table test (a integer, b integer);
create table test_trigger (a integer);

CREATE OR REPLACE FUNCTION trigger_test()
RETURNS TRIGGER AS'
DECLARE

BEGIN
   update test set b = b*10 where a = NEW.a;

   RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE CONSTRAINT TRIGGER trigger_test
AFTER INSERT OR UPDATE ON test_trigger DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE trigger_test();

insert into test values (1,1),(2,2),(3,3);
insert into test_trigger values (1),(2),(3);


-- FROM CONNECTION A
update test_trigger set a=1 where a=1;

-- FROM CONNECTION B
alter table test add column c integer;
update test set c = 15;
select * from test;

-- FROM CONNECTION A
update test_trigger set a=2 where a=2;

--FROM CONNECTION B
select * from test;

you can see that the value c=15 for a=2 has been nullified



Regards
Gaetano




[HACKERS] /etc/init.d/postgresql status error

2008-04-16 Thread Gaetano Mendola
I have just installed the rpm found at:

http://www.postgresql.org/ftp/binary/v8.2.7/linux/rpms/redhat/rhel-4-i386/

and the status option generates an error:

$ /etc/init.d/postgresql status
pidof: invalid options on command line!

pidof: invalid options on command line!

-p is stopped


I was able to fix it in the following way:

  status)
#   status -p /var/run/postmaster.${PGPORT}.pid
status postmaster
script_result=$?
;;

I commented out the original line and replaced it with the one just below.

Regards
Gaetano Mendola




[HACKERS] Limit changes query plan

2008-02-01 Thread Gaetano Mendola

Hi all,
I'm using 8.2.6 and I'm observing a strange behaviour using
offset and limits.

These are the two queries that are puzzling me:

explain SELECT c.id, tsk, lir, nctr, nctn, ncts, rvel,ecp, pvcp, 
pvcc,pvcf,pvcl,ldcn,ogtd,sgti
FROM t_OA_2_00_card c JOIN t_OA_2_00_dt dt ON (dt.card_id = c.id)
WHERE ecp=18 AND _to = 1500 AND _from = 1550
ORDER BY nctr,nctn,ncts,rvel
offset 0 ;
                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Limit  (cost=175044.75..175071.04 rows=10518 width=90)
   ->  Sort  (cost=175044.75..175071.04 rows=10518 width=90)
         Sort Key: c.nctr, c.nctn, c.ncts, c.rvel
         ->  Hash Join  (cost=25830.72..174342.12 rows=10518 width=90)
               Hash Cond: (c.id = dt.card_id)
               ->  Bitmap Heap Scan on t_oa_2_00_card c  (cost=942.36..148457.19 rows=101872 width=90)
                     Recheck Cond: (ecp = 18)
                     ->  Bitmap Index Scan on i7_t_oa_2_00_card  (cost=0.00..916.89 rows=101872 width=0)
                           Index Cond: (ecp = 18)
               ->  Hash  (cost=22743.45..22743.45 rows=171593 width=8)
                     ->  Bitmap Heap Scan on t_oa_2_00_dt dt  (cost=2877.26..22743.45 rows=171593 width=8)
                           Recheck Cond: (_from = 1550)
                           Filter: (_to = 1500)
                           ->  Bitmap Index Scan on i_oa_2_00_dt_from  (cost=0.00..2834.36 rows=182546 width=0)
                                 Index Cond: (_from = 1550)


explain SELECT c.id, tsk, lir, nctr, nctn, ncts, rvel,ecp, pvcp, 
pvcc,pvcf,pvcl,ldcn,ogtd,sgti
FROM t_OA_2_00_card c JOIN t_OA_2_00_dt dt ON (dt.card_id = c.id)
WHERE ecp=18 AND _to = 1500 AND _from = 1550
ORDER BY nctr,nctn,ncts,rvel
offset 0 limit 5;
                                                      QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..2125.12 rows=5 width=90)
   ->  Nested Loop  (cost=0.00..4470402.02 rows=10518 width=90)
         ->  Index Scan using i_oa_2_00_card_keys on t_oa_2_00_card c  (cost=0.00..3927779.56 rows=101872 width=90)
               Filter: (ecp = 18)
         ->  Index Scan using i_oa_2_00_dt_for on t_oa_2_00_dt dt  (cost=0.00..5.31 rows=1 width=8)
               Index Cond: (dt.card_id = c.id)
               Filter: ((_to = 1500) AND (_from = 1550))


Using the limit I get an execution time of minutes vs. some seconds.

What am I missing here ?

Regards
Gaetano Mendola




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Limit changes query plan

2008-02-01 Thread Gaetano Mendola

Martijn van Oosterhout wrote:
 On Fri, Feb 01, 2008 at 12:08:56PM +0100, Gaetano Mendola wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi all,
 I'm using 8.2.6 and I'm observing a trange behaviour using
 offset and limits.
 
 Please post EXPLAIN ANALYZE output so we can see what's actually taking
 the time.

The analyze is still running (I launched it 30 mins ago), I'll post it as soon
I have it.

Disabling nested loops (set enable_nestloop = false), the query with the
limit now has the same execution time as without the limit.

I don't get why a limit is going to change the query plan and, most of
all, decrease the performance.


Regards
Gaetano Mendola



Re: [HACKERS] Limit changes query plan

2008-02-01 Thread Gaetano Mendola

Gregory Stark wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 
 I don't get why a limit is going to change the query plan and most of all 
 decreasing
 the performances.
 
 Until we see the explain analyze it won't be clear what exactly is going on.
 But in theory a LIMIT can definitely change the plan because the planner knows
 it won't need to generate all the rows to satisfy the LIMIT.
 
 In the plans you gave note that the plan for the unlimited query has a Sort so
 it has to produce all the records every time. The second query produces the
 records in order so if the LIMIT is satisfied quickly then it can save a lot
 of work.
 
 It's evidently guessing wrong about the limit being satisfied early. The
 non-indexed restrictions might be pruning out a lot more records than the
 planner expects. Or possibly the table is just full of dead records.
 

Here is the analyze result:


explain analyze SELECT c.id, tsk, lir, nctr, nctn, ncts, rvel,ecp, pvcp, 
pvcc,pvcf,pvcl,ldcn,ogtd,sgti FROM t_OA_2_00_card c JOIN t_OA_2_00_dt dt ON 
(dt.card_id = c.id) WHERE ecp=18 AND _to = 1500 AND _from = 1550 ORDER BY 
nctr,nctn,ncts,rvel offset 0 limit 5;

   
                                                                               QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..2125.12 rows=5 width=90) (actual time=3399923.424..3399960.174 rows=5 loops=1)
   ->  Nested Loop  (cost=0.00..4470402.02 rows=10518 width=90) (actual time=3399923.420..3399960.156 rows=5 loops=1)
         ->  Index Scan using i_oa_2_00_card_keys on t_oa_2_00_card c  (cost=0.00..3927779.56 rows=101872 width=90) (actual time=3399892.632..3399896.773 rows=50 loops=1)
               Filter: (ecp = 18)
         ->  Index Scan using i_oa_2_00_dt_for on t_oa_2_00_dt dt  (cost=0.00..5.31 rows=1 width=8) (actual time=1.264..1.264 rows=0 loops=50)
               Index Cond: (dt.card_id = c.id)
               Filter: ((_to = 1500) AND (_from = 1550))
 Total runtime: 3399960.277 ms


explain analyze SELECT c.id, tsk, lir, nctr, nctn, ncts, rvel,ecp, pvcp, 
pvcc,pvcf,pvcl,ldcn,ogtd,sgti FROM t_OA_2_00_card c JOIN t_OA_2_00_dt dt ON 
(dt.card_id = c.id) WHERE ecp=18 AND _to = 1500 AND _from = 1550 ORDER BY 
nctr,nctn,ncts,rvel offset 0 ;
                                                                   QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=175044.75..175071.04 rows=10518 width=90) (actual time=2425.138..2435.633 rows=3298 loops=1)
   ->  Sort  (cost=175044.75..175071.04 rows=10518 width=90) (actual time=2425.134..2428.812 rows=3298 loops=1)
         Sort Key: c.nctr, c.nctn, c.ncts, c.rvel
         ->  Hash Join  (cost=25830.72..174342.12 rows=10518 width=90) (actual time=797.540..2382.900 rows=3298 loops=1)
               Hash Cond: (c.id = dt.card_id)
               ->  Bitmap Heap Scan on t_oa_2_00_card c  (cost=942.36..148457.19 rows=101872 width=90) (actual time=70.212..1507.429 rows=97883 loops=1)
                     Recheck Cond: (ecp = 18)
                     ->  Bitmap Index Scan on i7_t_oa_2_00_card  (cost=0.00..916.89 rows=101872 width=0) (actual time=53.340..53.340 rows=97883 loops=1)
                           Index Cond: (ecp = 18)
               ->  Hash  (cost=22743.45..22743.45 rows=171593 width=8) (actual time=726.597..726.597 rows=89277 loops=1)
                     ->  Bitmap Heap Scan on t_oa_2_00_dt dt  (cost=2877.26..22743.45 rows=171593 width=8) (actual time=86.181..593.275 rows=89277 loops=1)
                           Recheck Cond: (_from = 1550)
                           Filter: (_to = 1500)
                           ->  Bitmap Index Scan on i_oa_2_00_dt_from  (cost=0.00..2834.36 rows=182546 width=0) (actual time=80.863..80.863 rows=201177 loops=1)
                                 Index Cond: (_from = 1550)
 Total runtime: 2440.396 ms



Regards
Gaetano Mendola



Re: [HACKERS] Limit changes query plan

2008-02-01 Thread Gaetano Mendola

Tom Lane wrote:
 Greg Stark [EMAIL PROTECTED] writes:
 ->  Index Scan using i_oa_2_00_dt_for on t_oa_2_00_dt dt  (cost=0.00..5.31
 rows=1 width=8) (actual time=1.264..1.264 rows=0 loops=50)
       Index Cond: (dt.card_id = c.id)
       Filter: ((_to = 1500) AND (_from = 1550))
 Total runtime: 3399960.277 ms
 
 Also, are 1500 and 1550 user-supplied parameters or are they part of a small 
 set of possible values? You could consider having a partial index on 
 card_id WHERE _to = 1500 AND _from = 1550. The numbers don't even have 
 to match exactly as long as they include all the records the query needs.
 
 That side of the join isn't where the problem is, though.
 
 If you're willing to invent new indexes, one on ecp,nctr,nctn,ncts,rvel
 would probably fix the performance issue very nicely.
 

As always you are right: after creating the index ivan btree (ecp, nctr,
nctn, ncts, rvel),

that query with the limit now responds in the blink of an eye:


 explain analyze SELECT c.id, tsk, lir, nctr, nctn, ncts, rvel,ecp, pvcp, 
 pvcc,pvcf,pvcl,ldcn,ogtd,sgti
FROM t_OA_2_00_card c JOIN t_OA_2_00_dt dt ON (dt.card_id = c.id)
WHERE ecp=18 AND _to = 1500 AND _from = 1550
ORDER BY nctr,nctn,ncts,rvel
offset 0 limit 5;
                                                                    QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..370.03 rows=5 width=90) (actual time=0.102..0.608 rows=5 loops=1)
   ->  Nested Loop  (cost=0.00..778392.80 rows=10518 width=90) (actual time=0.099..0.594 rows=5 loops=1)
         ->  Index Scan using ivan on t_oa_2_00_card c  (cost=0.00..235770.34 rows=101872 width=90) (actual time=0.024..0.134 rows=50 loops=1)
               Index Cond: (ecp = 18)
         ->  Index Scan using i_oa_2_00_dt_for on t_oa_2_00_dt dt  (cost=0.00..5.31 rows=1 width=8) (actual time=0.006..0.006 rows=0 loops=50)
               Index Cond: (dt.card_id = c.id)
               Filter: ((_to = 1500) AND (_from = 1550))
 Total runtime: 0.700 ms
(8 rows)


Regards
Gaetano Mendola



[HACKERS] Stats collector on rampage (8.2.3)

2007-10-29 Thread Gaetano Mendola

Hi all,
it seems that the stats collector on my box is using more CPU than
it did in the past.

This is what I'm observing:

CPU usage for the stat process: 25% flat

$ psql -c "select version()"
                                               version
------------------------------------------------------------------------------------------------------
 PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-3)
(1 row)


$ strace -tt -p 10773
[...]
09:47:37.867655 write(3, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 4096) = 4096
09:47:37.867738 write(3, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 823) = 823
09:47:37.867820 close(3)= 0
09:47:37.867862 munmap(0xb7ced000, 4096) = 0
09:47:37.867906 rename("global/pgstat.tmp", "global/pgstat.stat") = 0
09:47:37.868188 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.868245 recv(7, 
\1\0\0\0\320\3\0\0\rg\0\0\16\0\0\0\0\0\0\0\1\0\0\0\347..., 1000, 0) = 976
09:47:37.868317 setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={0, 500}}, 
NULL) = 0
09:47:37.868372 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.868428 recv(7, 
\1\0\0\0\320\3\0\0\rg\0\0\16\0\0\0\0\0\0\0\0\0\0\0g\n\0..., 1000, 0) = 976
09:47:37.868501 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.868559 recv(7, 
\1\0\0\0\320\3\0\0\rg\0\0\16\0\0\0\0\0\0\0\0\0\0\0i\n\0..., 1000, 0) = 976
09:47:37.868629 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.868687 recv(7, 
\1\0\0\0\4\3\0\0\rg\0\0\v\0\0\0\0\0\0\0\0\0\0\0^\n\0\0..., 1000, 0) = 772
09:47:37.868757 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.868815 recv(7, 
\1\0\0\0\240\0\0\0\0\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\354..., 1000, 0) = 160
09:47:37.868886 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.868944 recv(7, 
\1\0\0\0\240\0\0\0\rg\0\0\2\0\0\0\1\0\0\0\0\0\0\0\n\0..., 1000, 0) = 160
09:47:37.869012 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.869070 recv(7, 
\1\0\0\0l\1\0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\0\354\4\0..., 1000, 0) = 364
09:47:37.869141 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.869198 recv(7, 
\1\0\0\0\300\2\0\0\rg\0\0\n\0\0\0\1\0\0\0\0\0\0\0007\n..., 1000, 0) = 704
09:47:37.869267 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.869329 recv(7, 
\1\0\0\0(\1\0\0\rg\0\0\4\0\0\0\1\0\0\0\0\0\0\0/\n\0\0\0..., 1000, 0) = 296
09:47:37.869398 poll([{fd=7, events=POLLIN|POLLERR, revents=POLLIN}], 1, 2000) 
= 1
09:47:37.869456 recv(7, 
\1\0\0\0\320\3\0\0\rg\0\0\16\0\0\0\0\0\0\0\1\0\0\0\347..., 1000, 0) = 976
09:47:37.869524 --- SIGALRM (Alarm clock) @ 0 (0) ---
09:47:37.869575 sigreturn() = ? (mask now [])
09:47:37.869659 getppid()   = 10768
09:47:37.869702 open("global/pgstat.tmp", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE,
0666) = 3
09:47:37.869775 fstat64(3, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
09:47:37.869871 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7ced000
09:47:37.869928 write(3, 
\226\274\245\1D\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 4096) = 4096
09:47:37.870252 write(3, 
;\n\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 4096) = 4096
[...]

and doing the statistics on the system calls:


$ time strace -c -p 10773
Process 10773 attached - interrupt to quit
Process 10773 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 62.70    4.980721          16    307851           write
 25.72    2.043299         156     13058      3039 poll
  9.61    0.763046         248      3078           rename
  0.64    0.050992          17      3079           open
  0.49    0.038819           4     10019           recv
  0.26    0.020469           7      3078           munmap
  0.13    0.010344           3      3078           close
  0.12    0.009425           3      3079           mmap2
  0.11    0.008353           3      3079           setitimer
  0.09    0.007114           2      3079      3039 sigreturn
  0.07    0.005923           2      3079           fstat64
  0.06    0.004734           2      3079           getppid
------ ----------- ----------- --------- --------- ----------------
100.00    7.943239                358636      6078 total

real0m16.313s
user0m1.428s
sys 0m3.802s


so instead of 32 or so renames it did about 3K renames in those 16 seconds.

To solve the problem, is it possible to kill that process? (Will it be
respawned?)


Regards
Gaetano Mendola












Re: [HACKERS] Stats collector on rampage (8.2.3)

2007-10-29 Thread Gaetano Mendola
hubert depesz lubaczewski wrote:
  On Mon, Oct 29, 2007 at 09:52:24AM +0100, Gaetano Mendola wrote:
  it seems that the stats collector on my box is using more CPU than
  it did in the past.
 
  it's well known bug, and it was fixed in 8.2.4:
  http://www.postgresql.org/docs/current/interactive/release-8-2-4.html
  ...
  Prevent the statistics collector from writing to disk too frequently
  (Tom)
  ...

I saw that; upgrading the DB at this very moment is not doable. If I kill
that process, will the postmaster respawn another one? BTW I discovered
that it was triggered by the time change due to daylight saving.

Regards
Gaetano Mendola




Re: [HACKERS] Stats collector on rampage (8.2.3)

2007-10-29 Thread Gaetano Mendola
Andrew Dunstan wrote:
 
 
 Gaetano Mendola wrote:
 hubert depesz lubaczewski wrote:
  
 On Mon, Oct 29, 2007 at 09:52:24AM +0100, Gaetano Mendola wrote:
  
 it seems that the stats collector on my box is using more CPU than
 it did in the past.
   
 it's well known bug, and it was fixed in 8.2.4:
 http://www.postgresql.org/docs/current/interactive/release-8-2-4.html
 ...
 Prevent the statistics collector from writing to disk too frequently
 (Tom)
 ...
   

 I saw that, upgrading the DB at this very moment is not doable, killing
 that process will the postmaster respawn another one? BTW I discover that
 it was triggered by the time change due the daylight saving.


   
 
 you do realize that this upgrade wouldn't require a dump/restore, don't
 you? It would be close to instantaneous.

Sure I do; for me it's even easier than that. I use DRBD, so I can just shut
down one node, upgrade it, and then upgrade the other one. However, upgrading the
DB means for me touching an entire satellite transmission platform; I have to
schedule a platform maintenance window for that...

Regards
Gaetano Mendola



Re: [HACKERS] Stats collector on rampage (8.2.3)

2007-10-29 Thread Gaetano Mendola
Magnus Hagander wrote:
 Gaetano Mendola wrote:
 hubert depesz lubaczewski wrote:
 On Mon, Oct 29, 2007 at 09:52:24AM +0100, Gaetano Mendola wrote:
 it seems that the stats collector on my box is using more CPU than
 it did in the past.
 it's well known bug, and it was fixed in 8.2.4:
 http://www.postgresql.org/docs/current/interactive/release-8-2-4.html
 ...
 Prevent the statistics collector from writing to disk too frequently
 (Tom)
 ...
 I saw that, upgrading the DB at this very moment is not doable, killing
 that process will the postmaster respawn another one? BTW I discover that
 it was triggered by the time change due the daylight saving.
 
 IIRC, it will. You need to change postgresql.conf and disable the stats
 collector. If you do that, it won't be started.
 
 Shouldn't be trigged by DST.

The high CPU usage started at that time; maybe it is another problem, with some
applications stuck on it. I will investigate further.

Gaetano
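Magnus's suggestion amounts to a one-line postgresql.conf change; a sketch,
assuming the 8.2-era parameter name (it was removed in later releases, so check
the documentation for your version):

```ini
# postgresql.conf -- stop the statistics collector from being started
stats_start_collector = off
```

A reload is not enough for this setting; the postmaster must be restarted.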



[HACKERS] Lock table in non-volatile functions

2007-03-16 Thread Gaetano Mendola
Hi all,
I'm observing that it is not allowed to LOCK a table in a
STABLE/IMMUTABLE function, but at the same time a SELECT FOR UPDATE
is allowed.

Is that normal?







Re: [HACKERS] Lock table in non-volatile functions

2007-03-16 Thread Gaetano Mendola

Tom Lane wrote:

Gaetano Mendola [EMAIL PROTECTED] writes:

I'm observing that is not allowed to LOCK a table in a
STABLE/IMMUTABLE function but at same time is allowed
a SELECT FOR UPDATE.


Really?  AFAICS, CommandIsReadOnly() will reject SELECT FOR UPDATE too.


kalman=# select version();
version

 PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 
4.1.1 20070105 (Red Hat 4.1.1-51)

(1 row)

kalman=# CREATE TABLE test( a integer );
CREATE TABLE
kalman=# INSERT INTO test VALUES ( 1 );
INSERT 0 1
kalman=# CREATE OR REPLACE FUNCTION sp_test()
kalman-# RETURNS INTEGER AS $body$
kalman$# DECLARE
kalman$#  my_integer integer;
kalman$#  my_port_set RECORD;
kalman$# BEGIN
kalman$#  FOR my_port_set IN
kalman$#  SELECT a
kalman$#  FROM test
kalman$#  FOR UPDATE
kalman$#  LOOP
kalman$#  my_integer = 0;
kalman$#  END LOOP;
kalman$# RETURN 0;
kalman$# END;
kalman$# $body$ language 'plpgsql'
kalman-# STABLE;
CREATE FUNCTION
kalman=# select sp_test();
 sp_test
-
   0
(1 row)

BTW, why forbid the LOCK in a non-volatile function, or (if you fix this)
the SELECT FOR UPDATE?


Regards
Gaetano Mendola
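For comparison, a minimal sketch of the LOCK variant that gets rejected
(assuming the same test table as above; the exact error text may differ by
version):

```sql
CREATE OR REPLACE FUNCTION sp_lock_test()
RETURNS INTEGER AS $body$
BEGIN
    -- LOCK TABLE is refused inside a non-volatile function,
    -- while the SELECT ... FOR UPDATE version above is accepted.
    LOCK TABLE test IN SHARE MODE;
    RETURN 0;
END;
$body$ LANGUAGE 'plpgsql' STABLE;

SELECT sp_lock_test();  -- errors out instead of taking the lock
```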



Re: [HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-15 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 I can immagine a case when a lower module exports a view to upper layer 
 stating
 the interface as list of fields:
 
 first_name, last_name, 
 
 with an *hidden* field that is a function call that updates the statistics on
 how many time a given record was selected, then this technique can not be 
 used
 anymore starting with 8.2.x.
 
 You're living in a dream world if you think that works reliably in *any*
 version of Postgres.  But for starters, what is your definition of
 selected --- pulled from the physical table?  Accumulated into an
 aggregate?  Delivered as a recognizable row to the client?  Delivered N
 times to the client due to joining N times to some other table?

Well, that was not a good example. I don't have any problem marking, from now
on, all my functions as stable/immutable (the ones I use in views), but I still
believe it is a source of bad performance to evaluate a function on discarded rows,
and at the same time this breaks the principle of least surprise.

Regards
Gaetano Mendola




Re: [HACKERS] Backend crash in 8.2.3 with plpgsql function

2007-03-15 Thread Gaetano Mendola
Tom Lane wrote:
 SM [EMAIL PROTECTED] writes:
 I got a backend crash in Postgresql 8.2.3 with the plpgsql function.
 The following statement in psql causes a signal 11:
 psql=# create function testpl() returns void as 'begin return; 
 end;'language 'plpgsql';
 
 Worksforme ...

For me too.

$ psql
Welcome to psql 8.2.3, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

kalman=# create function testpl() returns void as 'begin return; end;'language 
'plpgsql';
CREATE FUNCTION
kalman=# select version();
version
- 

 PostgreSQL 8.2.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.1 
20070105 (Red Hat 4.1.1-51)
(1 row)




Regards
Gaetano Mendola



Re: [HACKERS] UPDATE using sub selects

2007-03-15 Thread Gaetano Mendola
NikhilS wrote:

 I have tried some update-subselect variations and they seem to work. For
 example the case in the src/test/regress/sql/update.sql, which used to
 fail till now, seems to work:
 
 UPDATE update_test SET (a,b) = (select a,b FROM update_test where c =
 'foo')
   WHERE a = 10;

What's the expected result if the subselect returns more than one tuple?
I expect no update at all in case of an empty result set; is this the case?



Regards
Gaetano Mendola



Re: [HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-09 Thread Gaetano Mendola
Gaetano Mendola wrote:
 Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 [ 8.2 evaluates volatile functions in the targetlist of a view ]
 If I mark the function as STABLE or IMMUTABLE then even with version
 8.2 the function is not evaluated. Is this the intended behavior?
 Yes; people complained that we needed to be more careful about the
 number of times volatile functions get evaluated.
 
 I suspect that functions are evaluated also for record discarded due to
 joins. Is that the case?
 
 Like:
 
 SELECT * FROM ta JOIN tb USING (id) where tb.foo = 4;
 
 If ta is a view with some calculated fields are the function on ta
 evaluated only for record matching the filters or in some case (
 like a full scan on ta ) also for the records discarded due to the join?
 

I did a check on 8.2 and I can confirm my suspicions:

kalman=#  create table ta ( a integer, b integer );
CREATE TABLE
kalman=#  create table tb ( b integer, c integer );
CREATE TABLE
kalman=#
kalman=#  CREATE OR REPLACE FUNCTION sp_test ( INTEGER )
kalman-#  RETURNS INTEGER AS'
kalman'#  DECLARE
kalman'# a_idALIAS FOR $1;
kalman'#  BEGIN
kalman'# RAISE NOTICE ''here'';
kalman'#
kalman'# return 3;
kalman'#  END;
kalman'#  ' LANGUAGE 'plpgsql';
CREATE FUNCTION
kalman=#
kalman=#  CREATE OR REPLACE VIEW v_ta AS
kalman-#SELECT
kalman-#   sp_test(a) AS a,
kalman-#   b  AS b
kalman-#FROM
kalman-#   ta c
kalman-#  ;
CREATE VIEW
kalman=#
kalman=#  insert into ta values (2,3);
INSERT 0 1
kalman=#  insert into ta values (3,4);
INSERT 0 1
kalman=#  insert into tb values (4,5);
INSERT 0 1
kalman=#
kalman=#  select * from v_ta join tb using (b) where c = 5;
NOTICE:  here
NOTICE:  here
 b | a | c
 ---+---+---
 4 | 3 | 5
(1 row)


Is this really what we want? I did a migration 8.0.x => 8.2.3 and in the first
hour of service I had a lot of queries blocked due to this; consider that in my
case v_ta has millions of records and that join usually extracts 1 row. Is there
a way to disable this behaviour until I have checked all of my huge schema?

Regards
Gaetano Mendola






Re: [HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-09 Thread Gaetano Mendola
Martijn van Oosterhout wrote:
 On Fri, Mar 09, 2007 at 10:59:56AM +0100, Gaetano Mendola wrote:
 Is really this what we want? I did a migration 8.0.x = 8.2.3 and I had on 
 first hour of service up
 lot of queries blocked due to this, consider in my case I have on v_ta 
 milions of records and usually
 that join extracts 1 row. Is there a way to set till I don't check all my 
 huge schema to disable this
 behaviour?
 
 Most people figured it was a improvment. It's configured per function
 now, which wasn't the case before. I dont't think there was ever any
 discussion about having a global switch.

Well, it's not an improvement in terms of performance but a performance
degradation in the best case, and in the worst case it can be devastating:


create table ta ( a integer, b integer );
CREATE TABLE
create table tb ( b integer, c integer );
CREATE TABLE

CREATE OR REPLACE FUNCTION sp_delete_selected_row ( INTEGER )
RETURNS INTEGER AS'
DECLARE
   a_idALIAS FOR $1;
BEGIN
   DELETE FROM ta where a = a_id;
   return 0;
END;
' LANGUAGE 'plpgsql';
CREATE FUNCTION

CREATE OR REPLACE VIEW v_ta AS
  SELECT
 sp_delete_selected_row(a) AS a,
 b AS b
  FROM
 ta
;
CREATE VIEW

insert into ta values (2,3);
INSERT 0 1
insert into ta values (3,4);
INSERT 0 1
insert into tb values (4,5);
INSERT 0 1

select * from v_ta join tb using (b) where c = 5;

 b | a | c
---+---+---
 4 | 0 | 5
(1 row)

select * from ta;
 a | b
---+---
(0 rows)


All rows are gone instead of only the one extracted by that query. IMHO this is
an undesired side effect.
In my case I destroyed my application statistics on how many times a certain row
was extracted.


Regards
Gaetano Mendola
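One way to avoid the trap, sketched under the assumption that the side effect
really is wanted only for the rows the query returns (v_ta_safe and the
top-level call are illustrative, not from the original posts): keep the
volatile call out of the view and apply it explicitly to the final row set.

```sql
-- The view exposes plain columns only; no hidden side effects.
CREATE OR REPLACE VIEW v_ta_safe AS
  SELECT a, b FROM ta;

-- The volatile function is invoked at the top level, so it runs
-- against the rows actually produced by the join and filter.
SELECT sp_delete_selected_row(a) AS a, b
FROM v_ta_safe JOIN tb USING (b)
WHERE c = 5;
```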







Re: [HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-09 Thread Gaetano Mendola
Florian G. Pflug wrote:
 Gaetano Mendola wrote:
 Martijn van Oosterhout wrote:
 On Fri, Mar 09, 2007 at 10:59:56AM +0100, Gaetano Mendola wrote:
 Is really this what we want? I did a migration 8.0.x = 8.2.3 and I
 had on first hour of service up
 lot of queries blocked due to this, consider in my case I have on
 v_ta milions of records and usually
 that join extracts 1 row. Is there a way to set till I don't check
 all my huge schema to disable this
 behaviour?
 Most people figured it was a improvment. It's configured per function
 now, which wasn't the case before. I dont't think there was ever any
 discussion about having a global switch.

 Well it's not an improvement in term of performances but a performance
 degradation in the best case and
 in the worst can be devastating:

 create table ta ( a integer, b integer );
 CREATE TABLE
 create table tb ( b integer, c integer );
 CREATE TABLE

 CREATE OR REPLACE FUNCTION sp_delete_selected_row ( INTEGER )
 RETURNS INTEGER AS'
 DECLARE
a_idALIAS FOR $1;
 BEGIN
DELETE FROM ta where a = a_id;
return 0;
 END;
 ' LANGUAGE 'plpgsql';
 CREATE FUNCTION

 CREATE OR REPLACE VIEW v_ta AS
   SELECT
  sp_delete_selected_row(a) AS a,
  b AS b
   FROM
  ta
 ;
 CREATE VIEW

 insert into ta values (2,3);
 INSERT 0 1
 insert into ta values (3,4);
 INSERT 0 1
 insert into tb values (4,5);
 INSERT 0 1

 select * from v_ta join tb using (b) where c = 5;

  b | a | c
 ---+---+---
  4 | 0 | 5
 (1 row)

 select * from ta;
  a | b
 ---+---
 (0 rows)


 All rows are gone instead of the only one extracted from that query.
 IMHO is a undesired side effect.
 In my case I destroyed my application statistics on how many time a
 certain row was extracted.
 
 This is insane. Whoever creates a view like that on a production system
 should *immediatly* be carried away from his keyboard, to prevent
 further damage. Imagine someone using View Data on  this view in
 pgadmin.. I don't wanna be near him when he clicks Refresh, and
 suddenly all data is gone...
 
 Maybe calling volatile functions in selects and views should be
 forbidden entirely, except for volatile functions in the top-level
 select clause,
 to support things like select ..., nextval('seq') from 
 
 But it's probably not worth the effort - there will always be creative
 ways to shoot yourself into your foot.

I fully agree with this; that was just an extreme example of a hidden undesired
call. In my framework I don't have, by coding rule, any function with side effects
applied to view fields; however, I had some functions not marked correctly as
STABLE (mea culpa) that degraded performance until I realized what was going
on. I'm of the opinion that it is not sane to call a function not marked as
stable/immutable for a discarded column (I can in some way accept this), and
most of all on discarded rows.

Regards
Gaetano Mendola


Re: [HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-09 Thread Gaetano Mendola
Florian G. Pflug wrote:
 Martijn van Oosterhout wrote:
 On Fri, Mar 09, 2007 at 10:59:56AM +0100, Gaetano Mendola wrote:
 Is really this what we want? I did a migration 8.0.x = 8.2.3 and I
 had on first hour of service up
 lot of queries blocked due to this, consider in my case I have on
 v_ta milions of records and usually
 that join extracts 1 row. Is there a way to set till I don't check
 all my huge schema to disable this
 behaviour?

 Most people figured it was a improvment. It's configured per function
 now, which wasn't the case before. I dont't think there was ever any
 discussion about having a global switch.
 
 If your function is already marked immutable or stable, then nothing
 changes for you. If you *did* call volatile functions inside your
 select, then you now get
 consistens behaviour. Since you don't want your function to be evaluated
 in all
 cases, I assume that it shouldn't be marked volatile in the first place.

Well, some functions are volatile and cannot be marked as stable.

We develop our applications by layering the modules; we didn't have any reason
to forbid, as a coding rule, putting function calls on views in low-level layers.
After all, views are also there to build up your schema by layering the information.

I can imagine a case where a lower module exports a view to an upper layer, stating
the interface as a list of fields:

first_name, last_name, 

with a *hidden* field that is a function call that updates the statistics on
how many times a given record was selected; this technique can no longer be used
starting with 8.2.x.

The above is not my case, but it is a possible scenario (I admit, not a sane
one).

Regards
Gaetano Mendola



Re: [HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-08 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 [ 8.2 evaluates volatile functions in the targetlist of a view ]
 If I mark the function as STABLE or IMMUTABLE then even with version
 8.2 the function is not evaluated. Is this the intended behavior?
 
 Yes; people complained that we needed to be more careful about the
 number of times volatile functions get evaluated.

I suspect that functions are also evaluated for records discarded due to
joins. Is that the case?

Like:

SELECT * FROM ta JOIN tb USING (id) where tb.foo = 4;

If ta is a view with some calculated fields, are the functions on ta
evaluated only for records matching the filters, or in some cases
(like a full scan on ta) also for the records discarded due to the join?


Regards
Gaetano Mendola



[HACKERS] Calculated view fields (8.1 != 8.2)

2007-03-06 Thread Gaetano Mendola
Hi all,
I'm observing this behavior in 8.2:


kalman=# create table test ( a integer, b integer );
CREATE TABLE
kalman=#
kalman=# CREATE OR REPLACE FUNCTION sp_test ( INTEGER )
kalman-# RETURNS INTEGER AS'
kalman'# DECLARE
kalman'#a_id_contractALIAS FOR $1;
kalman'# BEGIN
kalman'#RAISE NOTICE ''here'';
kalman'#
kalman'#return 3;
kalman'# END;
kalman'# ' LANGUAGE 'plpgsql';
CREATE FUNCTION
kalman=#
kalman=#
kalman=# CREATE OR REPLACE VIEW v_test AS
kalman-#   SELECT
kalman-#  a AS a,
kalman-#  sp_test(b) AS b
kalman-#   FROM
kalman-#  test c
kalman-# ;
CREATE VIEW
kalman=#
kalman=# insert into test values (2,3);
INSERT 0 1
kalman=#
kalman=# select * from v_test;
NOTICE:  here
 a | b
 ---+---
 2 | 3
(1 row)

kalman=# select a from v_test;
NOTICE:  here
 a
 ---
 2
(1 row)


In version 8.1 the function sp_test is not evaluated in the case of
select a from v_test.

If I mark the function as STABLE or IMMUTABLE then even with version 8.2 the
function is not evaluated. Is this the intended behavior? I didn't see anything
about it in the release notes.


Regards
Gaetano Mendola
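If the function is in fact safe to mark, the fix should not require recreating
it; a sketch, assuming your server's ALTER FUNCTION accepts a volatility clause
(verify against the documentation for your release, otherwise re-issue the
CREATE OR REPLACE FUNCTION with the keyword added):

```sql
-- Mark the view's function STABLE so 8.2 stops evaluating it for
-- columns the outer query never references.
ALTER FUNCTION sp_test(integer) STABLE;
```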








Re: [HACKERS] checking on buildfarm member thrush

2006-06-23 Thread Gaetano Mendola
Tom Lane wrote:

 Could you get a gdb stack trace from that crash?  If the buildfarm
 run is under a suitable ulimit, it should be leaving a core file
 in the test PGDATA directory.

Unfortunately the core size for the user pgfarm is 0:

$ulimit -c
0

However, I did a configure, make, and make check on the regression tests
and I got two cores; maybe I'm doing something wrong, but I'm not able
to get any information.

Configured with:

./configure --prefix=/home/pgfarm/HEAD/inst --enable-cassert --enable-debug 
--enable-nls --enable-integer-datetimes --with-perl --with-python --with-tcl


$ gdb tmp_check/install/home/pgfarm/HEAD/inst/bin/postgres 
./tmp_check/data/core.6516
GNU gdb Red Hat Linux (5.3.90-0.20030710.41rh)
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i386-redhat-linux-gnu...Using host libthread_db 
library /lib/tls/libthread_db.so.1.


warning: core file may not match specified executable file.
Core was generated by `postgres: pgfarm regression [local] COMMENT  
 '.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /lib/libcrypt.so.1...done.
Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/tls/libm.so.6...done.
Loaded symbols for /lib/tls/libm.so.6
Reading symbols from /lib/tls/libc.so.6...done.
Loaded symbols for /lib/tls/libc.so.6
Reading symbols from /lib/ld-linux.so.2...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
#0  0x0005 in ?? ()
(gdb) bt
#0  0x0005 in ?? ()
#1  0x4000 in ?? ()




Anything else I can do?






Re: [HACKERS] checking on buildfarm member thrush

2006-06-22 Thread Gaetano Mendola
Tom Lane wrote:
 I'm trying to determine why thrush has been failing on PG CVS HEAD for
 the past few days.  Could you try running the attached program on that
 machine, and see what it prints?  I suspect it will dump core :-(
 
 Note: you might need to use -D_GNU_SOURCE to get it to compile at all.
 
   regards, tom lane
 
 
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <errno.h>
 #include <fcntl.h>
 
 int
 main()
 {
     if (posix_fadvise(fileno(stdin), 0, 0, POSIX_FADV_DONTNEED))
         printf("failed: %s\n", strerror(errno));
     else
         printf("OK\n");
     return 0;
 }
 


$ gcc -D_GNU_SOURCE tom.c  -o tom
$ ./tom
failed: Success


Regards
Gaetano Mendola








[HACKERS] RH9 postgresql 8.0.7 rpm

2006-04-11 Thread Gaetano Mendola
Hi all,
I'm trying to build the rpms for RH9,
I downloaded the srpm for RH9 but I'm stuck on these errors:

Attempt a:

# rpmbuild --rebuild postgresql-8.0.7-1PGDG.src.rpm
Installing postgresql-8.0.7-1PGDG.src.rpm
error: Failed build dependencies:
tcl-devel is needed by postgresql-8.0.7-1PGDG

Why tcl-devel for the RH9 version? (tcl-devel doesn't exist on RH9.)

Attempt b:
# rpmbuild --nodeps --rebuild postgresql-8.0.7-1PGDG.src.rpm

checking krb5.h presence... no
checking for krb5.h... no
configure: error: header file krb5.h is required for Kerberos 5
error: Bad exit status from /var/tmp/rpm-tmp.73067 (%build)


ok no kerberos now:

Attempt c:
# rpmbuild --nodeps  --rebuild --define 'kerberos 0' 
postgresql-8.0.7-1PGDG.src.rpm
.
checking for zlib.h... yes
checking openssl/ssl.h usability... no
checking openssl/ssl.h presence... no
checking for openssl/ssl.h... no
configure: error: header file openssl/ssl.h is required for OpenSSL
error: Bad exit status from /var/tmp/rpm-tmp.3109 (%build)

Actually, I have that file:

# locate openssl/ssl.h
/usr/include/openssl/ssl.h


Can someone help me with this?


Regards
Gaetano Mendola




[HACKERS] Missing files on Postgres8.0.4 Win32 Installation

2005-10-17 Thread Gaetano Mendola
Hi all,
I installed Postgres 8.0.4 on a Win32 box and found out that
libpq-fe.h and libpqdll.lib are missing.

Is that normal?


Regards
Gaetano Mendola







[HACKERS] Outer where pushed down

2005-10-06 Thread Gaetano Mendola
Hi all,

consider this view:

CREATE OR REPLACE VIEW v_current_connection AS
SELECT ul.id_user
FROM   user_login ul,
   current_connection cc
WHERE ul.id_user = cc.id_user;


And this is the explain on a usage of that view:

# explain select * from v_current_connection_test where 
sp_connected_test(id_user) = FALSE;
QUERY PLAN
--
 Hash Join  (cost=42.79..1325.14 rows=451 width=5)
   Hash Cond: (outer.id_user = inner.id_user)
   -  Seq Scan on user_login ul  (cost=0.00..1142.72 rows=27024 width=4)
 Filter: (sp_connected_test(id_user) = false)
   -  Hash  (cost=40.49..40.49 rows=919 width=5)
 -  Index Scan using idx_connected on current_connection cc  
(cost=0.00..40.49 rows=919 width=5)
   Index Cond: (connected = true)
(7 rows)

Apart from my initial surprise at seeing that function applied to rows not
returned by the view (Tom Lane explained to me that the planner is able to push
down the outer condition), why doesn't Postgres apply that function to the table
current_connection, given the fact that only 919 rows are estimated versus 27024?


redefining the view:

CREATE OR REPLACE VIEW v_current_connection AS
SELECT cc.id_user
FROM   user_login ul,
   current_connection cc
WHERE ul.id_user = cc.id_user;

then I obtain the desired plan.


# explain select * from v_current_connection_test where 
sp_connected_test(id_user ) = FALSE;
QUERY PLAN
--
 Hash Join  (cost=46.23..1193.47 rows=452 width=5)
   Hash Cond: (outer.id_user = inner.id_user)
   -  Seq Scan on user_login ul  (cost=0.00..872.48 rows=54048 width=4)
   -  Hash  (cost=45.08..45.08 rows=460 width=5)
 -  Index Scan using idx_connected on current_connection cc  
(cost=0.00..45.08 rows=460 width=5)
   Index Cond: (connected = true)
   Filter: (sp_connected_test(id_user) = false)
(7 rows)



Is it not possible in any way to push Postgres to apply that function to the
right table? Shall I rewrite the views, figuring out which column is better
to expose?



Regards
Gaetano Mendola


Re: [HACKERS] Outer where pushed down

2005-10-06 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 CREATE OR REPLACE VIEW v_current_connection AS
 SELECT ul.id_user
 FROM   user_login ul,
current_connection cc
 WHERE ul.id_user = cc.id_user;
 
 # explain select * from v_current_connection_test where 
 sp_connected_test(id_user) = FALSE;
 
 why postgres doesn't apply that function at table current_connection given 
 the fact are extimated
 only 919 vs 27024 rows?
 
 Because the condition is on a field of the other table.
 
 You seem to wish that the planner would use ul.id_user = cc.id_user
 to decide that sp_connected_test(ul.id_user) can be rewritten as
 sp_connected_test(cc.id_user), but in general this is not safe.
 The planner has little idea of what the datatype-specific semantics
 of equality are, and none whatsoever what the semantics of your
 function are.  As a real-world example: IEEE-standard floating
 point math considers that +0 and -0 are different bit patterns.
 They compare as equal, but it's very easy to come up with user-defined
 functions that will yield different results for the two inputs.
 So the proposed transformation is definitely unsafe for float8.

And what about defining, for each type, when this is safe and letting the
planner make its best choice? Rewriting that view, the execution time went
from 4 secs to 1 sec; that is not bad if the planner can do it autonomously.
In this very example I can decide whether it's better to expose one column or
the other, but in other cases not...


Regards
Gaetano Mendola
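Tom's float8 point can be made concrete in psql (a sketch: float8send exposes
the raw bit pattern, so the two zeros compare equal yet remain distinguishable
to a user-defined function):

```sql
-- The two zeros are "equal" ...
SELECT '0'::float8 = '-0'::float8;

-- ... but they are different bit patterns, so f(+0) need not
-- equal f(-0), and the rewrite across the equality is unsafe.
SELECT float8send('0'::float8),
       float8send('-0'::float8);
```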




Re: [HACKERS] Vacuum and Transactions

2005-10-05 Thread Gaetano Mendola
Rod Taylor wrote:
 I have maintenace_work_mem set to about 1GB in size.

Isn't that a bit too much?


Regards
Gaetano Mendola





[HACKERS] wrong optimization ( postgres 8.0.3 )

2005-10-05 Thread Gaetano Mendola
Hi all,
take a look at this simple function and view:

CREATE OR REPLACE FUNCTION sp_connected_test ( INTEGER )
RETURNS BOOLEAN AS'
DECLARE
   a_id_user ALIAS FOR $1;
BEGIN
   PERFORM *
   FROM v_current_connection
   WHERE id_user = a_id_user;

   IF NOT FOUND THEN
  RETURN FALSE;
   END IF;

   RETURN TRUE;

END;
' LANGUAGE 'plpgsql';

CREATE VIEW v_current_connection_test
AS SELECT ul.id_user, cc.connected
   FROM current_connection cc,
user_login ul
   WHERE cc.id_user = ul.id_user AND
 connected = TRUE;


SELECT * FROM v_current_connection_test WHERE sp_connected_test(id_user) = 
FALSE;


This line should produce no rows, but randomly does.

If I put a RAISE NOTICE with the id_user before the RETURN, I notice that
the function is called on records present in user_login but discarded because
the join with current_connection has connected = FALSE!

I can work around the problem by rewriting the view:

CREATE VIEW v_current_connection_test
AS SELECT cc.id_user, cc.connected
   FROM current_connection cc,
user_login ul
   WHERE cc.id_user = ul.id_user AND
 connected = TRUE;


Regards
Gaetano Mendola


Re: [HACKERS] wrong optimization ( postgres 8.0.3 )

2005-10-05 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 What I'm experiencing is a problem ( I upgraded today from
 7.4.x to 8.0.3 ) that I explain here:
 
 The following function just return how many records there
 are inside the view v_current_connection
 
 CREATE OR REPLACE FUNCTION sp_count ( )
 RETURNS INTEGER AS'
 DECLARE
c INTEGER;
 BEGIN
SELECT count(*) INTO c FROM v_current_connection;
RETURN c;
 END;
 ' LANGUAGE 'plpgsql';
 
 I have the following select
 
 # select count(*), sp_count() from v_current_connection;
  count | sp_count
 ---+--
977 |  978
 
 as you can see the two count are returning different record
 numbers ( in meant time other transactions are updating tables
 behind the view v_current_connection ).
 
 This isn't surprising at all, if other transactions are actively
 changing the table.  See the release notes for 8.0:
 
 : Observe the following incompatibilities: 
 : 
 :  In READ COMMITTED serialization mode, volatile functions now see
 :  the results of concurrent transactions committed up to the
 :  beginning of each statement within the function, rather than up to
 :  the beginning of the interactive command that called the function.
 : 
 :  Functions declared STABLE or IMMUTABLE always use the snapshot of
 :  the calling query, and therefore do not see the effects of actions
 :  taken after the calling query starts, whether in their own
 :  transaction or other transactions.  Such a function must be
 :  read-only, too, meaning that it cannot use any SQL commands other
 :  than SELECT.
 
 If you want this function to see the same snapshot as the calling query
 sees, declare it STABLE.


I think I understood :-(

Just to be clear:

select work_on_connected_user(id_user) from v_connected_user;

if that function is not STABLE, then it can be invoked with an id_user that is
no longer in the view v_connected_user. Is that right?


Regards
Gaetano Mendola











-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFDRDPh7UpzwH2SGd4RAnPVAJ9PdcVoUoOh7U4poR0Hd9uT4l/QgACg9nXg
sebdHozcBV7t7JZslluGzB8=
=rFgE
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] Build Farm: thrush

2005-10-03 Thread Gaetano Mendola
Hi all,
I'm the administrator of that machine and PLCheck is failing.
Is there anything I can do to fix it ?


Regards
Gaetano Mendola

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Build Farm: thrush

2005-10-03 Thread Gaetano Mendola
Tom Lane wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 I'm the administrator of that machine and PLCheck is failing.
 Is there anything I can do to fix it ?
 
 What version of Python have you got on that thing?  It seems to be
 emitting still another spelling of the encoding error message :-(

$ python -V
Python 2.2.3



Regards
Gaetano Mendola






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] postgresql clustering

2005-09-29 Thread Gaetano Mendola
Daniel Duvall wrote:
 While clustering in some circles may be an open-ended buzzword --
 mainly the commercial DB marketing crowd -- there are concepts beneath
 the bull that are even inherent in the name.  However, I understand
 your point.
 
From what I've researched, the concepts and practices seem to fall
 under one of two abstract categorizations: fail-over (ok...
 high-availability), and parallel execution (high-performance... sure).
 While some consider the implementation of only one of these to qualify
 a cluster, others seem to demand that a true cluster must
 implement both.
 
 What I'm really after is a DB setup that does fail-over and parallel
 execution.  Your setup sounds like it would gracefully handle the
 former, but cannot achieve the latter.  Perhaps I'm simply asking too
 much of a free software setup.
 
 Thanks for your response.
 

Also consider the PITR and some work I did last year:
http://archives.postgresql.org/pgsql-admin/2005-06/msg00013.php

With PITR you can have one or more remote machines that
continuously replay the log from the main server, and if the main crashes
the mirrors can come out of their replay and go online.

At that time it was not possible to connect to a replaying engine
to perform (at least read-only) queries; I don't know if this changed in 8.1.

BTW, did anyone go further with that idea? If not, I'd like to rewrite
that stuff in C (I'd prefer C++).

Regards
Gaetano Mendola





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] postgresql clustering

2005-09-28 Thread Gaetano Mendola
Daniel Duvall wrote:

 I've looked at PostgreSQL and EnterpriseDB, but I can't find anything
 definitive  as far as clustering capabilities.  What kinds of projects
 are there for clustering PgSQL, and are any of them mature enough for
 commercial apps?

As you well know, clustering means everything and nothing at the same time.
We have a commercial failover cluster provided by Red Hat,
with Postgres running on it. Postgres is installed on both nodes and the
data are stored on a SAN; only one instance of Postgres runs at a time on one
of the two nodes. In the last 2 years we had one failure, and the service
relocation worked as expected.

Consider also that applications must behave well: on failure they should
close the current connection and retry opening a new one for a while.

Regards
Gaetano Mendola


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Vacuum questions...

2005-09-27 Thread Gaetano Mendola
Gaetano Mendola wrote:
 Alvaro Herrera wrote:
 On Mon, Sep 26, 2005 at 05:41:24PM +0200, Gaetano Mendola wrote:
 Joshua D. Drake wrote:
 Autovacuum is integrated into the backend for 8.1
 Can I set the autovacuum parameter per table instead of per
 engine ?
 Yes.
 

Reading the 8.1 release note I found:


Move /contrib/pg_autovacuum into the main server (Alvaro)

Integrating autovacuum into the server allows it to be automatically started
and stopped in sync with the database server, and allows autovacuum to be
configured from postgresql.conf.


Maybe it would be useful to mention that it was not exactly pg_autovacuum
that was moved: for example, you can now set parameters per table, which
pg_autovacuum could not do.


Regards
Gaetano Mendola



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Vacuum questions...

2005-09-26 Thread Gaetano Mendola
Joshua D. Drake wrote:
 Hannu Krosing wrote:
 
 On L, 2005-09-24 at 20:25 -0700, Joshua D. Drake wrote:

  

 Actually this also probably would not gain you much in 8.1
 as vacuum in theory is already dealing with itself.
   

 Interesting. Could you explain it in a more detailed way ?
 How does vacuum deal with itself in 8.1 ?
  

 Autovacuum is integrated into the backend for 8.1

Can I set the autovacuum parameters per table instead of per
engine? I'm using pg_autovacuum right now on 7.4 and it is not
enough, because some tables (one that implements a materialized
view, for example) fall outside the average engine usage, and other
tables are so huge that they go unanalyzed for months.
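For reference, in 8.1 the per-table settings live in the pg_autovacuum system catalog rather than in postgresql.conf; a sketch, assuming the column list as documented for the 8.1 catalogs (worth verifying against your install):

```sql
-- Per-table autovacuum tuning in 8.1: one row per table in the
-- pg_autovacuum catalog. -1 means "fall back to the global setting".
INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit)
VALUES
    ('my_matview_table'::regclass, true,
     200, 0.1,    -- vacuum after 200 + 10% of reltuples dead rows
     100, 0.05,   -- analyze more aggressively than the default
     -1, -1);
```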


Regards
Gaetano Mendola



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] roundoff problem in time datatype

2005-09-26 Thread Gaetano Mendola
Tom Lane wrote:
 Dennis Bjorklund [EMAIL PROTECTED] writes:
 On Mon, 26 Sep 2005, Tom Lane wrote:
 No, I think the usual notation for a leap-second is '23:59:60'.
 We do allow 60 in the seconds field for this purpose.
 
 Yes, and it can go up to 23:59:60.99 (depending on how many fractional 
 seconds one want).
 
 That's an urban legend.  There never have been, and never will be, two
 leap seconds instituted in the same minute.  We really should reject
 anything larger than '23:59:60'.

Hmm. Second 60 has a duration of 1 second, so 23:59:60.4 has
a right to exist.


Regards
Gaetano Mendola


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Vacuum questions...

2005-09-26 Thread Gaetano Mendola
Alvaro Herrera wrote:
 On Mon, Sep 26, 2005 at 05:41:24PM +0200, Gaetano Mendola wrote:
 Joshua D. Drake wrote:
 
 Autovacuum is integrated into the backend for 8.1
 Can I set the autovacuum parameter per table instead of per
 engine ?
 
 Yes.

Finally :-)

good work.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] 8.0.x RPM issues

2005-05-24 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,
I'm trying to install 8.0.3 on a RH9.0.
I have a few problems:

1) The 8.0.3 RPMs do not exist for RH 9.0.
2) Using the 8.0.2 rpm I get:

# rpm -Uvh --test *.rpm
warning: postgresql-8.0.2-1PGDG.i686.rpm: V3 DSA signature: NOKEY, key ID 
748f7d0e
error: Failed dependencies:
libpq.so.3 is needed by postgresql-contrib-8.0.2-1PGDG
libecpg.so.3 is needed by postgresql-libs-8.0.2-1PGDG
libpq.so.3 is needed by postgresql-libs-8.0.2-1PGDG
libpq.so.3 is needed by (installed) gnucash-backend-postgres-1.8.1-3
libpq.so.3 is needed by (installed) libdbi-dbd-pgsql-0.6.5-5
libpq.so.3 is needed by (installed) mod_auth_pgsql-0.9.12-14
libpq.so.3 is needed by (installed) perl-DBD-Pg-1.21-2
libpq.so.3 is needed by (installed) php-pgsql-4.2.2-17
libpq.so.3 is needed by (installed) qt-PostgreSQL-3.1.1-6
libpq.so.3 is needed by (installed) xemacs-21.4.12-6
libpq.so.3 is needed by (installed) postgresql-tcl-7.4.2-1PGDG
Suggested resolutions:
postgresql-libs-7.3.2-3.i386.rpm

3) The 8.0.2 SRPMS for RH 9.0 are missing as well.


If you provide me a way to get the SRPMS, I can create the RPMs and give them
to you.


Regards
Gaetano Mendola


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCkx4W7UpzwH2SGd4RAp4uAKCdobiXlhZkc6TzersTA295ne0GggCgreVU
ftk/gM7gvdMSoRkKhSb2Il4=
=63o7
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] 8.0.x RPM issues

2005-05-24 Thread Gaetano Mendola
Dave Cramer wrote:
 Check the archives, this has already been discussed.
 Devrim is posting a compat rpm shortly.

Thx.

Regards
Gaetano Mendola




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[HACKERS] SO_KEEPALIVE

2005-05-22 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,
Am I correct in understanding that no one will add that option to libpq?


Regards
Gaetano Mendola


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCkOh27UpzwH2SGd4RAuh1AJ92B4qZiYZmd40+v4QwmCJyM7z0ggCbBSon
g75HfsDdfdSRuIVXEqFe6+g=
=I+X/
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] SO_KEEPALIVE

2005-05-18 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


Oliver Jowett wrote:



 If you're unlucky, the server could go down while you're blocked waiting
 for a query response..




That is exactly what happens to us, and you don't have to be very unlucky for
that to happen if the engine has ~100 queries in flight at a time.

Regards
Gaetano Mendola




-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCi04C7UpzwH2SGd4RArvMAKDUJEefpsH2CX9E6wjg2j5DcV3JSwCgr/XB
BlTc3y4vE9GjyUl6eypcN00=
=h/Gg
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



[HACKERS] keepalive

2005-05-16 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,
this weekend our DB server crashed (7.4.x) and the DB engine was relocated
to another server. So far so good.

Unfortunately most of our clients are still blocked in recv() trying to
receive data from the old DB engine instance:

#  netstat -anp | grep ESTABLISHED | grep 54497
tcp0  0 193.251.135.46:5449710.10.50.47:5432ESTABLISHED 
27331/psql

# ps -eafwww | grep 27331
apache   27331 27327  0 May13 ?00:00:00 psql -t -c SELECT 
is_authorized_flag FROM v_ts_quota WHERE login='*' -h x.y.z.w  yyy


as you can see, this connection has been there for 3 days now.

Digging on it I see these sockets are not using the keepalive options:


netstat -anop | grep ESTABLISHED | grep 54497
tcp0  0 193.251.135.46:5449710.10.50.47:5432ESTABLISHED 
27331/psql  off (0.00/0/0)

normaly on other connections I see:

tcp0  0 127.0.0.1:199   127.0.0.1:32784 ESTABLISHED 
1255/snmpd  keepalive (7102.52/0/0)


The same happens with JDBC connections (the keepalive option is missing).


Is there a reason for this, or a way to instruct psql to use the keepalive
option?
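For comparison, this is the socket-level behaviour being asked about; a minimal Python sketch of enabling keepalive on a client socket (the timing values are illustrative assumptions, not anything psql or libpq actually does):

```python
import socket

def make_keepalive_socket(idle=60, interval=10, count=5):
    """Create a TCP socket with keepalive probes enabled."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning knobs; guarded because they are not portable.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return s

sock = make_keepalive_socket()
assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
sock.close()
```

With keepalive on, the netstat timer column above would read "keepalive (...)" instead of "off", and the dead peer would eventually be detected.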


Regards
Gaetano Mendola




-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCiJNF7UpzwH2SGd4RAh8aAJ0WOBEzzjYf1gj1OaFGsFBE9mr4hgCfUhna
D1F420Pa94lvrA04xA73tiE=
=muzn
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] One vacuum full is not enough.

2005-03-13 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hannu Krosing wrote:
 Ühel kenal päeval (teisipäev, 8. märts 2005, 00:52+0100), kirjutas
 Gaetano Mendola:
 
Hi all,
running 7.4.5, it happened to me with another table
that a single vacuum full was not freeing enough pages;
here is the verbose vacuum full, where only at
the end does it report "truncated 8504 to 621 pages".

I use pg_autovacuum and it's not enough. I'll schedule
a nightly vacuum full again.
 
 
 You may have too few fsm pages, so new inserts/updates don't use all the 
 pages freed by vacuums.
 

This is not the case here:


 max_fsm_pages  | 200
 max_fsm_relations  | 1000

and when I was doing the vacuum full these settings were above the
real needs.



Regards
Gaetano Mendola




-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCMA++7UpzwH2SGd4RAi0hAJwLBpSWlDTQoAWglK8Dg/IoY3fb8QCfTjKU
wxDSc2VG7B5pRPfCfQqxRtk=
=Ce+9
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] A bad plan

2005-03-13 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Josh Berkus wrote:
 Gaetano,
 
 
Hi all,
running a 7.4.5 engine, I'm facing this bad plan:
 
 
 Please take this to the PGSQL-PERFORMANCE mailing list; that's what that list 
 exists for.
 
 Or IRC, where I know your are sometimes.  But not -Hackers.

Sorry, I was convinced to have sent this email to performances ( as I do
usually ).



Regards
Gaetano Mendola


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org

iD8DBQFCLXih7UpzwH2SGd4RApCYAKCS/1qPYFs7GABfpwAO0c51kg+daQCg/J66
vwv2Z92GtFvOwKFwa8jC838=
=BlCp
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [HACKERS] Cost of XLogInsert CRC calculations

2005-03-07 Thread Gaetano Mendola
Tom Lane wrote:
 Mark Cave-Ayland [EMAIL PROTECTED] writes:
 
Wow, a 64-bit CRC does seem excessive, especially when going back to Zmodem
days where a 50-100k file seemed to be easily protected by a 32-bit CRC. I'm
sure there are some error rates somewhere dependent upon the polynomial and
the types of error detected Try the following link towards the bottom:
http://www.ee.unb.ca/tervo/ee4253/crc.htm for some theory on detection rates
vs. CRC size.
 
 
 When the CRC size was decided, I recall someone arguing that it would
 really make a difference to have 1-in-2^64 chance of failure rather than
 1-in-2^32.  I was dubious about this at the time, but didn't have any
 evidence showing that we shouldn't go for 64.  I suppose we ought to try
 the same example with a 32-bit CRC and see how much it helps.

Continuing this reasoning, why not a 16-bit one then?
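The orders of magnitude behind this exchange are easy to sketch, assuming the usual model where a random corruption slips past an n-bit CRC with probability about 2**-n:

```python
import zlib

# Rough undetected-error odds for an n-bit CRC under the
# random-corruption model: about 1 in 2**n.
for bits in (16, 32, 64):
    print("CRC-%d: ~1 in %d" % (bits, 2 ** bits))

# zlib ships a real 32-bit CRC; its output always fits in 32 bits.
checksum = zlib.crc32(b"xlog record payload")
assert 0 <= checksum < 2 ** 32
```

So the 64-bit choice buys about 4 billion times better odds per record, at the CPU cost the thread is measuring.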


Regards
Gaetano Mendola





---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[HACKERS] One vacuum full is not enough.

2005-03-07 Thread Gaetano Mendola
Hi all,
running 7.4.5, it happened to me with another table
that a single vacuum full was not freeing enough pages;
here is the verbose vacuum full, where only at
the end does it report "truncated 8504 to 621 pages".

I use pg_autovacuum and it's not enough. I'll schedule
a nightly vacuum full again.

Regards
Gaetano Mendola


# vacuum full verbose url;
INFO:  vacuuming public.url
INFO:  url: found 268392 removable, 21286 nonremovable row versions in 8563 
pages
DETAIL:  22 dead row versions cannot be removed yet.
Nonremovable row versions range from 104 to 860 bytes long.
There were 13924 unused item pointers.
Total free space (including removable row versions) is 63818404 bytes.
4959 pages are or will become empty, including 7 at the end of the table.
8296 pages containing 63753840 free bytes are potential move destinations.
CPU 0.33s/0.12u sec elapsed 9.55 sec.
INFO:  index url_pkey now contains 21286 row versions in 2343 pages
DETAIL:  268392 index row versions were removed.
886 index pages have been deleted, 886 are currently reusable.
CPU 0.11s/0.37u sec elapsed 2.68 sec.
INFO:  index idx_url_url now contains 297 row versions in 7412 pages
DETAIL:  268392 index row versions were removed.
6869 index pages have been deleted, 6869 are currently reusable.
CPU 1.02s/2.05u sec elapsed 59.89 sec.
INFO:  index idx_url_name now contains 297 row versions in 3277 pages
DETAIL:  268392 index row versions were removed.
2976 index pages have been deleted, 2976 are currently reusable.
CPU 0.40s/0.72u sec elapsed 27.05 sec.
INFO:  url: moved 2 row versions, truncated 8563 to 8550 pages
DETAIL:  CPU 0.40s/0.52u sec elapsed 28.05 sec.
INFO:  index url_pkey now contains 21287 row versions in 2343 pages
DETAIL:  1 index row versions were removed.
886 index pages have been deleted, 886 are currently reusable.
CPU 0.07s/0.04u sec elapsed 6.22 sec.
INFO:  index idx_url_url now contains 298 row versions in 7412 pages
DETAIL:  0 index row versions were removed.
6956 index pages have been deleted, 6956 are currently reusable.
CPU 0.37s/0.07u sec elapsed 14.30 sec.
INFO:  index idx_url_name now contains 298 row versions in 3277 pages
DETAIL:  0 index row versions were removed.
2979 index pages have been deleted, 2979 are currently reusable.
CPU 0.16s/0.04u sec elapsed 4.79 sec.
INFO:  vacuuming pg_toast.pg_toast_16730637
INFO:  pg_toast_16730637: found 0 removable, 0 nonremovable row versions in 0 
pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 0 to 0 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 0 bytes.
0 pages are or will become empty, including 0 at the end of the table.
0 pages containing 0 free bytes are potential move destinations.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index pg_toast_16730637_index now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
empdb=# vacuum full verbose url;
INFO:  vacuuming public.url
INFO:  url: found 42 removable, 21286 nonremovable row versions in 8550 pages
DETAIL:  22 dead row versions cannot be removed yet.
Nonremovable row versions range from 104 to 860 bytes long.
There were 281879 unused item pointers.
Total free space (including removable row versions) is 63713588 bytes.
4948 pages are or will become empty, including 0 at the end of the table.
8289 pages containing 63705340 free bytes are potential move destinations.
CPU 0.39s/0.15u sec elapsed 5.93 sec.
INFO:  index url_pkey now contains 21286 row versions in 2343 pages
DETAIL:  42 index row versions were removed.
886 index pages have been deleted, 886 are currently reusable.
CPU 0.09s/0.03u sec elapsed 1.44 sec.
INFO:  index idx_url_url now contains 297 row versions in 7412 pages
DETAIL:  42 index row versions were removed.
6956 index pages have been deleted, 6956 are currently reusable.
CPU 0.18s/0.08u sec elapsed 7.30 sec.
INFO:  index idx_url_name now contains 297 row versions in 3277 pages
DETAIL:  42 index row versions were removed.
2979 index pages have been deleted, 2979 are currently reusable.
CPU 0.09s/0.02u sec elapsed 2.58 sec.
INFO:  url: moved 0 row versions, truncated 8550 to 8550 pages
DETAIL:  CPU 0.37s/0.23u sec elapsed 4.39 sec.
INFO:  vacuuming pg_toast.pg_toast_16730637
INFO:  pg_toast_16730637: found 0 removable, 0 nonremovable row versions in 0 
pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 0 to 0 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 0 bytes.
0 pages are or will become empty, including 0 at the end of the table.
0 pages containing 0 free bytes are potential move destinations.
CPU 0.00s/0.01u sec elapsed 0.00 sec.
INFO:  index pg_toast_16730637_index now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

[HACKERS] A bad plan

2005-03-07 Thread Gaetano Mendola
: 
(outer.id_package = inner.id_package)
 -  Index Scan using 
packages_pkey on packages p  (cost=0.00..131.04 rows=1229 width=103) (actual 
time=12.796..457.520 rows=1248 loops=1)
 -  Index Scan using 
package_security_id_package_key on package_security ps  (cost=0.00..46.83 
rows=855 width=4) (actual time=6.703..283.944 rows=879 loops=1)
   -  Sort  
(cost=696.16..705.69 rows=3812 width=16) (actual time=25.222..25.705 rows=494 
loops=1)
 Sort Key: 
sequences.id_package
 -  Seq Scan on 
sequences  (cost=0.00..469.42 rows=3812 width=16) (actual time=0.017..24.412 
rows=494 loops=1)
   Filter: 
(estimated_start IS NOT NULL)
 Total runtime: 7104.946 ms
(47 rows)






May I know which parameter I might tune in order to avoid
disabling the nested loop?



Regards
Gaetano Mendola







---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [HACKERS] idea for concurrent seqscans

2005-02-28 Thread Gaetano Mendola
Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
 
Assuming you're talkning about You might wonder why we don't order all
the regression test queries explicitly to get rid of this issue once and
for all. The reason is that that would make the regression tests less
useful, not more, since they'd tend to exercise query plan types that
produce ordered results to the exclusion of those that don't., good
point. I can think of 2 ways around this:
 
 
1) Select into a temptable, then select out of it with an order by
 
 
2) Run the output through sort before doing the diff
 
 
Is there any reason one of these wouldn't work?
 
 
 Like I said originally, we could certainly devise a solution if we
 needed to.  I was just pointing out that this is a nontrivial
 consideration, and I don't want to buy into it if the patch proves
 to offer only marginal performance improvements.
 

I'll bet it will not offer only marginal performance improvements. I sometimes
see my 4-CPU server with 3 CPUs on holiday and the other CPU working on a long
sequential scan. I hope that this patch, if it works correctly, will be used
in a future PostgreSQL version.

Regards
Gaetano Mendola



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [HACKERS] Help me recovering data

2005-02-17 Thread Gaetano Mendola
Greg Stark wrote:
 Gaetano Mendola [EMAIL PROTECTED] writes:
 
 
We do ~4000 txn/minute, so in 6 months you are screwed...
 
 
 Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
 huge slowdowns from all those dead tuples before that?
 

In my applications yes, for sure I see the huge slowdown after 2 days
without it, but given the fact that someone crossed the limit, I
imagine that it is possible without a performance loss.


Regards
Gaetano Mendola


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Help me recovering data

2005-02-16 Thread Gaetano Mendola
Greg Stark wrote:
 Joshua D. Drake [EMAIL PROTECTED] writes:
 
 
Christopher Kings-Lynne wrote:


I wonder if I should point out that we just had 3 people suffering XID
wraparound failure in 2 days in the IRC channel...

I have had half a dozen new customers in the last six months that have
had the same problem. Nothing like the phone call:
 
 
 How are so many people doing so many transactions so soon after installing?
 
 To hit wraparound you have to do a billion transactions? (With a `B') That
 takes real work. If you did 1,000 txn/minute for every minute of every day it
 would still take a couple years to get there.

We do ~4000 txn/minute, so in 6 months you are screwed...
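The arithmetic is easy to check under a simplified model: a steady rate, every transaction consuming an XID, and the ~2**31 horizon at which wraparound becomes a danger (vacuum and special-case XIDs ignored):

```python
# Days until the ~2**31 transaction-ID horizon at a steady rate.
HORIZON = 2 ** 31

def days_to_wraparound(txn_per_minute):
    return HORIZON / txn_per_minute / (60 * 24)

# Rates quoted in the thread above.
print(round(days_to_wraparound(1000)))   # ~1491 days at 1,000 txn/min
print(round(days_to_wraparound(4000)))   # ~373 days at 4,000 txn/min
```

Under this model the 4,000 txn/min rate reaches the horizon in roughly a year; with Greg's one-billion figure as the threshold, it would be closer to six months.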



Regards
Gaetano Mendola



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Help me recovering data

2005-02-16 Thread Gaetano Mendola
Stephan Szabo wrote:
 On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
 
 
Once autovacuum gets to the point where it's used by default, this
particular failure mode should be a thing of the past, but in the
meantime I'm not going to panic about it.

I don't know how to say this without sounding like a jerk, (I guess that's
my role sometimes) but would you go back and re-read this sentence?

To paraphrase: I know this causes catastrophic data loss, and we have
plans to fix it in the future, but for now, I'm not going to panic about it.
 
 
 Do you have a useful suggestion about how to fix it?  Stop working is
 handwaving and merely basically saying, one of you people should do
 something about this is not a solution to the problem, it's not even an
 approach towards a solution to the problem.

It is not a solution, but between losing data and shutting down the postmaster
I prefer the shutdown.

Regards
Gaetano Mendola



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Help me recovering data

2005-02-16 Thread Gaetano Mendola
Tom Lane wrote:
 Bruno Wolff III [EMAIL PROTECTED] writes:
 
I don't think there is much point in making it configurable. If they knew
to do that they would most likely know to vacuum as well.
 
 
 Agreed.
 
 
However, 100K out of 1G seems too small. Just to get wrap around there
must be a pretty high transaction rate, so 100K may not give much warning.
1M or 10M seem to be better.
 
 
 Good point.  Even 10M is less than 1% of the ID space.  Dunno about you,
 but the last couple cars I've owned start flashing warnings when the gas
 tank is about 20% full, not 1% full...

BTW, why not do an automatic vacuum instead of a shutdown? At least the
DB would not stop working until someone studies what the problem is and
how to solve it.


Regards
Gaetano Mendola


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] weird behaviour on DISTINCT ON

2005-02-02 Thread Gaetano Mendola
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Tom Lane wrote:
| Gaetano Mendola [EMAIL PROTECTED] writes:
|
|my warning was due the fact that in the docs is written nowhere this
|drawback.
|
|
| The SELECT reference page already says that the output rows are computed
| before applying ORDER BY or DISTINCT.
|
|   regards, tom lane
True. I had to say my self: RTFM.
Regards
Gaetano Mendola


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.2.5 (MingW32)
Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org
iD8DBQFB/8fP7UpzwH2SGd4RAhpFAJ9x3jhMzJ3f94wnlN1DbxRNRQvOzACfXtVp
+Zg1pVO7SsETwUx6fxCl7qw=
=Q5EW
-END PGP SIGNATURE-
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[HACKERS] weird behaviour on DISTINCT ON

2005-01-31 Thread Gaetano Mendola
Hi all,
I have a query that is something like this:
SELECT DISTINCT ON ( x ) x, foo(x)
FROM ...
now what I see is that for each distinct x value
foo is executed more than once; I guess this is because
DISTINCT filters out the rows after executing the query.
Is this the normal behaviour? Should it not be documented?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] weird behaviour on DISTINCT ON

2005-01-31 Thread Gaetano Mendola
Greg Stark wrote:
Gaetano Mendola [EMAIL PROTECTED] writes:

now what I see is that for each distinct x value
foo is executed more than once; I guess this is because
DISTINCT filters out the rows after executing the query.
Is this the normal behaviour? Should it not be documented?

Usually DISTINCT ON is only really useful if you're sorting on something.
Otherwise the choice of which record is output is completely arbitrary.
So the above would typically be something like:
SELECT DISTINCT ON (x), y, foo(x)
 ...
 ORDER BY x, y
Now you can see why every record does need to be looked at to handle that.
In fact the ORDER BY kicks in before output columns are generated so you can
do things like:
SELECT DISTINCT ON (x), y, foo(x)
 ...
 ORDER BY x, y, foo(x)
And of course obviously foo() has to be executed for every record to do this.
Postgres doesn't try to detect cases where it's safe to change the regular
order in which things are done and delay execution of functions whose results
aren't needed right away. 

You could just use
SELECT x, foo(x) from (SELECT x ... GROUP BY x)
I totally agree, and this was clear after having seen what was happening;
my warning was due to the fact that this drawback is written nowhere in the
docs. A novice, like I was 4 years ago (I only discovered it in the code
today), can burn himself.

Regards
Gaetano Mendola






---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [HACKERS] uptime() for postmaster

2005-01-03 Thread Gaetano Mendola
Matthias Schmidt wrote:
Hi Tom,
Am 31.12.2004 um 20:18 schrieb Tom Lane:
Matthias Schmidt [EMAIL PROTECTED] writes:
a) is the name uptime() OK?

Probably should use pg_uptime(), or something else starting with pg_.

What about 'pg_starttime()' since it is not a period but a point-in-time?

b) is the return-type 'Interval' OK?

It might be better to return the actual postmaster start time (as
timestamptz) and let the user do whatever arithmetic he wants.
With an interval, there's immediately a question of interpretation
--- what current timestamp did you use in the computation?
I'm not dead set on this, but it feels cleaner.

you're right. Let's go for timestamptz and let the users decide ...
Well, the Unix guys have the habit of reporting uptime as an interval; I'm
inclined to have both: pg_uptime (interval) and pg_starttime (timestamptz).
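For the record, the timestamptz approach is the one that shipped; a sketch, assuming the pg_postmaster_start_time() name that 8.1 ended up using, with the interval one subtraction away:

```sql
-- Start time as a timestamptz, uptime derived from it on demand.
SELECT pg_postmaster_start_time();
SELECT now() - pg_postmaster_start_time() AS uptime;
```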

Regards
Gaetano Mendola
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


[HACKERS] RC2 intermittent errors

2004-12-21 Thread Gaetano Mendola
Hi all,
I'm testing now RC2 against our application and I'm experiencing
intermittent errors. I isolated this test:
CREATE TABLE users (
id_login SERIAL PRIMARY KEY,
login TEXT
);
CREATE OR REPLACE FUNCTION sp_id_user ( TEXT  )
RETURNS INTEGER AS $$
DECLARE
a_login ALIAS FOR $1;
my_id INTEGER;
BEGIN
SELECT id_login INTO my_id
FROM users
WHERE login = a_login;
RETURN COALESCE(my_id, -1 );
END;
$$ LANGUAGE 'plpgsql'
STABLE;
CREATE OR REPLACE FUNCTION sp_test ( TEXT  )
RETURNS INTEGER AS $$
DECLARE
a_login ALIAS FOR $1;
my_id INTEGER;
BEGIN
my_id = sp_id_user( a_login );
RAISE NOTICE 'ID %', my_id;
insert into users (login) values ( a_login );
my_id = sp_id_user( a_login );
RAISE NOTICE 'ID %', my_id;
RETURN 0;
END;
$$ LANGUAGE 'plpgsql';
select sp_test('test1');
select sp_test('test2');
The call of the two above functions shall show:
ID -1
ID 1
ID -1
ID 2
instead I have:
test=# select sp_test('test1');
NOTICE:  ID -1
NOTICE:  ID 1
 sp_test
-
   0
(1 row)
test=# select sp_test('test2');
NOTICE:  ID -1
NOTICE:  ID -1
 sp_test
-
   0
(1 row)
some times I get:
test=# select sp_test('test1');
NOTICE:  ID -1
NOTICE:  ID -1
 sp_test
-
   0
(1 row)
test=# select sp_test('test2');
NOTICE:  ID -1
NOTICE:  ID -1
 sp_test
-
   0
(1 row)


Regards
Gaetano Mendola















---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [HACKERS] RC2 intermittent errors

2004-12-21 Thread Gaetano Mendola
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
Argh!  I put a GetTransactionSnapshot() call into exec_eval_simple_expr,
but I forgot CommandCounterIncrement().  Wish I could say it was a copy-
and-paste mistake, but it was pure stupidity...

Can we continue with RC2 or do we need an RC3?

It's a one-liner change (assuming that my theory is right, which I won't
know for a little bit because I had just make distclean'd in order to
verify my tree against the RC2 tarball).  I don't think we should push
an RC3 just for this.  Wait a few days and see what else turns up ...
Did you update CVS?
Regards
Gaetano Mendola





Re: [HACKERS] Stable functions problem

2004-12-20 Thread Gaetano Mendola
Tom Lane wrote:
Gaetano Mendola [EMAIL PROTECTED] writes:
I'm getting a bunch of errors because of the IMMUTABLE and STABLE
attributes on some of my functions. Let me explain with an example,

Hmm.  This particular example is a bug in exec_eval_simple_expr() ...
if we're going to bypass SPI then we'd better do the things SPI does
that are needed to maintain the correct execution environment, and
as of 8.0 one of those things is to advance ActiveSnapshot.
I've applied a patch for this. 
Thank you. I'll try with the CVS version and I'll let you know.
 (Memo to self: I'm beginning to wonder
if exec_eval_simple_expr is worth the trouble at all, compared to just
using SPI.  The amount of overhead it saves seems to get less with each
new release.)

Now here I can continue my function using my_id_user; unfortunately,
since sp_id_user is declared IMMUTABLE, this means that at the second call
of sp_id_user my_id_user will not contain the user id.

That actually doesn't have anything to do with it --- the same failure
would have occurred if you'd (correctly) declared sp_id_user as STABLE.
So it's a good bug report. 
Indeed I had the same problem declaring it as STABLE.
 But I trust you do realize you are playing with fire.  While I have been
 heard to suggest mislabeling functions as immutable if they're only going
 to be used in interactive queries,
I don't think I have ever failed to mention that you *will* get burnt
if you call such functions from other functions.  When this coding
someday does break for you, I won't have any sympathy at all...
Yes, I'll take your suggestion as gold.
Regards
Gaetano Mendola




[HACKERS] Stable functions problem

2004-12-19 Thread Gaetano Mendola
Hi all,
after several weeks away, today I downloaded 8.0rc1 and
tested it with our application.
I'm getting a bunch of errors because of the IMMUTABLE and STABLE
attributes on some of my functions. Let me explain with an example:
what I do is  ( plpgsql )
my_id_user = sp_id_user( a_login );
IF  my_id_user > 0 THEN
RETURN -5;  -- error code for existing user
END IF;
INSERT INTO users ( login ) VALUES ( a_login );
my_id_user = sp_id_user( a_login );
Now here I can continue my function using my_id_user; unfortunately,
since sp_id_user is declared IMMUTABLE, this means that at the second call
of sp_id_user my_id_user will not contain the user id.
This was working until 7.4; is there a way to force sp_id_user to
be reevaluated? The function is declared immutable because it is often
used in expressions like this:
select * from user_data where id_user = sp_id_user('login');
I believe it would be a good idea to describe this optimization of stable
and immutable functions in the release notes, in a bigger font.
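To make the distinction concrete, here is a sketch (reusing the users table and sp_id_user from the RC2 test case earlier in this archive) of the label that would be correct for this function. STABLE promises the result is constant within a single statement, which is all the planner needs to evaluate it once per query, while (once the snapshot bug discussed in this thread is fixed) still seeing rows inserted by earlier statements in the same function:

```sql
-- STABLE, not IMMUTABLE: the result depends on table contents and can
-- change between statements, but is fixed within one statement, so
--   SELECT * FROM user_data WHERE id_user = sp_id_user('login');
-- still evaluates the function only once per query.
CREATE OR REPLACE FUNCTION sp_id_user ( TEXT )
RETURNS INTEGER AS $$
DECLARE
    a_login ALIAS FOR $1;
    my_id   INTEGER;
BEGIN
    SELECT id_login INTO my_id
    FROM users
    WHERE login = a_login;
    RETURN COALESCE(my_id, -1);
END;
$$ LANGUAGE 'plpgsql' STABLE;
```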
Regards
Gaetano Mendola




Re: [HACKERS] Documentation on PITR still scarce

2004-11-29 Thread Gaetano Mendola
Simon Riggs wrote:
 On Mon, 2004-11-29 at 13:10, Bruce Momjian wrote:

Or TODO maybe worded as:

*  Allow the PITR process to be debugged and data examined



 Yes, thats good for me...

 Greg's additional request might be worded:

* Allow a warm standby system to also allow read-only queries
Yes, this will shift PostgreSQL in the Sybase direction.
Did you also resolve all your concerns about my two bash scripts?
Are those scripts eligible to be put in contrib?
Regards
Gaetano Mendola



Re: [HACKERS] Opinions on Usenet ...

2004-11-29 Thread Gaetano Mendola
Marc G. Fournier wrote:
If there were a comp.databases.postgresql.hackers newsgroup created and 
carried by all the news servers ... would you move to using it vs using 
the mailing lists?
No.
Regards
Gaetano Mendola


Re: [HACKERS] Beta5 now Available

2004-11-25 Thread Gaetano Mendola
Thomas Hallgren wrote:
Gaetano Mendola wrote:
...so the very first client is the real server
that must run 24/7.
I don't think this is correct. You need a tracker for downloaders to be 
able to find each other but no client is more important than the others.

I'm sorry to say that you're wrong. A tracker without a client running on
a complete file is completely useless. The tracker doesn't even know
what you are sharing.
If you want to publish your files you have to start a tracker and a client
for each piece of content to distribute. The client that you run on the
complete content will report Downloaded and will stay there waiting for
other client connections.
Regards
Gaetano Mendola


Re: [HACKERS] Beta5 now Available

2004-11-23 Thread Gaetano Mendola
Marc G. Fournier wrote:
| On Mon, 22 Nov 2004, Thomas Hallgren wrote:
|
| Marc G. Fournier wrote:
|
| What about the Java version that Gavin had mentioned?  Aegus or
| something like that?
|
| http://azureus.sourceforge.net/
|
|
| There is a FreeBSD port of it also but it says A BitTorrent client
| written in Java ... does it work as server too, or, by its nature, are
| servers == clients in Bittorrent? :)
BitTorrent is based on a tracker; the tracker's address is embedded in the
metafile ( .torrent file ). It also relies on the first client, launched
pointing at the complete file; so the very first client is the real server
that must run 24/7.
What do you have against the Python implementation?

Regards
Gaetano Mendola



Re: [HACKERS] 7.4.5 / 7.4.6 crash (pg_hba.conf issue)

2004-11-18 Thread Gaetano Mendola
Tom Lane wrote:
 Devrim GUNDUZ [EMAIL PROTECTED] writes:

Off-by-one memory allocation problem --- it only bites you if the string
lengths are just right, which probably explains the lack of prior
reports even though the bug has been there since 7.3.


Is this worth new dot releases?


 I'd say not.
To my knowledge Postgres can read that file even if it's writable by
anyone (I cannot test right now or look at the code); if that is the
case then this is a rather serious bug :-(
Regards
Gaetano Mendola


Re: [HACKERS] Increasing the length of pg_stat_activity.current_query...

2004-11-08 Thread Gaetano Mendola
Tom Lane wrote:
 Greg Stark [EMAIL PROTECTED] writes:

What do you think is broken about fragmented UDP packets?


Fragmentation happens at the IP protocol level, the kernel is responsible for
reassembly. There's nothing for the application level to handle.


 And, by the same token, on platforms where it is broken there is nothing
 we can do about it.
Like what? If the OS cannot handle UDP reassembly then we have some other
problems entirely.
I think the OS breakage is a non-issue here.

Regards
Gaetano Mendola


Re: [HACKERS] NoMovementScanDirection

2004-11-08 Thread Gaetano Mendola
Tom Lane wrote:
Neil Conway [EMAIL PROTECTED] writes:
Ah, okay. I'll remove gistscancache() then, as this seems to be dead
code.

Is there someone out there who can instrument the code with Rational
Coverage in order to see how much dead code is still there? Or at least
see how much code is exercised during make check.
BTW: is there some free tool that does what Rational Coverage does?
Regards
Gaetano Mendola



[HACKERS] Bittorrent

2004-11-07 Thread Gaetano Mendola
Hi all,
it seems that the tracker is down or at least not reachable.

Regards
Gaetano Mendola



Re: [HACKERS] Bittorrent

2004-11-07 Thread Gaetano Mendola
David Fetter wrote:
| On Sun, Nov 07, 2004 at 10:53:22PM +0100, Gaetano Mendola wrote:
|
|Hi all,
|it seems that the tracker is down or at least not reachable.
|
|
| Started again.  Thanks for the notice. :)
Indeed now it's working.
| BTW, do you have some (semi-)automated way to monitor this?
I do not. Today I was developing a sort of BitTorrent gateway, and in order
to test it I tried it against the postgres one and noticed that the
tracker was not responding. However, it is not that difficult to monitor it
from another computer, though I think the easiest way is to test locally
whether the tracker is up, running, and accepting connections.
Regards
Gaetano Mendola

