Re: [HACKERS] [9.1] unusable for large views (SOLVED)

2011-10-24 Thread Omar Bettin
...sorry guys...

It was a misconfiguration of the database.

9.1.1 is working well.

It is ~4% faster than 9.0.5 for the same query.

Thanks a lot.

Regards




-----Original Message-----
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: Monday, 24 October 2011 12:13
To: Omar Bettin
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [9.1] unusable for large views

Hello

Please send the result of EXPLAIN ANALYZE on 9.1.1 and on the older version.

Please use http://explain.depesz.com/

Regards

Pavel Stehule

2011/10/24 Omar Bettin o.bet...@informaticaindustriale.it:
 Hello,



 I have tried the 9.1.1 win64 version and, when I try to declare a cursor
 for a very large view (lots of joins and aggregate functions),

 postgres uses around 3 GB of memory and the query never returns.



 Same problem when selecting from the view without a cursor.



 The same query worked fine from 8.3.3 to 9.0.5.



 Should I change some configuration parameters to get the same behavior as
 in previous versions?







 Tried on Windows Server 2008 R2 64-bit, 8 GB RAM,

 and also on Windows 7 64-bit, 8 GB RAM.

 default postgresql.conf



 Regards,





 The view (!)



 CREATE OR REPLACE VIEW articoli_rip_ven20110227_view AS

  SELECT a.pk_id, a.articolo, a.articolo_f, a.origine, ( SELECT
 ditte.attivita

FROM ditte

   WHERE ditte.partitario = a.cod_fornit) AS dummy_1, a.descrizion,
 a.taglia, a.peso_spec, a.giacenza_colli, a.giacenza, rim.colli AS
 rimanenza_colli, rim.quantita AS rimanenza_qta, rimass.colli AS
 rimanenza_ass_colli, rimass.quantita AS rimanenza_ass_qta,
 COALESCE(b20.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b21.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b22.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b23.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b24.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b25.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b26.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b27.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b28.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b29.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b30.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b31.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b32.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b33.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b34.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b35.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b36.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b37.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b38.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b39.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b40.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b41.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b42.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b43.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b44.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b45.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b46.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b47.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b48.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b49.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b50.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b51.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b52.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b53.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b54.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b55.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b56.quantita, NULL::numeric, 0::numeric)::double precision +
 (COALESCE(rim.quantita, NULL::numeric, 0::numeric) +
 COALESCE(rimass.quantita, NULL::numeric, 0::numeric))::double precision -
 (COALESCE(b1.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b2.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b3.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b4.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b5.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b6.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b7.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b8.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b9.quantita, NULL::numeric, 0::numeric)::double precision +
 COALESCE(b10.quantita, NULL::numeric, 0::numeric)::double
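As an aside, the three-argument COALESCE(x, NULL::numeric, 0::numeric) pattern repeated throughout the view is equivalent to the plain two-argument form: COALESCE returns its first non-NULL argument, so the explicit NULL in the middle can never be chosen. A minimal illustration:

```sql
-- COALESCE returns its first non-NULL argument, so the middle
-- NULL::numeric argument is dead weight and can be dropped:
SELECT COALESCE(NULL::numeric, NULL::numeric, 0::numeric);  -- 0
SELECT COALESCE(NULL::numeric, 0::numeric);                 -- 0, same result
```

Simplifying this does not change the query plan materially, but it shortens a 59-table view considerably.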

Re: [HACKERS] [9.1] unusable for large views (SOLVED)

2011-10-24 Thread Omar Bettin
Hi Tom,

...for about two hours I have been trying to communicate that the problem has
been solved, but I do not see the messages on the mailing list...

Anyway,
the problem was a bad installation of the database (pgsql functions).

9.1.1 is working well.

It is 4% to 8% faster than 9.0.5.

Thanks a lot to everyone.

Regards,

Omar


-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, 24 October 2011 16:46
To: Omar Bettin
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [9.1] unusable for large views

Omar Bettin o.bet...@informaticaindustriale.it writes:
 I have tried the 9.1.1 win64 version and, when I try to declare a cursor
 for a very large view (lots of joins and aggregate functions),

 postgres uses around 3 GB of memory and the query never returns.

Could we see a self-contained test case?  I'm not about to try to
reverse-engineer the schema that goes with such a monster query.

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [9.1] unusable for large views (SOLVED)

2011-10-24 Thread Omar Bettin
...sorry guys...

It was a bad configuration of the database.

9.1.1 is working well.

It is 4% to 8% faster than 9.0.5.

Thanks a lot.

Regards

Omar

P.S.
EXPLAIN output attached.


Hmm.  A 59-table join is pretty enormous

and it is not the biggest one; basically these are delivery notes for one day,
seen vertically.




[HACKERS] Monster query

2011-10-24 Thread Omar Bettin
Hi,

Since the data belong to third parties, I will prepare a database suitable for
the purpose.
In any case, the compressed backup will be around 20 MB.

Regards,

Omar Bettin


-----Original Message-----
From: Jan Urbański [mailto:wulc...@wulczer.org]
Sent: Monday, 24 October 2011 18:56
To: Omar Bettin
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] [9.1] unusable for large views

On 24/10/11 10:57, Omar Bettin wrote:
 
 [monster query]

I see that your problem is already solved, but incidentally I'm working on a
join order planning module and I'm looking for real-life examples of humongous
queries like that to benchmark against.

Any chance you could share the schema, or at least part of it, that goes with 
this query? Or perhaps you have more of these queries?

Cheers,
Jan




[HACKERS] feature request

2010-02-25 Thread Omar Bettin
hello to everyone,
it is a bit late for an Italian, but after a long day of debugging I had an idea.
Why not introduce a special SQL command like
STORE WHERE [condition] FROM [table]
removing all data that meet the condition and storing them in another
database?
Then, if a query that needs the stored data is executed after such a command,
the database joins the stored data back into the query result.
This can keep the production database lightweight and fast.
Regards

Omar Bettin

Re: [HACKERS] feature request

2010-02-25 Thread Omar Bettin

...it could be
STORE WHERE [condition] FROM [table] INTO [database]

regards
Omar Bettin

- Original Message - 
From: Robert Haas robertmh...@gmail.com

To: Omar Bettin o.bet...@informaticaindustriale.it
Cc: pgsql-hackers@postgresql.org
Sent: Thursday, February 25, 2010 11:11 PM
Subject: Re: [HACKERS] feature request



On Thu, Feb 25, 2010 at 4:35 PM, Omar Bettin
o.bet...@informaticaindustriale.it wrote:

hello to everyone,
it is a bit late for an Italian, but after a long day of debugging I had an
idea.

Why not introduce a special SQL command like
STORE WHERE [condition] FROM [table]
removing all data that meet the condition and storing them in another
database?
Then, if a query that needs the stored data is executed after such a command,
the database joins the stored data back into the query result.
This can keep the production database lightweight and fast.
Regards


DELETE ... RETURNING is useful for this kind of thing, sometimes.  And
you could use it inside a function to go and do something with each
row returned, though that might not be very graceful for large numbers
of rows.  The proposed syntax wouldn't actually work because it
doesn't specify where to put the data.
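The DELETE ... RETURNING pattern mentioned here can be sketched as follows (the table and column names are hypothetical, chosen only for illustration):

```sql
-- Remove the matching rows and hand them back to the caller
-- in a single statement; the result set can then be processed
-- by the application or by a surrounding function.
DELETE FROM live_table
WHERE created < now() - interval '1 year'
RETURNING *;
```

A calling function can loop over the returned rows, though as noted above this may not scale gracefully to very large row counts.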

...Robert






Re: [HACKERS] feature request

2010-02-25 Thread Omar Bettin

I have read that 8.5 will have replication, so this is just a feature request.

regards
Omar Bettin

- Original Message - 
From: Robert Haas robertmh...@gmail.com

To: Omar Bettin o.bet...@informaticaindustriale.it
Cc: pgsql-hackers@postgresql.org
Sent: Thursday, February 25, 2010 11:22 PM
Subject: Re: [HACKERS] feature request



On Thu, Feb 25, 2010 at 5:17 PM, Omar Bettin
o.bet...@informaticaindustriale.it wrote:

...could be
STORE WHERE [condition] FROM [table] INTO [database]


That still doesn't work, because a PostgreSQL backend doesn't have any
obvious way to access another database.  You'd need to use dblink or
something.  Eventually (but not any time soon) it will probably be
possible to do things like this, which would work for moving data
between tables in the same database:

WITH x AS (DELETE FROM ... RETURNING ...) INSERT INTO ... (...) SELECT
... FROM x

Doing anything with some OTHER database is going to require a lot more
infrastructure.
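The writable-CTE form sketched above did later become real SQL (writable CTEs arrived in PostgreSQL 9.1); a concrete instance, with hypothetical table names, would look like:

```sql
-- Move rows between two tables in the same database in one statement
-- (live_orders and archived_orders are hypothetical names):
WITH moved AS (
    DELETE FROM live_orders
    WHERE closed_at < now() - interval '1 year'
    RETURNING *
)
INSERT INTO archived_orders
SELECT * FROM moved;
```

Moving data to a different database, as the original proposal asked, still requires something like dblink or a foreign data wrapper.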

...Robert




[HACKERS] diabolic state

2008-03-19 Thread Omar Bettin
I know you are in a C state, but:
a string plus a NULL string is a string.
Mandi, Tom
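The behavior being complained about is standard SQL three-valued logic: concatenating NULL yields NULL, not the original string. A quick illustration, with COALESCE as the usual workaround:

```sql
SELECT 'A string' || NULL;                -- NULL, per the SQL standard
SELECT 'A string' || COALESCE(NULL, ''); -- 'A string'
```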


[HACKERS] NULL OR ZERO

2008-02-03 Thread Omar Bettin

Probably I am in the wrong place, but for me NULL on numbers means 0, or ZERO.
I know about the standards...

but I think that such fundamentalism is somewhat wrong.

Omar
a programmer...
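For the numeric case, the standard takes the opposite view: NULL means "unknown" and propagates through arithmetic unless it is explicitly defaulted. A minimal illustration:

```sql
SELECT 1 + NULL;               -- NULL: unknown, not zero
SELECT 1 + COALESCE(NULL, 0);  -- 1, treating NULL as zero explicitly
```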

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[HACKERS] Strange behavior on plpgsql trigger function in if comparison

2008-02-02 Thread Omar Bettin
Hi,

I have to compare an OLD.Value with a NEW.Value in a PostgreSQL 8.2.4 plpgsql
trigger function:

IF OLD.Value <> NEW.Value THEN
...

but, if OLD.Value IS NULL and NEW.Value IS NOT NULL, the previous IF does not
work and I have to use this method:

IF (OLD.Value <> NEW.Value) OR (OLD.Value IS NULL AND NOT NEW.Value IS NULL)
THEN
...

this works!
Is that normal?
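Yes, it is normal: under SQL's three-valued logic, NULL <> anything evaluates to NULL, so the IF branch is not taken. PostgreSQL also supports IS DISTINCT FROM, which treats NULL as an ordinary comparable value and covers both cases at once; a quick sketch:

```sql
-- IS DISTINCT FROM treats two NULLs as equal and NULL vs non-NULL as
-- different, so it replaces the hand-written OR condition:
--   IF OLD.Value IS DISTINCT FROM NEW.Value THEN ...
SELECT NULL IS DISTINCT FROM 1;     -- true
SELECT NULL IS DISTINCT FROM NULL;  -- false
```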

Thank you

Omar Bettin

Re: [HACKERS] communication protocol

2007-08-10 Thread Omar Bettin



Hannu Krosing wrote:

On Thu, 2007-08-09 at 17:12, Omar Bettin wrote:

Why not just set up an ssh2 tunnel with both encryption and compression
and access your remote db over that?

I know that possibility, but I'm just thinking that an ad hoc
protocol for applications that need to locate, range, skip, look up,
bookmark, etc. a lot
of records could increase the system performance.
By that, I don't want to discuss PostgreSQL performance by itself,
but rather an overly generalist protocol.
With libpq, after you get the result, you have to implement some find,
range, etc. routines locally, while on the other side there is a DBMS
born to do exactly that!


My application needs both types of connection, LAN and WAN, and here (in
Italy) there is still some lack of wide-band infrastructure.
Also, installing certificates in a LAN context is a bit of a dirty way to
solve the problem.


Omar



Re: [HACKERS] comunication protocol

2007-08-09 Thread Omar Bettin



Merlin Moncure wrote:

On 8/8/07, Omar Bettin [EMAIL PROTECTED] wrote:
  

Hi,
I have installed postgresql at my client's site as a server for a program
(libpq.dll/VCL based) that I wrote for them.

Everything is working fine, but I noted some general slowness compared with an
older database system (iAnywhere ADS),
due (I think) to the text-based communication protocol.



you should maybe report a couple of specific things (explain analyze,
etc) for analysis and make sure your expectations are reasonable.  It
is possible simple configuration issues or query changes might be the
answer here, then again, maybe not.
  
   I think this is not necessary, because I have used the same
application with two different servers, and you can see the speed
difference just by using it.
  

I know there is the possibility to adopt an ssl compressed connection, but I
think a pure compressed connection could be better.



I think you are looking in the wrong direction here.

  

So, I have studied the postgresql sources and I have tried to implement some
compression between the backend and the frontend,
using pglz_compress/pglz_decompress in be-secure.c and fe-secure.c.

At the moment it is working well in a local configuration; I got some problems
on a remote connection due, I think, to a different way of communicating.



AFAIK, the fastest possible way to get data off the server, skipping
all data and text processing is to write a SPI routine, and stream the
data out locally to the server.   I am doing exactly this in a
particular problem that requires high performance and I can tell you
that SPI is fast.

http://developer.postgresql.org/pgdocs/postgres/spi-examples.html

That way you will bypass the protocol completely.  On my computer, I
get roughly 300k records/sec raw read performance using libpq and
about 1.3m records/sec using a hacked SPI and streaming to disk.  This
may not be helpful for your application, but if you are exploring ways
to bypass protocol overhead this is where I would start.
  
   Interesting, do you think it is possible to implement some send()
in the interface?

By the way, your problem might be the VCL driver you are using to
access the database.  The highest performance driver I have used
(which wraps libpq) is the Zeos library which is very fast.

  
  I have tried ZeosLib and for me it is unusable (too slow); I use a
strongly modified PostgresDAC.

Are there other ways (other than be-secure and fe-secure) in which the
backend communicates with the frontend?
And do you think this solution could speed things up?



Once again, I would start by looking at your application and posting
here to make sure you are looking at the right bottlenecks (you
_suspect_ the protocol is the problem, but is it really?).

this means:
* explain analyze/queries (w/how fast you think it should be going)
* relevant .conf settings
* time measurements from the app

merlin

  
   I just switched from Application1 (iAnywhere ADS) to Application2
(PostgreSQL), optimizing the VCL (strong modifications to the PostgresDAC
sources), and saw the results.
   My application needs to fully open some tables, and with this protocol
that is like downloading a long text file.



   omar



Re: [HACKERS] comunication protocol

2007-08-09 Thread Omar Bettin



Gustavo Tonini wrote:

On Wednesday, 08 August 2007 12:02, Omar Bettin wrote:
  

Hi,
I have installed postgresql at my client's site as a server for a program
(libpq.dll/VCL based) that I wrote for them.



Borland VCL? What component are you using?

Gustavo.
  

  I use a strongly modified PostgresDAC component.

 Omar




Re: [HACKERS] comunication protocol

2007-08-09 Thread Omar Bettin



Merlin Moncure wrote:

On 8/9/07, Omar Bettin [EMAIL PROTECTED] wrote:
  

Merlin Moncure wrote:


AFAIK, the fastest possible way to get data off the server, skipping
all data and text processing is to write a SPI routine, and stream the
data out locally to the server.   I am doing exactly this in a
particular problem that requires high performance and I can tell you
that SPI is fast.

  

Interesting, do you think it is possible to implement some send()
in the interface?



SPI is an interface which allows you to make sql calls from C code
(PostgreSQL allows you to link C code compiled as a .so to the server
and call it -- see numerous examples in contrib).  The routine you need
to exploit is SPI_getbinval, which gives you a Datum (essentially a
variant) pointing to the internal binary representation of your field.
In theory you could collect the data into a buffer and send() it off,
although that's a lot of work IMO.  Also, I would only advise this for
fast dumps from a single table (no joins, etc).
  

why not joins?
  

   I have tried ZeosLib and for me it is unusable (too slow); I use a
strongly modified PostgresDAC.



I'm surprised -- I know the ZeosLib internals and it's a very thin
layer over libpq.  Here is what I suggest:

* turn on statement logging in the server (set log_min_duration_statement)
* make some operations in the app which you suggest are slow  -- they
will show up in the log
* 'explain analyze' the query from the psql console

make note of the times and post back (maybe move this thread to the
-performance list)

merlin

  

So, you don't agree with the compression...

I have looked at the network statistics, and in some cases the network traffic
is very big.
Probably a better setup could increase performance by a few percentage points,
but

I think a compressed protocol could speed up the communication by 2-3 times.

I think the biggest bottleneck in the whole system is just that.

...try to get a query from a remote server over a 56k modem! :)

Omar






[HACKERS] Hello

2007-08-08 Thread Omar Bettin
Hello!!!

this is a test!

[HACKERS] comunication protocol

2007-08-08 Thread Omar Bettin
Hi,
I have installed postgresql at my client's site as a server for a program
(libpq.dll/VCL based) that I wrote for them.

Everything is working fine, but I noted some general slowness compared with an
older database system (iAnywhere ADS),
due (I think) to the text-based communication protocol.

I know there is the possibility to adopt an ssl compressed connection, but I
think a pure compressed connection could be better.

So, I have studied the postgresql sources and I have tried to implement some
compression between the backend and the frontend,
using pglz_compress/pglz_decompress in be-secure.c and fe-secure.c.

At the moment it is working well in a local configuration; I got some problems
on a remote connection due, I think, to a different way of communicating.

Are there other ways (other than be-secure and fe-secure) in which the
backend communicates with the frontend?
And do you think this solution could speed things up?

For the moment I have big trouble debugging the server; do you have any
information on how to debug it with MinGW?

Thanks, and sorry for my english!

Omar Bettin