Re: [GENERAL] Success story full text search

2015-05-02 Thread Albe Laurenz
Frank Langel wrote:
 Does someone have a success story of using Postgres full-text search
 capability with significant data, let's say 50-100 GB?

How about http://www.postgresql.org/search/?m=1

Yours,
Laurenz Albe









Re: [GENERAL] How to exclude blobs (large objects) from being loaded by pg_restore?

2015-05-02 Thread Alanoly Andrews
Thanks Vincent. But as I have already mentioned, excluding tables from the dump 
isn't an option. I want to keep the dump complete, but exclude a table (or 
more) from the restore. The procedure I have outlined does work well with any 
table except the ones which have a large object column.

Alanoly.

Sent from my iPhone

 On May 2, 2015, at 9:39 AM, Vincent Veyron vv.li...@wanadoo.fr wrote:

 On Fri, 1 May 2015 14:12:08 +
 Alanoly Andrews alano...@invera.com wrote:

 Hello,

 We have a database that has been unloaded using pg_dump. This database has a 
 table with a field defined as lo. When restoring this database to another 
 system, we want to avoid loading this particular table as it is very large 
 (about 8GB of blob data) and is not needed on the target system. I tried the 
 following:
 1.  Create a list of all the tables in the pg_dump file using the -l 
 option of pg_restore
 2.  Edit out the lines corresponding to the said table (with the lo 
 column)
 3.  Run the pg_restore with the -L option to use the edited list of 
 tables.

 use the -T option of pg_dump?

 http://www.postgresql.org/docs/9.4/static/app-pgdump.html

 -T table
 --exclude-table=table

 Do not dump any tables matching the table pattern.


 I have found that what this does is to exclude only the non-lo fields of the 
 table. So after the load, the table itself is not visible in the target 
 system. But the actual blob data does get loaded since they are contained in 
 the pg_largeobject system table. This table does not occur in the listing 
 produced in step 1 above and so cannot be edited out. Besides it is a system 
 table and should not be excluded anyway.

 I'd appreciate some input on how I can get the blob data of a specific table 
 to be excluded from a pg_restore. This is an operation that we need to do on 
 a monthly basis. We do not want to exclude the blobs from the dump itself 
 (since the whole database is to be preserved as a monthly record), but only 
 from the restore.

 Postgres 9.1.4 on AIX.

 Thanks.

 Alanoly Andrews.
 Invera Inc.
 Montreal, Canada.






  



 --
Salutations, Vincent Veyron

 https://legalcase.libremen.com/
 Legal case, contract and insurance claim management software






Re: [GENERAL] How to exclude blobs (large objects) from being loaded by pg_restore?

2015-05-02 Thread Vincent Veyron
On Fri, 1 May 2015 14:12:08 +
Alanoly Andrews alano...@invera.com wrote:

 Hello,
 
 We have a database that has been unloaded using pg_dump. This database has a 
 table with a field defined as lo. When restoring this database to another 
 system, we want to avoid loading this particular table as it is very large 
 (about 8GB of blob data) and is not needed on the target system. I tried the 
 following:
 1.  Create a list of all the tables in the pg_dump file using the -l 
 option of pg_restore
 2.  Edit out the lines corresponding to the said table (with the lo 
 column)
 3.  Run the pg_restore with the -L option to use the edited list of 
 tables.
 

use the -T option of pg_dump?

http://www.postgresql.org/docs/9.4/static/app-pgdump.html

-T table
--exclude-table=table

Do not dump any tables matching the table pattern.
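For instance (database, table, and file names below are placeholders only):

pg_dump -Fc -T big_lo_table -f monthly.dump mydb

Keep in mind, though, that large objects live in pg_largeobject and are dumped
as separate BLOBS entries rather than as part of the owning table, so -T by
itself does not keep the blob data out of a whole-database dump.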


 I have found that what this does is to exclude only the non-lo fields of the 
 table. So after the load, the table itself is not visible in the target 
 system. But the actual blob data does get loaded since they are contained in 
 the pg_largeobject system table. This table does not occur in the listing 
 produced in step 1 above and so cannot be edited out. Besides it is a system 
 table and should not be excluded anyway.
 
 I'd appreciate some input on how I can get the blob data of a specific table 
 to be excluded from a pg_restore. This is an operation that we need to do on 
 a monthly basis. We do not want to exclude the blobs from the dump itself 
 (since the whole database is to be preserved as a monthly record), but only 
 from the restore.
 
 Postgres 9.1.4 on AIX.
 
 Thanks.
 
 Alanoly Andrews.
 Invera Inc.
 Montreal, Canada.
 
 
 
 
 
 
   
 


-- 
Salutations, Vincent Veyron 

https://legalcase.libremen.com/ 
Legal case, contract and insurance claim management software




Re: [GENERAL] How to exclude blobs (large objects) from being loaded by pg_restore?

2015-05-02 Thread Andreas Kretschmer
Vincent Veyron vv.li...@wanadoo.fr wrote:

 On Fri, 1 May 2015 14:12:08 +
 Alanoly Andrews alano...@invera.com wrote:
 
  Hello,
  
  We have a database that has been unloaded using pg_dump. This database has 
  a table with a field defined as lo. When restoring this database to 
  another system, we want to avoid loading this particular table as it is 
  very large (about 8GB of blob data) and is not needed on the target system. 
  I tried the following:
  1.  Create a list of all the tables in the pg_dump file using the -l 
  option of pg_restore
  2.  Edit out the lines corresponding to the said table (with the lo 
  column)
  3.  Run the pg_restore with the -L option to use the edited list of 
  tables.
  
 
 use the -T option of pg_dump?

I think this isn't a solution.

I HAVE a complete dump, but now I need a restore without the lo data.
And you can't create a new dump with -T now, for several reasons.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°




Re: [GENERAL] How to exclude blobs (large objects) from being loaded by pg_restore?

2015-05-02 Thread Melvin Davidson
I'm not exactly positive, but I believe pg_extractor has an exclude type
option that might fit your requirement. You can download it here:

https://github.com/omniti-labs/pg_extractor

Also note that this is a Python program, so you might even be able to hack
some change into it to do what you need.

Good luck, and may the bytes be with you.

On Sat, May 2, 2015 at 10:27 AM, Alanoly Andrews alano...@invera.com
wrote:

 Thanks Vincent. But as I have already mentioned, excluding tables from the
 dump isn't an option. I want to keep the dump complete, but exclude a table
 (or more) from the restore. The procedure I have outlined does work well
 with any table except the ones which have a large object  column.

 Alanoly.

 Sent from my iPhone

  On May 2, 2015, at 9:39 AM, Vincent Veyron vv.li...@wanadoo.fr wrote:
 
  On Fri, 1 May 2015 14:12:08 +
  Alanoly Andrews alano...@invera.com wrote:
 
  Hello,
 
  We have a database that has been unloaded using pg_dump. This database
 has a table with a field defined as lo. When restoring this database to
 another system, we want to avoid loading this particular table as it is
 very large (about 8GB of blob data) and is not needed on the target system.
 I tried the following:
  1.  Create a list of all the tables in the pg_dump file using the
 -l option of pg_restore
  2.  Edit out the lines corresponding to the said table (with the
 lo column)
  3.  Run the pg_restore with the -L option to use the edited list of
 tables.
 
  use the -T option of pg_dump?
 
  http://www.postgresql.org/docs/9.4/static/app-pgdump.html
 
  -T table
  --exclude-table=table
 
  Do not dump any tables matching the table pattern.
 
 
  I have found that what this does is to exclude only the non-lo fields
 of the table. So after the load, the table itself is not visible in the
 target system. But the actual blob data does get loaded since they are
 contained in the pg_largeobject system table. This table does not occur in
 the listing produced in step 1 above and so cannot be edited out. Besides
 it is a system table and should not be excluded anyway.
 
  I'd appreciate some input on how I can get the blob data of a specific
 table to be excluded from a pg_restore. This is an operation that we need
 to do on a monthly basis. We do not want to exclude the blobs from the dump
 itself (since the whole database is to be preserved as a monthly record),
 but only from the restore.
 
  Postgres 9.1.4 on AIX.
 
  Thanks.
 
  Alanoly Andrews.
  Invera Inc.
  Montreal, Canada.
 
 
 
 
 
 
   
 
 
 
  --
 Salutations, Vincent Veyron
 
  https://legalcase.libremen.com/
  Legal case, contract and insurance claim management software

 



Re: [GENERAL] Success story full text search

2015-05-02 Thread Paul Jungwirth

Does someone have a success story of using Postgres full-text search
capability with significant data, let's say 50-100 GB?


This is a recent and very complete article on using Postgres for 
full-text search:


http://blog.lostpropertyhq.com/postgres-full-text-search-is-good-enough/

see also the discussion here:

https://news.ycombinator.com/item?id=8381748
https://news.ycombinator.com/item?id=8714477

That should give you a good sense of the abilities and limitations versus
using Lucene and the like.
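For a quick flavour of what the Postgres side involves, a minimal sketch
(table, column, and index names are invented for illustration) looks roughly
like this:

CREATE TABLE docs (id serial PRIMARY KEY, body text);

-- Precompute the tsvector and index it with GIN; important at 50-100 GB scale.
ALTER TABLE docs ADD COLUMN body_tsv tsvector;
UPDATE docs SET body_tsv = to_tsvector('english', body);
CREATE INDEX docs_body_tsv_idx ON docs USING gin (body_tsv);

-- Ranked query.
SELECT id, ts_rank(body_tsv, q) AS rank
FROM docs, to_tsquery('english', 'postgres & search') AS q
WHERE body_tsv @@ q
ORDER BY rank DESC
LIMIT 10;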


On scanning that article I don't see any mention of size, but you could 
always ask the author!


Paul






[GENERAL] plpgsql functions organisation

2015-05-02 Thread Yves Dorfsman

I find my plpgsql functions become unreadable very quickly. I want to break
them up into smaller functions.

What is the best way to organise them?
Is there any way to define functions inside functions?
When I list functions in psql, I can see them all at the same level; is there
any way to organise them in packages, like with Python and other languages, so
the smaller functions are hidden away in a package/directory?


Thanks.

-- 
http://yves.zioup.com
gpg: 4096R/32B0F416





Re: [GENERAL] How to exclude blobs (large objects) from being loaded by pg_restore?

2015-05-02 Thread Daniel Verite
Alanoly Andrews wrote:

 This table does not occur in the listing produced in step 1 above and so
 cannot be edited out

Yes, pg_largeobject is not mentioned but the TOC should have an entry named
BLOBS, as well as other entries related to each large object.

If BLOBS is removed or commented out, pg_restore -L from that list should not
create any large objects in the destination database.

As for the table that contains the lo column with the references to large
objects, it can be kept or removed from the restore list; that's up to you, but
it does not determine whether blobs are restored or not.

If the table is kept, its lo column will have dangling references to
non-existing large objects, but it's irrelevant for pg_restore, as it doesn't
track those references. You can always update that column to NULL after the
load, or ignore it, or drop it, whatever is more convenient.
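Spelled out as commands (file and database names are placeholders; check the
TOC first, since a plain grep for BLOB would also drop any unrelated entry
that happens to contain that string):

pg_restore -l monthly.dump > items.list
grep -v 'BLOB' items.list > items.nolobs    # drops BLOBS (and BLOB COMMENTS)
pg_restore -L items.nolobs -d targetdb monthly.dump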

Best regards,
-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Yves Dorfsman
On 2015-05-02 11:12, Melvin Davidson wrote:
 AFAIK, you cannot package functions in  PostgreSQL, but it is possible to
 call a function from within a function.
 
 That being said, I would seriously look at how and why you are writing your
 functions
 as functions that call other functions are not very efficient.

Simply to make long procedures easier to read and follow the logic.

 
 Also note that PostgreSQL allows you define functions using Python, so that
 might be a possible compromise.


Thanks.

-- 
http://yves.zioup.com
gpg: 4096R/32B0F416





Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Melvin Davidson
AFAIK, you cannot package functions in PostgreSQL, but it is possible to
call a function from within a function.

That being said, I would seriously look at how and why you are writing your
functions, as functions that call other functions are not very efficient.

Also note that PostgreSQL allows you to define functions using Python, so that
might be a possible compromise.

On Sat, May 2, 2015 at 12:53 PM, Yves Dorfsman y...@zioup.com wrote:


 I find my plpgsql functions becomes unreadable very quickly. I want to
 break
 them up in smaller functions.

 What is the best way to organised them?
 Is there any way to define functions inside functions?
 When I list functions in psql, I can see them all at the same level, is
 there
 any way to organise them in packages like with python and other languages,
 so
 the smaller functions are hidden away in a package/directory?


 Thanks.

 --
 http://yves.zioup.com
 gpg: 4096R/32B0F416



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Adrian Klaver

On 05/02/2015 09:53 AM, Yves Dorfsman wrote:


I find my plpgsql functions becomes unreadable very quickly. I want to break
them up in smaller functions.

What is the best way to organised them?
Is there any way to define functions inside functions?
When I list functions in psql, I can see them all at the same level, is there
any way to organise them in packages like with python and other languages, so
the smaller functions are hidden away in a package/directory?


The only thing I can think of is to use SCHEMAs:

http://www.postgresql.org/docs/9.4/interactive/sql-createschema.html
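A rough sketch of what that can look like (schema and function names are
made up):

CREATE SCHEMA billing;

CREATE FUNCTION billing.net_amount(gross numeric) RETURNS numeric AS $$
BEGIN
    -- small helper tucked away in its own schema
    RETURN gross / 1.05;
END;
$$ LANGUAGE plpgsql;

-- Callers either qualify the name or put the schema on their search_path:
SELECT billing.net_amount(105);
SET search_path TO billing, public;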




Thanks.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Bill Moran
On Sat, 2 May 2015 14:07:31 -0700
Jeff Janes jeff.ja...@gmail.com wrote:

 On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver adrian.kla...@aklaver.com
 wrote:
 
  On 05/02/2015 10:12 AM, Melvin Davidson wrote:
 
  AFAIK, you cannot package functions in  PostgreSQL, but it is possible
  to
  call a function from within a function.
 
  That being said, I would seriously look at how and why you are writing
  your functions
  as functions that call other functions are not very efficient.
 
 
  I am not following. That is what packaging is about, separating out 'units
  of work' so they can be combined as needed. Part of that is using existing
  functions in new functions/classes. In fact in the Postgres source I see
  this in many places. Now it is entirely possible I missed a memo, so I am
  open to a more detailed explanation of the inefficiencies involved.
 
 
 The Postgres source is written in C, not in plpgsql.  C has a good
 optimizing compiler and plpgsql doesn't.

Maybe that's a roundabout way of saying that if your functions are
complex enough to require calling sub-functions, they might justify
being written in C?

-- 
Bill Moran




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Adrian Klaver

On 05/02/2015 10:12 AM, Melvin Davidson wrote:

AFAIK, you cannot package functions in  PostgreSQL, but it is possible to
call a function from within a function.

That being said, I would seriously look at how and why you are writing
your functions
as functions that call other functions are not very efficient.


I am not following. That is what packaging is about, separating out 
'units of work' so they can be combined as needed. Part of that is using 
existing functions in new functions/classes. In fact in the Postgres 
source I see this in many places. Now it is entirely possible I missed a 
memo, so I am open to a more detailed explanation of the inefficiencies 
involved.




Also note that PostgreSQL allows you define functions using Python, so
that might be a possible compromise.

On Sat, May 2, 2015 at 12:53 PM, Yves Dorfsman y...@zioup.com
mailto:y...@zioup.com wrote:


I find my plpgsql functions becomes unreadable very quickly. I want
to break
them up in smaller functions.

What is the best way to organised them?
Is there any way to define functions inside functions?
When I list functions in psql, I can see them all at the same level,
is there
any way to organise them in packages like with python and other
languages, so
the smaller functions are hidden away in a package/directory?


Thanks.

--
http://yves.zioup.com
gpg: 4096R/32B0F416



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org
mailto:pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Jeff Janes
On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 05/02/2015 10:12 AM, Melvin Davidson wrote:

 AFAIK, you cannot package functions in  PostgreSQL, but it is possible
 to
 call a function from within a function.

 That being said, I would seriously look at how and why you are writing
 your functions
 as functions that call other functions are not very efficient.


 I am not following. That is what packaging is about, separating out 'units
 of work' so they can be combined as needed. Part of that is using existing
 functions in new functions/classes. In fact in the Postgres source I see
 this in many places. Now it is entirely possible I missed a memo, so I am
 open to a more detailed explanation of the inefficiencies involved.


The Postgres source is written in C, not in plpgsql.  C has a good
optimizing compiler and plpgsql doesn't.

Cheers,

Jeff


Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Adrian Klaver

On 05/02/2015 02:07 PM, Jeff Janes wrote:

On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver adrian.kla...@aklaver.com
mailto:adrian.kla...@aklaver.com wrote:

On 05/02/2015 10:12 AM, Melvin Davidson wrote:

AFAIK, you cannot package functions in  PostgreSQL, but it is
possible to
call a function from within a function.

That being said, I would seriously look at how and why you are
writing
your functions
as functions that call other functions are not very efficient.


I am not following. That is what packaging is about, separating out
'units of work' so they can be combined as needed. Part of that is
using existing functions in new functions/classes. In fact in the
Postgres source I see this in many places. Now it is entirely
possible I missed a memo, so I am open to a more detailed
explanation of the inefficiencies involved.


The Postgres source is written in C, not in plpgsql.  C has a good
optimizing compiler and plpgsql doesn't.


Does this actually matter? I am a biologist who backed into computing,
so I realize I am weak on the fundamentals. Still, the scientist in me
wants data backing assertions. As I understand it, plpgsql works close to
the server and is optimized to do so. I know writing in C would be a
better solution. Still, is calling plpgsql functions inside plpgsql
really a bad thing when just considering plpgsql?





Cheers,

Jeff



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Melvin Davidson
Further to the point of saying functions are inefficient, consider the
fact that as of the current version of PostgreSQL, plpgsql
functions cannot be pre-optimized. So when they are referenced in a SQL
statement, PostgreSQL (the optimizer) has to load the
function from the catalogs, which involves overhead. If the function calls
another function, then the process has to be repeated, which involves
additional overhead. Ergo, that is not the most efficient way of doing
things.

On Sat, May 2, 2015 at 5:17 PM, Bill Moran wmo...@potentialtech.com wrote:

 On Sat, 2 May 2015 14:07:31 -0700
 Jeff Janes jeff.ja...@gmail.com wrote:

  On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver adrian.kla...@aklaver.com
 
  wrote:
 
   On 05/02/2015 10:12 AM, Melvin Davidson wrote:
  
   AFAIK, you cannot package functions in  PostgreSQL, but it is
 possible
   to
   call a function from within a function.
  
   That being said, I would seriously look at how and why you are writing
   your functions
   as functions that call other functions are not very efficient.
  
  
   I am not following. That is what packaging is about, separating out
 'units
   of work' so they can be combined as needed. Part of that is using
 existing
   functions in new functions/classes. In fact in the Postgres source I
 see
   this in many places. Now it is entirely possible I missed a memo, so I
 am
   open to a more detailed explanation of the inefficiencies involved.
  
 
  The Postgres source is written in C, not in plpgsql.  C has a good
  optimizing compiler and plpgsql doesn't.

 Maybe that's a roundabout way of saying that if your functions are
 complex enough to require calling sub-functions they might be
 justifying being writting in C?

 --
 Bill Moran




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Bill Moran
On Sat, 02 May 2015 15:06:24 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 05/02/2015 02:07 PM, Jeff Janes wrote:
  On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver adrian.kla...@aklaver.com
  mailto:adrian.kla...@aklaver.com wrote:
 
  On 05/02/2015 10:12 AM, Melvin Davidson wrote:
 
  AFAIK, you cannot package functions in  PostgreSQL, but it is
  possible to
  call a function from within a function.
 
  That being said, I would seriously look at how and why you are
  writing
  your functions
  as functions that call other functions are not very efficient.
 
 
  I am not following. That is what packaging is about, separating out
  'units of work' so they can be combined as needed. Part of that is
  using existing functions in new functions/classes. In fact in the
  Postgres source I see this in many places. Now it is entirely
  possible I missed a memo, so I am open to a more detailed
  explanation of the inefficiencies involved.
 
 
  The Postgres source is written in C, not in plpgsql.  C has a good
  optimizing compiler and plpgsql doesn't.
 
 Does this actually matter?  I am a biologist that backed into computing, 
 so I realize I am weak on the fundamentals. Still the scientist in me 
 wants data backing assertions. As I understand it plpgsql works close to 
 the server and is optimized to do so.  I know writing in C would be a 
 better solution. Still is calling plpgsql functions inside plpgsql 
 really a bad thing when just considering plpgsql?

The answer to that is the same answer to so many other things: it depends.

plpgsql functions are slower than C. They also lack a lot of language
features that C has. That being said, if they're meeting your needs, then
don't worry about it. plpgsql is around because for most people, it works
well enough. There are certainly cases when you want to create very complex
logic in the database and plpgsql is liable to make that difficult. But
there are a lot of cases where having to manage pointers and a build
environment and all the things that go with C aren't justified, because
plpgsql has none of that complexity. There are advantages both ways.

The beauty of PostgreSQL is that you have both available and you
can choose whichever is best for your situation.

-- 
Bill Moran




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Adrian Klaver

On 05/02/2015 03:10 PM, Melvin Davidson wrote:

Further to the point of saying functions are ineffiencent, consider the
fact that as of the current version of PostgreSQL, plpgsql
functions cannot be pre-optimized. So when they are referenced in a SQL
statement, PostgreSQL (optimizer) has load the
function from the catalogs, which involves overhead. If the function
calls another function, then the process has to be repeated, which
involves additional overhead. Ergo, that is not the most efficient way
of doing things.



Yeah, I see the explanation here:

http://www.postgresql.org/docs/9.4/interactive/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING

Which has this:

As each expression and SQL command is first executed in the function, 
the PL/pgSQL interpreter parses and analyzes the command to create a 
prepared statement, using the SPI manager's SPI_prepare function. 
Subsequent visits to that expression or command reuse the prepared 
statement. Thus, a function with conditional code paths that are seldom 
visited will never incur the overhead of analyzing those commands that 
are never executed within the current session


So it is still not clear to me whether a monolithic function is better or
worse than one that calls other functions as needed. I am probably
overthinking this, but it would make a good experiment. I just have to figure
out a realistic scenario to test. Thanks for the input.


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Adrian Klaver

On 05/02/2015 03:28 PM, Bill Moran wrote:

On Sat, 02 May 2015 15:06:24 -0700
Adrian Klaver adrian.kla...@aklaver.com wrote:


On 05/02/2015 02:07 PM, Jeff Janes wrote:

On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver adrian.kla...@aklaver.com
mailto:adrian.kla...@aklaver.com wrote:

 On 05/02/2015 10:12 AM, Melvin Davidson wrote:

 AFAIK, you cannot package functions in  PostgreSQL, but it is
 possible to
 call a function from within a function.

 That being said, I would seriously look at how and why you are
 writing
 your functions
 as functions that call other functions are not very efficient.


 I am not following. That is what packaging is about, separating out
 'units of work' so they can be combined as needed. Part of that is
 using existing functions in new functions/classes. In fact in the
 Postgres source I see this in many places. Now it is entirely
 possible I missed a memo, so I am open to a more detailed
 explanation of the inefficiencies involved.


The Postgres source is written in C, not in plpgsql.  C has a good
optimizing compiler and plpgsql doesn't.


Does this actually matter?  I am a biologist that backed into computing,
so I realize I am weak on the fundamentals. Still the scientist in me
wants data backing assertions. As I understand it plpgsql works close to
the server and is optimized to do so.  I know writing in C would be a
better solution. Still is calling plpgsql functions inside plpgsql
really a bad thing when just considering plpgsql?


The answer to that is the same answer to so many other things: it depends.

plpgsql functions are slower than C. They also lack a lot of language
features that C has. That being said, if they're meeting your needs, then
don't worry about it. plpgsql is around because for most people, it works
well enough. There are certainly cases when you want to create very complex
logic in the database and plpgsql is liable to make that difficult. But
there are a lot of cases where having to manage pointers and a build
environment and all the things that go with C aren't justified, because
plpgsql has none of that complexity. There are advantages both ways.

The beauty of PostgreSQL is that you have both available and you
can choose whichever is best for your situation.


Agreed, though in my case I drop into plpythonu when I want more complex 
solutions.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] delete is getting hung when there is a huge data in table

2015-05-02 Thread David G. Johnston
On Saturday, May 2, 2015, Mitu Verma mitu.ve...@ericsson.com wrote:


 still this delete operation is not working and not a single row has been
 deleted from the table.


Because of MVCC, other sessions are not able to see partial deletions... and,
as you alluded to, the data itself is not actually removed by a
deletion, though a vacuum will eventually allow the same space to be reused,
thus deferring the increase in size from future insertions.
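Once the rows really are deleted, something along these lines (sketch only,
using the table name from the original post) lets the space be reused or
reclaimed:

VACUUM VERBOSE audittraillogentry;
-- VACUUM FULL audittraillogentry;  -- actually shrinks the file, but takes
-- an exclusive lock and rewrites the whole table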

David J.


Re: [GENERAL] delete is getting hung when there is a huge data in table

2015-05-02 Thread Tom Lane
Mitu Verma mitu.ve...@ericsson.com writes:
 1. If postgreSQL has some limitations for deletion of large data?

Not as such, but you've not given us any details that would permit
comment.

A reasonably likely bet is that this table is referenced by a foreign key
in some other table, and that other table has no index on the referencing
column.  That would make the FK is-it-ok-to-delete checks very slow.
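If that turns out to be the case, the usual fix is an index on the referencing
column of the child table; a sketch with invented names:

-- child_table and audit_id are hypothetical; substitute the real referencing
-- table and column.
CREATE INDEX CONCURRENTLY child_table_audit_id_idx
    ON child_table (audit_id);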

regards, tom lane




Re: [GENERAL] plpgsql functions organisation

2015-05-02 Thread Melvin Davidson
OK, here is a simple example that shows the difference between using a
self-contained function and one that calls sub-functions.

After loading all the functions below, repeat each of the EXPLAIN
statements a few times and note that callsubs takes almost TWICE as long
to execute as nosub.

CREATE OR REPLACE FUNCTION nosub(text)
  RETURNS void AS
$BODY$

DECLARE

p_in_str ALIAS FOR $1;

BEGIN
IF LENGTH(p_in_str) = 6
THEN RAISE NOTICE 'Hi %', p_in_str;
ELSE
RAISE NOTICE 'Hello %', p_in_str;
END IF;

RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION nosub(text)
  OWNER TO postgres;

 CREATE OR REPLACE FUNCTION called1(text)
  RETURNS void AS
$BODY$

DECLARE

p_in_str1 ALIAS FOR $1;

BEGIN
RAISE NOTICE 'Hi %', p_in_str1;

RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION called1(text)
  OWNER TO postgres;

 CREATE OR REPLACE FUNCTION called2(text)
  RETURNS void AS
$BODY$

DECLARE

p_in_str2 ALIAS FOR $1;

BEGIN
RAISE NOTICE 'Hello %', p_in_str2;

RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION called2(text)
  OWNER TO postgres;

 CREATE OR REPLACE FUNCTION callsubs(text)
  RETURNS void AS
$BODY$

DECLARE

p_in_str ALIAS FOR $1;

BEGIN
IF LENGTH(p_in_str) = 6
THEN PERFORM CALLED1(p_in_str);
ELSE
PERFORM CALLED2(p_in_str);
END IF;

RETURN;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION callsubs(text)
  OWNER TO postgres;

EXPLAIN ANALYZE SELECT nosub('melvin');

EXPLAIN ANALYZE SELECT callsubs('melvin');
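To smooth out the noise of single runs, the same comparison can also be driven
through many calls at once (purely illustrative):

SET client_min_messages = warning;  -- hide the NOTICE chatter while timing
EXPLAIN ANALYZE SELECT nosub('melvin') FROM generate_series(1, 10000);
EXPLAIN ANALYZE SELECT callsubs('melvin') FROM generate_series(1, 10000);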

On Sat, May 2, 2015 at 7:37 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 05/02/2015 03:28 PM, Bill Moran wrote:

 On Sat, 02 May 2015 15:06:24 -0700
 Adrian Klaver adrian.kla...@aklaver.com wrote:

  On 05/02/2015 02:07 PM, Jeff Janes wrote:

 On Sat, May 2, 2015 at 1:05 PM, Adrian Klaver 
 adrian.kla...@aklaver.com
 mailto:adrian.kla...@aklaver.com wrote:

  On 05/02/2015 10:12 AM, Melvin Davidson wrote:

  AFAIK, you cannot package functions in  PostgreSQL, but it is
  possible to
  call a function from within a function.

  That being said, I would seriously look at how and why you are
  writing
  your functions
  as functions that call other functions are not very efficient.


  I am not following. That is what packaging is about, separating out
  'units of work' so they can be combined as needed. Part of that is
  using existing functions in new functions/classes. In fact in the
  Postgres source I see this in many places. Now it is entirely
  possible I missed a memo, so I am open to a more detailed
  explanation of the inefficiencies involved.


 The Postgres source is written in C, not in plpgsql.  C has a good
 optimizing compiler and plpgsql doesn't.


 Does this actually matter?  I am a biologist that backed into computing,
 so I realize I am weak on the fundamentals. Still the scientist in me
 wants data backing assertions. As I understand it plpgsql works close to
 the server and is optimized to do so.  I know writing in C would be a
 better solution. Still is calling plpgsql functions inside plpgsql
 really a bad thing when just considering plpgsql?


 The answer to that is the same answer to so many other things: it depends.

 plpgsql functions are slower than C. They also lack a lot of language
 features that C has. That being said, if they're meeting your needs, then
 don't worry about it. plpgsql is around because for most people, it works
 well enough. There are certainly cases when you want to create very
 complex
 logic in the database and plpgsql is liable to make that difficult. But
 there are a lot of cases where having to manage pointers and a build
 environment and all the things that go with C aren't justified, because
 plpgsql has none of that complexity. There are advantages both ways.

 The beauty of PostgreSQL is that you have both available and you
 can choose whichever is best for your situation.


 Agreed, though in my case I drop into plpythonu when I want more complex
 solutions.




 --
 Adrian Klaver
 adrian.kla...@aklaver.com




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] delete is getting hung when there is a huge data in table

2015-05-02 Thread Mitu Verma
Hi,

I am facing an issue with the deletion of a huge amount of data.
We have a cron script which is used to delete the last 3 months of data from one
of the tables.
The table is large (8,872,597 rows, as you can see from the count below) since it
holds the last 3 months of data.

fm_db_Server3=# select count(*) from audittraillogentry ;
  count
---------
 8872597
(1 row)

Now the issue is that when this script for the deletion of data is launched, it
takes more than 7 days and does nothing, i.e. not a single row has been deleted.

Then we stopped the script, terminated the database sessions using SELECT
pg_terminate_backend(procpid), and ran the following command:

delete from audittraillogentry where intime <= to_timestamp('2015-01-30
23:59:59.999', '/MM/DD-HH24:MI:SS.FF3') OR outtime <=
to_timestamp('2015-01-30 23:59:59.999', '/MM/DD-HH24:MI:SS.FF3');

Still this delete operation is not working and not a single row has been
deleted from the table.

Now I have the following questions:

1. Does PostgreSQL have some limitations on the deletion of large amounts of data?
2. Should I run VACUUM after stopping the cron script, to get a smaller table?
3. Could dropping the indexes help here? Not sure.
4. Should I think about partitioning? Is there any limitation when dealing with
large data in PostgreSQL?

regards
Mitu

_
___

