Re: [GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-20 Thread Luca Ferrari
On Fri, Nov 17, 2017 at 9:41 PM, Justin Pryzby  wrote:
> I think you want something like this ?
>
> postgres=# SELECT schemaname, tablename, attname, 
> unnest(histogram_bounds::text::text[]), histogram_bounds FROM pg_stats LIMIT 
> 9;
>  pg_catalog | pg_pltemplate | tmplname| plperl  | 
> {plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}

I don't think that is what I'm looking for; I would like something like:

 select unnest( histogram_bounds::text::text[] ),
        unnest( most_common_freqs )
 from pg_stats

but with the two arrays correlated by index. Is that achievable in
SQL? Or should I use a plpgsql loop with an index variable?
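For what it's worth, a multi-argument unnest in the FROM clause (available since 9.4) pairs the arrays element by element, which seems to be what is being asked; a sketch:

```sql
-- Pair each most-common value with its frequency, position by position
SELECT s.schemaname, s.tablename, s.attname, u.mcv, u.freq
FROM pg_stats s,
     unnest( s.most_common_vals::text::text[],
             s.most_common_freqs ) AS u( mcv, freq )
LIMIT 9;
```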

Luca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-17 Thread Luca Ferrari
Hi all,
maybe this is trivial, but I need a hint on a way to see a tabular form
of the MCVs and MCFs out of pg_stats with a query. Is it possible to
get a set of rows, each with a most common value in one column and the
corresponding frequency in the other? (assuming I can cast the array
of MCVs to an array of the right type)

Thanks,
Luca




Re: [GENERAL] xmin increasing within a transaction block?

2017-11-06 Thread Luca Ferrari
On Mon, Nov 6, 2017 at 3:26 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Alvaro Herrera <alvhe...@alvh.no-ip.org> writes:
>> Luca Ferrari wrote:
>>> Why is xmin greater than the current transaction id (and most notably
>>> not "fixed")?
>
>> Something is using subtransactions there.  My first guess would be that
>> there are triggers with EXCEPTION blocks, but your example doesn't show
>> any.  Or maybe you have event triggers.
>
> I can reproduce the example if I "\set ON_ERROR_ROLLBACK on" in psql.
>


Shame on me, I forgot that I had enabled that in my ~/.psqlrc file
(and had not hit an error within the transaction block, so I never saw
it aborting). In fact, the psql manual page says about
ON_ERROR_ROLLBACK:

The error rollback mode works by issuing an implicit SAVEPOINT for you,
just before each command that is in a transaction block, and
then rolling back to the savepoint if the command fails.
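In other words, each statement runs inside its own subtransaction, which is why each row shows a different xmin; conceptually, psql is issuing something like this behind the scenes (the savepoint name is psql's internal one):

```sql
BEGIN;
SAVEPOINT pg_psql_temporary_savepoint;
INSERT INTO foo(i) VALUES ( 1 );   -- consumes a subtransaction xid (2466)
RELEASE pg_psql_temporary_savepoint;
SAVEPOINT pg_psql_temporary_savepoint;
INSERT INTO foo(i) VALUES ( 2 );   -- consumes the next one (2467)
RELEASE pg_psql_temporary_savepoint;
COMMIT;
```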

Sorry for the noise.
Thanks,
Luca




Re: [GENERAL] xmin increasing within a transaction block?

2017-11-06 Thread Luca Ferrari
On Mon, Nov 6, 2017 at 2:29 PM, Andres Freund  wrote:
> That doesn't look like plain postgres behaviour to me. Any chance you're
> using a pooler in statement mode in front of postgres?


None that I'm aware of: the machine runs PostgreSQL locally and I'm
connecting directly on port 5432.
In the past I had wal_level set to logical, but it is now minimal and
I have no replication slots (in case that could be related to this
behavior):

# SELECT * FROM pg_replication_slots;
(0 rows)

Any idea?
Luca




Re: [GENERAL] xmin increasing within a transaction block?

2017-11-06 Thread Luca Ferrari
On Mon, Nov 6, 2017 at 1:53 PM, Alvaro Herrera  wrote:
> Something is using subtransactions there.  My first guess would be that
> there are triggers with EXCEPTION blocks, but your example doesn't show
> any.  Or maybe you have event triggers.

Thanks, but I don't see any event trigger:

> \dy
  List of event triggers
 Name | Event | Owner | Enabled | Procedure | Tags
--+---+---+-+---+--
(0 rows)


Please note that I tested this on a freshly created database and got
the same behavior.
I did have event triggers in the past, but I dropped them long before
this behavior appeared.

Any other idea?

Luca




[GENERAL] xmin increasing within a transaction block?

2017-11-06 Thread Luca Ferrari
Hi all,
I suspect this has a trivial explanation, but this is what I'm experiencing:

> CREATE TABLE foo( i int );
> BEGIN;
* > INSERT INTO foo(i) VALUES( 1 );
* > INSERT INTO foo(i) VALUES( 2 );
* > SELECT xmin, cmin, xmax, cmax, i FROM foo;
 xmin | cmin | xmax | cmax | i
--+--+--+--+---
 2466 |0 |0 |0 | 1
 2467 |1 |0 |1 | 2
(2 rows)

* > SELECT txid_current();
 txid_current
--
 2465
(1 row)


Why is xmin greater than the current transaction id (and most notably
not "fixed")?
What am I missing here?
I'm running 9.6.5.

> select version();

version
--
 PostgreSQL 9.6.5 on amd64-portbld-freebsd11.0, compiled by FreeBSD
clang version 3.8.0 (tags/RELEASE_380/final 262564) (based on LLVM
3.8.0), 64-bit
(1 row)




Re: [GENERAL] template1 being accessed

2016-04-05 Thread Luca Ferrari
(sorry, back to the list)

On Tue, Apr 5, 2016 at 6:11 AM, John R Pierce  wrote:
> its also possible some management software might use it as a default place
> to connect so they can get a list of databases or whatever .

This is probably the most common cause of continuous access to template1.


Luca




Re: [GENERAL] Oracle baseline/baseplan/outplan in Postgres

2015-08-06 Thread Luca Ferrari
On Thu, Aug 6, 2015 at 3:52 PM, Uriy Zhuravlev
u.zhurav...@postgrespro.ru wrote:
 Hello all.
 What do you think about adding functionality baseplan/outplan in Postgres? Who
 needs it in postgres?

I suspect they will not be introduced, for the same reasons as query hints.
I trust the optimizer to make the right choice, or at least a better
choice than I could (especially with regard to future changes in the data).

Luca




Re: [GENERAL] psql sqlstate return code access

2015-03-16 Thread Luca Ferrari
On Mon, Mar 16, 2015 at 4:23 PM, Little, Doug C
doug.lit...@vend.frb.org wrote:
 insert into x select … from y;



 insert into log(message, code,timestamp)  values('insert into
 x',:SQLSTATE,now());



I'm pretty sure you have to wrap it in a PL/pgSQL function:
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html
Moreover, if the command executes correctly there is no exception and
the SQLSTATE will be '00000' (success).
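A minimal sketch of such a wrapper, assuming the x, y and log tables from the original message:

```sql
CREATE OR REPLACE FUNCTION insert_with_log() RETURNS void AS $$
BEGIN
    INSERT INTO x SELECT * FROM y;
EXCEPTION WHEN OTHERS THEN
    -- SQLSTATE is available only inside an exception handler.
    -- Note: re-raising here would roll back this log insert too.
    INSERT INTO log(message, code, timestamp)
    VALUES ('insert into x', SQLSTATE, now());
END $$ LANGUAGE plpgsql;
```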

Hope this helps.

Luca




Re: [GENERAL] 9.3.5 failing to compile with dtrace on FreeBSD 10.1

2015-02-23 Thread Luca Ferrari
Thanks,
I'm going to try this solution and report back, also on the bug I
filed some time ago:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=196053

Luca

On Sat, Jan 24, 2015 at 8:53 AM, Keith Fiske ke...@omniti.com wrote:
 Just wanted to thank Lacey for the assistance. I set up my first BSD server
 recently and installing things via ports had been going great until this.
 The docs don't mention how to enable the kernel modules permanently either,
 so thanks also for that additional note.
 https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/dtrace-enable.html

 Perhaps adding a section on BSD ports and/or expanding on the Dtrace section
 (15.7.6.6) in this part of the postgresql install docs may be helpful for
 others?
 http://www.postgresql.org/docs/current/static/installation-platform-notes.html

 None of the other steps besides the kernel loading are needed for
 compilation and installation, but could be handy.

 --
 Keith Fiske
 Database Administrator
 OmniTI Computer Consulting, Inc.
 http://www.keithf4.com

 On Mon, Dec 22, 2014 at 2:22 PM, Lacey Powers lacey.lea...@gmail.com
 wrote:

 Hello Luca,

 I had some success getting PostgreSQL 9.4 to build on FreeBSD 10.1-p2
 RELEASE with DTrace. The probes look fully functional, and fun to work with,
 though I haven't had time to get incredibly deep into using them. This is
 what worked for me, after much digging around.

 I was building and running PostgreSQL and dtrace directly on my
 workstation, not in a jail. I'm not sure what additional hoops would need to
 be jumped through off-hand to run dtrace on FreeBSD in a jail.

 Initially, I got compile errors, but I tracked those down and realized
 that I needed to load the dtrace kernel modules, which I had expected to be
 loaded automatically, but weren't. Otherwise, you get cryptic compile
 errors.

 I had to load the dtrace kernel modules with:

 kldload dtraceall (and enable it in /boot/loader.conf with
 dtraceall_load=YES, if you want it to hang around)

 Add the following lines to your /etc/make.conf (because detailed
 backtraces are helpful)

 STRIP=
 CFLAGS+=-fno-omit-frame-pointer

 Once that's loaded, change the permissions to 0666 on /dev/dtrace/helper
 (and add that to /etc/devfs.conf with perm /dev/dtrace/helper 0666 if you
 want it to persist)

 Though be aware that there are security implications for that, since other
 users could make a ton of probes and exhaust kernel memory. Alternately, you
 could add pgsql to wheel, since root and wheel are the owner and group for
 that devfs node.

 Also be sure to not use devel/libelf from ports, because that doesn't seem
 to play nice with dtrace, and keeps probes from registering.

 After doing all that, I could get dtrace to build, link, and register
 userland probes with the database/postgresql94-server and
 database/postgresql94-client, and successfully count transaction times in a
 dtrace script.

 One final note, the freebsd-dtr...@freebsd.org list, is very helpful with
 esoteric DTrace issues.

 I hope this all helps. =)

 Regards,

 Lacey



 On Tue, Dec 16, 2014 at 4:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 If you want to push on this I think you'll need to find a BSD dtrace
 expert.  You shouldn't need to show him/her much except the above
 dtrace invocation and the probes.d file.

 I've filed a bug report and I'll report back here if I get any update
 on the matter.
 In the meantime I've tried 9.4 beta and the same issue arises.

 Luca











[GENERAL] 9.3.5 failing to compile with dtrace on FreeBSD 10.1

2014-12-16 Thread Luca Ferrari
Hi all,
I'm trying to compile 9.3.5 from ports on FreeBSD 10.1-release with
dtrace enabled.
It has been a long time since I last looked at dtrace + PostgreSQL +
FreeBSD (see [1], [2]), so I'm not up to date on how far it got.
The problem I have in compilation is as follows:

gmake[2]: Entering directory
'/mnt/ada1a/ports/databases/postgresql93-server/work/postgresql-9.3.5/src/backend'
gmake -C utils probes.h
gmake[3]: Entering directory
'/mnt/ada1a/ports/databases/postgresql93-server/work/postgresql-9.3.5/src/backend/utils'
dtrace -C -h -s probes.d -o probes.h.tmp
dtrace: failed to compile script probes.d: /usr/lib/dtrace/ip.d,
line 2: type redeclared: struct devinfo
Makefile:38: recipe for target 'probes.h' failed
gmake[3]: *** [probes.h] Error 1
gmake[3]: Leaving directory
'/mnt/ada1a/ports/databases/postgresql93-server/work/postgresql-9.3.5/src/backend/utils'
Makefile:157: recipe for target 'utils/probes.h' failed
gmake[2]: *** [utils/probes.h] Error 2
gmake[2]: Leaving directory
'/mnt/ada1a/ports/databases/postgresql93-server/work/postgresql-9.3.5/src/backend'
*** Error code 2


Am I missing something?

Thanks,
Luca

[1] http://www.postgresql.org/message-id/28743.1301771...@sss.pgh.pa.us
[2] 
https://forums.freebsd.org/threads/dtrace-userland-and-postgresql.22914/#post-129378




Re: [GENERAL] 9.3.5 failing to compile with dtrace on FreeBSD 10.1

2014-12-16 Thread Luca Ferrari
On Tue, Dec 16, 2014 at 4:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 If you want to push on this I think you'll need to find a BSD dtrace
 expert.  You shouldn't need to show him/her much except the above
 dtrace invocation and the probes.d file.


I've filed a bug report and I'll report back here if I get any update
on the matter.
In the meantime I've tried 9.4 beta and the same issue arises.

Luca




Re: [GENERAL] another trigger problem

2014-03-08 Thread Luca Ferrari
On Sat, Mar 8, 2014 at 12:36 AM, Susan Cassidy
susan.cass...@decisionsciencescorp.com wrote:
 You would think that postgres could have output a more helpful error
 message, though.


I believe that is the correct message: you were concatenating a NULL
to something, thereby nullifying the whole string you were passing to
EXECUTE. In other words, the error message is not wrong, and it could
hardly have been better, since the instruction before the EXECUTE did
exactly what you asked for. In such cases I believe it is better to
check the query string and EXECUTE it only if it is valid (non-NULL).
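For instance, building the query with format() avoids the silent nullification, since %L renders a NULL value as the literal NULL (table and column names here are hypothetical):

```sql
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
DECLARE
    query text;
BEGIN
    -- %L quotes NULL as the SQL literal NULL instead of
    -- nullifying the whole concatenated string
    query := format( 'INSERT INTO audit_log(val) VALUES (%L)',
                     NEW.description );
    IF query IS NOT NULL THEN
        EXECUTE query;
    END IF;
    RETURN NEW;
END $$ LANGUAGE plpgsql;
```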

Luca




Re: [GENERAL] Will modifications to unlogged tables also be flused to disk?

2014-02-15 Thread Luca Ferrari
On Sat, Feb 15, 2014 at 12:38 PM, Clemens Eisserer linuxhi...@gmail.com wrote:
 Ok thats really bad news :/
 After reading the discussion about calling unlogged tables in memory
 or cached I actually had high hopes pgql would take advantage of the
 fact that data of unlogged tables are not preserved at recovery.

Sorry, I could be misunderstanding here, but if the final aim is to
reduce writes, why not tweak the WAL and checkpoint settings? I mean,
is it possible to find a good compromise that way, or do you really
need a kind of in-memory storage?

Luca




Re: [GENERAL] In a moment of madness I destroyed...

2014-02-03 Thread Luca Ferrari
On Mon, Feb 3, 2014 at 1:33 AM, Jerry Levan jerry.le...@gmail.com wrote:
 Would a  text pg_dumpall from the mac be portable to the 9.3.x version on the 
 PAE kerneled
 fedora box?


Of course.

 I guess I could then possibly use the dblink stuff to suck the tables over 
 one at a time...

Well, if the data is still there, why not install the binaries for
that version (e.g., from source) and configure the daemon to run
against that PGDATA? At least, that would be my first try.

Luca




Re: [GENERAL] manual and autovacuum

2014-02-01 Thread Luca Ferrari
On Sat, Feb 1, 2014 at 9:20 AM, prashant Pandey
prashantpandeyfu...@gmail.com wrote:
 I am looking for actual value. And please if you mention the database
 condition along with the recommendation ,it will be helpfull.

Maybe the question should be reversed: what is the condition you are
aiming to optimize?

Luca




Re: [GENERAL] manual and autovacuum

2014-02-01 Thread Luca Ferrari
On Sat, Feb 1, 2014 at 2:16 PM, prashant Pandey
prashantpandeyfu...@gmail.com wrote:
 Thankyou for your reply. I just want to know that what are the strting
 recommended value for the vacuum parameters.

The default values. Or at least, I would trust them as good
starting values.

 As in what values are
 unacceptable and cause problems to our database and which values will work
 for a normal database

Optimizing vacuum and autovacuum sounds to me like a very complex
job, in particular because there are no normal databases. Each
database/cluster has different needs and behavior, so I guess the only
chance is to focus on the main goals you want to achieve (throughput,
CPU utilization, disk speed, etc.) and adjust the related autovacuum
parameters accordingly (that is, some parameters can work against your
aim). In short: I believe there is no single answer to your question.

Luca




Re: [GENERAL] wal archive peak during pg_dump

2014-01-10 Thread Luca Ferrari
On Thu, Jan 9, 2014 at 11:42 AM, Willy-Bas Loos willy...@gmail.com wrote:

 It doesn't seem logical to me that pg_dump should generate wal, but i
 haven't been able to find a different explanation so far.
 So to make sure, i want to ask you people: can it be that running pg_dump
 creates a lot of wal?

Interesting. pg_dump works within a transaction, but it should be
read only. Is it possible you have dirty pages that are made
persistent because of pg_dump's reads (e.g., hint-bit updates)? Or
could it be a vacuum process, or something related to freezing the
tuples?

Luca




Re: [GENERAL] PGSQL: listing db/role and user/role relationships

2013-11-17 Thread Luca Ferrari
On Sat, Nov 16, 2013 at 1:19 AM, Felipe Gasper fel...@felipegasper.com wrote:
 Hi all,

 How can I retrieve:


 1) each role’s privileges on a given DB


Do you mean pg_database.datacl?
http://www.postgresql.org/docs/current/static/catalog-pg-database.html

 2) which users have access to a given role
 3) which roles a given user can access

I guess playing with pg_roles and pg_auth_members will provide you all
the information.
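For instance, a sketch of (2), listing the users that are members of a given role (the role name is hypothetical):

```sql
SELECT m.rolname AS member, r.rolname AS granted_role
FROM pg_auth_members am
JOIN pg_roles r ON r.oid = am.roleid   -- the role that was granted
JOIN pg_roles m ON m.oid = am.member   -- the member holding it
WHERE r.rolname = 'some_role';
```

Swapping the WHERE clause to filter on m.rolname answers (3), the roles a given user can access.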

Hope this helps.
Luca




Re: [GENERAL] PGSQL: listing db/role and user/role relationships

2013-11-17 Thread Luca Ferrari
On Mon, Nov 18, 2013 at 4:00 AM, Felipe Gasper fel...@felipegasper.com wrote:

 One more question: how “stable” are these interfaces? Are they by chance
 available via information_schema?

Stable enough that they have not changed much since 8.3. But I'm not
used to the information_schema, so I don't know which of them are
exported/available via that schema.

Luca




Re: [GENERAL] pg_dumpall from a script

2013-10-22 Thread Luca Ferrari
On Tue, Oct 22, 2013 at 7:20 AM, James Sewell james.sew...@lisasoft.com wrote:

 I need a way to backup either from SQL in PSQL (possibly \!) or from a 
 PG/PLSQL function to a file with a name set from a :variable.

Could it be something like this:

# \setenv myFile 'filename'
# \! pg_dump > $myFile

Unfortunately there seems to be no unsetenv command to remove the
variable within the script.


Luca




Re: [GENERAL] how to get the connected session pointer ( Archive * AH)

2013-10-22 Thread Luca Ferrari
On Tue, Oct 22, 2013 at 1:32 AM, AI Rumman rumman...@gmail.com wrote:
 Hi,

 Is there a way in Postgresql C function to get the connected session pointer
 ( Archive * AH)
 and use it for further execution?

If I read the archiver code correctly, the AH pointer is used only
during the archiving and is not stored globally anywhere, therefore I
suspect the answer is no (at least, not without modifying the
archiver). What is the aim of having such a pointer?

Luca




Re: [GENERAL] postgreSQL query via JDBC in different OS taking different running time?

2013-10-08 Thread Luca Ferrari
On Tue, Oct 8, 2013 at 3:48 AM, Aftab Ahmed Chandio afta...@siat.ac.cn wrote:
 What do u suggest me, where I need to make performance tuning? w hich
 configuration setting must need to modify in the linux?

Well, others have already pointed out that you should first measure
your query on the server itself. I would add that the JVM could be
different, or differently configured, on the Linux and Windows
machines, and this will lead to different results.
Second, it is not clear to me why you are measuring the same query on
different machines and OSes, or rather, why you assume the resulting
time should be the same.

Luca




Re: [GENERAL] Partitioning V schema

2013-09-21 Thread Luca Ferrari
On Fri, Sep 20, 2013 at 4:38 AM, Julian temp...@internode.on.net wrote:
 However, I tend to go with partitions when required to be generated on
 demand dynamically and automatically (which probably isn't the case
 here). SCHEMAs have other uses, provide a level of security (GRANT) and
 useful in design when partitioning off blocks of related datasets
 completely.

I would use partitioning in this case, for the same reason: schemas
are more of a logical division of data, with different grants, search
paths, users, and so on. Partitioning is more of a data-level split,
so it makes sense when you want the data to stay in a single logical
place but, for performance reasons, split it across multiple tables.
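A sketch of the data-level split, using the inheritance-based partitioning available at the time (table and column names are hypothetical):

```sql
CREATE TABLE measurement ( ts timestamptz NOT NULL, value numeric );

-- One child table per year; the CHECK constraint lets the planner
-- skip irrelevant partitions (constraint exclusion)
CREATE TABLE measurement_2013 (
    CHECK ( ts >= DATE '2013-01-01' AND ts < DATE '2014-01-01' )
) INHERITS ( measurement );
```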

Luca




Re: [GENERAL] How to compare the results of two queries?

2013-09-17 Thread Luca Ferrari
On Tue, Sep 17, 2013 at 4:59 PM, Juan Daniel Santana Rodés
jdsant...@estudiantes.uci.cu wrote:

 For example the execution of the function would be something like ...

 select compare('select * from table1', 'select * from table2');

 For this case the result is false, then the queries are executed on
 different tables.

I suppose you are assuming that both queries reference tables with
the same structure; in that case, why not compute a hash of each row
and check it against the other result set?
About how many rows are we talking? It sounds to me like a good job
for a diff-like external tool: is that a possible solution?
Have you considered that the tables could have a different structure,
or even just a different column order? In that case a SELECT * would
return different results while the data is actually the same.
What is the aim of this?
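A sketch of the row-hash idea: hash each row's text form and take the symmetric difference, so an empty result means the two sets match (this assumes compatible row structures, per the caveat above):

```sql
-- Rows present in one result set but not the other, in either direction
( SELECT md5( t::text ) FROM table1 t
  EXCEPT
  SELECT md5( t::text ) FROM table2 t )
UNION ALL
( SELECT md5( t::text ) FROM table2 t
  EXCEPT
  SELECT md5( t::text ) FROM table1 t );
```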

Luca




Re: [GENERAL] pg_dump question (exclude schemas)

2013-08-29 Thread Luca Ferrari
On Wed, Aug 28, 2013 at 9:30 PM, Jay Vee jvsr...@gmail.com wrote:

 $pg_dump other_options  --exclude-schema='sch_*'

 this does not seem to exclude all schemas with this pattern  ( 'sch_*' ),
 anything wrong here?

The option is fine and works for me on 9.2.4; I suspect one of the
other options clashes with -N (--exclude-schema).

Luca




Re: [GENERAL] Problem creating index

2013-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2013 at 1:08 PM, Torello Querci tque...@gmail.com wrote:
 Again, is very strange that this data is in the database  moreover this
 data came from a import procedure and this data is not present in the source
 import file.
 Really, I think that I get some kind of data corruption


I'm sure you got some kind of data corruption because the date is
invalid and it was in a primary key (if I get it right).

Luca




Re: [GENERAL] What is the relationship between checkpoint and wal

2013-08-26 Thread Luca Ferrari
On Mon, Aug 26, 2013 at 4:57 AM, 高健 luckyjack...@gmail.com wrote:
 But why writes the entire content of each disk page to WAL ?


The documentation states: "The row-level change data normally
stored in WAL will not be enough to completely restore such a page
during post-crash recovery." I guess that a mixed page (i.e., a page
that contains both old and new data) cannot be safely recovered with
deltas, so you need a clean full-page image from which to start the
recovery.

Someone with more internal knowledge can comment on this.

Luca




Re: [GENERAL] Problem creating index

2013-08-26 Thread Luca Ferrari
On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci tque...@gmail.com wrote:
 ERROR:  unexpected end of tape

Really strange; if I get it right, something went wrong while sorting
tuples. Is it possible to test with an increased work_mem value?

Luca




Re: [GENERAL] How to know detailed information about HOT(Heap-Only Tuples)?

2013-08-22 Thread Luca Ferrari
On Thu, Aug 22, 2013 at 4:20 AM, 高健 luckyjack...@gmail.com wrote:

 according to  a table, How many tuples are heap only tuples , and how many
 are not?


I believe there is no such thing as a HOT tuple per se: whether a
tuple is updated using HOT depends on the indexes defined and on the
type of update itself.
Anyway, you can see how many tuples have been updated via HOT in
pg_stat_user_tables.n_tup_hot_upd.
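For example, the per-table ratio of HOT updates to total updates can be read like this:

```sql
SELECT relname, n_tup_upd, n_tup_hot_upd,
       round( 100.0 * n_tup_hot_upd / NULLIF( n_tup_upd, 0 ), 1 ) AS hot_pct
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC;
```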


 And also , Is there any options which can influence HOT occurrance?

The storage parameter fillfactor will affect how often HOT updates
can happen in your tables, but of course this depends on the workload.

Luca




Re: [GENERAL] PostgreSQL 9.2 Logging

2013-08-22 Thread Luca Ferrari
On Wed, Aug 21, 2013 at 9:55 PM, carlosinfl . carlos.menn...@gmail.com wrote:

 #debug_print_parse = off
 #debug_print_rewritten = off
 #debug_print_plan = off
 #debug_pretty_print = on
 #log_checkpoints = off
 #log_connections = off
 #log_disconnections = off


The debug_* settings log, as the name says, debug information (e.g.,
the query tree and the rewritten query).
The log_(dis)connections settings can be useful to see who is using
the system.

I would also evaluate:
#log_min_messages = warning


Also
#log_statement = 'none'

which can be used to log all statements (queries and utility
commands), only DDL (e.g., ALTER TABLE), and so on.

Hope this helps.

Luca




Re: [GENERAL] Denormalized field

2013-08-19 Thread Luca Ferrari
On Sun, Aug 18, 2013 at 5:56 AM, Robert James srobertja...@gmail.com wrote:
 What's the best way to do this automatically? Can this be done with
 triggers? (On UPDATE or INSERT, SET slow_function_f =
 slow_function(new_f) ) How?


Define a before trigger that updates your column. For instance:

CREATE OR REPLACE FUNCTION f_trigger() RETURNS TRIGGER AS $$
BEGIN
    NEW.f_field := f_function( NEW.pk );
    RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER tr_foo BEFORE INSERT OR UPDATE ON foo
FOR EACH ROW EXECUTE PROCEDURE f_trigger();

Of course, adjust the trigger and the trigger function to check
against some conditions (e.g., insert, update, nulls).

 Will creating an index on slow_function(f) do this?


You can create the index on the function result, assuming it is immutable.
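For example, reusing the slow_function and column f from the question (and a hypothetical table name), assuming slow_function is declared IMMUTABLE:

```sql
-- Fails with an error unless slow_function is marked IMMUTABLE
CREATE INDEX idx_slow ON some_table ( slow_function(f) );
```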

Luca




Re: [GENERAL] Recovery.conf and PITR

2013-08-10 Thread Luca Ferrari
On Fri, Aug 9, 2013 at 2:17 PM, ascot.m...@gmail.com
ascot.m...@gmail.com wrote:

 Is there a way to query the master (it is up and running) about the actual 
 commit sequence by transaction IDs?


The information is within the clog, but nothing comes to mind for
inspecting the clog sequence from an admin point of view.
Anyway, I suspect your approach to the recovery is wrong: you are
asking the database to recover at least up to transaction x, so why
worry about other interleaved transactions?

Luca




Re: [GENERAL] Recovery.conf and PITR

2013-08-09 Thread Luca Ferrari
On Fri, Aug 9, 2013 at 10:09 AM, ascot.m...@gmail.com
ascot.m...@gmail.com wrote:
 Hi,

 I am trying PITR in a test machine (same PG version 9.2.4 same O/S Ubuntu 
 12.04 64 bit).  All archived WAL files are shipped and saved in 
 /var/pgsql/data/archive, the latest transaction txid of them is 75666.  I 
 want to recover PG at a point of time that if  XIDs are equal or smaller than 
 '75634' so I  have the following recovery.conf (only two lines):

 restore_command = 'cp /var/pgsql/data/archive/%f %p'
 recovery_target_xid = '75634'


 After the restart of PG, the recovery.conf is processed and it is renamed to 
 recovery.done.  However it restored all (75666) instead of '75634'.


Any chance the 75666 committed before the one you specified as target?
From the docs 
(http://www.postgresql.org/docs/9.1/static/recovery-target-settings.html):

The transactions that will be recovered are those that committed
before (and optionally including) the specified one.

Luca




Re: [GENERAL] Recovery.conf and PITR

2013-08-09 Thread Luca Ferrari
On Fri, Aug 9, 2013 at 12:40 PM, wd w...@wdicc.com wrote:
 Try add these settings,

 pause_at_recovery_target=true

Be warned that this requires a manual completion of the recovery, and
it requires hot_standby, which is not specified in the original post.

 recovery_target_inclusive=false


Uhm... I guess the problem is not about the target xid being included
or not: the recovery target was 75634 and transaction 75666 appeared,
so the problem seems to be an out-of-order commit of the transactions.
In that case, setting recovery_target_inclusive to false will not
prevent 75666 from appearing if it committed before the target xid, or
am I wrong?

Luca




Re: [GENERAL] Staging Database

2013-08-07 Thread Luca Ferrari
On Wed, Aug 7, 2013 at 6:00 AM, BladeOfLight16 bladeofligh...@gmail.com wrote:
 The company I work for has a client who has expressed interest in having a
 staging database for their data. Staging as in they add data, do some QCing,
 then push to a database with an identical schema to make it public.
 Fortunately, we're not doing that right now, but it's something they may
 want later on. So I wanted to at least start thinking about it. What are
 some ways of going about it? Some kind of replication set up comes to mind,
 but it would need to be on demand, not constant. It also could be as
 infrequent as quarterly, if that makes a difference. Preferably, it would
 only require one server with PostgreSQL installed, but I'm not ruling out
 multiple servers.


Not sure how helpful this is, but here are my considerations.
The low frequency and the preference for a single server suggest to me a
dump-and-restore cycle between two databases, assuming the data volume
makes this feasible.
I would also consider some way of data partitioning in order to
isolate the data that has to be pushed from staging into the master
(you say data is only added or queried).
The problem for replication is that both the staging and the master
would be in read-write mode, so sounds to me like a multi-master
setup.

Luca




Re: [GENERAL] incremental dumps

2013-08-05 Thread Luca Ferrari
On Fri, Aug 2, 2013 at 6:55 PM,  haman...@t-online.de wrote:

 thanks for the hint - this is probably one of the things to do.
 I have something else in mind, but at present I just suspect that this might 
 happen:
 when I modify data and select _without an ordering_, I am pretty sure to get 
 the data
 in a different sequence than before. So I wonder whethet forcing the dump to 
 honor
 a particular ordering (if that is at all possible) would also reduce the size 
 of dumps ... or the
 time diff takes to produce them



May I ask what the final purpose is? If it is to take a backup,
I guess this is not the right way to go; if it is to keep (and
rebuild) a history of the data, then a more specific approach
(like logging) could give you fewer headaches.

Luca




Re: [GENERAL] incremental dumps

2013-08-01 Thread Luca Ferrari
On Thu, Aug 1, 2013 at 10:59 AM,  haman...@t-online.de wrote:

 However, the diff files seem to be considerably larger than one would expect.
 One obvious part of the problem is the fact that diff shows old and new text,
 so e.g. changing the amount of stock for a product with a 1kB description
 would generate at least 2kb of text in the diff file.

 What would be a better approach?

I suppose wal archiving or PITR would be better, but assuming you want
text files I guess you need to change your database structure to
either:
1) include a watermark column on the data and dump only the rows changed
since the last dump (done manually);
2) partition your tables and backup specific tables/partitions
depending on the timing.
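
As a sketch of option 1 (the table, column names, and paths here are all
hypothetical, not from the original post), a timestamp watermark plus a
delta export could look like:

```sql
-- Add a watermark column recording when each row was last touched.
-- (Keeping it current on UPDATE would require a BEFORE UPDATE trigger.)
ALTER TABLE orders ADD COLUMN modified_at timestamptz NOT NULL DEFAULT now();

-- At dump time, export only the rows changed since the previous run;
-- the cutoff timestamp has to be stored somewhere between runs.
COPY (SELECT * FROM orders WHERE modified_at > '2013-08-01 00:00:00')
  TO '/tmp/orders_delta.csv' WITH CSV HEADER;
```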

Hope this helps.
Luca




Re: [GENERAL] to know what columns are getting updated

2013-07-30 Thread Luca Ferrari
On Tue, Jul 30, 2013 at 6:05 AM, Sajeev Mayandi
sajeev_maya...@symantec.com wrote:
 Hi,

 I have bunch of rules created for tables to implement upsert functionality.
 My problem is our tables gets updated from multiple places ,  non
 –necessarily with the same no of columns. I want to figure out  columns are
 being asked to be updated

 E.g. The rule for the the table base-table is

 CREATE OR REPLACE RULE base-table-rule  AS
 ON INSERT TO base-table
WHERE (EXISTS ( SELECT 1
FROM base-table
   WHERE bas-table::x1 = new.x1  ))
   DO INSTEAD  UPDATE base-table  SET x1=new.x1,x2=new.x2,x3
 =new.x3,x4=new.x4
   WHERE base-table.x1= new.x1;


I suppose this is for logging purposes, so I would suggest DO ALSO
plus a debugging statement, for instance a log entry in a table
or a RAISE instruction. Could that solve the problem?
Anyway, it seems to me there is a design problem: essentially you are
converting an insert on a duplicate key into an update; would it not be
better to use the right statement for the right purpose?
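
A minimal sketch of the DO ALSO idea; the audit table and its columns are
made up for illustration (the original post only shows the base table):

```sql
-- Audit table receiving one row per rewritten INSERT.
CREATE TABLE base_table_audit (
    x1        text,
    logged_at timestamptz NOT NULL DEFAULT now()
);

-- DO ALSO leaves the existing rule in place and merely adds a log entry,
-- so it becomes possible to see what each caller is sending.
CREATE RULE base_table_log AS
    ON INSERT TO base_table
    DO ALSO INSERT INTO base_table_audit(x1) VALUES (NEW.x1);
```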

Luca




Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-07-25 Thread Luca Ferrari
On Thu, Jul 25, 2013 at 2:57 AM, Some Developer
someukdevelo...@gmail.com wrote:
 The added advantage of removing load from the app servers so they can
 actually deal with serving the app is a bonus.

Uhm... I don't know what application you are developing, but I don't
buy your explanation.
While it is true that you are moving CPU cycles from the application
server to the database server, you will probably end up with the
application server waiting for the database to acknowledge (and
therefore not serving requests), and usually the computation is not
that heavy for an online transaction (if it really is heavy, it would
be better done as a batch). Therefore this is not an advantage for
me.
Again, the only reason to use database facilities (like stored
procedures) is to equip the database so that even a different
application/connection/user will follow as many business
rules as possible.

Moreover, please also note that one reason developers tend to avoid
database facilities is that they are using some kind of
stack/ORM/automagical library that does not allow the use of deep
features, for the sake of portability.




 I'm not planning on creating a complex application in the database in its
 own right, just augmenting what is already available with a few time savers
 and (a couple of) speed optimisations for commonly carried out tasks.


I don't understand the time-saving argument: you have to implement
the logic either in the application or in the database, so let's say the
implementation time is the same. The only advantage of the
database is code reuse. But take into account that there are
drawbacks, like debugging, which is not always so simple.

Luca




Re: [GENERAL] Rule Question

2013-07-25 Thread Luca Ferrari
On Thu, Jul 25, 2013 at 8:44 AM, Andrew Bartley ambart...@gmail.com wrote:

 create rule cats_test as on update to cats do set a = new.b;


I would use a column trigger attached to the 'a' column. Rules are
better for query rewriting rather than from semantic changes.
That's my opinion.

Luca




Re: [GENERAL] Tablespace on Postgrsql

2013-07-25 Thread Luca Ferrari
On Thu, Jul 25, 2013 at 2:53 AM, devonline devonl...@backpost.com wrote:
 Our current database size is

 1 Terabyte


The idea behind tablespaces is to gain I/O bandwidth by scattering data
(and therefore requests) across different devices. Therefore you first
have to identify whether your database can be scattered across different
devices (not different partitions of the same hard disk) and then decide
which parts need more bandwidth and which can get by with less.
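
For instance (the device path and object names below are hypothetical),
moving a hot table onto a dedicated device would look like:

```sql
-- The directory must already exist, be empty, and be owned by postgres.
CREATE TABLESPACE fast_disk LOCATION '/mnt/fast/pg_tblspc';

-- Move a heavily-read table and its index onto the faster device;
-- note these commands rewrite the files and take exclusive locks.
ALTER TABLE big_events      SET TABLESPACE fast_disk;
ALTER INDEX big_events_pkey SET TABLESPACE fast_disk;
```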

Luca




Re: [GENERAL] Rule Question

2013-07-25 Thread Luca Ferrari
The original post was related to the update of b, so I guess it is
better to limit the trigger scope to updates of that column:
CREATE OR REPLACE FUNCTION b_mirror() RETURNS TRIGGER AS
 $mirror$
 BEGIN
 NEW.a = NEW.b;
 RETURN NEW;
 END;
 $mirror$ LANGUAGE plpgsql;

CREATE TRIGGER tr_b_mirror AFTER UPDATE OF b ON cats FOR EACH ROW
EXECUTE PROCEDURE b_mirror();

It is worth noting that the trigger could be an after one, since if I
get it right the tuple has to be always updated, and there is no
condition that prevents that. Moreover, it is possible to check for
null values as in the suggested example below.

Luca


On Thu, Jul 25, 2013 at 1:18 PM, Giuseppe Broccolo
giuseppe.brocc...@2ndquadrant.it wrote:

 I am trying to do something like this

 create table cats (a text,b text);

 create rule cats_test as on update to cats do set a = new.b;

 Can i manipulate column a sort of like this...  or is  there a better
 way.

 I think the easiest way to do this is to use a trigger like this:

 CREATE FUNCTION update_column() RETURNS TRIGGER AS $update_column$
 BEGIN
 IF TG_OP = 'INSERT' OR
 (TG_OP = 'UPDATE' AND
 (NEW.b != OLD.b OR
 (NEW.b IS NULL AND OLD.b IS NOT NULL) OR
 (NEW.b IS NOT NULL AND OLD.b IS NULL)
 )
 ) THEN
 NEW.a = NEW.b;
 END IF;
 RETURN NEW;
 END;
 $update_column$ LANGUAGE plpgsql;

 CREATE TRIGGER update_column BEFORE INSERT OR UPDATE ON cats
 FOR EACH ROW
 EXECUTE PROCEDURE update_column();

 So for instance, if you insert a new column b value

 INSERT INTO cats (b) VALUES ('byebye');

 you'll get a='byebye' and b='byebye', and if you update this value

 UPDATE cats SET b = 'goodbye' WHERE a = 'byebye';

 you'll get a='goodbye' and b='goodbye'. Anyway, this is just an example. I
 suggest that you look at the CREATE TRIGGER page in the documentation

 http://www.postgresql.org/docs/9.2/static/sql-createtrigger.html

 as you can also consider conditional triggers to be executed, for example,
 only when the b column is updated.

 Hope it can help.

 Giuseppe.

 --
 Giuseppe Broccolo - 2ndQuadrant Italy
 PostgreSQL Training, Services and Support
 giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it








Re: [GENERAL] Rule Question

2013-07-25 Thread Luca Ferrari
On Thu, Jul 25, 2013 at 3:02 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Luca Ferrari fluca1...@infinito.it writes:
 The original post was related to the update of b, so I guess it is
 better to limit the trigger scope to update on such column:

 CREATE OR REPLACE FUNCTION b_mirror() RETURNS TRIGGER AS
  $mirror$
  BEGIN
  NEW.a = NEW.b;
  RETURN NEW;
  END;
  $mirror$ LANGUAGE plpgsql;

 CREATE TRIGGER tr_b_mirror AFTER UPDATE OF b ON cats FOR EACH ROW
 EXECUTE PROCEDURE b_mirror();

 It is worth noting that the trigger could be an after one,

 No, it has to be a BEFORE trigger, else it's too late to affect the
 row value that gets stored.

Oops... I wrote that without thinking: of course it has to be a BEFORE one.

Thanks,
Luca




Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-07-24 Thread Luca Ferrari
On Wed, Jul 24, 2013 at 2:29 AM, Some Developer
someukdevelo...@gmail.com wrote:
 I've done quite a bit of reading on stored procedures recently and the
 consensus seems to be that you shouldn't use them unless you really must.

I believe because most developers are not DBAs, and therefore are
scared of something they cannot control.
Placing as much logic as possible in the database is, in my opinion,
good, since it will prevent any accidental (?) connection to the
database from corrupting your data. By accidental connection I mean a
developer/DBA connecting to the database to change some value and
violating some constraint (that resides only in the application), or a
separate application, or a refactoring of the application (e.g., in
order to change the application technology).
Thanks to PostgreSQL's support for many PL languages, you can even
reuse some existing application logic in the database, but that does
not mean it is the smarter choice (performance? OOP vs procedural?).
Of course, since placing business logic in the database makes the
database code more complex, unit testing of the code itself is
required (e.g., pgTAP).
Finally, another reason for being scared of stored procedures is
portability: a lot of frameworks claim to be portable across databases
because they use a minimal survival subset of SQL features that is
supported by almost any decent database. Using stored procedures makes
portability harder, since PL procedures need to be translated from one
database to another.

Luca




Re: [GENERAL] Different transaction log for database/schema

2013-07-23 Thread Luca Ferrari
On Mon, Jul 22, 2013 at 10:09 AM, Ondrej Chaloupka cha...@seznam.cz wrote:
 Hello,

 I would like kindly ask for an advice whether and how the transaction log
 behavior could be configured.

 I would like to have possibility to differentiate transaction logs for two
 databases or schema. I would need such configuration for my testing
 environment where I need to run simultaneously different tests using
 transactions. And I need to separate those tests for being sure that they do
 not interfere with each other. I'm using XA transactions in Java.


It is not clear to me which logs you are referring to: the commit logs
or the WALs? Anyway, transactions are numbered uniquely across the
cluster, and therefore it does not matter on which database you
run a transaction: it will not interfere with other
databases within the same cluster.
I don't see what advantage separate logs for each database/schema
would give you, considering that it is not clear what kind of
maintenance you are going to do on such logs. Can you please explain
your aim and use case in more detail?

Luca




Re: [GENERAL] pgAdmin for ubuntu

2013-07-18 Thread Luca Ferrari
On Wed, Jul 17, 2013 at 4:02 PM, Muhammad Bashir Al-Noimi
mbno...@gmail.com wrote:

 Failed to fetch
 bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_quantal-updates_main_binary-amd64_Packages
  Hash Sum mismatch

I guess this is the problem. It has happened to me sometimes when working
behind a proxy, but I don't have a good workaround. It would be better to
ask someone on an Ubuntu mailing list.

Luca




Re: [GENERAL] Parameter for query

2013-07-17 Thread Luca Ferrari
On Wed, Jul 17, 2013 at 6:25 AM, Robert James srobertja...@gmail.com wrote:
 Is there any way to set a variable or parameter for a query?

 I have a long query where a certain variable needs to be easy to
 change.  I'd like to do something like:

 threshold = 10.3

 SELECT... WHERE x > $threshold... AND y * 1.3 > $threshold...


Using prepared statements? See here:
http://www.postgresql.org/docs/9.2/static/sql-prepare.html
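
A sketch of how that could look for the example above (the table and
column names are made up):

```sql
-- The threshold becomes a real parameter instead of an inlined literal.
PREPARE by_threshold(numeric) AS
    SELECT *
    FROM   some_table
    WHERE  x > $1
    AND    y * 1.3 > $1;

EXECUTE by_threshold(10.3);   -- run with threshold = 10.3
EXECUTE by_threshold(20.0);   -- same statement, different threshold
DEALLOCATE by_threshold;
```

Note that prepared statements live only for the current session.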


 Currently, I need to do this in a scripting language.  Is there a
 straightforward way to do this directly in Postgres?

Well, if you are doing this from a scripting language it should be quite
easy to do string interpolation.

Luca




Re: [GENERAL] Driver Question

2013-07-17 Thread Luca Ferrari
On Tue, Jul 16, 2013 at 5:36 PM, Corbett, James
james.corb...@cra-arc.gc.ca wrote:
 Hello all:

 My first official message so please be gentle with me.

 I’m attempting to make a new JDBC Connection Profile via my Eclipse IDE.
 Apparently it’s looking for the following jar in the driver wizard:

 postgresql-8.1-404.jdbc2.jar.

 However when I burrow down to locate the driver in my Jboss server as
 indicated below I only see the following.

 C:\APPS\lse_jboss-4.2.3.GA-1.1\server\default\lib\postgresql-9.0-801.jdbc4.jar


You have to edit the driver definition, removing the 8.1 jar file from
the JAR List tab and leaving only the recent driver.
I tried it with 9.1-902.jdbc4.jar and Eclipse Juno. I don't know why
Eclipse asks for such an old driver; I suppose something is not up to
date in Eclipse's pre-built configuration.

Luca




Re: [GENERAL] Question re contribs of Julian Assange

2013-07-17 Thread Luca Ferrari
On Wed, Jul 17, 2013 at 10:15 AM, ERR ORR rd0...@gmail.com wrote:
 I noticed on the postgresql website that a certain Julian Assange is
 mentioned among the contributors to Postgresql.

 Out of curiosity, could anybody post in short what exactly he contributed to
 the DB?


http://lmgtfy.com/?q=julian+assange+postgresql




Re: [GENERAL] last_vacuum field is not updating

2013-07-16 Thread Luca Ferrari
On Mon, Jul 15, 2013 at 6:43 PM, Giuseppe Broccolo
giuseppe.brocc...@2ndquadrant.it wrote:

 Are you sure you are the table's owner?

It should not be a permission problem: it works even after a REVOKE
ALL on 9.2.4. Interestingly, the last autovacuum timestamp is also really
old. Have you tried to do a simple VACUUM? From the documentation
(http://www.postgresql.org/docs/current/static/monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE):

Last time at which this table was manually vacuumed (not counting VACUUM FULL)

Are you sure we are looking at the same table?
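
A quick way to test it (the table name is hypothetical):

```sql
VACUUM my_table;   -- plain VACUUM, not VACUUM FULL

-- last_vacuum should now show the current timestamp.
SELECT relname, last_vacuum, last_autovacuum, last_analyze
FROM   pg_stat_user_tables
WHERE  relname = 'my_table';
```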

Luca




Re: [GENERAL] last_vacuum field is not updating

2013-07-16 Thread Luca Ferrari
On Tue, Jul 16, 2013 at 3:22 PM, AI Rumman rumman...@gmail.com wrote:
 Yes, I am sure that I am looking for the same table.


What if you analyze the table? Does the column on the stats get updated?
Have you tested such behavior against another (even dummy) table?

Luca




Re: [GENERAL]

2013-07-15 Thread Luca Ferrari
On Sun, Jul 14, 2013 at 8:36 PM, Vincenzo Romano
vincenzo.rom...@notorand.it wrote:

 I am only concerned about how late is done the binding between a table
 name and the actual OID for other functions, views and triggers.


Well, it should work like this: once the parser decides that a query
looks good, it searches the catalogue to find the names of all the
relations involved. Therefore, the binding you mention should happen as
late as possible. However, I'm not sure whether there is some kind of
caching mechanism that would invalidate that statement. Someone with
more experience can give better details.

Luca




Re: [GENERAL]

2013-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2013 at 8:33 AM, Vincenzo Romano
vincenzo.rom...@notorand.it wrote:

 The alternative is to do things the good ol' way by DELETING+INSERTING
 (http://tapoueh.org/blog/2013/07/05-archiving-data-fast.html)
 Where I'd fear for longer LOCKs.


I don't know if this is an option for your case study, but you could
also exploit schemas to achieve the result: placing the new table into
a new schema and changing the search path (disallowing access to the
old schema). Of course this means you are able to lock out your
clients during the migration or you need to use some rule to redirect
queries.
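
A rough sketch of the schema idea (all names below are hypothetical):

```sql
-- Prepare the replacement table in a staging schema.
CREATE SCHEMA staging;
CREATE SCHEMA old;
CREATE TABLE staging.events (LIKE public.events INCLUDING ALL);

-- Swap: park the full table in "old", promote the empty one.
BEGIN;
ALTER TABLE public.events  SET SCHEMA old;
ALTER TABLE staging.events SET SCHEMA public;
COMMIT;
```

Clients that keep the default search_path keep resolving the unqualified
name to whatever table currently lives in public.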

Luca




Re: [GENERAL]

2013-07-14 Thread Luca Ferrari
On Fri, Jul 12, 2013 at 1:23 PM, Vincenzo Romano
vincenzo.rom...@notorand.it wrote:
 Hi all
 I'm making some experiments with table archiving and I'd like to
 replace a full table F with an empty one E.
 In order to do this I see only one way:

 ALTER TABLE F RENAME TO T;
 ALTER TABLE E RENAME TO F;
 ALTER TABLE T RENAME TO E; -- optional

 This implies there's a moment when the full table doesn't exist.
 Would a transaction enclosure ensure that the table F will be always
 available to all clients?


If I get it right, using transaction boundaries around the DDL will
prevent clients from querying the F table until the transaction ends,
due to the locking of ALTER TABLE. In other words, a query
performed against the F table while the transaction is running will
simply block, without generating any error.
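
For clarity, the swap from the original post wrapped in a transaction (a
sketch; concurrent statements touching F wait on the ACCESS EXCLUSIVE
lock taken by the renames until COMMIT):

```sql
BEGIN;
ALTER TABLE f RENAME TO t;  -- the full table moves aside
ALTER TABLE e RENAME TO f;  -- the empty table takes its place
ALTER TABLE t RENAME TO e;  -- optional
COMMIT;                     -- blocked queries on f resume here
```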

Hope this helps.
Luca




Re: [GENERAL] query on query

2013-07-05 Thread Luca Ferrari
On Fri, Jul 5, 2013 at 5:09 AM, Jayadevan M
jayadevan.maym...@ibsplc.com wrote:


 So each student may get counted many times, someone with 99 will be counted
 10 times. Possible to do this with a fat query? The table will have many
 thousands of records.



Not sure I got the point, but I guess this is a good candidate for a CTE:

WITH RECURSIVE t(n) AS (
    VALUES (10)
  UNION ALL
    SELECT n+10 FROM t WHERE n < 50
)
select count(*), t.n from m, t where mark > t.n group by t.n;

Luca




Re: [GENERAL] Analyzing last run query in psql

2013-07-03 Thread Luca Ferrari
On Wed, Jul 3, 2013 at 7:05 AM, Joe Van Dyk j...@tanga.com wrote:
 I frequently need to analyze the last query in psql:
 select * from table where id = 1;
 explain analyze select * from table where id = 1;

 It would be nice to be able to do this:
 explain analyze $LAST

 (or can I do something like that already?)

 I'm not using psql interactively, I pipe files into psql (if it matters).


I don't know of any feature like that, but maybe you can do the
following in your script/files:

\set lastquery 'your-query-here'

so that you can do something like

explain :lastquery;

But if you are able to manipulate your scripts you can just insert the
EXPLAIN by hand on the queries you are interested in, or use auto_explain
for all the queries.

Luca




Re: [GENERAL] What is the difference between cmin and cmax

2013-07-02 Thread Luca Ferrari
On Tue, Jul 2, 2013 at 5:19 AM, 高健 luckyjack...@gmail.com wrote:
 Hello:
 I looked into the source code, and I think I now understand it:
 cmin and cmax are same! The documentation is too old now.

Yes, you figured it out.
In short: cmin and cmax are overlapping fields and are used within the
same transaction to identify the command that changed a tuple, so that
in-transaction commands can be ordered and, therefore, tuple
visibility can be computed.
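
A small experiment with a throw-away temporary table makes this visible:

```sql
BEGIN;
CREATE TEMP TABLE t (x int);
INSERT INTO t VALUES (1);     -- one command id within this transaction
INSERT INTO t VALUES (2);     -- a later command id
SELECT cmin, cmax, x FROM t;  -- cmin differs between the two rows
COMMIT;
```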

Luca




Re: [GENERAL] databse version

2013-06-11 Thread Luca Ferrari
On Mon, Jun 10, 2013 at 9:52 AM, Philipp Kraus
philipp.kr...@flashpixx.de wrote:
 Hello,

 I'm creating a database and I have got a table with a version field.

Not sure, but if the version field is something like the version row
indicator used by some frameworks (e.g., Hibernate), then you are
going to place the updated version into all your records, and that does
not sound like versioning at all!

Luca




Re: [GENERAL] BLOB updates - database size explodes

2013-05-30 Thread Luca Ferrari
On Thu, May 30, 2013 at 12:49 AM, Stephen Scheck
singularsyn...@gmail.com wrote:

 If this hypothesis is correct, doing a vacuum should free up dead pages and
 your size expectations should be more accurate. And if that's the case
 putting more intelligence into the application could mitigate some of the
 update growth (predicting what page temporally similar updates will go to
 and grouping them into a single transaction, for instance).


This seems correct to me. According to
http://www.postgresql.org/docs/current/static/lo.html
I would give vacuumlo a try to see if the size decreases; in that
case, that is the right explanation.

Luca




[GENERAL] explain doubt

2012-06-25 Thread Luca Ferrari
Hi all,
imagine the following simple situation:

# CREATE TABLE test( pk serial not null, description text, primary key(pk));
# INSERT INTO test(pk) VALUES(generate_series(1,100 ) );
# VACUUM FULL ANALYZE test;
# EXPLAIN SELECT * FROM test WHERE pk = 1 OR pk = 100;
  QUERY PLAN
--
 Bitmap Heap Scan on test  (cost=8.69..16.59 rows=2 width=36)
   Recheck Cond: ((pk = 1) OR (pk = 100))
   ->  BitmapOr  (cost=8.69..8.69 rows=2 width=0)
         ->  Bitmap Index Scan on test_pkey  (cost=0.00..4.35 rows=1 width=0)
               Index Cond: (pk = 1)
         ->  Bitmap Index Scan on test_pkey  (cost=0.00..4.35 rows=1 width=0)
               Index Cond: (pk = 100)

Now, what is the 4.35 in the cost of the bitmap index scan nodes? I
mean, it seems that the system has to walk 23 index tuples on each
index page, but I'm not sure about this; does it mean that this is the
tree height? Since the value is the same for both the first and a
middle key, I suspect it is an average count, but of what?

Thanks



Re: [GENERAL] explain doubt

2012-06-25 Thread Luca Ferrari
On Mon, Jun 25, 2012 at 4:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 The short answer though is that this is probably coming from CPU cost
 components not disk-access components.

Yes of course they are cpu costs, but I'm not able to understand which
ones. Is there a way to make PostgreSQL to log the values of the
single parts of the cost computation (e.g., minIOCost, maxIOCost,
...)?

Luca



[GENERAL] help understanding the bitmap heap scan costs

2012-05-21 Thread Luca Ferrari
Hi all,
I don't fully understand how is the cost of a bitmap heap scan
computed. For instance when the explain output node is similar to the
following:

 Bitmap Heap Scan on test  (cost=17376.49..48595.93 rows=566707 width=6)
   Recheck Cond: ((text1 = 'A'::text) OR (text1 = 'C'::text))
   Filter: (num1  1)

how is the cost of the node (48595.93 - 17376.49) computed? I think it
should be something like:
(reltuples * ( index_filtering_factor_A + index_filtering_factor_B) )
* (cpu_tuple_cost + cpu_operator_cost)
+ (reltuples * ( index_filtering_factor_A + index_filtering_factor_B)
) / tuples_per_pages

but this does not equal the optimizer cost, so I guess I'm doing
something wrong. Suggestions?

Thanks,
Luca



[GENERAL] help understanding explain output

2011-02-15 Thread Luca Ferrari
Hello,
I've got a doubt about partial indexes and the path chosen by the optimizer.
Consider this simple scenario:

CREATE TABLE p( pk serial NOT NULL , val2 text, val1 text, b boolean, PRIMARY 
KEY (pk) );
INSERT INTO p(pk, val1, val2, b) VALUES( generate_series(1,1000000), 'val1b', 
'val2b', true );
INSERT INTO p(pk, val1, val2, b) VALUES( generate_series(1000001,2000000), 
'val1Notb', 'val2Notb', false );
CREATE INDEX i_p_b ON p (b) WHERE b = true;
ANALYZE p;

So I create a table with 2-million rows, the first million with b = true and 
the second one with b = false.
Now doing an explain for a query that selects only on the b attribute I got:

EXPLAIN SELECT * FROM p WHERE b = false;
 QUERY PLAN 

 Seq Scan on p  (cost=0.00..34706.00 rows=1000133 width=28)
   Filter: (NOT b)


So a sequential scan. I know that the optimizer will not consider an index if 
it is not selective enough, but I don't understand exactly why in this case. In 
fact, considering that the above query could skip the first half of the data 
pages (where b = true), and considering that:

SELECT reltype, relname, relpages, reltuples
FROM pg_class WHERE relname IN ('p', 'i_p_b');
 reltype | relname | relpages | reltuples 
---------+---------+----------+-----------
  615079 | p       |    14706 |     2e+06
       0 | i_p_b   |     2745 |    999867

The sequential access requires 14706 pages, while using the index and filtering 
almost half of those, we've got 2745 + 7353 = around 10000 pages.
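
For reference, the 34706.00 estimate in the first plan can be reproduced
from the default cost constants; apparently the bare boolean qual adds no
cpu_operator_cost here (a sketch, assuming default planner settings):

```sql
-- seq-scan cost = relpages * seq_page_cost + reltuples * cpu_tuple_cost
-- (defaults: seq_page_cost = 1.0, cpu_tuple_cost = 0.01)
SELECT 14706 * 1.0        -- sequential page reads
     + 2000000 * 0.01     -- per-tuple CPU cost
     AS estimated_cost;   -- yields 34706, matching the EXPLAIN above
```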
I've tried to change the index type to an hash, but the situation did not 
change. Even with enable_seqscan = off the above query is executed 
sequentially, but with a different initial cost:


EXPLAIN SELECT * FROM p WHERE b = false;
 QUERY PLAN 

 Seq Scan on p  (cost=1000000.00..1034706.00 rows=1000133 width=28)
   Filter: (NOT b)


And here comes the second doubt: since in both cases the planner is doing a 
sequential access, why does the first case have an initial cost of 0 while 
this one has a cost of one million?
I'm getting lost here; I need some hint to understand what is happening.

I'm running
PostgreSQL 9.0.2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20080704 (Red Hat 4.1.2-46), 64-bit


Thanks,
Luca



Re: [GENERAL] TupleDesc and HeapTuple

2010-07-07 Thread Luca Ferrari
On Tuesday, July 06, 2010 09:58:45 pm Alvaro Herrera's cat walking on the 
keyboard wrote: 
 You're supposed to know which relation you got the HeapTuple from, so
 you get the TupleDesc from there.

True, but if I want to pass the HeapTuple around I must also pass its tuple 
descriptor, or the receiver will not know how the tuple is composed. Maybe 
this case never happens/happened.

Luca



[GENERAL] TupleDesc and HeapTuple

2010-07-06 Thread Luca Ferrari
Hi,
I don't see any direct link between the TupleDesc structure and the HeapTuple 
one, and that seems strange to me, since to manipulate a tuple you often need 
the descriptor. What is the trick here?

Thanks,
Luca



[GENERAL] what is the meaning of Datum?

2010-06-23 Thread Luca Ferrari
Hi all,
ok, this is a silly question, but I've got a doubt: what is the exact meaning 
of Datum? Looking at the macros (e.g., PG_RETURN_XXX) I see that a Datum can 
be used as a pointer or as a single value, that is, it can be a reference or 
a value. Is this right? So, for instance, does the fact that each stored 
procedure returns a Datum mean that the semantics of the Datum are interpreted 
depending on how the procedure is defined at the SQL level (i.e., whether it 
returns an integer, a tuple, ...)? Am I right?

Moreover, is there any documentation (aside from the source code) that explains 
and links each internal data structure, like HeapTuple, HeapTupleHeader, and so on?

Thanks,
Luca



[GENERAL] pg_dump cannot connect when executing by a script

2010-05-25 Thread Luca Ferrari
Hi all,
I've found on the net a lot of problems similar to mine, but not the solution 
for my case: when I execute pg_dump against a database from a script (that 
will be executed by cron) I get the following error:

pg_dump: [archiver (db)] connection to database webcalendardb failed: could 
not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket /tmp/.s.PGSQL.5432?


and the line the script is executing is the following:

/usr/local/bin/pg_dump -f 
/backup/sedeldap/postgresql//webcalendardb2010_05_25.sql -h sedeldapa -U 
postgres webcalendardb

and of course, if executed interactively, the above line works. The pg_dump is 
for 8.4.0 (installed from freebsd ports). 
Moreover, if in the script I add the option -F t then the script complains 
that:

pg_dump: too many command-line arguments (first is webcalendardb)

and of course the following:
/usr/local/bin/pg_dump -F t -f 
/backup/sedeldap/postgresql//webcalendardb2010_05_25.sql -h sedeldap -U 
postgres webcalendardb

works.

Any idea? The only thing I suspect is that I change the IFS in the shell 
script, but I also restore it back before trying to pg_dump.

Luca



Re: [GENERAL] pg_dump cannot connect when executing by a script

2010-05-25 Thread Luca Ferrari
On Tuesday 25 May 2010 01:13:40 pm A. Kretschmer's cat walking on the keyboard 
wrote:
 Your unix-socket isn't in /tmp.
 
 Start psql -h localhost and type:
 
 show unix_socket_directory;
 
 This will show you the correct path to the unix-socket. You can use that
 for pg_dump with option -h /path/to/the/socket-dir


Not sure if I get it right: on the machine executing the script postgresql is 
not installed, only the client is.
However, it seems to work when specifying the port in pg_dump: -p 5432. It 
would be interesting to know why the psql command works fine even without such 
a parameter while pg_dump does not.

Luca



[GENERAL] different execution times of the same query

2009-10-20 Thread Luca Ferrari
Hi all,
I'm testing a proprietary driver that connects my old applications to a 
postgresql database. The problem is that I've got very strange issues about 
execution times. For instance, the following query:

cogedb= explain analyze SELECT *  FROM GMMOVART  WHERE DATA >= '01/01/2006' 
AND DATA <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA;
                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Sort  (cost=152440.12..152937.79 rows=199069 width=340) (actual 
time=1734.550..1827.006 rows=214730 loops=1)
   Sort Key: data, contatore, riga
   ->  Bitmap Heap Scan on gmmovart  (cost=6425.18..134919.15 rows=199069 
width=340) (actual time=135.161..721.679 rows=214730 loops=1)
         Recheck Cond: ((data >= '2006-01-01'::date) AND (data <= 
'2006-12-31'::date))
         ->  Bitmap Index Scan on gmmovart_index03  (cost=0.00..6375.42 
rows=199069 width=0) (actual time=128.400..128.400 rows=214730 loops=1)
               Index Cond: ((data >= '2006-01-01'::date) AND (data <= 
'2006-12-31'::date))
 Total runtime: 1893.026 ms
(7 rows)


Executes in 1,8 seconds. Now, the same query launched thru the driver produces 
a log with the following entry:

cogedb LOG:  duration: 5265.103 ms  statement:  SELECT *  FROM GMMOVART  WHERE 
DATA >= '01/01/2006' AND DATA <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA

with a duration of 5,2 seconds, that is 3+ times longer than the query run in 
the psql prompt! Please note that the query is always executed locally.
Now, I don't have access to the driver internals, so I don't know how it works 
or what the time difference could be due to. Is there something I can work on 
in my postgresql server in order to investigate further, or to tune so as to 
shrink execution times?
I suspect that the driver uses a cursor: could a pause between consecutive 
fetches produce such a time difference? Please note that I've tested different 
queries with similar results, even across database restarts (in order to avoid 
result caching).

Any help is appreciated.

cogedb= select * from version();
   version
--
 PostgreSQL 8.2.9 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3 
20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)
(1 row)


Thanks,
Luca



Re: [GENERAL] different execution times of the same query

2009-10-20 Thread Luca Ferrari
On Tuesday 20 October 2009 10:44:13 am Scott Marlowe's cat walking on the 
keyboard wrote:
 Two things.  1: Actually running the query and receiving the results
 isn't the same as just running it and throwing them away (what explain
 analyze does) and 2: The query may be getting cached in psql if you're
 running it more than once, but it may not run often enough on that
 data set to get the same caching each time.
 


You are right, in fact executing:

psql -h localhost -U dataflex cogedb -c "SELECT *  FROM GMMOVART  WHERE DATA >= 
'01/01/2006' AND DATA <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA" -o 
/dev/null

produces a log like the following:

cogedb LOG:  duration: 8841.152 ms  statement: SELECT *  FROM GMMOVART  WHERE 
DATA >= '01/01/2006' AND DATA <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA

so 8,8 seconds against 7 seconds, now it sounds compatible. But I had always 
trusted the time reported by explain analyze, and this makes me doubt it. So 
how should the explain analyze time be interpreted?

Luca



[GENERAL] pg_dump table space

2009-06-01 Thread Luca Ferrari
Hi,
is there a way to dump an entire database which uses a specific tablespace, 
without any reference to the tablespace in the sql file? This can be useful 
when moving the database from one machine to another (that does not have the 
tablespace). Any way to achieve that with pg_dump?

Thanks,
Luca




[GENERAL] drop table but file still exists

2009-05-15 Thread Luca Ferrari
Hi,
I'm just curious to know why, after a drop table, the disk file is emptied but 
still exists. What is the reason the file is not deleted immediately?

Thanks,
Luca




[GENERAL] empty table explain...

2008-10-27 Thread Luca Ferrari
Hi all,
I'm curious to know why, if a table is empty, it seems that an ANALYZE of the 
table does not insert any stats into pg_stats, since these could perhaps be 
useful to plan joins involving this table. Second, if I execute an EXPLAIN 
on an empty table, even after an ANALYZE of the table, I get a plan that 
proposes row counts and widths I cannot understand (since they seem to 
differ depending on the table definition).
As an example:

# create table test(id serial, descrizione character varying(20));

# explain  select * from test;
   QUERY PLAN

 Seq Scan on test  (cost=0.00..18.80 rows=880 width=62)

# analyze verbose test;
INFO:  analyzing "public.test"
INFO:  "test": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 
rows in sample, 0 estimated total rows
ANALYZE

# explain select * from test;
                       QUERY PLAN
---------------------------------------------------------
 Seq Scan on test  (cost=0.00..18.80 rows=880 width=62)
(1 row)

# select count(*) from test;
 count
-------
     0
(1 row)

I know it does not make sense to have an empty table or to worry about it, but 
I'm just curious to know if there's a particular reason for the above 
behaviours.
Thanks,
Luca




Re: [GENERAL] pg_dump is ignoring my pgpass file

2008-10-22 Thread Luca Ferrari
On Tuesday 21 October 2008 Tom Lane's cat, walking on the keyboard, wrote:

 AFAICT the matching of .pgpass entries to a connection attempt is
 strictly textual.  sedeldap != 192.168.1.2 therefore none of
 these entries apply.  

Thanks, I'm able to make the entries work only with the IP address, not with a 
hostname. I guess this is a lookup problem; with IP addresses, however, it 
works.

Luca






[GENERAL] pg_dump is ignoring my pgpass file

2008-10-20 Thread Luca Ferrari
Hi all,
I'm running 8.2.9, and I've got a curious problem on a database of my cluster. 
I've got my pgpass file:

[EMAIL PROTECTED]:~$ cat ~/.pgpass
192.168.1.2:5432:raydb:ray:xxx
192.168.1.2:5432:hrpmdb:hrpm:x
192.168.1.2:5432:vatcontrollerdb:vatcontroller:xx

and if I connect from the command line to any database I'm not prompted for a 
password. But if I try to execute the following:

pg_dump --create --column-inserts -v -f raydb_ott_20_08.sql  -U ray -h 
sedeldap  raydb

I'm prompted for a password immediately. But if I execute the same command with 
another database (and another user) I'm not prompted for a password at all. 
I've checked, and the ray user is also the owner of raydb. Any idea on what 
I'm missing?

Thanks,
Luca





Re: [GENERAL] OR or IN ?

2008-10-17 Thread Luca Ferrari
On Friday 17 October 2008 Scott Ribe's cat, walking on the keyboard, wrote:
 Older versions of PG were inefficient with larger numbers of elements in an
 IN query, and in fact would error out with something about lack of stack
 space if you used very many (a few hundred IIRC).

 8.x something was supposed to have improved that. Using 8.3 recently, after
 an oopsie with some development data, I inadvertently confirmed that it
 works and performance is not too bad with 34,000 items in an IN clause ;-)

Interesting, since my queries sometimes expand to a few thousand ORs (or 
INs); I did in fact have stack problems, and had to raise the stack to around 
20 MB.

Luca



[GENERAL] OR or IN ?

2008-10-14 Thread Luca Ferrari
Hi all,
I've got a query with a long (50) list of ORs, like the following:

SELECT colB, colC FROM table WHERE colA=X OR colA=Y OR colA=Z OR 

Is there any difference in how postgresql manages the above query and the 
following one?

SELECT colB, colC FROM table WHERE colA IN (X,Y,Z,...)

Which index on colA is suggested to get better performance?
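(For what it's worth, the IN form can equivalently be written with an array comparison — which, if I read the planner output correctly, is what an IN list is rewritten to internally. Writing mytable for the table name here, since table itself is a reserved word:)

```sql
SELECT colB, colC
FROM mytable
WHERE colA = ANY (ARRAY['X', 'Y', 'Z']);  -- same as: colA IN ('X', 'Y', 'Z')
```

A plain btree index on colA should serve either spelling.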

Thanks,
Luca



Re: [GENERAL] regexp_replace in two times?

2008-05-09 Thread Luca Ferrari
On Thursday 8 May 2008 Tom Lane's cat, walking on the keyboard, wrote:
 Maybe the original strings had more than one instance of 'TIF'?

Ops... you're right, I've checked with a backup copy and I found four records 
with the double tif pattern.
I should have caught it before... sorry!
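Indeed, since the pattern is anchored at the end of the string, a single call strips only one trailing occurrence:

```sql
SELECT regexp_replace(upper('Y07236TIFTIF'), '(IST)*TIF$', '');
-- → Y07236TIF   (a second call removes the remaining TIF)
```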

Luca



[GENERAL] regexp_replace in two times?

2008-05-08 Thread Luca Ferrari
Hi all,
I used the regexp_replace function to make a substitution over a table, but I 
got a strange behaviour (please consider I'm not an expert of regex). The 
idea is to remove the final part of a code, that could be TIF, ISTTIF, tif, 
isttif, and at the same time consider only the records depending on the join 
with another table. Now, the strange thing is that the first query updated 
most of the records, but then 4 records were still there, and in fact 
executing the same update again performed another substitution. What could 
be the reason?

db= begin;
BEGIN
raydb= update elementi_dettagli set codice = regexp_replace( 
upper(codice), '(IST)*TIF$', '')
where id_elemento in (
select ed.id_elemento
from elementi_dettagli ed, elementi e
where ed.id_elemento = e.id_elemento
and e.categoria = 'bozzetti'
and ed.codice ~* '(IST)*TIF$'
);
UPDATE 4679

db= select ed.id_elemento,ed.codice from elementi_dettagli ed, elementi e 
where ed.codice like '%TIF' and ed.id_elemento = e.id_elemento and 
e.categoria='bozzetti';
 id_elemento |codice
-+--
   68904 | 0M0809532TIF
   67732 | Y07236TIF
   67608 | 0D0731744TIF
   65687 | 0M0708711TIF
(4 rows)

db= update elementi_dettagli set codice = regexp_replace( 
upper(codice), '(IST)*TIF$', '')
where id_elemento in (
select ed.id_elemento
from elementi_dettagli ed, elementi e
where ed.id_elemento = e.id_elemento
and e.categoria = 'bozzetti'
and ed.codice ~* '(IST)*TIF$'
);
UPDATE 4





db= select version();
   version
--
 PostgreSQL 8.2.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3 
20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)
(1 row)


Thanks,
Luca



[GENERAL] split pg_dumpall backups per database

2008-03-11 Thread Luca Ferrari
Hi all,
is it possible to instrument pg_dumpall to produce separate sql files for each 
database it is going to backup? I'd like to keep separate backups of my 
databases, but using pg_dump can lead to forgetting a database.
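One possible building block (just a sketch): pull the list of databases from the catalog, and run a separate pg_dump for every name it returns, so that no database can be forgotten:

```sql
-- list every connectable, non-template database in the cluster;
-- each datname can then be fed to its own pg_dump invocation
SELECT datname
FROM pg_database
WHERE NOT datistemplate
  AND datallowconn;
```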

Thanks,
Luca



[GENERAL] process pool

2008-02-25 Thread Luca Ferrari
Hi,
sorry for this question, but as far as I know postgresql does not use a 
process pool; rather, a new process is created for each connection on demand. 
If this is true, what is the reason for this? 

Thanks,
Luca

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] hibernate + postgresql ?

2007-12-05 Thread Luca Ferrari
On Saturday 1 December 2007 David Fetter's cat, walking on the keyboard, 
wrote:
 You'd only think so if you hadn't actually seen these things in
 action.  They save no time because of the silly, unreasonable
 assumptions underlying them, which in turn cause people to do silly,
 unreasonable things in order to make them work.


I guess this is a problem you can have with all middleware, since it can 
improve things by adding abstractions, but when it starts doing things in a 
way that is not tied to the lower level it must stay general, and starts 
imposing a methodology rather than a technology. By the way, is there 
something in particular you are talking about?


 You'll wind up writing each SQL statement anyway, so just start out
 with that rather than imagining that a piece of software can pick the
 appropriate level of abstraction and then finding out that it can't. :)

Uhm... even if you write the SQL statements by hand, you will (probably) end up 
writing your own piece of software that gives you some kind of abstraction, so 
there's a risk you can find that inadequate too, later in the development 
process. By the way, I still don't understand whether you find them inadequate 
because you'd write the SQL statements to keep performance, data integrity, or 
both.


Luca



Re: [GENERAL] hibernate + postgresql ?

2007-11-29 Thread Luca Ferrari
On Thursday 29 November 2007 Joshua D. Drake's cat, walking on the keyboard, 
wrote:
 If you are not lazy you can push outside the standard hibernate methods
 and produce very usable code but then you have to wonder why you have
 hibernate there at all.

What do you mean by this? I think an ORM could save developers' time, especially 
when there are a lot of relationships among objects that would otherwise have 
to be mapped manually through SQL statements. Now, what do you mean by push 
outside the standard hibernate methods? Do you write each SQL statement 
yourself, or simply skip some Hibernate facilities?

Thanks,
Luca



Re: [GENERAL] Join between tables of two or more databases

2007-10-31 Thread Luca Ferrari
On Wednesday 31 October 2007 T.J. Adami's cat, walking on the keyboard, wrote:
 The question is: can I do this using remote database servers
 (different hosts)? If does not, can I do it at least on local
 databases on the same server?

I guess the dblink module could help you.

Luca



[GENERAL] pg_class.relfilenode for large tables

2007-08-20 Thread Luca Ferrari
Hi all,
when a table becomes large a new file on disk is created. Such a file has a 
name composed of the pg_class.relfilenode attribute and an incremental index. 
However, it seems to me this does not appear in the pg_class table. Is there 
any place where this extra file appears?

As an example:

[EMAIL PROTECTED]:~$ ls -l /opt/database/24601/41098* -h
-rw--- 1 postgres postgres 1,0G 2007-08-20 08:48 /opt/database/24601/41098
-rw--- 1 postgres postgres 202M 2007-08-20 
08:49 /opt/database/24601/41098.1


select oid, * from pg_class where relname='large';

  oid  | relname | relnamespace | reltype | relowner | relam | relfilenode | 
reltablespace | relpages |  reltuples  | reltoastrelid | reltoastidxid | 
relhasindex | relisshared | relkind | relnatts | relchecks | reltriggers | 
relukeys | relfkeys | relrefs | relhasoids | relhaspkey | relhasrules | 
relhassubclass | relfrozenxid | relacl | reloptions
---+-+--+-+--+---+-+---+--+-+---+---+-+-+-+--+---+-+--+--+-+++-++--++
 41098 | large  | 2200 |   41099 |16386 | 0 |   41098 | 

0 |   156813 | 1.12921e+07 | 0 | 0 | f   | 
f   | r   |9 | 0 |   0 |0 |
0 |   0 | f  | f  | f   | f  | 
11412913 ||
(1 riga)


Thanks,
Luca



Re: [GENERAL] a few questions (and doubts) about xid

2007-08-01 Thread Luca Ferrari
On Friday 27 July 2007 Alvaro Herrera's cat, walking on the keyboard, wrote:
 Consider an open cursor; you open it and leave it there.  Then you
 delete something from the table.  Then you read from the cursor.  The
 deleted row must be in the cursor.

Thanks fot these details. Now a few other questions come into my mind (I hope 
not to bother you guys!).

In chapter 49 of the documentation (index access) I read that an index stores 
pointers to every version of a tuple that is present in the database. Now, I 
remember that the t_ctid field of HeapTupleHeaderData points to the newer 
version of a tuple (if one exists), and that it is possible to follow t_ctid 
to get the newer tuple version. Now, since a new tuple version is stored at 
the end of a table, chances are that the tuple is stored in another page 
than the older one. If this is right, the index is required to know exactly 
in which page each tuple version is, rather than following the t_ctid link, so 
what is the purpose of such a chain?
The second question is: why does the index return all the tuple versions 
without considering visibility (I guess MVCC) constraints? What are the 
problems of evaluating the xmin,xmax stuff within the index amget methods? 
Maybe this is not done due to concurrency issues?
Third, I read about non-MVCC snapshots (e.g., SnapshotNow). What is the 
meaning of such strange snapshots? Because postgresql should always 
guarantee at least read committed isolation, and thus this should be done 
thru MVCC.

Thanks,
Luca



Re: [GENERAL] a few questions (and doubts) about xid

2007-08-01 Thread Luca Ferrari
On Wednesday 1 August 2007 Gregory Stark's cat, walking on the keyboard, 
wrote:

 You're right, the index contains pointers to *every* version of the tuple.
 So in a regular SELECT statement you don't need to look at the update chain
 at all.

 The main use of the update chain is when you want to perform an UPDATE or
 DELETE. In that case when you come across a record which is being updated
 by another transaction you must wait until that other transaction finishes
 and then update the resulting record (if you're in read-committed mode).

So, just to see if I got this: if HeapTupleSatisfiesUpdate returns 
HeapTupleUpdated, the update chain must be walked, right?


  Third, I read about not valid MVCC snapshots (e.g., SnapshotNow). What is
  the meaning of such strange snapshots? Because postgresql should always
  guarantee at least read committed isolation, and thus this should be done
  thru MVCC.

 They're needed for things like the above UPDATE chain following

uhm... so first I get a tuple version using HeapTupleSatisfiesUpdate, then if 
the tuple is HeapTupleUpdated I follow the update chain, then (maybe) recheck 
with HeapTupleSatisfiesSnapshot with a SnapshotNow? Something like this?

Thanks,
Luca






Re: [GENERAL] PG Admin

2007-07-31 Thread Luca Ferrari
On Tuesday 31 July 2007 Bob Pawley's cat, walking on the keyboard, wrote:
 Can anyone tell me why a table developed through the PG Admin interface
 isn't found by SQL when accessing it through the SQL interface??

Maybe it is a problem of case-sensitive names? Check in the table definition 
in pgAdmin whether the table name is quoted (like in "myTable"). Nothing else 
comes to my mind now.

Luca



Re: [GENERAL] [pgsql-advocacy] European users mailing list

2007-07-30 Thread Luca Ferrari
On Monday 30 July 2007 Robert Treat's cat, walking on the keyboard, wrote:
 *shrug*  I wasn't there either, but seems all the emails I have seen
 reffered to it as the European PostgreSQL Users Group, so I expected it
 to look more like other users groups, ie. [EMAIL PROTECTED]

This mailing list is supposed to be a coordination infrastructure for 
discussing whether the eupug is to be or not to be (of course I hope the 
former). Moreover, I don't see any difference from the mailing list template 
related to the french, german, ... mailing lists.


  But for now, I was asked to arrange a general mailing list, which I have
  done.

 Except we already had a general european mailing list, so I'm really not
 clear on what the above is meant for accomplishing.

Where is this mailing list? Maybe I'm becoming blind but I cannot see it on 
the mailing list page... and what is the main purpose of the list you are 
talking about?

Luca



Re: [GENERAL] a few questions (and doubts) about xid

2007-07-27 Thread Luca Ferrari
Thanks all for your comments. Just another little hint here (sorry for trivial 
questions):

if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmax(tuple)))
{
    if (tuple->t_infomask & HEAP_IS_LOCKED)
        return true;
    if (HeapTupleHeaderGetCmax(tuple) >= GetCurrentCommandId())
        return true;    /* deleted after scan started */
    else
        return false;   /* deleted before scan started */
}

what does that deleted after scan started mean? How is it possible that the 
current transaction has deleted the tuple with a command id higher than the 
one that is still executing? An example could clarify...

Thanks,
Luca



Re: [GENERAL] granting acces to an external client

2007-07-26 Thread Luca Ferrari
On Wednesday 25 July 2007 Sorin N. Ciolofan's cat, walking on the keyboard, 
wrote:
 Hello!

 I'd like to ask you what line should be added in pg_hba.conf file in order
 to grant access to a user with ip
 139.100.99.98 to a db named myDB with user scott with password
 mikepwd?


This should work:
host    myDB    scott    139.100.99.98/32    md5

For the password you must use the $HOME/.pgpass file, storing a line like the 
following:
dbHost:5432:myDB:scott:mikepwd

where dbHost is the host that is running postgres. After that, issue a reload.

Hope this helps.

Luca



Re: [GENERAL] a few questions (and doubts) about xid

2007-07-26 Thread Luca Ferrari
On Wednesday 25 July 2007 Gregory Stark's cat, walking on the keyboard, wrote:
 If you really want to understand how snapshots work at this level you could
 read (slowly -- it's pretty dense stuff) through
 src/backend/utils/time/tqual.c:HeapTupleSatisfiesMVCC()

Ok, I need a little hint here:

if (tuple->t_infomask & HEAP_XMAX_COMMITTED)
{
    if (tuple->t_infomask & HEAP_IS_LOCKED)
        return true;
    return false;
}

if the tuple xmin has committed and the xmax has also committed, the tuple is 
not visible (return false). But if it is locked then the tuple is visible. 
Now, htup.h says that if the tuple is locked it has not really been deleted by 
xmax, but only locked. Does this mean that xmax is going to release locks? 
In other words, will the tuple not be visible while it is locked, even if the 
xmax has committed but still not released the locks? Is this a situation due 
to a delay between the commit and the lock release?

Thanks,
Luca



[GENERAL] a few questions (and doubts) about xid

2007-07-25 Thread Luca Ferrari
HI all,
apologies for my stupid questions, but I'd like to better understand how 
mvcc works. Now, each tuple has xmin (insert xid) and xmax (delete/update 
xid). In short, each transaction with xmin <= xid <= xmax can see such a 
tuple, otherwise it cannot (of course with xmin and xmax being transactions 
different from xid, and committed), isn't it?
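(These hidden system columns can be inspected directly; the table here is just a throwaway example:)

```sql
-- xmin is the inserting transaction's xid; xmax stays 0 until some
-- transaction deletes/updates (or locks) the row
CREATE TEMP TABLE mvcc_demo (v int);
INSERT INTO mvcc_demo VALUES (1);
SELECT xmin, xmax, v FROM mvcc_demo;
```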

Now, for subtransactions the xid is lazily obtained, for efficiency purposes. 
But in that case the subtrans xid will be greater than every other xid of 
concurrent (main) transactions. If the subtrans inserts a record, is the 
subtrans xid placed in xmin? Because in this case the xmin value makes the 
tuple invisible to every other concurrent transaction started with the parent 
one. Is this true, or does the subtrans commit with the parent xid (and in 
that case, why should the subtrans have an xid at all)? Ok, I'm a bit confused 
here...

Finally, shouldn't pg_class.relfrozenxid be set to the frozen value 
specified in transam.h when vacuum is executed? And what is its meaning for a 
newly created table (I see a value that I cannot understand)?

Thanks,
Luca



Re: [GENERAL] a few questions (and doubts) about xid

2007-07-25 Thread Luca Ferrari
Thanks Gregory, thanks Simon.
I'm trying to read the tqual.c source file to better understand.

On Wednesday 25 July 2007 Gregory Stark's cat, walking on the keyboard, wrote:
 If you really want to understand how snapshots work at this level you could
 read (slowly -- it's pretty dense stuff) through
 src/backend/utils/time/tqual.c:HeapTupleSatisfiesMVCC()

I don't find it, I think you mean HeapTupleSatisfiesNow and 
HeapTupleSatisfiesSnapshot.
Just for confirmation: the relfrozenxid of a fresh table is the xid of the 
transaction that created it, isn't it?

Luca



Re: [GENERAL] query optimizer

2007-07-24 Thread Luca Ferrari
On Saturday 21 July 2007 Tom Lane's cat, walking on the keyboard, wrote:
 Beyond that, the GEQO chapter provides several references, and IMHO
 you should not be all that resistant to looking into the source code.
 Even if you don't read C well, many of the files provide a wealth of
 info in the comments.

Thanks for the additional pointers and, even if I'm not a C expert, I'd like 
to read the source code to better understand how postgres works. Nevertheless, 
since the source code can be very long, it would be better to have a kind of 
uml diagram or something similar to understand which point of the code to 
focus on. That's what I was looking for.

Luca



Re: [GENERAL] query optimizer

2007-07-20 Thread Luca Ferrari
On Thursday 19 July 2007 Tom Lane's cat, walking on the keyboard, wrote:
 http://developer.postgresql.org/pgdocs/postgres/overview.html
 (particularly 42.5)

I have already read this, thanks.


 src/backend/optimizer/README

I've read this yesterday, very interesting, but I'm looking for something 
similar related to geqo. I mean, is there any presentation/demo that explains 
step-by-step how geqo makes its decisions?

Thanks,
Luca



Re: [GENERAL] query optimizer

2007-07-19 Thread Luca Ferrari
On Tuesday 17 April 2007 Tom Lane's cat, walking on the keyboard, wrote:
 jungmin shin [EMAIL PROTECTED] writes:
  As I see the documentation of postgres, postgres use genetic algorithm
  for query optimization rather than system R optimizer. right?

 Only for queries with more than geqo_threshold relations.  The join
 search algorithm for smaller queries is a System-R-like dynamic
 programming method.



Hi, 
I'd like to better understand how the optimizer works and is implemented. Is 
there any available documentation (before starting to read the source!) to 
understand the concepts behind geqo and system r? Any chance of a demo or 
presentation with detailed examples (about how the optimizer makes and 
discards choices, not about how to read the planner output)?

Thanks,
Luca



Re: [GENERAL] Acces via applets

2007-04-11 Thread Luca Ferrari
On Wednesday 11 April 2007 Marc's cat, walking on the keyboard, wrote:
 The bottom line question is can an applet served to a client machine other
 than the one the postrgres db resides on read that db?

 An applet I've written and tested on the same box as my database runs fine.

Marc, due to security restrictions an applet cannot connect to a server 
different from the one it has been downloaded from. Exceptions are signed 
applets. You can provide more functionality with an n-tier setup, for example 
a servlet running on your web machine that connects to the database server 
(another machine) and provides data to the applet (which can connect only to 
the web server). I read below that you have signed the applet and that you've 
done the same thing with sql server... have you tried such an applet on your 
sql server configuration (if possible) to ensure that it works, so you know 
this is a postgresql-only problem and not a java one? Could you be more 
specific about the problem you have?



[GENERAL] database design and refactoring

2007-01-03 Thread Luca Ferrari
Hi all,
in my database I've got a table with a key that is a char string, since it was 
supposed to hold string values. Once the database was running, the users 
decided to place numbers in the strings, so values like 00110002 and so on.
Now I was wondering whether to refactor my database and change the char field 
into a numeric one, but here come the problems: the field is referenced by 
other tables and views.
I guess there's not, but you'll never know until you try: is there a tool or a 
way to easily do such refactoring, or should I write a program of my own to do 
this?

And moreover a database design question: is it a better idea to always choose 
(when possible) numeric keys? I'm thinking about database portability, 
since not all databases support sequences (which can be used with a concat to 
provide an automatic string key).

Thanks,
Luca



