The cookbook is indeed helpful, but I doubt it was ever properly reviewed:
many typos, many apparent errors,
code for the cache utility pg_cacheutils that is mentioned as existing
somewhere but is not there, and more.
On Thursday 29 September 2011 08:07:34, Venkat Balaji wrote:
Thanks
On 29/09/2011 02:34, J.V. wrote:
Is it possible to group a bunch of methods and functions into a single file
(stored procedures/functions) and have a main method that can be called
to orchestrate the methods, pass in params, get back results, and log to a file?
I know this can be done with Oracle
On 29/09/11 02:33, J.V. wrote:
Is it possible within a stored procedure to read all the tables in a
schema into a list?
[snip]
I need to extract this meta-data for a project.
Apart from information_schema mentioned elsewhere, start psql with -E
and then try \dt and similar - it will show
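As a minimal sketch of the stored-procedure approach asked about above
(the function and schema names are illustrative, not from the thread),
information_schema can be aggregated into an array inside PL/pgSQL:

```sql
-- Hypothetical sketch: collect all base-table names in a schema into an array.
CREATE OR REPLACE FUNCTION list_tables(p_schema text) RETURNS text[] AS $$
DECLARE
    tables text[];
BEGIN
    SELECT array_agg(table_name::text)
      INTO tables
      FROM information_schema.tables
     WHERE table_schema = p_schema
       AND table_type = 'BASE TABLE';
    RETURN tables;
END;
$$ LANGUAGE plpgsql;

-- Usage:
-- SELECT list_tables('public');
```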
On 28/09/11 18:51, Roger Niederland wrote:
To add the extension required me to change the search_path.
Is it required that any schema added to the search_path exist in all
databases?
If set in the configuration file, yes (unless you want errors).
You can set it per-database or per-user
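A sketch of the per-database and per-user settings mentioned above (the
database, role, and schema names are illustrative):

```sql
-- Set search_path for one database or one role instead of globally
-- in postgresql.conf, so other databases need not have the schema.
ALTER DATABASE mydb SET search_path = myschema, public;
ALTER ROLE myuser SET search_path = myschema, public;
```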
Until v. 9.1 enum modification was not officially supported, although hacking
pg_enum made it somewhat workable. Since v. 9.1 there is a way to add values to
enum types. However,
- alter type ... add value doesn't work in transaction blocks, which severely
limits its usefulness
- there is
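A minimal sketch of the 9.1 syntax described above (the type and label
names are illustrative):

```sql
-- Since 9.1: add a value to an existing enum type.
CREATE TYPE mood AS ENUM ('sad', 'happy');
ALTER TYPE mood ADD VALUE 'ok' BEFORE 'happy';
-- Note: ALTER TYPE ... ADD VALUE cannot run inside a transaction block,
-- which is the limitation pointed out above.
```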
On Tue, Sep 27, 2011 at 20:01, David North d...@corefiling.co.uk wrote:
testdb=# select * from problem_table;
ERROR: invalid memory alloc request size 2003676411
Is there some reason why my data can be stored in 1GB but triggers the
allocation of 2GB of memory when I try to read it back? Is
On Wed, 28 Sep 2011 10:18:27 -0500, Merlin Moncure wrote:
On Wed, Sep 28, 2011 at 3:28 AM, David North d...@corefiling.co.uk
wrote:
On 28/09/11 01:50, Craig Ringer wrote:
On 09/28/2011 01:01 AM, David North wrote:
testdb=# select * from problem_table;
ERROR: invalid memory alloc request
On 29 September 2011 13:12, Radosław Smogura rsmog...@softperience.eu wrote:
sending ~1GB bytea values is borderline crazy, and is completely crazy
if you are not absolutely sure the transmission is 100% binary. I
don't know if the JDBC driver sends/receives bytea as binary, but it may not.
If
Dear List,
we are currently updating our application to use multiple database
backends, the main backend on our site will be Postgres, though.
Some of our columns exceed the limit of the 'integer' type; now we are
discussing the alternatives. Personally, I would use bigint since it
should
On Thu, Sep 29, 2011 at 7:30 AM, Alban Hertroys haram...@gmail.com wrote:
On 29 September 2011 13:12, Radosław Smogura rsmog...@softperience.eu wrote:
sending ~1GB bytea values is borderline crazy, and is completely crazy
if you are not absolutely sure the transmission is 100% binary. I
On Thu, Sep 29, 2011 at 15:15, Gregor Vollmer
voll...@ekp.uni-karlsruhe.de wrote:
We do not do any arithmetic on the columns, only saving and retrieval, is
decimal as fast as bigint in that case?
It's slightly slower because numeric is passed around by pointer, not
by value. Bigint is by-value
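A sketch of the choice discussed above, for columns that only store and
retrieve values wider than integer (the table and column names are
illustrative):

```sql
-- bigint is a fixed 8 bytes and passed by value; numeric is
-- variable-length and passed by reference, hence slightly slower.
CREATE TABLE measurements (
    id     bigserial PRIMARY KEY,
    raw_id bigint NOT NULL   -- holds values up to 2^63 - 1
);
```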
On Thu, Sep 29, 2011 at 6:12 AM, Radosław Smogura
rsmog...@softperience.eu wrote:
On Wed, 28 Sep 2011 10:18:27 -0500, Merlin Moncure wrote:
sending ~1GB bytea values is borderline crazy, and is completely crazy
if you are not absolutely sure the transmission is 100% binary. I
don't know
Gregor
On Thu, 2011-09-29 at 14:15 +0200, Gregor Vollmer wrote:
Dear List,
we are currently updating our application to use multiple database
backends, the main backend on our site will be Postgres, though.
Some of our columns exceed the limit of the 'integer' type, now we are
Using 9.1 SR, the pg_stat_replication data is available on the primary server.
The view is not replicated to the standby server. Is this intentional, and if
so, why?
Senthil
On Thu, Sep 29, 2011 at 15:48, Jon Nelson jnelson+pg...@jamponi.net wrote:
especially if my attempt
to do so kills the backend I am using (which triggers a shutdown of
all other backends, no?).
No, this is just an ereport(ERROR) that's handled gracefully by
rolling back the transaction.
On 29/09/11 14:55, Merlin Moncure wrote:
On Thu, Sep 29, 2011 at 6:12 AM, Radosław Smogura
rsmog...@softperience.eu wrote:
On Wed, 28 Sep 2011 10:18:27 -0500, Merlin Moncure wrote:
sending ~1GB bytea values is borderline crazy, and is completely crazy
if you are not absolutely sure the
On Thu, Sep 29, 2011 at 10:51 AM, David North d...@corefiling.co.uk wrote:
I'll also apply GZip to keep the amount of data in any given bytea/lob as
small as possible.
Aren't bytea fields compressed by PostgreSQL anyway (when the EXTENDED or
MAIN storage strategy is used, which is the default)?
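A sketch of how to inspect and change the storage strategy in question
(the table and column names are illustrative, borrowed from the thread's
problem_table):

```sql
-- 'x' in attstorage means EXTENDED: TOASTable and compressible (the
-- default for bytea).
SELECT attname, attstorage
  FROM pg_attribute
 WHERE attrelid = 'problem_table'::regclass AND attnum > 0;

-- If the application already gzips the data, double compression wastes
-- CPU; EXTERNAL keeps out-of-line storage but disables compression.
ALTER TABLE problem_table ALTER COLUMN data SET STORAGE EXTERNAL;
```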
Hello Guys,
I have the following problem and I solved it in the following way; I need to
see if there is a better approach. Any suggestion is welcome.
The problem
I have a function with this logic:
FOR record1 IN SELECT ... LOOP
FOR record2 IN SELECT ... LOOP
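A hypothetical completion of the nested-loop shape sketched above (the
function, table, and column names are illustrative):

```sql
CREATE OR REPLACE FUNCTION process_all() RETURNS void AS $$
DECLARE
    record1 record;
    record2 record;
BEGIN
    FOR record1 IN SELECT id FROM parent_table LOOP
        FOR record2 IN SELECT *
                         FROM child_table
                        WHERE parent_id = record1.id LOOP
            -- per-row work goes here
            NULL;
        END LOOP;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Often such nested loops can be replaced by a single set-based query with a
JOIN, which is usually the better approach the poster is asking about.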
On Thu, Sep 29, 2011 at 10:54 AM, Jon Nelson jnelson+pg...@jamponi.net wrote:
On Thu, Sep 29, 2011 at 10:51 AM, David North d...@corefiling.co.uk wrote:
I'll also apply GZip to keep the amount of data in any given bytea/lob as
small as possible.
Aren't bytea fields compressed by PostgreSQL
I updated the firewall rules on a streaming replication standby server
and thought nothing of it. I later happened to notice on the primary
that ps aux | grep stream didn't show streaming to that server
anymore. On the standby that command still showed the wal receiver
patiently waiting for new
On Thu, Sep 29, 2011 at 1:26 PM, senthilnathan
senthilnatha...@gmail.com wrote:
Using 9.1 SR, the pg_stat_replication data is available on the primary server.
The view is not replicated to the standby server. Is this intentional, and if
so, why?
pg_stat_replication is a dynamic view of a
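A sketch of using the view on the primary (column names are the 9.1-era
ones):

```sql
-- Run on the primary: one row per connected WAL sender, so a standby
-- that is not streaming simply has no row here.
SELECT client_addr, state, sent_location, replay_location
  FROM pg_stat_replication;
```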
Hello Postgresql Community Members,
I am stumped trying to install a few 'c' language functions
on a particular Solaris server (64-bit, amd cpu arch (not sparc)). I
actually
have 5 Postgresql servers, and the .so loads fine into 4 of them, but
refuses to load into the 5th. I've quintuple
I am baffled. Both PDT and WEST appear as valid timezone abbreviations, and
each have unique values, but:
test=# select timestamp with time zone '2011-09-29 18:00 PDT';
timestamptz
2011-09-29 18:00:00-07
(1 row)
test=# select timestamp with time zone
On 09/29/11 10:17 AM, Christophe Pettus wrote:
Both PDT and WEST appear as valid timezone abbreviations...
WEST? Really? where does this appear, I've never seen that.
--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
Christophe Pettus x...@thebuild.com writes:
I am baffled. Both PDT and WEST appear as valid timezone abbreviations, and
each have unique values, but:
Where do you see WEST as a valid timezone abbreviation? It's not listed
in the Default abbreviation list. (Perhaps it should be, since there
-Original Message-
From: Ondrej Ivanič [mailto:ondrej.iva...@gmail.com]
Sent: Wednesday, September 28, 2011 6:47 PM
To: pgsql-general@postgresql.org
Subject: Re: Rules going away
Hi,
folks, don't use RULES! use triggers -- and as much as possible, keep
triggers simple, short,
On Thu, 2011-09-29 at 12:08 -0500, dennis jenkins wrote:
ERROR: could not load library /db/pgsql_micr_parser_64.so: ld.so.1:
postgres: fatal: /db/pgsql_micr_parser_64.so: Permission denied
for a different shared object, but may provide clues...
Error: - adding iplike database function... snip
On Sep 29, 2011, at 10:50 AM, Tom Lane wrote:
Christophe Pettus x...@thebuild.com writes:
I am baffled. Both PDT and WEST appear as valid timezone abbreviations, and
each have unique values, but:
Where do you see WEST as a valid timezone abbreviation?
Voila, Western Europe Summer Time:
On 09/29/2011 10:50 AM, Tom Lane wrote:
Christophe Pettusx...@thebuild.com writes:
I am baffled. Both PDT and WEST appear as valid timezone abbreviations, and
each have unique values, but:
Where do you see WEST as a valid timezone abbreviation? It's not listed
in the Default abbreviation
On 09/29/11 11:44 AM, Steve Crawford wrote:
There are 56 records and 3 different offsets in pg_timezone_names for
the abbreviation 'CST'.
yeah, we had some internal java software crashing on CST when it was
deployed in China :-/
I suggested the developer switch to using ISO format, and the
On Sep 29, 2011, at 11:44 AM, Steve Crawford wrote:
There are 56 records and 3 different offsets in pg_timezone_names for the
abbreviation 'CST'.
That's actually how this popped up for me; using 'IST' was giving rather
unexpected results...
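The ambiguity behind those unexpected results can be seen directly; a
sketch against the catalog view discussed in this thread:

```sql
-- List the distinct UTC offsets hiding behind a single abbreviation.
SELECT abbrev, utc_offset, count(*) AS zones
  FROM pg_timezone_names
 WHERE abbrev = 'IST'
 GROUP BY abbrev, utc_offset;
```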
--
-- Christophe Pettus
x...@thebuild.com
--
Christophe Pettus x...@thebuild.com writes:
That's actually how this popped up for me; using 'IST' was giving rather
unexpected results...
IST is one of the ones where there's a real conflict, ie it means
different things to different people. That was what drove us to invent
the timezone
On Sep 29, 2011, at 12:11 PM, Tom Lane wrote:
IST is one of the ones where there's a real conflict, ie it means
different things to different people.
Indeed; just noting that the search for a non-conflicting abbreviation is what
led me to find the WEST thing.
--
-- Christophe Pettus
Steve Crawford scrawf...@pinpointresearch.com writes:
Actually, given that pg_timezone_abbrevs is based on the
timezone_abbreviations GUC, I'm not surprised that it is a subset of
what is in pg_timezone_names. But I am a bit surprised that the opposite
is true.
For zones that observe DST,
Hello
I am facing a problem: after restarting the server, I could not retrieve data
from a tablespace in a Postgres DB using version 8.4.
I am not getting any error.
Could you please help? How can I retrieve the data?
Thx
Prashant
On 09/29/11 1:19 PM, Prashant Bharucha wrote:
I am facing a problem: after restarting the server, I could not retrieve
data from a tablespace in a Postgres DB using version 8.4.
I am not getting any error.
Could you please help? How can I retrieve the data?
Siva,
in addition to what others said, please note that in LIKE patterns an
underscore matches any single character; to match it literally, use an
escape character.
http://www.postgresql.org/docs/9.1/static/functions-matching.html#FUNCTIONS-LIKE
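A sketch of the underscore-escaping behaviour just described:

```sql
-- _ is a single-character wildcard in LIKE, so 'axb' also matches 'a_b'.
SELECT 'axb' LIKE 'a_b';              -- true
-- Escaping makes it a literal underscore.
SELECT 'axb' LIKE 'a\_b' ESCAPE '\';  -- false
SELECT 'a_b' LIKE 'a\_b' ESCAPE '\';  -- true
```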
2011/9/28 Siva Palanisamy siv...@hcl.com
Hi All,
I am trying to retrieve
Achilleas Mantzios wrote:
code for the cache utility pg_cacheutils is mentioned as existing somewhere
but it's not there
At this point the pgfincore project is the most popular way to do the
work that pg_cacheutils was suggested for there. See
http://pgfoundry.org/projects/pgfincore/ and
On 09/29/2011 11:44 AM, Steve Crawford wrote:
But 61 abbreviations that appear in pg_timezone_names do not have a
corresponding entry in pg_timezone_abbrevs, and 69 abbreviations in
pg_timezone_abbrevs do not appear in pg_timezone_names.
Actually, given that pg_timezone_abbrevs is
On Wed, 2011-09-28 at 08:52 +0200, Guillaume Lelarge wrote:
On Wed, 2011-09-28 at 09:04 +0800, Craig Ringer wrote:
On 09/28/2011 04:51 AM, Jason Long wrote:
I have an application with a couple hundred views and a couple hundred
tables.
Is there some way I can find out which views
I started an application around 5 years ago using Hibernate and writing
my queries in HQL.
The primary search screen has many options to filter and joins many
tables and views.
As the application grew the SQL Hibernate is generating is out of hand
and needs optimization.
As with other parts of
On Fri, Sep 30, 2011 at 1:35 AM, Ian Harding harding@gmail.com wrote:
I updated the firewall rules on a streaming replication standby server
and thought nothing of it. I later happened to notice on the primary
that ps aux | grep stream didn't show streaming to that server
anymore. On the
Hello,
I had the same issue before, and I used the PostgreSQL statistics to
see whether the tables are used or not. One thing that I could not solve is how
to check if the schema design and semantics are good, i.e. table a references
table b, table c references table b, and table c
Hi,
I'm toying with the idea of making the Pyrseas utilities a true
PostgreSQL extension. Currently, the utilities (dbtoyaml and yamltodb)
rely on a library of Python functions: over 16 modules and 4000+ lines
of code. The programs would remain as Python command line front-ends,
but there would
I'm in the process of testing out Postgres 9.0 for production use. I've been
using it for development on my mac, a build from EnterpriseDB. We've just
installed a 9.0.5 on an Ubuntu (Ubuntu 10.04.3 LTS) machine from a backport
from lucid. There's an existing 8.4.8 postgres install also on
On Sep 29, 2011, at 4:57 PM, Jason Long wrote:
I thought I had read somewhere that Postgres could ignore a join if it
was not necessary because there were no columns from the table or view
selected in the query. Is this possible?
This sounds like incorrect logic to me, so I would be surprised
On 30/09/2011, at 8:57 AM, Jason Long wrote:
I thought I had read somewhere that Postgres could ignore a join if it
was not necessary because there were no columns from the table or view
selected in the query. Is this possible?
You might be thinking of this enhancement introduced in 9.0:
On 30/09/2011, at 8:57 AM, Jason Long wrote:
I thought I had read somewhere that Postgres could ignore a join if it
was not necessary because there were no columns from the table or view
selected in the query. Is this possible?
You might be thinking of this enhancement introduced in
Hello
We are facing a deadlock kind of issue in PostgreSQL 7.4.
We have 2 databases with 3 tables each. The DB contains about 250 records. We
observed a deadlock when 2 different clients performing REINDEX and SELECT
start their operations nearly simultaneously.
Client 1 performs following
I had faced the same problem 2 days earlier, and that was for the
pg_freespacemap contrib module.
I did not know the way to ignore these functions, so I installed the contrib
modules and restored. It worked!
I am also looking for a way to ignore these functions.
Thanks
VB
On Fri, Sep 30, 2011 at
On 09/29/11 8:49 AM, Tendulker, Shivanand G Prabhu (SSTL) wrote:
Please provide help in resolving this issue.
7.4 is an ancient dead end release. update to something released this
century, like 8.4 or 9.0... where, btw, vacuum, analyze, and reindex are
now automatic and no longer need