Hi All
Thanks for your suggestions. But the problem here seems to be with the way
Slony applies triggers.
Running the query below in the database gives the following results:
SELECT tgrelid,tgname,tgfoid from pg_trigger;
tgrelid | tgname | tgfoid
--------+--------+-------
On Thu, Jul 22, 2010 at 1:02 PM, Scott Marlowe scott.marl...@gmail.com wrote:
Why on Earth would I want to store this sort of stuff in a bit string?!
Because you are manipulating bits and not integers? I guess there are
10 kinds of people, those who like to think in binary and those who
I have two Unix/process-related questions:
Is there a documented way to pass around an (opened) PGconn * structure
between two processes on Unix?
When a process forks() and both the parent and child process continue to
use a previously opened PGconn * structure, is that behaviour defined?
Thanks,
On Tue, 2010-07-20 at 20:48 +0100, Thom Brown wrote:
Or you can use "pg_controldata /path/to/pgdata" and look at "Time of
latest checkpoint".
Right. Thanks :)
--
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community:
On Thu, Jul 22, 2010 at 08:35, Marc Balmer m...@msys.ch wrote:
I have two Unix/process-related questions:
Is there a documented way to pass around an (opened) PGconn * structure
between two processes on Unix?
No.
You can probably hack up something yourself but you'd have to look
inside the
On 21 Jul 2010, at 23:14, Joe Conway m...@joeconway.com wrote:
If you want something simple, and not requiring PostGIS, but plpgsql
instead, see:
http://archives.postgresql.org/pgsql-sql/2003-12/msg00193.php
For completeness, the earthdistance module also provides the distance between
two
On 22 Jul 2010, at 8:35, Marc Balmer wrote:
When a process forks() and both the parent and child process continue to
use a previously opened PGconn * structure, is that behaviour defined?
I recall having done this successfully, but you have to take care to
synchronise access to the
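The safe pattern, as the replies suggest, is for each process to open its own connection rather than share one across fork(). A minimal sketch of that pattern in Python, using the standard sqlite3 module as a stand-in for libpq (the database path and table name are hypothetical):

```python
import sqlite3

def query_count(db_path):
    # Open a fresh connection inside this process; sharing a single
    # connection object between parent and child after fork() is
    # undefined for libpq and unsafe for most database drivers.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT count(*) FROM t").fetchone()[0]
    finally:
        conn.close()

if __name__ == "__main__":
    import multiprocessing, os, tempfile

    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    with sqlite3.connect(path) as conn:
        conn.execute("CREATE TABLE t (x int)")
        conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

    # Each worker process opens its own connection to the same database.
    with multiprocessing.Pool(2) as pool:
        print(pool.map(query_count, [path, path]))  # [3, 3]
```

This sidesteps the synchronisation problem entirely: no connection object ever crosses a process boundary.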
Hi all,
I'm looking into the ecpg part of the official manual,
and I have found some strange things.
I'm now investigating SQLCODE and SQLSTATE, and I have found
that some of the errno definitions don't match the manual.
For example, the manual says that ECPG_CONVERT_BOOL could be '-207'.
Hi.
I'm trying to index the contents of Word documents (extracted text),
which sometimes leads to quite large documents. This results in the
following exception:
Caused by: org.postgresql.util.PSQLException: ERROR: index row requires
10376 bytes, maximum size is 8191
I have the following
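The limit in that error message is per index entry, so a plain btree over the raw extracted text will keep failing on large documents. Common workarounds are a full-text index (to_tsvector with a GIN index) instead of indexing the raw text, or, if the index only needs to serve equality lookups, indexing a fixed-length digest of the content. A sketch of the digest idea in Python (the function name is hypothetical):

```python
import hashlib

def content_digest(text: str) -> str:
    # Index this fixed-length digest instead of the full document text;
    # a btree entry for the raw text can exceed the page-size limit
    # reported in the error above.
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

doc = "extracted word-document text " * 1000   # a large document
key = content_digest(doc)
print(len(key))  # 40 -- always fits comfortably in an index row
```

Equality queries then compare digests instead of full documents; substring or relevance search still needs the full-text route.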
Or you can use "pg_controldata /path/to/pgdata" and look
at "Time of latest checkpoint".
Assuming your system is using English. Otherwise, you'll
have to build a collection of .po strings as we did for
check_postgres.pl. Needless to say, I'd
Oliver Kohll - Mailing Lists wrote:
On 21 Jul 2010, at 23:14, Joe Conway m...@joeconway.com wrote:
If you want something simple, and not requiring PostGIS, but plpgsql
instead, see:
http://archives.postgresql.org/pgsql-sql/2003-12/msg00193.php
For completeness,
Hello,
is it possible to get statistics on the usage of different sql
statements, e.g. how many INSERT or UPDATE statements per day?
log_statement_stats doesn't seem to be the right parameter or I haven't
found the output statistic for this command.
Any ideas or suggestions?
Thanks,
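One low-tech option is to set log_statement = 'mod' (or 'all') and count statement types from the server log itself. A rough sketch of the counting side in Python; the sample log lines below are hypothetical, and the regex assumes the default "LOG:  statement: ..." format, so adjust it to your log_line_prefix:

```python
import re
from collections import Counter

def count_statements(log_lines):
    # Assumes log_statement = 'mod' (or 'all'), so each executed
    # statement appears on a "LOG:  statement: ..." line.
    pat = re.compile(r"LOG:\s+statement:\s+(\w+)", re.IGNORECASE)
    counts = Counter()
    for line in log_lines:
        m = pat.search(line)
        if m:
            # Bucket by the leading keyword: INSERT, UPDATE, ...
            counts[m.group(1).upper()] += 1
    return counts

sample = [
    "2010-07-22 12:00:01 LOG:  statement: INSERT INTO t VALUES (1)",
    "2010-07-22 12:00:02 LOG:  statement: UPDATE t SET x = 2",
    "2010-07-22 12:00:03 LOG:  statement: INSERT INTO t VALUES (3)",
]
print(count_statements(sample))  # Counter({'INSERT': 2, 'UPDATE': 1})
```

Run once per day's log file to get per-day totals.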
On 07/22/2010 03:31 PM, Andreas Joseph Krogh wrote:
Hi.
I'm trying to index the contents of Word documents (extracted text),
which sometimes leads to quite large documents. This results in the
following exception:
Caused by: org.postgresql.util.PSQLException: ERROR: index row
requires 10376
Hi. It's not clear to me whether the simple dictionary uses stop words
or not. Does it?
Can someone please post a complete description of what the simple
dict. does?
--
Andreas Joseph Krogh andr...@officenet.no
Senior Software Developer / CTO
On Wed, Jul 21, 2010 at 11:13 PM, Howard Rogers h...@diznix.com wrote:
On Thu, Jul 22, 2010 at 1:02 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
Why on Earth would I want to store this sort of stuff in a bit string?!
Because you are manipulating bits and not integers? I guess
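For what it's worth, the integer approach under discussion is just packed boolean flags tested with bitwise AND. A small sketch; the flag names and bit values here are hypothetical, chosen so one combination comes out as the '4097' mentioned later in the thread:

```python
# Hypothetical search-attribute flags, one bit each.
COLOUR    = 0x0001
BW        = 0x0002
LANDSCAPE = 0x1000

code = COLOUR | LANDSCAPE   # pack two attributes into one integer
print(code)                 # 4097

assert code & COLOUR        # COLOUR bit is set
assert not (code & BW)      # BW bit is clear
```

Whether this lives in an int or a bit string is then mostly a question of which operators and indexes you want to use.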
I have been struggling with this for a while now, have even gone down
a few paths but struck out, so I turn now to the community for ideas.
First, the problem: Store six daily variables for ~ 25 years for cells
in a grid.
* Number of vars = 6
* Number of cells ~ 13 million
* Number of days ~
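Before picking a schema it is worth writing down the raw magnitude. Assuming roughly 365 days per year, a fully "long" layout (one row per cell per day) is already in the hundred-billion-row range:

```python
cells = 13_000_000       # grid cells, from the problem statement
vars_per_day = 6         # daily variables per cell
days = 25 * 365          # ~9,125 days over 25 years (assumption)

rows = cells * days              # one row per cell per day
values = rows * vars_per_day     # individual stored values
print(f"{rows:,} rows, {values:,} values")
# 118,625,000,000 rows, 711,750,000,000 values
```

Numbers like these usually push the design toward arrays per row, aggressive partitioning, or a non-relational store for the raw grid.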
I have a daily job that pushes data from the production database into
the reporting database, which right now, is an exact copy.
I have a webapp that builds lots of reports for users. Most of these
reports involve elaborate joins of lookup tables and lots of summations,
and they take too long to
On Jul 22, 2010, at 4:50 AM, Stefan-Michael Guenther wrote:
Hello,
is it possible to get statistics on the usage of different sql statements,
e.g. how many INSERT or UPDATE statements per day?
log_statement_stats doesn't seem to be the right parameter or I haven't found
the output
Tim,
Thank you.
It can be done in SQL: SUM(kogus) OVER (PARTITION BY toode
ORDER BY ID) - kogus (*1) will give you the running sum of
the product up to that row. You can then subtract that value
from the delivered quantity to calculate the delivered
quantity for the current row.
But
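The window expression above (the running total per product minus the current row's quantity) can be mirrored outside SQL to sanity-check results. A small sketch in Python over hypothetical (toode, id, kogus) rows:

```python
from itertools import accumulate, groupby

rows = [  # hypothetical sample data: (toode, id, kogus)
    ("A", 1, 10), ("A", 2, 5), ("B", 3, 7),
]

def running_sums(rows):
    # Equivalent of SUM(kogus) OVER (PARTITION BY toode ORDER BY id) - kogus:
    # the cumulative quantity of each product *before* the current row.
    out = []
    for toode, grp in groupby(sorted(rows), key=lambda r: r[0]):
        grp = list(grp)
        totals = accumulate(r[2] for r in grp)  # running sum incl. current row
        for (t, id_, kogus), total in zip(grp, totals):
            out.append((t, id_, total - kogus))
    return out

print(running_sums(rows))  # [('A', 1, 0), ('A', 2, 10), ('B', 3, 0)]
```

Each partition restarts at zero, matching the PARTITION BY toode behaviour.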
The easiest way to look at this is to give the simple dictionary a
document with to_tsvector() and see if stopwords pop out.
In my experience they do. The simple dictionary
just breaks the document down into the space etc. separated words in
the document. It doesn't
Hi all. I had no luck finding a previous message or documentation related
to the effective transaction isolation of subqueries, specifically identical
subqueries in union statements. Consider the following statement executed
without a transaction.
select true as from_one, table_one.*
from
On 07/22/2010 06:27 PM, John Gage wrote:
The easiest way to look at this is to give the simple dictionary a
document with to_tsvector() and see if stopwords pop out.
In my experience they do. In my experience, the simple dictionary
just breaks the document down into the space etc. separated
Don't guess, but read docs
http://www.postgresql.org/docs/8.4/interactive/textsearch-dictionaries.html#TEXTSEARCH-SIMPLE-DICTIONARY
12.6.2. Simple Dictionary
The simple dictionary template operates by converting the input token to lower
case and checking it against a file of stop words. If it
On 07/22/2010 07:44 PM, Oleg Bartunov wrote:
Don't guess, but read docs
http://www.postgresql.org/docs/8.4/interactive/textsearch-dictionaries.html#TEXTSEARCH-SIMPLE-DICTIONARY
12.6.2. Simple Dictionary
The simple dictionary template operates by converting the input token
to lower case and
Hi,
I know this issue has been controversially discussed. Some see no real
benefit of using varchar over text. But I'm asking myself: what if I
use it for array values? Is there any difference between varchar[126],
varchar[1] and text[] other than the number of chars I can store?
This was discussed yesterday and previously. Please read the archives.
There is no positive performance reason to use varchar instead of text.
On 2010-07-22 05:38:14PM +0200, Armand Turpel wrote:
Hi,
I know this issue has been controversially discussed. Some see no real
benefit of using
On 22 Jul 2010, at 12:57, Geoffrey wrote:
For completeness, the earthdistance module also provides the distance
between two lat/longs; the point <@> point syntax is simple to use:
http://www.postgresql.org/docs/8.3/static/earthdistance.html
Disregard my last post. Surely as soon as I hit
Matthew Wilson wrote:
I've heard people talking about using materialized views for this, but
that was with Oracle.
You can build those manually with PostgreSQL if you really want them:
http://wiki.postgresql.org/wiki/Materialized_Views
The fundamental architecture is sound for a lot of
On Thu, Jul 22, 2010 at 8:45 AM, Matthew Wilson m...@tplus1.com wrote:
I have a daily job that pushes data from the production database into
the reporting database, which right now, is an exact copy.
I have a webapp that builds lots of reports for users. Most of these
reports involve
Excerpts from Derrick Rice's message of Thu Jul 22 12:27:31 -0400 2010:
Is it possible for the contents of reference_table to differ from the first
select to the select on the right hand side of the union? (e.g. because
some other transaction committed additional rows).
No.
If it is not
2010/7/22 Vinícius Soares vinicius...@yahoo.com.br:
Hey,
thanks for your response.
I did it:
S8 sql[1500] = "insert into t values ( E'";
U8 *msg;
msg = PQescapeByteaConn(conn, pending_cmd->cmd.value,
sizeof(msg_cmd_t), &to_length);
for (i=0; i
(anonymous) wrote:
It can be done in SQL: SUM(kogus) OVER (PARTITION BY toode
ORDER BY ID) - kogus (*1) will give you the running sum of
the product up to that row. You can then subtract that value
from the delivered quantity to calculate the delivered
quantity for the current row.
But doing
Oliver Kohll - Mailing Lists wrote:
On 22 Jul 2010, at 12:57, Geoffrey wrote:
For completeness, the earthdistance module also provides the distance
between two lat/longs; the point <@> point syntax is simple to use:
http://www.postgresql.org/docs/8.3/static/earthdistance.html
Disregard my
Andreas,
I'd create my own copy of the dictionary to be independent of system changes.
Oleg
On Thu, 22 Jul 2010, Andreas Joseph Krogh wrote:
On 07/22/2010 07:44 PM, Oleg Bartunov wrote:
Don't guess, but read docs
Hello,
I have a strange problem.
I test/optimize my queries with EXPLAIN ANALYZE. I get, for example:
Total runtime: 40.794 ms
But when I run the query without EXPLAIN ANALYZE I get, for example:
Time: 539.252 ms
The query returns 33 rows. Why?
I do checks with psql connected using socket to
On 22 July 2010 at 21:24, Piotr Gasidło
qua...@barbara.eu.org wrote:
Hello,
I have a strange problem.
I test/optimize my queries with EXPLAIN ANALYZE. I get, for example:
Total runtime: 40.794 ms
But when I run the query without EXPLAIN ANALYZE I get, for example:
Time: 539.252
By default it has no Init options, so it doesn't check for stopwords.
In the first place, this functionality is a rip-snorting home run on
Postgres. I congratulate Oleg who I believe is one of the authors.
In the second, I too had not read (carefully) the documentation and am
very
On 22 July 2010 at 21:34, Szymon Guz mabew...@gmail.com wrote:
maybe the query waits on a lock or maybe the returned rows are very big.
So shouldn't EXPLAIN ANALYZE also be affected by waiting for a lock?
The row has width = 313, so it's not big.
I've reduced it to width = 12 (only
Hi all,
I am doing some tests on file-level copying for one of the databases on the
Windows platform. Here is what I want to achieve:
copy files within the PG tablespace folder where the PG database resides to the
target machine. I want to attach the copied data to the target PG database.
What I have done is
1.
On Thu, Jul 22, 2010 at 2:08 PM, Steeles stee...@gmail.com wrote:
Hi all,
I am doing some tests on file-level copying for one of the databases on the
Windows platform. Here is what I want to achieve:
copy files within the PG tablespace folder where the PG database resides to the
machine. I want to
On Thu, Jul 22, 2010 at 4:09 PM, Howard Rogers h...@diznix.com wrote:
On Fri, Jul 23, 2010 at 12:35 AM, Peter Hunsberger
peter.hunsber...@gmail.com wrote:
On Wed, Jul 21, 2010 at 11:13 PM, Howard Rogers h...@diznix.com wrote:
there's a room-full of users who can look
at code '4097' and
On 22 July 2010 at 21:46, Piotr Gasidło
qua...@barbara.eu.org wrote:
On 22 July 2010 at 21:34, Szymon Guz mabew...@gmail.com wrote:
(...)
Something new. The query is run against a table which has been
partitioned into a lot of small tables.
When I combine data and put
Hi!
I have a problem with the max postgres connections on SCO
OpenServer 5.0.7, so my boss decided to buy SCO OpenServer 6.0,
but this version comes in 2 editions: Starter and Enterprise.
If SCO 5.0.7 only allows 95 connections (minus 3 used by the superuser) to
postgres...
Do you
Hi,
System information:
- psql 8.4.4 on a client with CentOS 5.5 (64 bits)
- postgres 8.4.4 on the server with CentOS 5.5 (64 bits)
- the client is connected with vpn
I have a script to create a table with some comments in front. When I
use the command 'psql -f script.sql' to load it, it
On Fri, Jul 23, 2010 at 12:35 AM, Peter Hunsberger
peter.hunsber...@gmail.com wrote:
On Wed, Jul 21, 2010 at 11:13 PM, Howard Rogers h...@diznix.com wrote:
On Thu, Jul 22, 2010 at 1:02 PM, Scott Marlowe scott.marl...@gmail.com
wrote:
Why on Earth would I want to store this sort of stuff
On 7/22/2010 9:41 AM, P Kishor wrote:
I have been struggling with this for a while now, have even gone down
a few paths but struck out, so I turn now to the community for ideas.
First, the problem: Store six daily variables for ~ 25 years for cells
in a grid.
* Number of vars = 6
* Number
On Jul 22, 5:03 pm, Gary Fu gary...@sigmaspace.com wrote:
Hi,
System information:
- psql 8.4.4 on a client with CentOS 5.5 (64 bits)
- postgres 8.4.4 on the server with CentOS 5.5 (64 bits)
- the client is connected with vpn
I have a script to create a table with some comments in front.
On Thu, Jul 22, 2010 at 3:09 PM, Howard Rogers h...@diznix.com wrote:
On Fri, Jul 23, 2010 at 12:35 AM, Peter Hunsberger
peter.hunsber...@gmail.com wrote:
On Wed, Jul 21, 2010 at 11:13 PM, Howard Rogers h...@diznix.com wrote:
On Thu, Jul 22, 2010 at 1:02 PM, Scott Marlowe
On Thu, Jul 22, 2010 at 3:24 PM, Edmundo Robles L.
erob...@sensacd.com.mx wrote:
Hi!
I have a problem with the max postgres connections on SCO
OpenServer 5.0.7, so my boss decided to buy SCO OpenServer 6.0,
but this version comes in 2 editions: Starter and Enterprise.
If SCO
Over time, I mistakenly did something like this:
CREATE TABLE a ();
CREATE TABLE b () INHERITS(a);
ALTER TABLE b ADD col int;
ALTER TABLE a ADD col int;
where I should have left out the third statement. Not a great loss, the only
consequence I've observed being pg_attribute.attislocal = true,
On 07/22/2010 04:03 PM, Gary Fu wrote:
Hi,
System information:
- psql 8.4.4 on a client with CentOS 5.5 (64 bits)
- postgres 8.4.4 on the server with CentOS 5.5 (64 bits)
- the client is connected with vpn
I have a script to create a table with some comments in front. When I use the
command
On Thu, Jul 22, 2010 at 10:27 PM, Howard Rogers h...@diznix.com wrote:
On Fri, Jul 23, 2010 at 8:37 AM, Scott Marlowe scott.marl...@gmail.com
wrote:
On Thu, Jul 22, 2010 at 3:09 PM, Howard Rogers h...@diznix.com wrote:
On Fri, Jul 23, 2010 at 12:35 AM, Peter Hunsberger
In response to Piotr Gasidło:
Hello,
I have a strange problem.
I test/optimize my queries with EXPLAIN ANALYZE. I get, for example:
Total runtime: 40.794 ms
But when I run the query without EXPLAIN ANALYZE I get, for example:
Time: 539.252 ms
The query returns 33 rows. Why?
Maybe
On Fri, Jul 23, 2010 at 8:37 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Jul 22, 2010 at 3:09 PM, Howard Rogers h...@diznix.com wrote:
On Fri, Jul 23, 2010 at 12:35 AM, Peter Hunsberger
peter.hunsber...@gmail.com wrote:
On Wed, Jul 21, 2010 at 11:13 PM, Howard Rogers
On Thu, Jul 22, 2010 at 4:56 PM, Andy Colson a...@squeakycode.net wrote:
On 7/22/2010 9:41 AM, P Kishor wrote:
I have been struggling with this for a while now, have even gone down
a few paths but struck out, so I turn now to the community for ideas.
First, the problem: Store six daily