Re: [GENERAL] text_soundex function

2008-12-18 Thread Otandeka Simon Peter
Hi Chris,

Tried it and I get

LINE 1: SELECT * FROM s WHERE soundex(nm) = soundex('john');
HINT:  No function matches the given name and argument types. You might need
to add explicit type casts.

yet the data type of the nm column is text.

I am using 8.3.1.

Is there anything I am doing wrong?

On Thu, Dec 18, 2008 at 10:26 AM, Chris dmag...@gmail.com wrote:

 Otandeka Simon Peter wrote:

 Does this function (text_soundex) exist in the latest PostgreSQL versions,
 or was it replaced?


 It's called soundex:

 http://www.postgresql.org/docs/8.3/interactive/fuzzystrmatch.html

 --
 Postgresql & php tutorials
 http://www.designmagick.com/




Re: [GENERAL] text_soundex function

2008-12-18 Thread Chris

Otandeka Simon Peter wrote:

Hi Chris,

Tried it and I get

LINE 1: SELECT * FROM s WHERE soundex(nm) = soundex('john');
HINT:  No function matches the given name and argument types. You might 
need to add explicit type casts.


Ahh, it's a contrib module. It's not installed by default, it's like an 
external package.


See http://www.postgresql.org/docs/8.3/interactive/contrib.html for how 
to install it.


--
Postgresql & php tutorials
http://www.designmagick.com/




Re: [GENERAL] text_soundex function

2008-12-18 Thread Chris

Chris wrote:

Otandeka Simon Peter wrote:

Hi Chris,

Tried it and I get

LINE 1: SELECT * FROM s WHERE soundex(nm) = soundex('john');
HINT:  No function matches the given name and argument types. You 
might need to add explicit type casts.


Ahh, it's a contrib module. It's not installed by default, it's like an 
external package.


See http://www.postgresql.org/docs/8.3/interactive/contrib.html for how 
to install it.


Forgot to add, the package/script you want is 'fuzzystrmatch'.
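
For 8.3 that typically means running the module's SQL script against your database and then calling soundex(); a minimal sketch (the database name is only an example, and the script path depends on your installation):

-- Load the contrib module into the target database (8.3-era installs,
-- before CREATE EXTENSION existed); pg_config --sharedir gives the base path:
--   psql -d yourdb -f "$(pg_config --sharedir)/contrib/fuzzystrmatch.sql"

-- Afterwards soundex() resolves normally:
SELECT * FROM s WHERE soundex(nm) = soundex('john');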

--
Postgresql & php tutorials
http://www.designmagick.com/




[GENERAL] SQL plan in functions

2008-12-18 Thread Julius Tuskenis

Hello, list.

I have a function which returns a record set. What troubles me is that
depending on the parameters I pass to this function the execution time
varies greatly. On the other hand, if I execute the query itself (not the
function) with pgAdmin, it gives results quickly.
In a previous post you helped me realize that the problem was that the
function keeps only one plan for the SQL inside, no matter what the
parameter values are. Is there a way to tell postgres to re-plan the
query each time the function is called?


--
Julius Tuskenis




Re: [GENERAL] Prevent new timeline during archive recovery

2008-12-18 Thread Jonatan Evald Buus
Cheers for the input Simon, please find a couple of thoughts / questions
below

/Jona

On Thu, Dec 18, 2008 at 12:04 AM, Simon Riggs si...@2ndquadrant.com wrote:


 On Wed, 2008-12-17 at 17:54 +0100, Jonatan Evald Buus wrote:
  Greetings,
  We're currently developing an alternative to pg_standby, internally
  named pg_ha.
  pg_ha uses Record Based Log Shipping fetched by the warm standby
  server at millisecond intervals and will automatically restore
  completed WAL files on the standby server to minimize the time
  required when restoring it to production status after a failure on the
  primary server.
  Once completed we'd be happy to donate the code to the PostGreSQL
  community to assist in promoting the database server as a High
  Availability option.

 We're reworking replication now for 8.4 and it sounds like we're already
 some way ahead of you on that development.

We're quite looking forward to this work being completed. A simple
replication option that would transfer changes from a master node to one or
more slave nodes every couple of seconds would go a long way towards solving
most HA scenarios.
I suspect we'll abandon our own solution and use the built-in replication
features of 8.4 once it's released.




 Have a look at Skytools' walmgr if you want a stream using 8.3. It uses
 production APIs built into the server in 8.2.

We did have a look at walmgr originally, but it didn't seem to meet our needs,
as it doesn't appear to do automatic restores of completed WAL files on
a continual basis. I wager that replaying a year's worth of WAL files from a
busy server would take quite a while, thus decreasing the value of having a
Warm Standby Server.
It does, however, have a restore option, but upon a closer look at that
specific part of the code it doesn't appear to address the automatically
increasing timeline during archive recovery.
Essentially it takes the same approach to restoring WAL archives as we have
opted for:
1) Create a recovery.conf file in PGDATA, specifying restore_command = 'cp
[ARCHIVE DIR]/%f %p' (walmgr specifies itself as the restore command but
essentially just does the copying)
2) Start the postmaster process using pg_ctl start

The recovery.conf file used by walmgr is as follows:
restore_command = '%s %s %s'\n % (self.script, cf_file, 'xrestore %f %p')
# recovery_target_time=''
# recovery_target_xid=''
# recovery_target_inclusive=true
# recovery_target_timeline=''

In comparison we specify:
restore_command = 'cp [ARCHIVE DIR]/%f %p'
recovery_target_inclusive = 'true'

Unless I'm missing something, neither of these recovery.conf files will
address the incrementing timeline during archive recovery?




  During each restore cycle however we notice that PostGreSQL spawns a
  new timeline for the recovered archive as described in section 24.3.4
  at
 
 http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-BASE-BACKUP
 .
  Is it possible to specify that no new timeline should be spawned
  during the archiver recovery process?
 
  The only workaround otherwise appears to be to detect the current
  timeline number using pg_controldata and rename copied WAL files
  accordingly prior to initiating the archive recovery process.

 That sounds like a bad idea, for minor but annoying technical reasons.
 Please check the archives if you're interested further.

Agreed, it's a pretty appalling approach, but it does appear to work.
An alternative would be to call pg_resetxlog after each archive restore
using the -l switch to reset the timeline.
i.e. pg_resetxlog -l 0x01,0x[CURRENT FILE ID],0x[NEXT LOG SEGMENT] [DATA
DIR]
This also seems like a bad idea which could have unforeseen consequences due
to the internal cleanups performed.

Doing automatic restores of completed WAL files on the standby server would
greatly increase the timeline id, as an archive restore might be executed
every couple of minutes depending on the load on the primary server.
Thus the best approach by far would be an option to specify that the
timeline shouldn't be changed during archive recovery.
I haven't been able to find anything on this topic in the archives; it
seems that the timeline feature is discussed very seldom, but I might
simply be searching for the wrong keywords.
If you could point me in the right direction I'd greatly appreciate it.




 --
  Simon Riggs   www.2ndQuadrant.com
  PostgreSQL Training, Services and Support




Re: [GENERAL] SQL plan in functions

2008-12-18 Thread Grzegorz Jaśkiewicz
On Thu, Dec 18, 2008 at 9:18 AM, Julius Tuskenis
julius.tuske...@gmail.com wrote:
 Hello, list.

 I have a function witch returns record set. What troubles me is that
 depending on parameters I pass to this function  the execution time varies
 greatly. On the other hand if I execute the query (not the function) with
 pgAdmin  - it gives results quickly.
 In previous post You helped me realize, that the problem was because the
 function has only one plan of SQL inside no matter the parameters values. Is
 there a way to order postgres to check the plan each time the function is
 called?

Use EXECUTE 'query'.



-- 
GJ



Re: [GENERAL] SQL plan in functions

2008-12-18 Thread A. Kretschmer
In response to Julius Tuskenis :
 Hello, list.
 
 I have a function witch returns record set. What troubles me is that 
 depending on parameters I pass to this function  the execution time 
 varies greatly. On the other hand if I execute the query (not the 
 function) with pgAdmin  - it gives results quickly.
 In previous post You helped me realize, that the problem was because the 
 function has only one plan of SQL inside no matter the parameters 
 values. Is there a way to order postgres to check the plan each time the 
 function is called?

Sure, use EXECUTE 'your sql' inside the function to force the planner to
generate a new plan depending on the current parameters.
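
A minimal sketch of that pattern (the table and column names here are made up, not from your schema):

CREATE OR REPLACE FUNCTION get_orders(p_status text)
RETURNS SETOF orders AS $$
DECLARE
    rec orders%ROWTYPE;
BEGIN
    -- Building the statement as a string and running it with EXECUTE
    -- makes the planner produce a fresh plan on every call.
    FOR rec IN EXECUTE
        'SELECT * FROM orders WHERE status = ' || quote_literal(p_status)
    LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;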


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Error: Operator does not exist: char=integer

2008-12-18 Thread Raymond O'Donnell
On 18/12/2008 05:26, novnov wrote:
 I have restored a postgres 8.2.4-1 db onto a postgres 8.3.1-1 server, and
 when I try to work with a table I get this error:
 
 Error: Operator does not exist: "char" = integer

A lot of previously automatic casts were removed in the 8.3 series -
this is possibly one of them.

You now need to cast explicitly in such cases, e.g.

  select '5'::integer;

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] Other queries locked out during long insert

2008-12-18 Thread Peter Eisentraut

Joshua J. Kugler wrote:
OK, that makes sense.  There was nothing on the TRUNCATE page to suggest 
that TRUNCATE would lock the tables.  Maybe an addition to the 
documentation is in order?  Where do I go to suggest that?


I have added something to document this.



Re: [GENERAL] Error: Operator does not exist: char=integer

2008-12-18 Thread Peter Eisentraut

novnov wrote:

I have restored a postgres 8.2.4-1 db onto a postgres 8.3.1-1 server, and
when I try to work with a table I get this error:

Error: Operator does not exist: char = integer

Hopefully that is enough of a clue to be useful. Maybe this is the first
time I've tried moving one of my non-trivial pg projects to a significantly
different version of postgres; is there a conversion process that helps with
moving between versions? 


Yes, reading the release notes. ;-)  I think you will find your problem 
explained there.




[GENERAL] Missing usenet messages. Was: A bit confused about Postgres Plus

2008-12-18 Thread Rainer Bauer
Hi all,

Thomas Kellerer wrote here
http://archives.postgresql.org/pgsql-general/2008-12/msg00636.php

 [I'm reading this list through the newsserver at news.gmane.org,
 and several answers did not seem to make it to news.gmane.org
 but only to my private email]

I am reading the Postgres lists through the official newsserver
news.postgresql.org. But I have found the same problem: Some messages just
don't appear.

It seems like Alvaro's message was sent only via private mail, because it does
not show up in the mailing list archive.

I have configured the pgsql-performance mailing list so that I receive the
emails. And I see some messages not turning up on the news server (don't ask
me for exact numbers).

So is this a configuration problem or is there something else going wrong?

Rainer



Re: [GENERAL] Missing usenet messages. Was: A bit confused about Postgres Plus

2008-12-18 Thread Dave Page
On Thu, Dec 18, 2008 at 11:15 AM, Rainer Bauer use...@munnin.com wrote:

 It seems like Alvaros message was sent only via private mail, because it does
 not show in the mailing list archive.

His message went to -www, not to -general.

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com



Re: [GENERAL] SQL plan in functions

2008-12-18 Thread Julius Tuskenis

Thank you Andreas and Grzegorz. It worked!

While experimenting I found that if I write FOR rec IN EXECUTE 'my sql'
LOOP ..  it's OK, but if I write RETURN QUERY EXECUTE 'my sql' I get a
syntax error. Is it a bug?


Julius Tuskenis



A. Kretschmer wrote:

In response to Julius Tuskenis :
  

Hello, list.

I have a function witch returns record set. What troubles me is that 
depending on parameters I pass to this function  the execution time 
varies greatly. On the other hand if I execute the query (not the 
function) with pgAdmin  - it gives results quickly.
In previous post You helped me realize, that the problem was because the 
function has only one plan of SQL inside no matter the parameters 
values. Is there a way to order postgres to check the plan each time the 
function is called?



Sure, use EXECUTE 'your sql' inside the function to force the planner to
generate a new plan depending on the current parameters.


Regards, Andreas
  




Re: [GENERAL] SQL plan in functions

2008-12-18 Thread Raymond O'Donnell
On 18/12/2008 12:12, Julius Tuskenis wrote:

 While experimenting I found that if I wright FOR rec IN EXECUTE 'my sql'
 LOOP ..  its OK, but If I wright RETURN QUERY EXECUTE 'my sql' I get
 syntax error. Is it a bug?

No, it's a syntax error. :-)

You need to do something like this to return the rows from the query:

  FOR rec in EXECUTE 'your sql here'
  LOOP
RETURN NEXT rec;
  END LOOP;

  RETURN;  -- exits from the function.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] Missing usenet messages. Was: A bit confused about Postgres Plus

2008-12-18 Thread Alvaro Herrera
Rainer Bauer wrote:
 Hi all,
 
 Thomas Kellerer wrote here
 http://archives.postgresql.org/pgsql-general/2008-12/msg00636.php
 
  [I'm reading this list through the newsserver at news.gmane.org,
  and several answers did not seem to make it to news.gmane.org
  but only to my private email]
 
 I am reading the Postgres lists through the official newsserver
 news.postgresql.org. But I have found the same problem: Some messages just
 don't appear.
 
 It seems like Alvaros message was sent only via private mail, because it does
 not show in the mailing list archive.

It is on the archives, here:
http://archives.postgresql.org/message-id/20081217122802.GA4453%40alvh.no-ip.org

As Dave said, it was sent to pgsql-www, and I think the newsserver
doesn't carry that group (which is a bug that we've asked Marc to fix,
but as all things Marc, it takes quite a while).

Maybe I made a mistake and should have not taken pgsql-general out of
the CC line.

 I have configured the pgsql-performance mailing list so that I receive the
 emails. And I see some messages not turning up on the news server (don't ask
 me for exact numbers).
 
 So is this a configuration problem or is there something else going wrong?

I don't think the news gateway is all that trustworthy nowadays.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] SQL plan in functions

2008-12-18 Thread Julius Tuskenis
Yes, Raymond - I know how to return a record set from a function using FOR,
but since version 8.3 there is an option of using RETURN QUERY SELECT
something FROM somewhere;. So if it works for SELECT, why should it not
work for EXECUTE?
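
For reference, the static form I mean is something like this (a minimal sketch with made-up table names):

CREATE OR REPLACE FUNCTION open_orders()
RETURNS SETOF orders AS $$
BEGIN
    -- RETURN QUERY with a plain SELECT works in 8.3; combining it with
    -- EXECUTE is what gives me the syntax error.
    RETURN QUERY SELECT * FROM orders WHERE status = 'open';
END;
$$ LANGUAGE plpgsql;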


Julius Tuskenis


Raymond O'Donnell wrote:

On 18/12/2008 12:12, Julius Tuskenis wrote:

  

While experimenting I found that if I wright FOR rec IN EXECUTE 'my sql'
LOOP ..  its OK, but If I wright RETURN QUERY EXECUTE 'my sql' I get
syntax error. Is it a bug?



No, it's a syntax error. :-)

You need to do something like this to return the rows from the query:

  FOR rec in EXECUTE 'your sql here'
  LOOP
RETURN NEXT rec;
  END LOOP;

  RETURN;  -- exits from the function.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

  




Re: [GENERAL] Error: Operator does not exist: char=integer

2008-12-18 Thread Julius Tuskenis

Raymond O'Donnell wrote:

A lot of previously automatic casts were removed in the 8.3 series -
this is possibly one of them.

You now need to cast explicitly in such cases, e.g.

  select '5'::integer;

Ray.
  
That is good advice and good practice, but the solution usually
takes time. For a quick (temporary) solution you could write your own
operator to handle the char = integer case.


I had to do it once for an integer ILIKE text operator:

CREATE OR REPLACE FUNCTION of_integer_ilike_text(prm_integer integer, 
prm_text text)

 RETURNS boolean AS
$BODY$BEGIN
RETURN prm_integer::text ~~* prm_text ;
END;$BODY$
 LANGUAGE 'plpgsql' VOLATILE
 COST 100;
ALTER FUNCTION of_integer_ilike_text(integer, text) OWNER TO useris;

CREATE OPERATOR ~~(
 PROCEDURE = of_integer_ilike_text,
 LEFTARG = int4,
 RIGHTARG = text);

I'm sure you'll manage to do this for text = integer;
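
For instance, a minimal sketch of a "char" = integer operator along the same lines (the function name is illustrative, and it assumes the "char" column only ever stores digit characters):

CREATE OR REPLACE FUNCTION of_char_eq_integer(prm_char "char", prm_int integer)
  RETURNS boolean AS
$BODY$BEGIN
    -- Cast the "char" value through text to integer and compare;
    -- this only makes sense if the column holds digits.
    RETURN prm_char::text::integer = prm_int;
END;$BODY$
  LANGUAGE 'plpgsql' IMMUTABLE;

CREATE OPERATOR = (
  PROCEDURE = of_char_eq_integer,
  LEFTARG   = "char",
  RIGHTARG  = integer);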

Julius Tuskenis





[GENERAL] DELETE running at snail-speed

2008-12-18 Thread gerhard
I'm running a delete command on a postgresql-8.1.9 server. 25000
tuples are deleted; the time is ~400 secs.

the command:
delete from downtime where start < 'Aug 1 00:00:00 2008'


The table definitions are:
linie4=> \d downtime
                                 Table "public.downtime"
   Column   |            Type             |                         Modifiers
------------+-----------------------------+----------------------------------------------------------------
 downtimeid | integer                     | not null default nextval('downtime_downtimeid_seq'::regclass)
 status     | smallint                    |
 start      | timestamp without time zone | default now()
 machineid  | smallint                    |
 a_nr       | integer                     |
Indexes:
    "downtime_pkey" PRIMARY KEY, btree (downtimeid)
    "idx_downtime_start" btree (start)
Foreign-key constraints:
    "machineid_ok" FOREIGN KEY (machineid) REFERENCES machine (machineid) ON UPDATE CASCADE ON DELETE CASCADE

linie4=> \d downtime_detail
                             Table "public.downtime_detail"
   Column   |   Type   |                            Modifiers
------------+----------+---------------------------------------------------------------------
 detailid   | integer  | not null default nextval('downtime_detail_detailid_seq'::regclass)
 downtimeid | integer  |
 detail     | smallint |
Indexes:
    "downtime_detail_pkey" PRIMARY KEY, btree (detailid)
Foreign-key constraints:
    "statusid_ok" FOREIGN KEY (downtimeid) REFERENCES downtime (downtimeid) ON UPDATE CASCADE ON DELETE CASCADE

I suspect the foreign key constraint on downtime_detail is slowing down
the delete process. Is this a bug, perhaps fixed in the latest version
(8.1.x), or should I drop the constraint and recreate it after the deletion -
which I only see as a workaround?

Gerhard



Re: [GENERAL] DELETE running at snail-speed

2008-12-18 Thread gerhard
Oops, I should have thought about setting the foreign key constraint
deferrable; maybe showing this in the table definition (\d) would be a
good idea.
I will recreate my foreign key deferrable and do some timings ...



Re: [GENERAL] DELETE running at snail-speed

2008-12-18 Thread Tom Lane
gerhard g.hinterma...@inode.at writes:
 oops, should have thought about setting the foreign key constraint
 deferrable, maybe showing this in table definition (\d) would be a
 good idea.

Don't think that'll make any difference.  What *should* help is having
an index on downtime_detail.downtimeid.  As-is it's got to seqscan
downtime_detail to look for rows to be cascade-deleted.

regards, tom lane



Re: [GENERAL] DELETE running at snail-speed

2008-12-18 Thread Csaba Nagy
On Thu, 2008-12-18 at 05:29 -0800, gerhard wrote:
 I suspect the foreign key constraint of downtime_detail to slow down
 the delete process. Is this a bug, probably fixed in latest version
 (8.1.x) or should I drop the constraint and recreate after deletion -
 which I only see as workaround ?

The foreign key is the cause indeed, but you should put an index on
downtime_detail(downtimeid) and it would work fine. What happens is that
for each row you delete from 'downtime' table, the following is done by
the foreign key triggers (cascade the deletion to the child tables):

delete from downtime_detail where downtimeid = $1

You can try to see what kind of plan you get for that by:

prepare test_001(integer) as delete from downtime_detail where
downtimeid = $1;

explain execute test_001(0);

Now multiply whatever you get there by the count of rows deleted from
'downtime' and you'll get the reason why it is slow... then try it again
with the above mentioned index in place.
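
Concretely, something like this (the index and statement names are illustrative):

-- Index the referencing column so the cascaded delete can use an index scan
-- instead of a seqscan per deleted parent row:
CREATE INDEX idx_downtime_detail_downtimeid ON downtime_detail (downtimeid);

-- Then re-check the per-row plan the FK trigger effectively runs:
PREPARE test_002(integer) AS DELETE FROM downtime_detail WHERE downtimeid = $1;
EXPLAIN EXECUTE test_002(0);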

Cheers,
Csaba.





Re: [GENERAL] DELETE running at snail-speed

2008-12-18 Thread Sam Mason
On Thu, Dec 18, 2008 at 05:29:52AM -0800, gerhard wrote:
 I'm running a delete command on a postgresql-8.1.9 server. 25000
 tuples are delete, time is ~ 400 secs
  detailid   | integer  | not null default nextval 
 ('downtime_detail_detailid_seq'::regclass)
  downtimeid | integer  |
  detail | smallint |
 Indexes:
 downtime_detail_pkey PRIMARY KEY, btree (detailid)
 Foreign-key constraints:
 statusid_ok FOREIGN KEY (downtimeid) REFERENCES downtime (downtimeid) 
 ON UPDATE CASCADE ON DELETE CASCADE
 
 I suspect the foreign key constraint of downtime_detail to slow down
 the delete process.

Try adding an index on downtime_detail.downtimeid.  If you have a lot
of entries in this table, PG will spend a lot of time finding the entries
in order to delete them.

 Is this a bug, probably fixed in latest version
 (8.1.x) or should I drop the constraint and recreate after deletion -
 which I only see as workaround ?

It's not really a bug (although there probably is code that could be
written to make this case go faster); you can see where PG is actually
spending its time by doing an EXPLAIN ANALYSE on the DELETE.  It should
display the time spent executing the triggers, but it's been a while
since I've used 8.1 so I'm not sure.


  Sam



Re: [GENERAL] DELETE running at snail-speed

2008-12-18 Thread gerhard
On Dec 18, 2:46 pm, gerhard g.hinterma...@inode.at wrote:
 oops, should have thought about setting the foreign key constraint
 deferrable, maybe showing this in table definition (\d) would be a
 good idea.
 will recreate my foreign key deferrable and do some timings ...

OK, recreated my foreign key constraints deferrable (actually used in
2 tables) but still deleting at snail speed. :-(



Re: [GENERAL] DELETE running at snail-speed

2008-12-18 Thread gerhard
Added indices on the two tables that have foreign keys referencing the
table being deleted from, and voila - the delete finishes fast.
Thanks for the tip; using explain analyze, as someone else pointed out,
would also have shown me where the problem comes from.

Thanks everybody for helping.
Gerhard




[GENERAL] Server side large object functions

2008-12-18 Thread George Weaver

Hi,

The PostgreSQL manual indicates that:

"There are server-side functions callable from SQL that correspond to each
of the client-side functions described above."

In psql I can get the argument data types for lo_open:

development=# \df+ lo_open
                                                 List of functions
   Schema   |  Name   | Result data type | Argument data types | Volatility |  Owner   | Language | Source code |    Description
------------+---------+------------------+---------------------+------------+----------+----------+-------------+-------------------
 pg_catalog | lo_open | integer          | oid, integer        | volatile   | postgres | internal | lo_open     | large object open
(1 row)

But if I try to find the same for lo_read I get:

development=# \df+ lo_read
                                            List of functions
 Schema | Name | Result data type | Argument data types | Volatility | Owner | Language | Source code | Description
--------+------+------------------+---------------------+------------+-------+----------+-------------+-------------
(0 rows)

Is there not a server-side function for lo_read?  If so, how do I determine
the argument types?

My objective is to read a large object into memory as opposed to exporting
it to a disk file.


I'm using version 8.3.3, psqlODBC 8.2.2 and vb.net.

Thanks,
George





Re: [GENERAL] Server side large object functions

2008-12-18 Thread Alvaro Herrera
George Weaver wrote:

 But if I try to find the same for lo_read I get:

 development=# \df+ lo_read

alvherre=# \df loread
                        List of functions
   Schema   |  Name  | Result data type | Argument data types
------------+--------+------------------+---------------------
 pg_catalog | loread | bytea            | integer, integer
(1 row)
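
A minimal sketch of using these from SQL (the OID and descriptor values are illustrative; large object descriptors are only valid inside the transaction that opened them):

BEGIN;
-- 262144 is the INV_READ flag (0x40000) from the large object interface.
SELECT lo_open(16409, 262144);   -- returns a descriptor, e.g. 0
-- loread(fd, len) returns up to len bytes from that descriptor as bytea.
SELECT loread(0, 1048576);       -- assumes the descriptor returned above was 0
COMMIT;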


HTH,

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] Server side large object functions

2008-12-18 Thread George Weaver


From: Alvaro Herrera alvhe...@commandprompt.com


George Weaver wrote:


But if I try to find the same for lo_read I get:

development=# \df+ lo_read


alvherre=# \df loread
                        List of functions
   Schema   |  Name  | Result data type | Argument data types
------------+--------+------------------+---------------------
 pg_catalog | loread | bytea            | integer, integer
(1 row)


Thanks Alvaro.  I didn't think to look for a variation of the function 
name...


As an aside, if I do a PostgreSQL website search for loread, I get no
hits.


George 






Re: [GENERAL] Missing usenet messages. Was: A bit confused about Postgres Plus

2008-12-18 Thread Rainer Bauer
Alvaro Herrera wrote:

Rainer Bauer wrote:
 
 It seems like Alvaros message was sent only via private mail, because it does
 not show in the mailing list archive.

It is on the archives, here:
http://archives.postgresql.org/message-id/20081217122802.GA4453%40alvh.no-ip.org

As Dave said, it was sent to pgsql-www, and I think the newsserver
doesn't carry that group (which is a bug that we've asked Marc to fix,
but as all things Marc, it takes quite a while).

Yeah, the www list is not available.

 I have configured the pgsql-performance mailing list so that I receive the
 emails. And I see some messages not turning up on the news server (don't ask
 me for exact numbers).
 
 So is this a configuration problem or is there something else going wrong?

I don't think the news gateway is all that trustworthy nowadays.

That's a pity. For me it's not so critical if I miss a few messages, but I
could imagine Thomas and I are not the only ones using a newsreader program to
follow the discussions on the mailing lists.

Rainer



Re: [GENERAL] Server side large object functions

2008-12-18 Thread Alvaro Herrera
George Weaver wrote:

 As an aside, if I do a PostgreSQL Website search for loread, I get not  
 hits.

Apparently these functions are not documented at all :-(  Most of the
chapter on large objects
http://www.postgresql.org/docs/8.3/static/lo-funcs.html
seems to assume that one will be using libpq and C.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [GENERAL] Syntax error with select statement

2008-12-18 Thread aravind chandu
Hello,

   


Thanks for your help, but I still have a problem. While executing it I
encountered an error like this:

terminate called after throwing an instance of 'pqxx::syntax_error'
  what():  ERROR:  unterminated dollar-quoted string at or near "$Password$ M^fuo|`sjyo|`so|-?z"
LINE 1: ... = $Username$ achandana $Username$ and password = $Password$...
                                                             ^

I used the query just as it was in your previous mail, but I still get this
problem. Can you please tell me what is wrong?





Thank You,


Aravind.

--- On Thu, 12/18/08, Joshua J. Kugler jos...@eeinternet.com wrote:
From: Joshua J. Kugler jos...@eeinternet.com
Subject: Re: [GENERAL] Syntax error with select statement
To: pgsql-general@postgresql.org
Date: Thursday, December 18, 2008, 1:51 AM

On Wednesday 17 December 2008, justin said something like:
 aravind chandu wrote:
  Hello,
I have problem with select statement in c++ program I am
  using pqxx library to connect to postgresql database.My query is
 
    result R(T.exec( "select * from dbtable where username = '" + user +
                     "' and password = '" + st + "'" ));
 
   here st is in encrypted format and the string is st =
  M^fuo|`sjyo|`so|-?z  this is the string i stored in the table .
 
  The error I was encountered is terminate called after
  throwing an instance of 'pqxx::syntax_error'
what():  ERROR:  unterminated quoted string at or near
  'M^fuo|`sjyo|`so|-?z
  LINE 1: ...table where username = 'achandana' and password =
  'M^fuo|`sj...
 
  ^
 
   I am not able to identify what the actual problem is can you guys
  please help to solve this problem?Your help is greatly appreciated.
 
  Thank You,
  Aravind

 Well, it's telling you in the error that the quotes are flaky.  It appears
 that the password portion contains another single quote.

 I would move to dollar quoting when dealing with strings that
 contain special characters.

 Example:
 R(T.exec( "select * from dbtable where username =  $UserName$ " +
   user + " $Username$ and password = $Password$ " + st + " $Password$"
 ));

 see http://www.postgresql.org/docs/8.3/static/sql-syntax-lexical.html
 on dollar quoting

Better yet, use placeholders and let the driver do the quoting.

j

-- 
Joshua Kugler
Part-Time System Admin/Programmer
http://www.eeinternet.com
PGP Key: http://pgp.mit.edu/  ID 0xDB26D7CE




  

[GENERAL] 8.1.11 PREPARE problem?

2008-12-18 Thread Jeremiah Jahn
This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
have mentioned this problem. When run as a prepared statement the first
statement will execute and return results, while the next two seem to
execute, but return no results. When run by hand, not prepared, each
statement runs just fine. When using an older version of the JDBC driver
(7.3?), which does not call PREPARE, things work just fine. Looking at
the logs below, it sure looks like the statements are running, given the
different durations, which are correct for the differences between the
statements. help please.




Dec 18 10:59:15 devel postgres[6889]: [2-1] LOG:  duration: 358.005 ms  
statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
Dec 18 10:59:15 devel postgres[6889]: [2-2]  
aggregationvalue$value,aggregationvalue.aggregatetype as 
aggregationvalue$aggregatetype,aggregationvalue.count as
Dec 18 10:59:15 devel postgres[6889]: [2-3]  
aggregationvalue$count,aggregationvalue.stoptime as 
aggregationvalue$stoptime,aggregationvalue.starttime as
Dec 18 10:59:15 devel postgres[6889]: [2-4]  aggregationvalue$starttime from 
aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) between $2 
and $3  and
Dec 18 10:59:15 devel postgres[6889]: [2-5]  aggregationvalue.aggregatetype = 
$4 and  split_part(aggregationvalue.value,':',1) = $5  and 
Dec 18 10:59:15 devel postgres[6889]: [2-6]  
split_part(aggregationvalue.value,':',2) like $6  and  
split_part(aggregationvalue.value,':',3) like $7  ]

Dec 18 10:59:25 devel postgres[6889]: [4-1] LOG:  duration: 25.001 ms  
statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
Dec 18 10:59:25 devel postgres[6889]: [4-2]  
aggregationvalue$value,aggregationvalue.aggregatetype as 
aggregationvalue$aggregatetype,aggregationvalue.count as
Dec 18 10:59:25 devel postgres[6889]: [4-3]  
aggregationvalue$count,aggregationvalue.stoptime as 
aggregationvalue$stoptime,aggregationvalue.starttime as
Dec 18 10:59:25 devel postgres[6889]: [4-4]  aggregationvalue$starttime from 
aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) between $2 
and $3  and
Dec 18 10:59:25 devel postgres[6889]: [4-5]  aggregationvalue.aggregatetype = 
$4 and  split_part(aggregationvalue.value,':',1) = $5  and 
Dec 18 10:59:25 devel postgres[6889]: [4-6]  
split_part(aggregationvalue.value,':',2) like $6  and  
split_part(aggregationvalue.value,':',3) like $7  ]

Dec 18 10:59:25 devel postgres[6889]: [6-1] LOG:  duration: 2.000 ms  
statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
Dec 18 10:59:25 devel postgres[6889]: [6-2]  
aggregationvalue$value,aggregationvalue.aggregatetype as 
aggregationvalue$aggregatetype,aggregationvalue.count as
Dec 18 10:59:25 devel postgres[6889]: [6-3]  
aggregationvalue$count,aggregationvalue.stoptime as 
aggregationvalue$stoptime,aggregationvalue.starttime as
Dec 18 10:59:25 devel postgres[6889]: [6-4]  aggregationvalue$starttime from 
aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) between $2 
and $3  and
Dec 18 10:59:25 devel postgres[6889]: [6-5]  aggregationvalue.aggregatetype = 
$4 and  split_part(aggregationvalue.value,':',1) = $5  and 
Dec 18 10:59:25 devel postgres[6889]: [6-6]  
split_part(aggregationvalue.value,':',2) like $6  and  
split_part(aggregationvalue.value,':',3) like $7  ]




Re: [GENERAL] Server side large object functions

2008-12-18 Thread Howard Cole

George Weaver wrote:


My objective it to read a large object into memory as opposed to 
exporting it to a disk file.


I'm using version 8.3.3, psqlODBC 8.2.2 and vb.net.

If you use Npgsql (.NET library) rather than ODBC - there are functions 
for reading large objects into memory.


Howard Cole
www.selestial.com



Re: [GENERAL] Server side large object functions

2008-12-18 Thread George Weaver


- Original Message - 
From: Howard Cole howardn...@selestial.com


If you use Npgsql (.NET library) rather than ODBC - there are functions 
for reading large objects into memory.


Cool!

I've been meaning to have a look at Npgsql - maybe this is the nudge I need!

Thanks,
George 






Re: [GENERAL] 8.1.11 PREPARE problem?

2008-12-18 Thread Jeremiah Jahn
Moving on:
After digging through the JDBC source, it seems that the server is
returning a 'C' (Command Status, end of Execute) before returning
any data rows. So it doesn't look like JDBC is causing any problems.

I've tried skipping the first query and only running the second, and I
still get no results. But, as usual, when run by hand it does return results.


The SQL that I'm running ends up looking like this when run by hand:

1) works as a prepared statement:

select aggregationvalue.value as aggregationvalue$value,
       aggregationvalue.aggregatetype as aggregationvalue$aggregatetype,
       aggregationvalue.count as aggregationvalue$count,
       aggregationvalue.stoptime as aggregationvalue$stoptime,
       aggregationvalue.starttime as aggregationvalue$starttime
from aggregationvalue
where date_trunc('month', aggregationvalue.stoptime) between '2007-11-01' and '2008-12-01'
  and aggregationvalue.aggregatetype = 'MONTHLY'
  and split_part(aggregationvalue.value,':',1) = 'JUDICI_PAGE'
  and split_part(aggregationvalue.value,':',2) like '%'
  and split_part(aggregationvalue.value,':',3) like '%';

2) does not work as a prepared statement:

select aggregationvalue.value as aggregationvalue$value,
       aggregationvalue.aggregatetype as aggregationvalue$aggregatetype,
       aggregationvalue.count as aggregationvalue$count,
       aggregationvalue.stoptime as aggregationvalue$stoptime,
       aggregationvalue.starttime as aggregationvalue$starttime
from aggregationvalue
where date_trunc('day', aggregationvalue.stoptime) between '2008-12-18' and '2008-12-18'
  and aggregationvalue.aggregatetype = 'HOURLY'
  and split_part(aggregationvalue.value,':',1) = 'JUDICI_PAGE'
  and split_part(aggregationvalue.value,':',2) like '%'
  and split_part(aggregationvalue.value,':',3) like '%';





On Thu, 2008-12-18 at 11:21 -0600, Jeremiah Jahn wrote:
 This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
 have mentioned this problem. When run as a prepared statement the first
 statement will execute and return results, while the next two seem to
 execute, but return no results. When run by hand, not prepared , each
 statement runs just fine. when using an older version of the jdbc driver
 (7.3?), that does not call PREPARE, things work just fine. Looking at
 the logs below, it sure looks like the statements are running, given the
 different durations, which are correct for the differences between the
 statements. help please.
 
 
 
 
 Dec 18 10:59:15 devel postgres[6889]: [2-1] LOG:  duration: 358.005 ms  
 statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
 Dec 18 10:59:15 devel postgres[6889]: [2-2]  
 aggregationvalue$value,aggregationvalue.aggregatetype as 
 aggregationvalue$aggregatetype,aggregationvalue.count as
 Dec 18 10:59:15 devel postgres[6889]: [2-3]  
 aggregationvalue$count,aggregationvalue.stoptime as 
 aggregationvalue$stoptime,aggregationvalue.starttime as
 Dec 18 10:59:15 devel postgres[6889]: [2-4]  aggregationvalue$starttime from 
 aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) between $2 
 and $3  and
 Dec 18 10:59:15 devel postgres[6889]: [2-5]  aggregationvalue.aggregatetype = 
 $4 and  split_part(aggregationvalue.value,':',1) = $5  and 
 Dec 18 10:59:15 devel postgres[6889]: [2-6]  
 split_part(aggregationvalue.value,':',2) like $6  and  
 split_part(aggregationvalue.value,':',3) like $7  ]
 
 Dec 18 10:59:25 devel postgres[6889]: [4-1] LOG:  duration: 25.001 ms  
 statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
 Dec 18 10:59:25 devel postgres[6889]: [4-2]  
 aggregationvalue$value,aggregationvalue.aggregatetype as 
 aggregationvalue$aggregatetype,aggregationvalue.count as
 Dec 18 10:59:25 devel postgres[6889]: [4-3]  
 aggregationvalue$count,aggregationvalue.stoptime as 
 aggregationvalue$stoptime,aggregationvalue.starttime as
 Dec 18 10:59:25 devel postgres[6889]: [4-4]  aggregationvalue$starttime from 
 aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) between $2 
 and $3  and
 Dec 18 10:59:25 devel postgres[6889]: [4-5]  aggregationvalue.aggregatetype = 
 $4 and  split_part(aggregationvalue.value,':',1) = $5  and 
 Dec 18 10:59:25 devel postgres[6889]: [4-6]  
 split_part(aggregationvalue.value,':',2) like $6  and  
 split_part(aggregationvalue.value,':',3) like $7  ]
 
 Dec 18 10:59:25 devel postgres[6889]: [6-1] LOG:  duration: 2.000 ms  
 statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
 Dec 18 10:59:25 devel postgres[6889]: [6-2]  
 aggregationvalue$value,aggregationvalue.aggregatetype as 
 aggregationvalue$aggregatetype,aggregationvalue.count as
 Dec 18 10:59:25 devel postgres[6889]: [6-3]  
 aggregationvalue$count,aggregationvalue.stoptime as 
 aggregationvalue$stoptime,aggregationvalue.starttime as
 Dec 18 10:59:25 devel postgres[6889]: [6-4]  aggregationvalue$starttime from 
 aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) between $2 
 and $3  and
 Dec 18 10:59:25 devel postgres[6889]: [6-5]  

Re: [GENERAL] Error: Operator does not exist: char=integer

2008-12-18 Thread novnov

The error went away; I updated the admin tool I use (pg lightning admin) and
that seemed to help.


Thanks everyone.


Julius Tuskenis-2 wrote:
 
 Raymond O'Donnell wrote:
 A lot of previously automatic casts were removed in the 8.3 series -
 this is possibly one of them.

 You now need to cast explicitly in such cases, e.g.

   select '5'::integer;

 Ray.
   
 That is a good advice and a good practice. But the solution usually 
 takes time. For a quick (temporary) solution you could write your own 
 operator for handling char = integer cases.
 
 I had to do it once for an integer ILIKE text operator:
 
 CREATE OR REPLACE FUNCTION of_integer_ilike_text(prm_integer integer, 
 prm_text text)
   RETURNS boolean AS
 $BODY$BEGIN
 RETURN prm_integer::text ~~* prm_text ;
 END;$BODY$
   LANGUAGE 'plpgsql' VOLATILE
   COST 100;
 ALTER FUNCTION of_integer_ilike_text(integer, text) OWNER TO useris;
 
 CREATE OPERATOR ~~(
   PROCEDURE = of_integer_ilike_text,
   LEFTARG = int4,
   RIGHTARG = text);
 
 I'm sure you'll manage to do this for text = integer;
 
 Julius Tuskenis
 
 
 
 
 

-- 
View this message in context: 
http://www.nabble.com/Error%3A-Operator-does-not-exist%3A-%22char%22%3Dinteger-tp21067261p21079266.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] alocate table in memory and multui listener

2008-12-18 Thread Lennin Caro
I have two questions:

1. Does postgresql have a method to allocate a table in memory?
2. Can postgresql listen on multiple ports?


Thanks...


  




[GENERAL] Mirroring existing mysql setup

2008-12-18 Thread snacktime
Where I work we use mysql for a fairly busy website, and I'd like to
eventually start transitioning to postgres if possible.   The largest
obstacle is the lack of replication as a core feature.  I'm well aware
of the history behind why it's not in core, and I saw a post a while
back saying it would be in 8.4.  I'd like to say I think this is a
very good idea, and I know a number of shops personally that did not
go with postgres just for this reason.

So anyways our general setup is that we have one master replicating to
one slave.  We use the slave for generating various leaderboard stats
for our games.  Most of these get generated every 4 hours.   If we
wanted to duplicate this on postgres I'm not sure which replication
option would work best.  Last time I looked at slony you had to edit
configs for each table  you wanted to replicate, and the whole setup
was more complex then it needed to be.  If it's still like that, I
think we would lose more then we gain by moving to postgres.  Once
setup, the replication needs to be free of daily administration other
then routine automated tasks.  We add new tables/remove old ones
almost on a daily basis.

Now for one of the main things we don't like about mysql.  You can't
add indexes without locking the whole table, which means you can't go
back and add indexes later on a production app without shutting down
the whole system.  The effect this has had is that when we add new
features to our games that would normally require an additional
column, we have to add a new table since we can't add an index to the
old table.   When you add indexes in postgres, how much of a
performance hit will the database be taking while adding the index?
I haven't worked on a postgres installation that's as busy as our
mysql installation is.  We get roughly 3-4 million page views per day,
with each page view probably averaging 4-6 db queries.  Probably 20%
of these are cached.  In addition we have our slave which does far
fewer, but more complicated queries.  Quite a few of our tables will
gain thousands of rows per day, some tens of thousands.  Some of our
busiest tables have tens of millions of rows.  We could start to
archive some of these.

Chris



Re: [GENERAL] Mirroring existing mysql setup

2008-12-18 Thread Erik Jones


On Dec 18, 2008, at 11:32 AM, snacktime wrote:


Where I work we use mysql for a fairly busy website, and I'd like to
eventually start transitioning to postgres if possible.   The largest
obstacle is the lack of replication as a core feature.  I'm well aware
of the history behind why it's not in core, and I saw a post a while
back saying it would be in 8.4.  I'd like to say I think this is a
very good idea, and I know a number of shops personally that did not
go with postgres just for this reason.

So anyways our general setup is that we have one master replicating to
one slave.  We use the slave for generating various leaderboard stats
for our games.  Most of these get generated every 4 hours.   If we
wanted to duplicate this on postgres I'm not sure which replication
option would work best.  Last time I looked at slony you had to edit
configs for each table  you wanted to replicate, and the whole setup
was more complex then it needed to be.  If it's still like that, I
think we would lose more then we gain by moving to postgres.  Once
setup, the replication needs to be free of daily administration other
then routine automated tasks.  We add new tables/remove old ones
almost on a daily basis.


You should check out Londiste, part of the Skytools package of  
Postgres projects.  For simple, master-slave replication it's *loads*  
easier to set up and administer than Slony.  The only reason I could  
see to go with Slony right now is if you need some kind of complex  
setup with cascaded replication or what-not.  Adding and removing  
tables to/from the replication stream is also a cinch in Londiste but  
you *do* have to actually do it -- they don't get added automatically  
like in MySQL's built-in replication.  However, you may want to wait a  
few months with your fingers crossed to see if Hot Standby replication
is ready for 8.4 in March.



Now for one of the main things we don't like about mysql.  You can't
add indexes without locking the whole table, which means you can't go
back and add indexes later on a production app without shutting down
the whole system.   The effect his has had is that when we add new
features to our games that would normally require an additional
column, we have to add a new table since we can't add an index to the
old table.   When you add indexes in postgres, how much of a
performance hit will the database be taking while adding the index?
I haven't worked on a postgres installation that's as busy as our
mysql installation is.  We get roughly 3-4 million page views per day,
with each page view probably averaging 4-6 db queries.  Probably 20%
of these are cached.  In addition we have our slave which does far
fewer, but more complicated queries.  Quite a few of our tables will
gain thousands of rows per day, some tens of thousands.  Some of our
busiest tables have tens of millions of rows.  We could start to
archive some of these.


You can use CREATE INDEX CONCURRENTLY to avoid the table locks.   
However, that takes two passes over the data instead of one so there's  
a bigger IO hit.
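
A minimal sketch (the table and column names are illustrative):

-- Builds the index without taking a lock that blocks writes;
-- reads and writes continue while the index is built.
CREATE INDEX CONCURRENTLY idx_scores_game_id ON scores (game_id);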


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k








Re: [GENERAL] 8.1.11 PREPARE problem?

2008-12-18 Thread Tom Lane
Jeremiah Jahn jeremiah.j...@gmail.com writes:
 This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
 have mentioned this problem. When run as a prepared statement the first
 statement will execute and return results, while the next two seem to
 execute, but return no results.

It seems quite unlikely that there's a generic bug in prepared
statements that no one has encountered till now.  If you've got a real
bug here I'm sure it's a lot narrower case than that.  Can you put
together a self-contained test case?  Also, can you reproduce the
problem on something newer than 8.1.11?

regards, tom lane



Re: [GENERAL] 8.1.11 PREPARE problem?

2008-12-18 Thread Jeremiah Jahn
Thanks for the help.

Using "between CAST(? as timestamp) and CAST(? as timestamp)" fixes my problem.

Because my second query uses BETWEEN with the same day for both parameters,
nothing was being found while the parameters were treated as strings; as
timestamps it works fine. Apparently a simple query will convert the string
to a timestamp correctly, while the way I was doing things in Java was
forcing it to be a string, thus killing my query.
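
In SQL terms, the working form corresponds to something like this (a trimmed-down sketch against the table from the log excerpts above; the split_part conditions are omitted):

PREPARE agg_by_day(text, text, text, text) AS
    SELECT value, aggregatetype, count, stoptime, starttime
    FROM aggregationvalue
    WHERE date_trunc($1, stoptime)
          BETWEEN CAST($2 AS timestamp) AND CAST($3 AS timestamp)
      AND aggregatetype = $4;

EXECUTE agg_by_day('day', '2008-12-18', '2008-12-18', 'HOURLY');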

I'm using a rather dense index:
 CREATE INDEX stat_speed_big ON aggregationvalue (aggregatetype, value,
date_trunc('hour', stoptime), date_trunc('day', stoptime),
date_trunc('month', stoptime), date_trunc('year', stoptime),
split_part(aggregationvalue.value,':',1),
split_part(aggregationvalue.value,':',2),
split_part(aggregationvalue.value,':',3),
split_part(aggregationvalue.value,':',4),
split_part(aggregationvalue.value,':',5));

and I had been messing with my dynamic SQL enough to make sure it hits the
index every time, so I headed down the wrong track when it didn't work
consistently between applications and different driver versions, yet did
work correctly on the old driver version, which doesn't server-prepare
statements.

thanks again, if only for the moral support.



On Thu, 2008-12-18 at 14:52 -0500, Tom Lane wrote:
 Jeremiah Jahn jeremiah.j...@gmail.com writes:
  This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
  have mentioned this problem. When run as a prepared statement the first
  statement will execute and return results, while the next two seem to
  execute, but return no results.
 
 It seems quite unlikely that there's a generic bug in prepared
 statements that no one has encountered till now.  If you've got a real
 bug here I'm sure it's a lot narrower case than that.  Can you put
 together a self-contained test case?  Also, can you reproduce the
 problem on something newer than 8.1.11?
 
   regards, tom lane
 




Re: [GENERAL] Mirroring existing mysql setup

2008-12-18 Thread Scott Marlowe
On Thu, Dec 18, 2008 at 12:32 PM, snacktime snackt...@gmail.com wrote:
 Where I work we use mysql for a fairly busy website, and I'd like to
 eventually start transitioning to postgres if possible.   The largest
 obstacle is the lack of replication as a core feature.  I'm well aware
 of the history behind why it's not in core, and I saw a post a while
 back saying it would be in 8.4.  I'd like to say I think this is a
 very good idea, and I know a number of shops personally that did not
 go with postgres just for this reason.

Then they made an emotional decision, not a scientific one. :)  Whether
or not replication is built in comes WAY after a few other factors,
like

is it reliable?
is it the right kind of replication?
Is it supported during upgrades? (slony was designed to allow
different versions of pgsql to replicate to one another)
What's the overhead for running it?
Are there any showstopper bugs that would affect you in production?
Can I start replication on disparate machines without downtime?
(another slony feature, no need to take down the master to initiate
replication)

 So anyways our general setup is that we have one master replicating to
 one slave.  We use the slave for generating various leaderboard stats
 for our games.  Most of these get generated every 4 hours.   If we
 wanted to duplicate this on postgres I'm not sure which replication
 option would work best.  Last time I looked at slony you had to edit
 configs for each table  you wanted to replicate, and the whole setup
 was more complex then it needed to be.

With a fairly simple script it's quite easy to automate.  That's how I
do it.  But slony is aimed at a narrow wedge of users, and other
replication engines may be a better choice for what you're doing.
They all make compromises in one area or another.

 If it's still like that, I
 think we would lose more then we gain by moving to postgres.  Once
 setup, the replication needs to be free of daily administration other
 then routine automated tasks.  We add new tables/remove old ones
 almost on a daily basis.

Then I would question your design.  But yeah, slony is a bad choice
here.  pgpool might work, but in general ddl replication in pgsql is
problematic for most replication solutions.

 Now for one of the main things we don't like about mysql.  You can't
 add indexes without locking the whole table, which means you can't go
 back and add indexes later on a production app without shutting down
 the whole system.   The effect his has had is that when we add new
 features to our games that would normally require an additional
 column, we have to add a new table since we can't add an index to the
 old table.   When you add indexes in postgres, how much of a
 performance hit will the database be taking while adding the index?

Actually, this is one of my favorite things about using slony and
pgsql.  With the create index concurrently command, index creation
does not block, and you can have different indexes on the master and
slave.  So, minimal indexes with fast updates on the master, and
custom indexes that allow reports run faster on the slave.

 I haven't worked on a postgres installation that's as busy as our
 mysql installation is.  We get roughly 3-4 million page views per day,
 with each page view probably averaging 4-6 db queries.

It's 1pm here and we've had ~4Million db accesses on our machines.
Looking at the logs for yesterday we had ~10Million db queries run.
This is on our master only.  We see about twice as many selects on the
slave.  These machines are overpowered for what we're doing, with one
12 disk RAID-10 array each, and a pair of drives for os/pg_xlog.  32
Gig of RAM, 8 Opteron cores.  $11,500 each.   These machines run at a load
factor of 1.5 to 2.5 all day, and can handle load factors into the
dozens before any serious slowdowns occur.  I'm sure with a 6 drive
RAID-10 and a two drive RAID-1 you could get a fast enough machine for
what you're doing.

 Probably 20%
 of these are cached.  In addition we have our slave which does far
 fewer, but more complicated queries.

You could definitely look at optimizing each db differently for each job.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] unable to assign value to composite column in trigger

2008-12-18 Thread Ketema Harris

The following is a snippet from a trigger i am using:

_mycompositeType.user_id = (OLD).mycompositeType.user_id;
...do some stuff...  --notice that assigning from the trigger record  
works


but later on...

(new).mycompositeType.transaction_id := _transaction_id;

Fails with: syntax error at or near "("

I have tried removing the () and it fails with: syntax error at or
near "new"


non composite column can be assigned to in the trigger just fine:

new.other_columm := 5;  --gives no issues

How can a composite column be assigned to inside a trigger?

Thanks

Ketema J. Harris
www.ketema.net
ket...@ketema.net
ketemaj on iChat



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unable to assign value to composite column in trigger

2008-12-18 Thread Ketema Harris

On Dec 18, 2008, at 4:19 PM, Ketema Harris wrote:


The following is a snippet from a trigger i am using:

_mycompositeType.user_id = (OLD).mycompositeType.user_id;
...do some stuff...  --notice that assigning from the trigger record  
works


but later on...

(new).mycompositeType.transaction_id := _transaction_id;

Fails with: syntax error at or near "("

I have tried removing the () and it fails with: syntax error at or
near "new"


non composite column can be assigned to in the trigger just fine:

new.other_columm := 5;  --gives no issues

How can a composite column be assigned to inside a trigger?

Thanks

Ketema J. Harris
www.ketema.net
ket...@ketema.net
ketemaj on iChat





I have answered my own question.  It appears that any composite column  
elements in the NEW and OLD variables within a trigger can be read  
from using the () accessor syntax, but not assigned to.  What I had to  
do was create a variable of the composite type and then assign the  
whole column.


EX:

_var mycompositeType;

_var.element  := 1;
_var.element2 := 2;

new.compositeCol := _var;

It seems a little strange that the same accessor syntax can't be used  
on both sides of the assignment operator, but this method solved my  
problem.
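
Put together as a runnable sketch (the type, table and trigger names below are
all invented, purely to show the read / modify / assign-whole-column pattern):

  CREATE TYPE mycompositeType AS (user_id integer, transaction_id integer);
  CREATE TABLE accounts (other_column integer, compositeCol mycompositeType);

  CREATE OR REPLACE FUNCTION set_transaction_id() RETURNS trigger AS $$
  DECLARE
      _var mycompositeType;
  BEGIN
      _var := NEW.compositeCol;       -- reading the whole composite column works
      _var.transaction_id := 5;       -- modify the field on the local variable
      NEW.compositeCol := _var;       -- assign the whole composite back to NEW
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER accounts_set_transaction_id
      BEFORE INSERT OR UPDATE ON accounts
      FOR EACH ROW EXECUTE PROCEDURE set_transaction_id();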




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL plan in functions

2008-12-18 Thread Raymond O'Donnell
On 18/12/2008 12:40, Julius Tuskenis wrote:
 Yes, Raymond - I know how to return a record set from a function using FOR,
 but since version 8.3 there is an option of using RETURN QUERY SELECT
 something FROM somewhere;. So if it works for SELECT, why should it not
 work for EXECUTE?

Oh - I didn't know about that, so I'm afraid I don't know why it's
causing a problem for you :-)
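
For what it's worth, the plain RETURN QUERY form should look roughly like this
on 8.3 (function and table names invented):

  CREATE OR REPLACE FUNCTION get_items(p_group integer)
  RETURNS SETOF items AS $$
  BEGIN
      RETURN QUERY SELECT * FROM items WHERE group_id = p_group;
  END;
  $$ LANGUAGE plpgsql;

As far as I can tell the dynamic-SQL variant, RETURN QUERY EXECUTE, only
appears in 8.4, so on 8.3 a dynamic query would still need the old
FOR rec IN EXECUTE ... LOOP RETURN NEXT rec; END LOOP; approach.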

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need help - Regarding Data Compression in PostgreSQL.

2008-12-18 Thread salma
Hi,
 
I'd like to know:
1. If it is possible to compress data in PostgreSQL.
2. What kind of compression, if any, is available in PostgreSQL.
3. If compression is available, what kind of disk space savings can I expect 
using it.
 
I'd appreciate it if you can take some time and get me this information.
 
Thanks,
Salma

[GENERAL] Hi iam the beginner

2008-12-18 Thread sai srujan
 Hi
  This is srujan and I am a beginner with PostgreSQL 8.1. I have
just stopped using MySQL and I would like to use the dump (.sql file)
created in it. I would like to transfer the databases in MySQL to PostgreSQL
8.1. Please do reply. I am waiting for it.
Thanks in Advance


[GENERAL] Postgre Tables problem

2008-12-18 Thread Danail Pavlov

Dear Sir/Madam,



I have to create a PostgreSQL database programmatically from VB .Net and after
that create tables.
After successfully creating the database, when I try to create a table in this
DB I get this error: schema "dbName" does not exist, Code 3F000.
I'm sure that the name is correct, because I use the same string I used when creating
the database.
How can I solve this problem?

Best Regards, 
Danail Valkov




Re: [GENERAL] 8.1.11 PREPARE problem?

2008-12-18 Thread Jeremiah Jahn
doh! my second prepared statement is getting prepared as all text, when
the second and third parameters should be timestamps.

argh! Is there some sort of logging that says how prepared statements
are getting prepared?
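
For comparison, at the SQL level the distinction would be something like this
(a trimmed-down version of the statement above; the point is declaring $2 and
$3 as timestamps instead of text):

  PREPARE agg_range (text, timestamp, timestamp, text) AS
      SELECT aggregationvalue.value, aggregationvalue.stoptime, aggregationvalue.starttime
        FROM aggregationvalue
       WHERE date_trunc($1, aggregationvalue.stoptime) BETWEEN $2 AND $3
         AND aggregationvalue.aggregatetype = $4;

  EXECUTE agg_range('day', '2008-12-18', '2008-12-18', 'HOURLY');

On the JDBC side that presumably means binding the range values with
setTimestamp() rather than setString(), if I'm reading this right.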


On Thu, 2008-12-18 at 13:13 -0600, Jeremiah Jahn wrote:
 moving on:
 After digging through the JDBC source. It seems that the server is
 returning a 'C':  // Command Status (end of Execute) before returning
 any data rows. So it doesn't look like JDBC is causing any problems.
 
 I've tried not running the first, and only running the second, and I get
 still no results. But as usual when running by hand, I do get results.
 
 
 the SQL that i'm running ends up looking like these when run by hand:
 
 1) works as a prepared statement
 select aggregationvalue.value as aggregationvalue
 $value,aggregationvalue.aggregatetype as aggregationvalue
 $aggregatetype,aggregationvalue.count as aggregationvalue
 $count,aggregationvalue.stoptime as aggregationvalue
 $stoptime,aggregationvalue.starttime as aggregationvalue$starttime from
 aggregationvalue where  date_trunc('month', aggregationvalue.stoptime)
 between '2007-11-01' and '2008-12-01'  and
 aggregationvalue.aggregatetype = 'MONTHLY' and
 split_part(aggregationvalue.value,':',1) = 'JUDICI_PAGE'  and
 split_part(aggregationvalue.value,':',2) like '%'  and
 split_part(aggregationvalue.value,':',3) like '%'  ;
 
 2) does not work as prepared statement
 select aggregationvalue.value as aggregationvalue
 $value,aggregationvalue.aggregatetype as aggregationvalue
 $aggregatetype,aggregationvalue.count as aggregationvalue
 $count,aggregationvalue.stoptime as aggregationvalue
 $stoptime,aggregationvalue.starttime as aggregationvalue$starttime from
 aggregationvalue where  date_trunc('day', aggregationvalue.stoptime)
 between '2008-12-18' and '2008-12-18'  and
 aggregationvalue.aggregatetype = 'HOURLY' and
 split_part(aggregationvalue.value,':',1) = 'JUDICI_PAGE'  and
 split_part(aggregationvalue.value,':',2) like '%'  and
 split_part(aggregationvalue.value,':',3) like '%'  ;
 
 
 
 
 
 On Thu, 2008-12-18 at 11:21 -0600, Jeremiah Jahn wrote:
  This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
  have mentioned this problem. When run as a prepared statement the first
  statement will execute and return results, while the next two seem to
  execute, but return no results. When run by hand, not prepared , each
  statement runs just fine. when using an older version of the jdbc driver
  (7.3?), that does not call PREPARE, things work just fine. Looking at
  the logs below, it sure looks like the statements are running, given the
  different durations, which are correct for the differences between the
  statements. help please.
  
  
  
  
  Dec 18 10:59:15 devel postgres[6889]: [2-1] LOG:  duration: 358.005 ms  
  statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
  Dec 18 10:59:15 devel postgres[6889]: [2-2]  
  aggregationvalue$value,aggregationvalue.aggregatetype as 
  aggregationvalue$aggregatetype,aggregationvalue.count as
  Dec 18 10:59:15 devel postgres[6889]: [2-3]  
  aggregationvalue$count,aggregationvalue.stoptime as 
  aggregationvalue$stoptime,aggregationvalue.starttime as
  Dec 18 10:59:15 devel postgres[6889]: [2-4]  aggregationvalue$starttime 
  from aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) 
  between $2 and $3  and
  Dec 18 10:59:15 devel postgres[6889]: [2-5]  aggregationvalue.aggregatetype 
  = $4 and  split_part(aggregationvalue.value,':',1) = $5  and 
  Dec 18 10:59:15 devel postgres[6889]: [2-6]  
  split_part(aggregationvalue.value,':',2) like $6  and  
  split_part(aggregationvalue.value,':',3) like $7  ]
  
  Dec 18 10:59:25 devel postgres[6889]: [4-1] LOG:  duration: 25.001 ms  
  statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
  Dec 18 10:59:25 devel postgres[6889]: [4-2]  
  aggregationvalue$value,aggregationvalue.aggregatetype as 
  aggregationvalue$aggregatetype,aggregationvalue.count as
  Dec 18 10:59:25 devel postgres[6889]: [4-3]  
  aggregationvalue$count,aggregationvalue.stoptime as 
  aggregationvalue$stoptime,aggregationvalue.starttime as
  Dec 18 10:59:25 devel postgres[6889]: [4-4]  aggregationvalue$starttime 
  from aggregationvalue where  date_trunc($1, aggregationvalue.stoptime) 
  between $2 and $3  and
  Dec 18 10:59:25 devel postgres[6889]: [4-5]  aggregationvalue.aggregatetype 
  = $4 and  split_part(aggregationvalue.value,':',1) = $5  and 
  Dec 18 10:59:25 devel postgres[6889]: [4-6]  
  split_part(aggregationvalue.value,':',2) like $6  and  
  split_part(aggregationvalue.value,':',3) like $7  ]
  
  Dec 18 10:59:25 devel postgres[6889]: [6-1] LOG:  duration: 2.000 ms  
  statement: EXECUTE unnamed  [PREPARE:select aggregationvalue.value as
  Dec 18 10:59:25 devel postgres[6889]: [6-2]  
  aggregationvalue$value,aggregationvalue.aggregatetype as 
  aggregationvalue$aggregatetype,aggregationvalue.count as
  Dec 18 

Re: [GENERAL] Lost password

2008-12-18 Thread Johan Nel

Martin,

You need to find the file pg_hba.conf.

Inside of it find the lines showing:
# IPv4 local connections:
host    all         all         127.0.0.1/32          md5

Change the md5 to trust; make sure you do this only for the localhost line.

Save the file and restart your postgres server.

Use pgAdmin and connect to the server.  When prompted for a password, 
give it an empty password.


Press the SQL toolbar button and run the following script:

ALTER USER yourusername WITH PASSWORD 'newpassword';

Edit the pg_hba.conf file again and change the trust back to what it was.

Restart the server.

HTH,

Johan Nel
Pretoria, South Africa.

Martin Roach wrote:

Hi
 
I started the installation of Postgres, got distracted, and then started
again, but I forgot my password. I have received the info below:
 
Now what?
If you have access to a shell account on the machine PostgreSQL is
running on, and your shell works as the same user as Postgres itself, or
root, the solution is easy.

Find your pg_hba.conf file. It might be in one of several places, so try:

* $ locate pg_hba.conf
* find /var/lib/ -type f -name pg_hba.conf
* find /etc -type f -name pg_hba.conf
* find / -type f -name pg_hba.conf 


Of course the last option is your last resort - it will take a long time.
 
However, I'm not that computer savvy and don't know how to go about the 
above. How come you can't just delete the software and try again?
 
Any help in finding these files and working out what to do would be a 
massive help.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgre Tables problem

2008-12-18 Thread Alan Hodgson
On Tuesday 16 December 2008, Danail Pavlov danail...@hotmail.com wrote:
 I have to create a postgre database programmatically from VB .Net and
 after that to create a tables. After successfully creating the Data Base,
 when a try to create a table in this DB i have this error - schema
 dbName does not exist, Code 3F000 I'm sure that name is correct,
 because i use the same string from creating DataBase. How can i solve
 this problem?

Don't call your table dbName.table_name, just use table_name.
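
In other words, connect to the newly created database first and then create
the table unqualified, or create the schema explicitly if you really want the
qualified name (names below are invented):

  -- connected to the new database:
  CREATE TABLE table_name (id integer);

  -- or, if you actually want a schema of that name inside the database:
  CREATE SCHEMA dbname;
  CREATE TABLE dbname.table_name (id integer);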

-- 
Panics do not destroy capital; they merely reveal the extent to which it has 
been destroyed by its betrayal into hopelessly unproductive works.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error: Operator does not exist: char=integer

2008-12-18 Thread Aarni
On Thursday 18 December 2008 12:46:38 Peter Eisentraut wrote:
 novnov wrote:
  I have restored a postgres 8.2.4-1 db onto a postgres 8.3.1-1 server, and
  when I try to work with a table I get this error:
 
  Error: Operator does not exist: char = integer
 
  Hopefully that is enough of a clue to be useful. Maybe this is the first
  time I've tried moving one of my non-trivial pg projects to a
  significantly different version of postgres; is there a conversion
  process that helps with moving between versions?

 Yes, reading the release notes. ;-)  I think you will find your problem
 explained there.

Hi,

I had similar errors here and there after moving to 8.3.3 from 8.2.x - no more 
automatic casts.

ERROR:  operator does not exist: character varying = integer at character 286
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts.

Quick fix to sql statements eg. 

... WHERE CAST (your_char AS INTEGER) = integer ...
... WHERE CAST (your_char AS INTEGER) IN (1,2,3,...)

BR,

-- 
Aarni Ruuhimäki


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Error: Operator does not exist: char=integer

2008-12-18 Thread Gregory Stark
Aarni aa...@kymi.com writes:

 ERROR:  operator does not exist: character varying = integer at character 286
 HINT:  No operator matches the given name and argument type(s). You might 
 need 
 to add explicit type casts.

 Quick fix to sql statements eg. 

 ... WHERE CAST (your_char AS INTEGER) = integer ...
 ... WHERE CAST (your_char AS INTEGER) IN (1,2,3,...)

Note that this is *not* what was happening in 8.2. There it was casting them
to text and doing a text comparison. In the case of integer and equality
they're probably equivalent. However < and > will behave quite differently.
That's why the casts disappeared -- you probably weren't running the queries
you thought you were running in 8.2 and previously.
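
A quick way to see the difference (the first comparison resolves as text, the
second as integers):

  SELECT '9' > '10';           -- true:  compared as text
  SELECT '9'::integer > 10;    -- false: compared as numbers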

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hi iam the beginner

2008-12-18 Thread Scott Marlowe
On Mon, Dec 15, 2008 at 11:02 PM, sai srujan creativesru...@gmail.com wrote:
  Hi
   This is srujan and I am beginner of postgresql 8.1 and I have
 just stopped using mysql and i would like to use the dump (.sql file)
 created in it. I would like transfer the databases in my sql to postgresql
 8.1. Please do reply. I am waiting for it.

If possible, go ahead and jump right into 8.3.5.  There's been quite a
few improvements over 8.1 in it.  And unlike Oracle, when a new pg
version is released, it's considered to really be production ready
right away, or at least within a month or so.  The 8.3 series is nearly a year
old now, and we've been running 8.3 in production since 8.3.3 came out and
it's fantastic.

The easiest way to transfer your database over is to dump the schema
first, and get that working on pgsql.  Beware mysqlisms, like quoting
identifiers with back ticks (worst idea ever really) and odd type
names, like tinyint or unsigned ints.  None of that stuff is standard.

Once you've got your schema to import, then dump the data from mysql
as either CSV or insert statements, whichever is easier, and import
that.  You might need to whip up some skills in perl, bash, php, awk,
or sed to massage your data to go in right.
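
If you go the CSV route, the import on the pg side is just COPY (table,
columns and file path below are invented):

  COPY users (id, name, created_at)
      FROM '/tmp/users.csv'
      WITH CSV;

(Or \copy from psql if you're not running it as superuser.)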

And if you get stuck, post here...  Best of luck.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help - Regarding Data Compression in PostgreSQL.

2008-12-18 Thread Scott Marlowe
On Tue, Dec 16, 2008 at 6:52 AM, salma salma.kha...@applabs.com wrote:
 Hi,

 I'd like to know:
 1. If it is possible to compress data in the PostgreSQL.

Ayup.  Anything over a standard size gets compressed out of line into
TOAST tables.

 2. What kind of compression, if any, is available in PostgreSQL.

Standard unix compress I believe

 3. If compression is available, what kind of disk space savings can I expect
 using it.

depends on how compressible your data is.  The compression that's
built into pgsql is focused on speed, not compression.  So don't
expect 99% compression or anything.  But on highly compressible data
you can expect it to compress quite well.  Do an experiment with du
and see what you get.
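
Or from inside the database, something like this (table name invented):

  SELECT pg_size_pretty(pg_relation_size('mytable'))       AS main_heap,
         pg_size_pretty(pg_total_relation_size('mytable')) AS total_incl_toast_and_indexes;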

A lot of this info is probably in the docs somewhere, but I'm not sure
it's a faq or not.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hi iam the beginner

2008-12-18 Thread Pedro Doria Meunier
(sorry all, I *couldn't* resist!)

Scott ...

In all these years that's the first time I've ever 'heard' the term massage your 
data !!!
All sorts of deviant thoughts come to mind!
LOL

BR,

Pedro Doria Meunier
GSM: +351961720188
Skype: pdoriam

On Friday 19 December 2008 12:03:26 am Scott Marlowe wrote:
 On Mon, Dec 15, 2008 at 11:02 PM, sai srujan creativesru...@gmail.com 
wrote:
   Hi
This is srujan and I am beginner of postgresql 8.1 and I
  have just stopped using mysql and i would like to use the dump (.sql
  file) created in it. I would like transfer the databases in my sql to
  postgresql 8.1. Please do reply. I am waiting for it.

 If possible, go ahead and jump right into 8.3.5.  There's been quite a
 few improvements over 8.1 in it.  And unlike Oracle, when a new pg
 version is released, it's considered to really be production ready
 right away, or at least within a month or so.  8.3.5 is a year old
 now, and we've been running 8.3 in production since 8.3.3 came out and
 it's fantastic.

 The easiest way to transfer your database over is to dump the schema
 first, and get that working on pgsql.  Beware mysqlisms, like quoting
 identifiers with back ticks (worst idea ever really) and odd type
 names, like tinyint or unsigned ints.  None of that stuff is standard.

 Once you've got your schema to import, then dump the data from mysql
 as either CSV or insert statements, whichever is easier, and import
 that.  You might need to whip up some skills in perl, bash, php, awk,
 or sed to massage your data to go in right.

 And if you get stuck, post here...  Best of luck.






Re: [GENERAL] Need help - Regarding Data Compression in PostgreSQL.

2008-12-18 Thread Alvaro Herrera
Scott Marlowe wrote:
 On Tue, Dec 16, 2008 at 6:52 AM, salma salma.kha...@applabs.com wrote:

  2. What kind of compression, if any, is available in PostgreSQL.
 
 Standard unix compress I believe

No, it's our own LZ implementation, which is focused on speed but does
not compress as well as gzip or compress (see
src/backend/utils/adt/pg_lzcompress.c for more details)

  3. If compression is available, what kind of disk space savings can I expect
  using it.
 
 depends on how compressible your data is.  The compression that's
 built into pgsql is focused on speed, not compression.  So don't
 expect 99% compression or anything.

Well, you can compress 200 constant bytes to 4 with our algorithm, so
it's 98% :-) (plus the varlena header though)
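
And if you need to influence it per column, the storage strategy can be
changed, e.g. (table and column names invented):

  -- EXTENDED (the default for most varlena types): compression plus out-of-line storage
  -- EXTERNAL: out-of-line storage but no compression
  -- MAIN:     prefer compression, keep the value in the main table if possible
  -- PLAIN:    no compression, no out-of-line storage
  ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;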

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hi iam the beginner

2008-12-18 Thread Star Liu
On Tue, Dec 16, 2008 at 2:02 PM, sai srujan creativesru...@gmail.com wrote:
  Hi
   This is srujan and I am beginner of postgresql 8.1 and I have
 just stopped using mysql and i would like to use the dump (.sql file)
 created in it. I would like transfer the databases in my sql to postgresql
 8.1. Please do reply. I am waiting for it.
 Thanks in Advance

I'm not able to help you, for I haven't used mysql, but I'm wondering
why you want to stop using mysql? :)



-- 
I'm a web developer using
debian+mono(C#)+postgresql+xhtml+javascript+xsl+xml+css
my home is http://www.starliu.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hi iam the beginner

2008-12-18 Thread ries van Twisk


On Dec 18, 2008, at 10:19 PM, Star Liu wrote:

On Tue, Dec 16, 2008 at 2:02 PM, sai srujan creativesru...@gmail.com wrote:

Hi
  This is srujan and I am beginner of postgresql 8.1 and I have
just stopped using mysql and i would like to use the dump (.sql file)
created in it. I would like transfer the databases in my sql to postgresql
8.1. Please do reply. I am waiting for it.
Thanks in Advance


I'm not able to help you, for I haven't used mysql, but I'm wondering
why you want to stop using mysql? :)




I used to make a schema dump and modify that in a text editor.

Then I use JasperETL to transfer the data from one DB to another DB.

Ries







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help - Regarding Data Compression in PostgreSQL.

2008-12-18 Thread salma
I was starting to feel that nobody wanted to answer those questions, but I finally got a 
reply :-)

Thanks a lot Scott.

- Salma

- Original Message - 
From: Scott Marlowe scott.marl...@gmail.com

To: salma salma.kha...@applabs.com
Cc: pgsql-nov...@postgresql.org; pgsql-general@postgresql.org
Sent: Friday, December 19, 2008 5:36 AM
Subject: Re: [GENERAL] Need help - Regarding Data Compression in PostgreSQL.



On Tue, Dec 16, 2008 at 6:52 AM, salma salma.kha...@applabs.com wrote:

Hi,

I'd like to know:
1. If it is possible to compress data in the PostgreSQL.


Ayup.  Anything over a standard size gets compressed out of line into
TOAST tables.


2. What kind of compression, if any, is available in PostgreSQL.


Standard unix compress I believe

3. If compression is available, what kind of disk space savings can I expect
using it.


depends on how compressible your data is.  The compression that's
built into pgsql is focused on speed, not compression.  So don't
expect 99% compression or anything.  But on highly compressible data
you can expect it to compress quite well.  Do an experiment with du
and see what you get.

A lot of this info is probably in the docs somewhere, but I'm not sure
it's a faq or not. 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general