Re: [SQL] Change of data type

2006-08-08 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2006-08-07 12:38:20 -0500:
>Hello everybody. Excuse me, how can I change the data type of a field?
> I currently have:
>   
>material character(30)  
> 
>but I now want the field in text type like this:
>  
>material  text
> 
>Does somebody know if ALTER TABLE has an option to do this, or how
> else can I do it?

http://www.postgresql.org/docs/8.1/static/sql-altertable.html
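
For reference, a sketch of the relevant form from that page (requires
8.0 or later; the table name "mytable" is hypothetical, the column
comes from your post):

ALTER TABLE mytable ALTER COLUMN material TYPE text;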

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991



[SQL] Query response time

2006-08-08 Thread Jonathan Sinclair

Hi all,

I am using PostgreSQL 7.4 and having some serious performance issues.
Trawling through the archives and previous posts, the only advice I
could find was either to run vacuum or to set the fsync flag to false.

I am using tables that contain only approx 2GB of data. However,
performing a number of simple conditional select statements takes a
great deal of time. Putting limits on the data obviously reduces the
time, but there is still a delay. (Note: on one particular query, with
the limit set to 538 the query returns in under 2 minutes; if the limit
becomes 539, the query runs indefinitely!)
From previous experience I know these delays are longer than under both
Informix and MySQL. In some instances it takes so long I end up having
to kill the query.

The install was performed via yum onto a RAID server running CentOS. I
am sure there is something fundamentally wrong, for I can't believe that
Postgres would have the reputation it has based on the statistics I'm
getting. Does anyone have any advice?

The data I am using was imported from an Informix system as part of a
migration strategy.
I know this is a long shot, but I hope someone can shed some light.

Regards,

Jonathan 





Re: [SQL] Query response time

2006-08-08 Thread Chris Mair

> I am using PostgreSQL 7.4 and having some serious performance issues.
> Trawling through the archives and previous posts, the only advice I
> could find was either to run vacuum or to set the fsync flag to false.
> 
> I am using tables that contain only approx 2GB of data. However,
> performing a number of simple conditional select statements takes a
> great deal of time. Putting limits on the data obviously reduces the
> time, but there is still a delay. (Note: on one particular query, with
> the limit set to 538 the query returns in under 2 minutes; if the limit
> becomes 539, the query runs indefinitely!)
> From previous experience I know these delays are longer than under both
> Informix and MySQL. In some instances it takes so long I end up having
> to kill the query.

Please run the command "ANALYZE" on your database (you can do that from
the psql shell), then post to this list:

- your query
- the output of your query run with EXPLAIN prepended
- the layout of the tables involved in the query
  ("\d tablename" from psql)


> The install was performed via yum onto a RAID server running CentOS. I
> am sure there is something fundamentally wrong, for I can't believe that
> Postgres would have the reputation it has based on the statistics I'm
> getting. Does anyone have any advice?

> The data I am using was imported from an Informix system as part of a
> migration strategy.
> I know this is a long shot, but I hope someone can shed some light.

If you're on [email protected] as well, we might continue
the discussion there.

Bye :)
Chris.


-- 

Chris Mair
http://www.1006.org





Re: [SQL] Query response time

2006-08-08 Thread Richard Broersma Jr
> I am using PostgreSQL 7.4 and having some serious performance issues.
> Trawling through the archives and previous posts, the only advice I
> could find was either to run vacuum or to set the fsync flag to false.
> 
> I am using tables that contain only approx 2GB of data. However,
> performing a number of simple conditional select statements takes a
> great deal of time. Putting limits on the data obviously reduces the
> time, but there is still a delay. (Note: on one particular query, with
> the limit set to 538 the query returns in under 2 minutes; if the limit
> becomes 539, the query runs indefinitely!)
> From previous experience I know these delays are longer than under both
> Informix and MySQL. In some instances it takes so long I end up having
> to kill the query.
> 
> The install was performed via yum onto a RAID server running CentOS. I
> am sure there is something fundamentally wrong, for I can't believe that
> Postgres would have the reputation it has based on the statistics I'm
> getting. Does anyone have any advice?
> 
> The data I am using was imported from an Informix system as part of a
> migration strategy.
> I know this is a long shot, but I hope someone can shed some light.

Are the update/insert queries slow, or is it the select queries that
are taking a while?

For select queries, an EXPLAIN ANALYZE of the offending query would be
helpful. Also, in addition to vacuuming, you may want to reindex your db
in order to clean all of the dead tuples out of your indexes.

For heavy insert/update loads, check your postgres logs to see if any
messages suggest increasing your checkpoint_segments.

If that is the case, try increasing checkpoint_segments and try moving
your pg_xlog to a different spindle.

Regards,

Richard Broersma Jr.



[SQL] Function Temp Table Woes

2006-08-08 Thread Chad Voelker
Hello,

I'm a newbie to plpgsql functions, so any help I can
get would be appreciated. I have a little history with
MS SQL Server 2000, so this may be where I'm messed up ;)

The Goal: Return a set of records from one table based
on entries in a temporary table. I'd expect this to be
a commonly requested functionality. After reading the
docs, support lists and googling, I haven't come up
with a clean solution. My query is at the end of this
post.

I've come to the conclusion that returning a REFCURSOR
is the best approach, but I will take other
suggestions. My current issue is that the temp table
(tt_occ_units) is not being dropped after commit. The
second call of this function produces the 'relation
already exists' error.

I've learned that there is an implicit transaction
around every function, and I was hoping that
transaction would force the commit and drop the table,
which tells me the table shouldn't be there if I
immediately call it again. For reference, I am calling
this function using SQL similar to: BEGIN; SELECT
sUnitsByOccStatus('c', FALSE, '08/07/2006'); FETCH ALL
IN c; COMMIT;

Note that I am actually making this call from PHP.
But, after I get over this hurdle, I'm sure I can make
it work from there.

Any thoughts?
Thanks in advance.

-Chad


--Current function text
CREATE OR REPLACE FUNCTION sUnitsByOccStatus (cur REFCURSOR,
    occupied BOOLEAN, refDate DATE) RETURNS REFCURSOR
AS $$
BEGIN
    CREATE TEMP TABLE tt_occ_units (unit_id INTEGER) ON COMMIT DROP;

    -- Get ids for all occupied units
    SELECT u.id INTO tt_occ_units
    FROM tbl_unit u
        INNER JOIN tbl_tenant_unit tu ON u.id = tu.unit_id
        INNER JOIN tbl_rent r ON tu.rent_id = r.id
    WHERE r.date_start < refDate AND r.date_end > refDate;

    -- Return tbl_unit records for occupied or not occupied,
    -- depending on what the caller wants.
    IF occupied THEN
        OPEN cur FOR SELECT u.* FROM tbl_unit u
            WHERE u.id IN (SELECT unit_id FROM tt_occ_units);
    ELSE
        OPEN cur FOR SELECT u.* FROM tbl_unit u
            WHERE u.id NOT IN (SELECT unit_id FROM tt_occ_units);
    END IF;
    RETURN cur;
END;
$$ LANGUAGE plpgsql STRICT VOLATILE;




Re: [SQL] Function Temp Table Woes

2006-08-08 Thread Michael Fuhr
On Tue, Aug 08, 2006 at 10:24:02AM -0700, Chad Voelker wrote:
> The Goal: Return a set of records from one table based
> on entries in a temporary table. I'd expect this to be
> a commonly requested functionality. After reading the
> docs, support lists and googling, I haven't come up
> with a clean solution. My query is at the end of this
> post.
> 
> I've come to the conclusion that returning a REFCURSOR
> is the best approach, but I will take other suggestions.

A set-returning function would probably be easier to use; search
for examples of functions declared with "RETURNS SETOF type".
Using views instead of a function might also be possible.
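
For illustration only, a minimal set-returning sketch against the
tables from your post (the function name is made up, and it folds the
temp-table lookup into a single query rather than reproducing your
original logic):

CREATE OR REPLACE FUNCTION units_by_occ_status(BOOLEAN, DATE)
RETURNS SETOF tbl_unit AS $$
    -- $1 = occupied flag, $2 = reference date.  Comparing the IN
    -- test against $1 selects either the occupied or the free units.
    SELECT u.* FROM tbl_unit u
    WHERE (u.id IN (SELECT tu.unit_id
                    FROM tbl_tenant_unit tu
                    JOIN tbl_rent r ON tu.rent_id = r.id
                    WHERE r.date_start < $2 AND r.date_end > $2)) = $1;
$$ LANGUAGE sql STABLE;

-- Usage: SELECT * FROM units_by_occ_status(FALSE, '2006-08-07');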

> My current issue is that the temp table (tt_occ_units) is
> not being dropped after commit. The second call of this
> function produces the 'relation already exists' error.

The temporary table is indeed being dropped; the problem is that
the function creates another non-temporary table with the same name
and that table isn't being dropped:

>     CREATE TEMP TABLE tt_occ_units (unit_id INTEGER) ON COMMIT DROP;
> 
>     -- Get ids for all occupied units
>     SELECT u.id INTO tt_occ_units

The first command above creates the temporary table, but then SELECT
INTO creates a non-temporary table with the same name in a different
schema.  Here's a simple example that shows what's happening:

CREATE FUNCTION test() RETURNS void AS $$
BEGIN
CREATE TEMP TABLE foo (x integer) ON COMMIT DROP;
SELECT x INTO foo FROM (SELECT 1::integer AS x) AS s;
END;
$$ LANGUAGE plpgsql;

test=> \dt *.foo
No matching relations found.
test=> BEGIN;
BEGIN
test=> SELECT test();
 test 
------
 
(1 row)

test=> \dt *.foo
        List of relations
  Schema   | Name | Type  | Owner 
-----------+------+-------+-------
 pg_temp_1 | foo  | table | mfuhr
 public    | foo  | table | mfuhr
(2 rows)

test=> COMMIT;
COMMIT
test=> \dt *.foo
       List of relations
 Schema | Name | Type  | Owner 
--------+------+-------+-------
 public | foo  | table | mfuhr
(1 row)

Notice that the temporary table went away after the commit but that
the non-temporary table remained -- that's the table that's giving
you trouble.  Instead of using SELECT INTO you could use INSERT
with a query, like this:

CREATE TEMP TABLE tablename ...
INSERT INTO tablename SELECT ...

However, this still has a problem: after the first time you call
the function, subsequent calls will fail with "relation with OID
X does not exist."  See the FAQ for the reason and how to avoid it:

http://www.postgresql.org/docs/faqs.FAQ.html#item4.19
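
The workaround that FAQ entry describes is to issue the statements
that touch the temporary table through EXECUTE, so plpgsql doesn't
cache a plan pointing at a dropped table's OID.  A sketch, reusing
the foo example above:

CREATE OR REPLACE FUNCTION test_dynamic() RETURNS void AS $$
BEGIN
    -- EXECUTE plans these statements at run time, so no saved plan
    -- can reference the temp table OID from a previous call.
    EXECUTE 'CREATE TEMP TABLE foo (x integer) ON COMMIT DROP';
    EXECUTE 'INSERT INTO foo SELECT 1';
END;
$$ LANGUAGE plpgsql;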

Instead of using a temporary table, consider incorporating that
query directly into the main query/queries.

-- 
Michael Fuhr
