[SQL] Re: [SQL] unnesting of array of different size explodes memory

2011-04-13 Thread Charlie
Another approach to consider is using strings.  E.g., the function assembles the 
underlying query as a string and returns the results of executing that string.  
Passing the groups in a string may or may not be straightforward in your 
application.

There was a thread on the novice list last year about the relative speed of 
functions that assemble and EXECUTE query strings vs. functions with the query 
hard-coded.  The string execution was faster.  If I recall the explanation 
correctly, an EXECUTEd string is planned with the actual values each time, so 
the planner can pick a good plan.  But a query hard-coded in the function gets 
a generic plan prepared without knowledge of the actual values - very slow.

I got my function times from 10 secs to 300 ms by assembling query strings in 
the function.
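
As a sketch of what I mean (table and column names are hypothetical, not from
the original question), the function builds the statement as text and runs it
with EXECUTE:

CREATE OR REPLACE FUNCTION group_total(group_ids integer[])
RETURNS numeric AS $$
DECLARE
    result numeric;
BEGIN
    -- assemble the statement as a string, then execute it
    EXECUTE 'SELECT sum(val) FROM measurements WHERE group_id IN ('
            || array_to_string(group_ids, ',') || ')'
    INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;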

Charlie



- Reply message -
From: "Andreas Gaab" 
Date: Wed, Apr 13, 2011 6:15 am
Subject: [SQL] unnesting of array of different size explodes memory
To: "pgsql sql" 
Cc: "Hans Joachim Münzer" 


Hi,

I need to sum the elements of two arrays. Most arrays are of the same size, so I 
simply unnested both arrays in one table and grouped the results in a loop for 
every row. But when two arrays of different sizes (1200 and 1300 elements) were 
processed, memory usage exploded and the query was killed by the kernel.

As I now understand, the following query leads to 12 results, not just 4 (or 
3...):

SELECT unnest(ARRAY[1,2,3]), unnest(ARRAY[4,5,6,7]);

Why does Postgres use so much memory that the kernel complains, when unnesting 
1200 and 1300 elements results in only about 1.6e6 rows? Are there settings to 
prevent this, such as "work_mem"?

Regards,
Andreas
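
For the record, a sketch (not from the thread; needs 8.4+ for
generate_subscripts) that pairs elements by subscript instead of
cross-multiplying the two unnests, so the row count stays at the longer
array's length:

SELECT i, coalesce(a[i], 0) + coalesce(b[i], 0) AS elem_sum
FROM (SELECT ARRAY[1,2,3] AS a, ARRAY[4,5,6,7] AS b) t,
     generate_subscripts(t.b, 1) AS i;

Subscripting past the end of the shorter array yields NULL, hence the
coalesce.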

___

SCANLAB AG
Dr. Andreas Simon Gaab
Entwicklung * R & D

Siemensstr. 2a * 82178 Puchheim * Germany
Tel. +49 (89) 800 746-513 * Fax +49 (89) 800 746-199
mailto:[email protected] * www.scanlab.de<http://www.scanlab.de>

___




Re: [SQL] help on select

2011-04-21 Thread Charlie

You might try:
 
WITH
  pontos AS
  (
SELECT column1 AS idponto
FROM (VALUES (10), (11), (23), (24) ) AS a
  ),
  subset AS
  (
SELECT b.idponto, date_trunc('day', datetime) AS datetime
FROM medidas b
INNER JOIN pontos USING(idponto)
GROUP BY b.idponto, date_trunc('day', datetime)
  ),
  datetimes AS
  (
SELECT datetime
FROM subset
GROUP BY datetime
HAVING COUNT(*) = (SELECT COUNT(*) FROM pontos)
  )
SELECT max(datetime)
FROM datetimes
;

 

 


From: [email protected]
Date: Wed, 20 Apr 2011 17:10:32 -0300
Subject: [SQL] help on select
To: [email protected]

Hi guys, 
I need your help.
I have a table called medidas. Each row has id_medida (primary key), id_ponto 
(secondary key), and a datetime field (timestamp).
Given a set of idpontos, e.g. 10, 11, 23, 24, how can I get the most recent 
date that is common to all of them?
For example, if one idponto has the date 2011-02-03 but none of the others have 
that date in the db, I don't want it. I want a date common to all.
Thanks.

---
Saulo B. M. Venâncio
Control and Automation Eng. Student
Associate in Business Management
T +55 4884121486
E [email protected]



  

[SQL] Re: [SQL] Select and merge rows?

2011-05-05 Thread Charlie
There is not much information to go on (just a couple of table snippets), but 
you may want to experiment with the snippet below to get started.


SELECT 
  ids.id,
  f1.value AS value1,
  f2.value AS value2,
  f3.value AS value3
FROM
( SELECT DISTINCT id FROM foo ) AS ids 
LEFT JOIN foo f1 
ON f1.id = ids.id 
AND f1.order = 1
LEFT JOIN foo f2
ON f2.id = ids.id 
AND f2.order = 2 
LEFT JOIN foo f3 
ON f3.id = ids.id 
AND f3.order = 3
ORDER BY ids.id; 


- Reply message -
From: "Claudio Adriano Guarracino" 
Date: Thu, May 5, 2011 5:18 pm
Subject: [SQL] Select and merge rows?
To: 

Hello!
I have a question about a query I have been trying to write, without success.
This is the scenario:
I have a table, with this rows:
order    ID    value    
--
1    1000    3
2    1000    5
3    1000    6
1    1001    1
2    1001    2
1    1002    4
2    1002    4

I need to get this table, divided by ID, like this:
id    value1    value2    value3

1000    3    5    6
1001    1    2
1002    4    4

How can I do this?
I tried with cursors and a view, but I can't get it to work.
Any help is welcome!

Thanks in advance!
Regards,




[SQL] Re: [SQL] Sorting data based fields in two linked tables

2011-05-14 Thread Charlie
SELECT
  a.id, a.field1, a.field2,
  b.field1, b.field2
FROM a
INNER JOIN b
  ON a.id = b.a_id
ORDER BY a.field2 ASC, b.field1 ASC;


- Reply message -
From: "R. Smith" 
Date: Fri, May 13, 2011 12:00 pm
Subject: [SQL] Sorting data based fields in two linked tables
To: 

Hi,

I am looking for a way to sort data returned from two tables: the first
sort keyed on a field from table A, and the second sort, within the
results of the first, keyed on a field from table B.
While I can sort on fields from either table, I cannot get it
to work on both. I tried a crosstab query, but that fails because
the number of rows returned from TABLE B for each row in TABLE A is
unknown. I also tried creating a temporary table to sort on, but that
failed as well. Example layout below:

Table A

ID FIELD1 FIELD2

Table B

ID, A.ID, FIELD1, FIELD2

Output Based on sorting A.FIELD2, then B.FIELD1

A.ID,A.FIELD1,A.FIELD2,  B.FIELD1,B.FIELD2, B.FIELD1,B.FIELD2, B.FIELD1,B.FIELD2

Can anyone help me with this?

Regards

-- 
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql



[SQL] Re: [SQL] extracting location info from string

2011-05-25 Thread Charlie
Have you looked at 

http://en.m.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance




- Reply message -
From: "Tarlika Elisabeth Schmitz" 
Date: Wed, May 25, 2011 6:13 pm
Subject: [SQL] extracting location info from string
To: 

On Wed, 25 May 2011 09:25:48 -0600
Rob Sargent  wrote:

>
>
>On 05/24/2011 10:57 AM, Lew wrote:
>> Tarlika Elisabeth Schmitz wrote:
>>
>>> CREATE TABLE person
>>> (
>>> id integer NOT NULL,
>>> "name" character varying(256) NOT NULL,
>>> "location" character varying(256),
>>> CONSTRAINT person_pkey PRIMARY KEY (id)
>>> );
>>>
>>> this was just a TEMPORARY table I created for quick analysis
>>> of my CSV data (now renamed to temp_person).

CREATE TABLE country
(
  id character varying(3) NOT NULL, -- alpha-3 code
  "name" character varying(50) NOT NULL,
  CONSTRAINT country_pkey PRIMARY KEY (id)
);


>To minimize the ultimately quite necessary human adjudication, one
>might make good use of what is often termed "crowd sourcing":  Keep
>all the distinct "hand entered" values and a map to the final human
>assessment. 

I was wondering how to do just that. I don't think it would be a good
idea to hard code this into the clean-up script. Take, for instance,
variations of COUNTRY.NAME spelling. Where would I store these? 

I could do with a concept for this problem, which applies to a lot of
string-type info.

-- 

Best Regards,
Tarlika Elisabeth Schmitz




[SQL] Re: [SQL] selecting records X minutes apart

2011-06-04 Thread Charlie
SELECT a.id, a.ts 
FROM foo a 
INNER JOIN foo b 
  ON b.id = a.id 
  AND b.ts >= a.ts + interval '5 minutes'
GROUP BY a.id, a.ts 
ORDER BY a.id, a.ts; 

Not clear yet on the needs, but this may give a start. 


- Reply message -
From: [email protected]
Date: Sat, Jun 4, 2011 4:15 pm
Subject: [SQL] selecting records X minutes apart
To: 

On Sat, Jun 04, 2011 at 11:45:08AM +, Jasen Betts wrote:
> On 2011-06-03, [email protected]  wrote:
> >
> > ID  TS (HH:MM)
> > ---
> > 0   20:00
> > 0   20:05
> > 0   20:10
> > 1   20:03
> > 1   20:09
> >
> >
> > Does my question make sense?
> 
> no, why is (1,20:04) excluded, but (0,20:05) included?
> both records are 5 minutes from the newest.

Jasen,

(1,20:04) is excluded because its timestamp is less than 5 minutes after the
previous record with the same ID (1,20:03); (0,20:05) is included for the
opposite reason.

Let me restate my requirement with a little more detail.  I want to select
records grouped by ID and ordered by timestamp ascending (so I start with the
oldest), keeping only records that are at least X minutes apart.

I hope that helps.

Thanks again,
Wayne




[SQL] Re: [SQL] selecting records X minutes apart

2011-06-04 Thread Charlie
WITH foo AS
(
  SELECT column1::integer id, column2::timestamp ts
  FROM (VALUES
    (0, '1-Jan-2010 20:00'),
    (1, '1-Jan-2010 20:03'),
    (1, '1-Jan-2010 20:04'),
    (0, '1-Jan-2010 20:05'),
    (1, '1-Jan-2010 20:05'),
    (0, '1-Jan-2010 20:08'),
    (1, '1-Jan-2010 20:09'),
    (0, '1-Jan-2010 20:10')) vals
)

SELECT * 
FROM 
(
  SELECT
    id,
    ts,
    (
      SELECT b.ts FROM foo b
      WHERE b.id = a.id
      AND b.ts > a.ts
      ORDER BY b.ts
      LIMIT 1
    ) - ts gap
    FROM foo a
) c
ORDER BY id, ts
;

Still can't make heads or tails of the needs yet.  But running the snippet 
above may give some more ideas.

Looking at id 1 tuples: 20:03 is in because it has a record >= X away.  But so 
does 20:04.  20:04 is out because it has a record that is < X away.  But so 
does 20:03, which is in. Etc.
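
For what it's worth, here is a sketch (not from the thread; requires 8.4+ for 
WITH RECURSIVE) that does produce Wayne's requested rows: keep each id's oldest 
row, then repeatedly jump to the earliest row at least five minutes after the 
last kept one.

WITH RECURSIVE
foo AS
(
  SELECT column1::integer id, column2::timestamp ts
  FROM (VALUES
    (0, '1-Jan-2010 20:00'), (1, '1-Jan-2010 20:03'),
    (1, '1-Jan-2010 20:04'), (0, '1-Jan-2010 20:05'),
    (1, '1-Jan-2010 20:05'), (0, '1-Jan-2010 20:08'),
    (1, '1-Jan-2010 20:09'), (0, '1-Jan-2010 20:10')) vals
),
keep(id, ts) AS
(
  SELECT id, min(ts) FROM foo GROUP BY id     -- oldest row per id is always kept
  UNION ALL
  SELECT k.id,
         (SELECT min(f.ts) FROM foo f         -- earliest row >= 5 min after the last kept one
           WHERE f.id = k.id
             AND f.ts >= k.ts + interval '5 minutes')
  FROM keep k
  WHERE k.ts IS NOT NULL                      -- stop once an id runs dry
)
SELECT id, ts FROM keep WHERE ts IS NOT NULL ORDER BY id, ts;

For the sample data this returns (0,20:00), (0,20:05), (0,20:10), (1,20:03),
(1,20:09) - the output the original post asked for.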




- Reply message -
From: [email protected]
Date: Fri, Jun 3, 2011 4:52 pm
Subject: [SQL] selecting records X minutes apart
To: 

I have a table that, at a minimum, has ID and timestamp columns.  Records
are inserted into with random IDs and timestamps.  Duplicate IDs are allowed. 

I want to select records grouped by ID, ordered by timestamp that are X minutes
apart. In this case X is 5.

Note, the intervals are not X minute wall clock intervals, they are X minute
intervals from the last accepted record, per-id.

For instance here is some sample input data:

ID  TS (HH:MM)
---
0   20:00
1   20:03
1   20:04
0   20:05
1   20:05
0   20:08
1   20:09
0   20:10

I'd want the select to return:

ID  TS (HH:MM)
---
0   20:00
0   20:05
0   20:10
1   20:03
1   20:09


Does my question make sense?

Thanks in advance,
Wayne




[SQL] Re: [SQL] How to remove a set of characters in text-columns ?

2011-06-30 Thread Charlie
Would 

regexp_replace(src_str, '[][{}().]', '', 'g')

at

http://www.postgresql.org/docs/9.0/static/functions-string.html

help?
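
translate() is another option here (src_str and the character list are just 
illustrations): any character in the second argument with no counterpart in 
the third is simply deleted, which covers the whole list in one call:

SELECT translate(src_str, '{}()''",;.:!', '');

The doubled '' inside the literal is an escaped single quote.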




- Reply message -
From: "Andreas" 
Date: Thu, Jun 30, 2011 4:28 pm
Subject: [SQL] How to remove a set of characters in text-columns ?
To: 

Hi,
how can I remove a set of characters in text-columns ?
Say I'd like to remove { } ( ) ' " , ; . : !
Of course I can chain replace(replace(replace(..., ...), ...), ...) calls
and replace the chars one by one with an empty string ''.

There might be a more elegant way.
Is there ?


regards






[SQL] Re: [SQL] Help with regexp-query

2011-08-02 Thread Charlie
select id, regexp_replace(category, (E'\\|{2,}'), E'\|', 'g') as category from
akb_articles limit 100

Backslash in regex doubled. Added global modifier to replace all occurrences.


- Reply message -
From: "Johann Spies" 
Date: Thu, Jul 28, 2011 8:20 am
Subject: [SQL] Help with regexp-query
To: 

I am struggling a bit to do the following type of update in a table.

I want the content of a field updated like this:

Original:  
'0894396e-16bf-4e63-aa52-97fe7031eec9||50a6b47c-f69c-414d-bcb6-14bbe403de5f|||'

After update:
 '|0894396e-16bf-4e63-aa52-97fe7031eec9|50a6b47c-f69c-414d-bcb6-14bbe403de5f|'

in other words: change all multiple adjacent occurences of '|' to only 1. 

I have tried the following query but it fails:

select id, regexp_replace(category, (E'\|{2,}'), E'\|') as category from
akb_articles limit 100

This ends with 'ERROR: invalid regular expression: quantifier operand
invalid'.

I would apreciate some help with this one please.

Regards
Johann
-- 
Johann SpiesTelefoon: 021-808 4699
Databestuurder /  Data manager

Sentrum vir Navorsing oor Evaluasie, Wetenskap en Tegnologie
Centre for Research on Evaluation, Science and Technology 
Universiteit Stellenbosch.

 "If any of you lack wisdom, let him ask of God, that 
  giveth to all men liberally, and upbraideth not; and 
  it shall be given him."  James 1:5 




Re: [SQL] Calculating relative time/distance from CURRENT_TIMESTAMP

2011-08-04 Thread Charlie

Consider: SELECT (NOW() - '1-Aug-2011')::text || ' ago.' user_string;
yields: "3 days 18:59:24.781174 ago." 
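
If something coarser like "yesterday" is wanted, a CASE expression can bucket
the interval (a sketch only; the events table, ts column, and thresholds are
made up):

SELECT CASE
         WHEN now() - ts < interval '1 hour'
           THEN (extract(epoch FROM now() - ts)::int / 60) || ' minutes ago'
         WHEN ts::date = current_date     THEN 'today'
         WHEN ts::date = current_date - 1 THEN 'yesterday'
         WHEN ts::date > current_date - 8 THEN (current_date - ts::date) || ' days ago'
         ELSE to_char(ts, 'DD Mon YYYY')
       END AS friendly
FROM events;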
 From: [email protected]
To: [email protected]
CC: [email protected]
Subject: Re: [SQL] Calculating relative time/distance from CURRENT_TIMESTAMP
Date: Thu, 4 Aug 2011 17:12:51 +0530

Hi Amitabh,

Yes, I already took a look at the Date/Time functions.  But what I'm expecting 
is a more meaningful/user-friendly value than the actual return values of the 
functions available there.  I'm just finding out whether this could be solved 
at the query level itself.

For example, there is the age(timestamp, timestamp) function.  If I call 
select age(current_timestamp, (current_timestamp - interval '1 day')) it 
returns "1 day".  But what I'm expecting from the function is something more 
meaningful/user-friendly to the end users.  In this case I wanted it to return 
"yesterday", and similarly "today", "15 minutes ago", "1 week ago", etc.

Regards,

Gnanam




From: Amitabh Kant [mailto:[email protected]]
Sent: Thursday, August 04, 2011 4:34 PM
To: [email protected]
Cc: [email protected]
Subject: Re: [SQL] Calculating relative time/distance from CURRENT_TIMESTAMP

Have you looked into the date time functions already?

http://www.postgresql.org/docs/9.0/static/functions-datetime.html

Amitabh Kant
On Thu, Aug 4, 2011 at 1:24 PM, Gnanakumar  wrote:

Hi,

Is there any built-in function/add-on module available in PostgreSQL that
converts a given "timestamp"/"timestamptz" value into its relative distance
from CURRENT_TIMESTAMP?  For example, relative distance as today, yesterday,
5 minutes ago, 1 week ago, etc.

Regards,

Gnanam








[SQL] Schema partitioning

2011-09-01 Thread Charlie
Could I get feedback from the community on schema partitioning?

I'm doing maintenance on my DDL and I notice that my tables are all in one 
schema, but they have prefixes on their names like table_app1_sometable, 
table_app1_secondtable, table_app2_anothertable, 
table_privileged_restrictedtable1, etc.  The table_app1 tables seem to want to 
go in their own schema "app1", etc., dropping the prefixes.  Except they'll 
still be there, in effect, as in app1.sometable.

Is this just style?  Or are there concrete benefits to partitioning by schema?
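
For reference, a sketch of the mechanical move described above (table names 
from the post, the role name hypothetical).  Beyond style, schemas buy 
per-application GRANTs and search_path control:

CREATE SCHEMA app1;
ALTER TABLE table_app1_sometable SET SCHEMA app1;
ALTER TABLE app1.table_app1_sometable RENAME TO sometable;
-- per-application visibility becomes a one-liner:
GRANT USAGE ON SCHEMA app1 TO app1_role;
SET search_path TO app1, public;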


[SQL] serial column vs. explicit sequence question

2002-06-13 Thread Charlie Toohey

I'm having a problem and there seem to be 2 solutions. It is simple and 
straightforward, but will take several paragraphs to explain.

I have a schema with a master-detail design. The master table does not have 
an explicit id, so I have a column of type serial. 

Let's say I need to insert a row into the master table and N rows into the 
detail table. After inserting a row into master, and before detail, I need to 
read the master table to obtain the id of the row just inserted, so I can use 
this id as the foreign key value for the N rows in the detail table. 

This seems like a poor solution because I have to write and then read the 
master table each time. With a lot of activity on these tables, I don't know 
how well this will scale. Additionally, the only way I can guarantee that I am 
getting the id of the most recent row inserted into master is to SET 
TRANSACTION ISOLATION LEVEL SERIALIZABLE --- because otherwise, if other 
processes are inserting rows into master/detail concurrently, I may pick up 
the id from an incorrect row (the technique used to get the correct id is to 
include a timestamp column on the insert into master and then query for the 
latest row).

A better solution would seem to be using a sequence explicitly, rather than an 
id column of type serial. I would obtain the id value from the sequence, and 
then insert this id into the master table and into the detail table. This 
way, I wouldn't be writing/reading the same table constantly -- I would only 
be writing to it, and I would be guaranteed to use the correct id in both 
master and detail without having to SET TRANSACTION ISOLATION LEVEL 
SERIALIZABLE.

Any comments on which solution you would choose, or is there a better 
solution ?
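
A sketch of the sequence-first approach described above (table and sequence 
names are hypothetical): fetch the id once, then use it for both the master 
row and all detail rows:

BEGIN;
SELECT nextval('master_id_seq');   -- suppose it returns 42
INSERT INTO master (id, note) VALUES (42, 'header');
INSERT INTO detail (master_id, item) VALUES (42, 'line 1');
INSERT INTO detail (master_id, item) VALUES (42, 'line 2');
COMMIT;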

Thanks,
Charlie

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] serial column vs. explicit sequence question

2002-06-13 Thread Charlie Toohey

If session A and session B are concurrently doing the same master-detail 
transaction, wouldn't currval possibly reflect the sequence value used by the 
other session ? Or are you saying that since this will be an explicit 
transaction that currval won't reflect the fact that the sequence may have 
been incremented by another session ?


On Thursday 13 June 2002 02:06 pm, Stephan Szabo wrote:
> On Thu, 13 Jun 2002, Charlie Toohey wrote:
> > I'm having a problem and there seems to be 2 solutions. It is simple and
> > straighforward, but will take several paragraphs to explain.
> >
> > I have a schema with a master-detail design. The master table does not
> > have an expicit id, so I have a column of type serial.
> >
> > Lets say I need to insert a row into the master table and N rows into the
> > detail table. After inserting a row into master, and before detail, I
> > need to read the master table to obtain the value of the id for the row
> > just inserted, so I can insert this id as the foreign key value for the N
> > rows in the detail table.
> >
> > This seems like a poor solution because I have to write and then read the
> > master table each time. With lot of activity on these tables, I don't
> > know how well this will scale. Additionally, the only way that I can
> > guarantee that I am getting the id of the most recent row inserted into
> > master is to SET TRANSACTION ISOLATION LEVEL SERIALIZABLE --- because
> > otherwise, if other processes are inserting rows into master/detail
> > concurrently, I may pick up the id from an incorrect row (the technique
> > used to get the correct id is to include a timestamp column on the insert
> > into master and then query for the latest row).
> >
> > A better solution would seem to use a sequence explicitly, rather than a
> > id column of type serial. I would obtain the id value from the sequence,
> > and then insert this id into the master table and into the detail table.
> > This way, I wouldn't be writing/reading the same table constantly -- I
> > would only be writing to it, and, I would guarantee that I would be using
> > the correct id in both master and detail without have to SET TRANSACTION
> > ISOLATION LEVEL SERIALIZEABLE.
> >
> > Any comments on which solution you would choose, or is there a better
> > solution ?
>
> Well, serial really is just an integer with a default value pulling from a
> sequence, so right now you can use currval on the sequence (which I think
> gets named something like __seq




Re: [SQL] serial column vs. explicit sequence question

2002-06-13 Thread Charlie Toohey

that definitely helps! Thank you, Jason --- the key thing that I didn't 
understand, and you have now enlightened me about, is that currval is 
connection-dependent --- I didn't think this would be guaranteed to work with 
concurrent transactions, but now I understand.

Just prior to receiving your message, I posted a reply basically asking how 
currval would work if there were concurrent updates --- please ignore that 
response.

thanks everyone --- I now feel "empowered" to carry on with my project


On Thursday 13 June 2002 03:01 pm, Jason Earl wrote:
> Charlie Toohey <[EMAIL PROTECTED]> writes:
> > I'm having a problem and there seems to be 2 solutions. It is simple
> > and straighforward, but will take several paragraphs to explain.
> >
> > I have a schema with a master-detail design. The master table does
> > not have an expicit id, so I have a column of type serial.
> >
> > Lets say I need to insert a row into the master table and N rows
> > into the detail table. After inserting a row into master, and before
> > detail, I need to read the master table to obtain the value of the
> > id for the row just inserted, so I can insert this id as the foreign
> > key value for the N rows in the detail table.
> >
> > This seems like a poor solution because I have to write and then
> > read the master table each time. With lot of activity on these
> > tables, I don't know how well this will scale. Additionally, the
> > only way that I can guarantee that I am getting the id of the most
> > recent row inserted into master is to SET TRANSACTION ISOLATION
> > LEVEL SERIALIZABLE --- because otherwise, if other processes are
> > inserting rows into master/detail concurrently, I may pick up the id
> > from an incorrect row (the technique used to get the correct id is
> > to include a timestamp column on the insert into master and then
> > query for the latest row).
> >
> > A better solution would seem to use a sequence explicitly, rather
> > than a id column of type serial. I would obtain the id value from
> > the sequence, and then insert this id into the master table and into
> > the detail table. This way, I wouldn't be writing/reading the same
> > table constantly -- I would only be writing to it, and, I would
> > guarantee that I would be using the correct id in both master and
> > detail without have to SET TRANSACTION ISOLATION LEVEL
> > SERIALIZEABLE.
> >
> > Any comments on which solution you would choose, or is there a
> > better solution ?
> >
> > Thanks,
> > Charlie
>
> The SERIAL type is a thin veneer over an underlying conglomeration of
> a unique index and a sequence, nothing more, nothing less.  I still
> prefer to use the old syntax that spelled this out explicitly (mostly
> because it reminded me that I needed to drop the sequences as well as
> the table if I made changes during the development phases of my
> project).  Instead of using a serial type I have a whole pile of
> scripts that contain bits that look like this:
>
> DROP TABLE prod_journal;
> DROP SEQUENCE prod_journal_id_seq;
>
> CREATE SEQUENCE prod_journal_id_seq;
>
> CREATE TABLE prod_journal (
>id int PRIMARY KEY
>   DEFAULT nextval('prod_journal_id_seq'),
>...
> );
>
> The SERIAL type does precisely the same sort of thing.  The only
> difference is that PostgreSQL thinks up the sequence name for you
> (currently PostgreSQL tries to choose a name that looks precisely like
> the one I have chosen in this example).  The reason that I bring this
> up is A) it makes me happy to think that I have been using PostgreSQL
> long enough that my PostgreSQL memories predate the SERIAL type, and
> B) to point out that there is not really a difference between using
> the SERIAL type and using sequences explicitly.
>
> What you *really* need is to get acquainted with the nifty sequence
> functions currval and nextval.  They hold the secret to sequence
> Nirvana.  See Chapter 4 Section 11 of the PostgreSQL User's Guide for
> the full scoop.  The short story is that curval gives the current
> value of the sequence (for whichever backend you are connected to) and
> nextval will give you the next value of the sequence.
>
> Now let's say that you had two simple tables foo for the master record
> and bar for the detail records.
>
> test=# create table foo (id serial primary key, name text);
> NOTICE:  CREATE TABLE will create implicit sequence 'foo_id_seq' for SERIAL
> column 'foo.id' NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit
> index 'foo_p

[SQL] schema-qualified permission problem

2003-05-29 Thread Charlie Toohey
Why can't my primary user (ttvuser) access tables (owned by owneruser) for
which they've been granted access?  I can describe the table, but can't
read it.

===
Here's what I'm getting:

ttvdev=> \c - owneruser
Password: 
You are now connected as new user owneruser.
ttvdev=> \d
  List of relations
 Schema |  Name  |   Type   | Owner  
++--+
 owneruser | users  | table| owneruser


ttvdev=> select count(*) from owneruser.users;
 count 
---
 0
(1 row)


ttvdev=> \dp
  Access privileges for database "ttvdev"
 Schema | Table  |Access privileges
++-
 owneruser | users  | {=,owneruser=arwdRxt,ttvuser=arwd}


ttvdev=> \c - postgres
Password: 
You are now connected as new user postgres.
ttvdev=# select count(*) from owneruser.users;
 count 
---
 0
(1 row)



ttvdev=> \c - ttvuser
Password: 
You are now connected as new user ttvuser.
ttvdev=> \d owneruser.users
 Table "owneruser.users"
   Column   |Type | Modifiers 
+-+---
 user_id| integer | not null
 initials   | character varying(3)| not null
 username   | character varying(18)   | not null
 password   | character varying(25)   | not null
 email  | character varying(256)  | not null
 authenticationdate | timestamp without time zone | 
 creationdate   | timestamp without time zone | 
 modifydate | timestamp without time zone | 
 userlastmodified   | timestamp without time zone | 
 adminlastmodified  | timestamp without time zone | 
 autologin  | character varying(1)| 
 active | character varying(1)| 
 passhint   | character varying(25)   | 
 firstname  | character varying(40)   | 
 lastname   | character varying(40)   | 
 sex| character varying(6)| 
 department | character varying(40)   | 
 manager_flag   | character varying(1)| 
 phone  | character varying(50)   | 
Indexes: pk_users primary key btree (user_id)


ttvdev=> select count(*) from owneruser.users;
ERROR:  owneruser: permission denied
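
[Not from the thread, but a guess at the cause: the error names the schema 
rather than the table, which points at missing schema-level USAGE rather than 
table privileges. Something like:

GRANT USAGE ON SCHEMA owneruser TO ttvuser;

run as owneruser or a superuser, would be the usual fix.]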





[SQL] Problem with JOINS

2004-05-21 Thread Charlie Clark
 Join Filter: ((("inner".value = 'erlassen'::character 
varying) OR ("outer".id_status = 2)) AND (("inner".value = 
'erlassen'::character varying) OR ("outer".id_authorise = 2)) AND 
(("inner".value = 'erlassen'::character varying) OR 
(ltrim(lower(("outer".ort)::text)) ~~ 'neuss%'::text)) AND (("outer".roles 
~~ '%Premium'::text) OR ("inner".value = 'bezahlt'::character varying)))
   ->  Hash Join  (cost=57.80..68.97 rows=1 width=199)
 Hash Cond: ("outer".id_person = "inner".id_person)
 ->  Seq Scan on bill  (cost=0.00..10.44 rows=144 
width=8)
 ->  Hash  (cost=57.80..57.80 rows=1 width=191)
   ->  Hash Join  (cost=49.85..57.80 rows=1 
width=191)
 Hash Cond: ("outer".id_person = 
"inner".id_person)
 Join Filter: ((("inner".roles ~~ 
'%Premium'::text) OR ("outer".id_status = 2)) AND (("inner".roles ~~ 
'%Premium'::text) OR ("outer".id_authorise = 2)) AND (("inner".roles ~~ 
'%Premium'::text) OR (ltrim(lower(("outer".ort)::text)) ~~ 'neuss%'::text)))
 ->  Merge Join  (cost=41.84..44.05 
rows=153 width=176)
   Merge Cond: ("outer".id_gender = 
"inner".id)
   ->  Sort  (cost=40.81..41.19 
rows=153 width=164)
 Sort Key: person.id_gender
 ->  Hash Join  
(cost=18.97..35.25 rows=153 width=164)
   Hash Cond: 
("outer".id_person = "inner".id_person)
   ->  Seq Scan on address  
(cost=0.00..13.58 rows=158 width=120)
   ->  Hash  
(cost=18.59..18.59 rows=153 width=44)
 ->  Hash Join  
(cost=4.91..18.59 rows=153 width=44)
   Hash Cond: 
("outer".id_person = "inner".id_person)
   ->  Seq Scan 
on person  (cost=0.00..10.58 rows=158 width=40)
   ->  Hash  
(cost=4.53..4.53 rows=153 width=4)
 ->  
Seq Scan on therapist  (cost=0.00..4.53 rows=153 width=4)
   ->  Sort  (cost=1.03..1.03 rows=2 
width=12)
 Sort Key: gender.id
 ->  Seq Scan on gender_list 
gender  (cost=0.00..1.02 rows=2 width=12)
 ->  Hash  (cost=7.61..7.61 rows=161 
width=15)
   ->  Seq Scan on users  
(cost=0.00..7.61 rows=161 width=15)
   ->  Hash  (cost=1.10..1.10 rows=2 width=15)
 ->  Seq Scan on bill_status_list bs  (cost=0.00..1.10 
rows=2 width=15)
   Filter: ((value = 'erlassen'::character varying) 
OR (value = 'bezahlt'::character varying)) 

What I notice is that in the second query the following filter is missing.
->  Hash  (cost=14.77..14.77 rows=1 width=120)
 ->  Seq Scan on address  (cost=0.00..14.77 
rows=1 width=120)
   Filter: (ltrim(lower((ort)::text)) 
~~ 'neuss%'::text)

I'm going to try and break this down and work through it myself but would 
be very grateful for any pointers.

Thanks

Charlie Clark



RE: [SQL] List Concatination [warning]

2001-05-02 Thread Wilkinson Charlie E
A word of warning for the newbies...


*Don't* create a function called textcat.  You *will* be sorry.  There's
already a textcat builtin and you kinda need it for || and whatnot.
(Yes, I found out the hard way...)


There might be a better way to recover, but I did a pg_dumpall, clobbered
the postgres DB files, re-inited, re-loaded...  and that got me textcat
back.


A related question: how do you *display* a function?  Obviously there's
CREATE and DROP, but I could find nothing equivalent to "LOOK AT".
I want to be able to see the code!
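
[Not part of the original post, but on the display question: the function 
body lives in the system catalog, and newer psql versions also show it via 
\df+.  For example:

SELECT prosrc FROM pg_proc WHERE proname = 'commacat';

]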


Anyhow, in case a working example of some list catting SQL would be useful,
here's what I put together to list all the states in a "zone":


  Table "newstate"
 Attribute |    Type | Modifier
---+-+--
 state | varchar(4)  |
 fullstate | varchar(40) |
 zone  | varchar(4)  |
 country   | varchar(4)  |


    Table "zone_names"
 Attribute |    Type    | Modifier
---++--
 zoneid    | varchar(4) | not null
 zonename  | varchar(8) | not null
Indices: zone_names_zoneid_key,
 zone_names_zonename_key 


CREATE FUNCTION commacat(text,text) RETURNS text AS '
begin
    if $1 <> '''' then
        return $1 || '', '' || $2 ;
    else
        return $2;
    end if;
end;
' LANGUAGE 'plpgsql';


CREATE AGGREGATE catenate(
    sfunc1=commacat,
    stype1=text,
    basetype=text,
    initcond1=''
);


CREATE VIEW zones AS
  SELECT newstate.zone AS id, zone_names.zonename AS zone_name,
    catenate(newstate.fullstate) AS desc
    FROM newstate, zone_names
    WHERE newstate.zone = zone_names.zoneid
    GROUP BY newstate.zone, zone_names.zonename; 


The output looks like:


cwilkins=# select * from zones where id = 'Z1';
 id | zone_name |   desc
+---+--
 Z1 | Zone 1    | Delaware, New York, Pennsylvania
(1 row) 


Many thanks to those here who provided clues on how to do this.
I'm a happy camper!  :)


-cw-


> -Original Message-
> From: Josh Berkus [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, March 15, 2001 11:30 AM
> Cc: sqllist
> Subject: Re: [SQL] List Concatination
> 
> 
> Richard,
> 
>   I wanted to thank you for the concatenation suggestion
> ... on testing,
> a custom aggregate *was* faster than procedural concatenation ... much
> faster.
> 
> > But - if you don't care about the order of contacts you can 
> define an
> > aggregate function:
> > 
> > create aggregate catenate(sfunc1=textcat, basetype=text,
> > stype1=text, initcond1='');
> > 
> > Then group by client and catenate(firstname || ' ' || lastname)
> > 
> > You'll want to read the CREATE AGGREGATE page in the 
> reference manual,
> > replace textcat with your own routine that adds a comma and 
> you'll need
> > a finalisation routine to strip the final trailing comma.
> 
> Actually, if you use a sub-select as your data source, you can control
> both the appearance and the order of the catenated values:
> 
> SELECT client, catenate(con_name)
> FROM (SELECT client, firstname || ' ' || lastname || ', ' AS con_name
>       FROM contacts ORDER BY lastname ASC) AS con_list
> GROUP BY client;
> 
> This seems to work pretty well.
> 
> > Note that this is probably not a good idea - the ordering of the
> > contacts will not be well-defined. When I asked about this 
> Tom Lane was
> > quite surprised that it worked, so no guarantees about 
> long-term suitability.
> 
> Hmmm ... this feature is very, very, useful now that I know how to use
> it.  I'd love to see it hang around for future versions of 
> PgSQL.  Tom?
> 
> -Josh Berkus
> 
> -- 
> __AGLIO DATABASE SOLUTIONS___
> Josh Berkus
>    Complete information technology  [EMAIL PROTECTED]
> and data management solutions   (415) 565-7293
>    for law firms, small businesses   fax  621-2533
> and non-profit organizations.   San Francisco
> 
> ---(end of broadcast)---
> TIP 6: Have you searched our list archives?
> 
> http://www.postgresql.org/search.mpl
> 





[SQL] VACUUM VERBOSE ANALYZE locking up?? Please help!

2001-10-10 Thread Wilkinson Charlie E

Ok, it doesn't kill the whole backend, but VACUUM... comes
to a halt partway through and it's necessary to ^C out
of the transaction in psql.  Sometimes trying again works,
but mostly it doesn't.  Restarting the backend and then
doing the VACUUM... will usually succeed.  I was having
this problem in 7.0.something and it's still happening in
7.1.3 (built from scratch on a Redhat 6.2 box).  I used
pg_dumpall to migrate the data.  I've made a serious effort
to hunt the web for a clue, with no real success.


Here's what I get:


cwilkins=# vacuum verbose analyze;
NOTICE:  --Relation pg_type--
NOTICE:  Pages 4: Changed 0, reaped 1, Empty 0, New 0; Tup 244: Vac 0,
Keep/VTL 0/0, Crash 0, UnUsed 26, MinLen 106, MaxLen 109; Re-using:
Free/Avail. Space 4472/0; EndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u
sec.
NOTICE:  Index pg_type_oid_index: Pages 2; Tuples 244: Deleted 0. CPU
0.00s/0.00u sec.
NOTICE:  Index pg_type_typname_index: Pages 4; Tuples 244: Deleted 0.
CPU 0.00s/0.00u sec.
NOTICE:  Analyzing...
NOTICE:  --Relation pg_attribute--
NOTICE:  Rel pg_attribute: TID 24/20: DeleteTransactionInProgress 0 -
can't shrink relation
NOTICE:  Rel pg_attribute: TID 24/21: DeleteTransactionInProgress 0 -
can't shrink relation
NOTICE:  Rel pg_attribute: TID 24/22: DeleteTransactionInProgress 0 -
can't shrink relation
NOTICE:  Rel pg_attribute: TID 24/23: DeleteTransactionInProgress 0 -
can't shrink relation
NOTICE:  Rel pg_attribute: TID 24/24: DeleteTransactionInProgress 0 -
can't shrink relation
NOTICE:  Rel pg_attribute: TID 24/25: DeleteTransactionInProgress 0 -
can't shrink relation
[[[etc... etc...  snip...]]]
NOTICE:  Index pg_rewrite_oid_index: Pages 2; Tuples 11: Deleted 0. CPU
0.00s/0.00u sec.
NOTICE:  Index pg_rewrite_rulename_index: Pages 2; Tuples 11: Deleted 0.
CPU 0.00s/0.00u sec.
NOTICE:  --Relation pg_toast_17058--
NOTICE:  Pages 1: Changed 0, reaped 1, Empty 0, New 0; Tup 2: Vac 0,
Keep/VTL 0/0, Crash 0, UnUsed 3, MinLen 834, MaxLen 2034; Re-using:
Free/Avail. Space 5280/0; EndEmpty/Avail. Pages 0/0. CPU 0.01s/0.00u
sec.
NOTICE:  Index pg_toast_17058_idx: Pages 2; Tuples 2: Deleted 0. CPU
0.00s/0.00u sec.
[[[query stops here and will go no further]]]


Any useful insights or solutions?  Pretty please?  :)


Thanks for any help,
            Charlie


(Apologies in advance if this message has any HTML component.
I've told ^%$#@! Outlook as loudly as possible PLAIN TEXT ONLY!)
--
Charlie Wilkinson
TRIS Development Systems Administrator
M:I:SD:CT:CC:TD
Phone: 202-283-3241
MSMail:   [EMAIL PROTECTED]
SMTP:  [EMAIL PROTECTED]
Home:  [EMAIL PROTECTED]
This message constructed from 90% post-consumer electrons. 





Re: [SQL] Disk is full, what's cool to get rid of?

2006-07-27 Thread Wilkinson Charlie E
Won't help some of us, who set -m 0 on selected filesystems to begin
with.  But if we could get tune2fs -m -5  to work, then we
could unreserve space that didn't previously exist.  Think of the
possibilities!

I'll look into that as soon as I'm done modding my C compiler to handle
the --force option.

-cw-

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Erik Jones
Sent: Thursday, July 27, 2006 4:00 PM
To: Jeff Frost
Cc: Scott Marlowe; General Postgresql List
Subject: Re: [SQL] Disk is full, what's cool to get rid of?

Awesome.  Makes sense, as 5% is exactly the amount of space that
appeared after running it.  Thanks!

Jeff Frost wrote:
> Depends what the default is on your system.  The default is 5% with 
> the version of mke2fs that I have here, so you would just:
>
> tune2fs -m 5 
>
> to put it back.
>
