Re: [GENERAL] execute same query only one time?

2016-02-09 Thread Harald Fuchs
Johannes  writes:

>> What's the reason to execute all statements which return different
>> columns at once?
>> 
>>> Saving roundtrips,
>> 
>> In most cases they are not so big. Getting a bunch of duplicated data
>> is wasting your network bandwidth and doesn't increase speed.
>
> In my and your example no duplicated data (result sets) is sent over the
> network. The server does not need to wait until the client snips out the
> id and sends it back in the next query. So the server can compute
> the result set without external dependencies as fast as possible.

Sounds like what you're really after is a stored procedure, isn't it?





[GENERAL] log_min_duration question

2016-02-08 Thread Harald Fuchs
From the 9.5 docs:

  log_min_duration_statement (integer)

  Causes the duration of each completed statement to be logged if the
  statement ran for at least the specified number of
  milliseconds. Setting this to zero prints all statement
  durations. Minus-one (the default) disables logging statement
  durations. For example, if you set it to 250ms then all SQL statements
  that run 250ms or longer will be logged.

While the first sentence explicitly states a fixed unit (namely msec),
the last sentence seems to imply a (maybe optional) unit suffix
(ms in this case).  Which one is true?

How would e.g. "SET log_min_duration_statement=1min" be interpreted?

I'd prefer a docs clarification.
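
For what it's worth, a quick sketch of both spellings (my own example, not
taken from the docs; a bare integer is interpreted in the parameter's base
unit of milliseconds, while a quoted value may carry a unit suffix):

  -- bare integer: milliseconds
  SET log_min_duration_statement = 250;

  -- quoted value with an explicit unit suffix
  SET log_min_duration_statement = '250ms';
  SET log_min_duration_statement = '1min';  -- i.e. 60000 ms

  SHOW log_min_duration_statement;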





Re: [GENERAL] Is PRIMARY KEY the same as UNIQUE NOT NULL?

2016-01-31 Thread Harald Fuchs
Ben Leslie  writes:

> "Technically, PRIMARY KEY is merely a combination of UNIQUE and NOT NULL"
>
> I wanted to clarify if that was, technically, true.

Yes, but see below.

> "identifying a set of columns as primary key also provides metadata
> about the design of the schema, as a primary key implies that other
> tables can rely on this set of columns as a unique identifier for
> rows."

This means that e.g. you can use

  ALTER TABLE othertbl ADD FOREIGN KEY (refid) REFERENCES mytbl

without specifying the column(s) of mytbl.  This implies that there can
be only one primary key (possibly covering more than one column), and
that's the second difference to UNIQUE NOT NULL.
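
A minimal sketch of that (made-up table and column names):

  CREATE TABLE mytbl (
    a int NOT NULL,
    b int NOT NULL,
    PRIMARY KEY (a, b)        -- at most one primary key per table
  );

  CREATE TABLE othertbl (
    refa int NOT NULL,
    refb int NOT NULL,
    -- no column list after REFERENCES: mytbl's primary key is used
    FOREIGN KEY (refa, refb) REFERENCES mytbl
  );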






Re: [GENERAL] “Loop” in plpgsql Function - PostgreSQL 9.2

2016-01-19 Thread Harald Fuchs
"drum.lu...@gmail.com"  writes:

> So, the new plan is:
>
> 1 - Select 50,000 rows and give them a batch number.
> 2 - Select *MORE* 50,000 rows and give them a *NEW* batch number.
> 3 - Select *MORE* 50,000 rows and give them a *NEW* batch number.

Why so complicated?  Here's a simplified example:

  CREATE TABLE mytable ( 
id serial NOT NULL,
payload int NOT NULL,
batch_number int NULL,
PRIMARY KEY (id)
  ); 

  INSERT INTO mytable (payload)
  SELECT x FROM generate_series(1, 2000) AS g(x);

  \set batchSize 600

  -- Assign batch numbers 1, 2, 3, ... with :batchSize rows per batch
  UPDATE mytable
  SET batch_number = (id % (SELECT count(*) FROM mytable) / :batchSize) + 1;

  SELECT batch_number, count(*) AS cnt
  FROM mytable
  GROUP BY batch_number
  ORDER BY batch_number;





Re: [GENERAL] Recursive Arrays 101

2015-10-26 Thread Harald Fuchs
Thomas Kellerer  writes:

> I always wonder whether it's more efficient to aggregate this path
> using an array rather than a varchar. Mainly because representing the
> numbers as varchars will require more memory than as integer, but then
> I don't know the overhead of an array structure and whether appending
> to an array doesn't actually copy it.

If you go that direction, you're not far away from the ltree extension
(which might also be a solution for the OP's problem).
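
For illustration, a minimal ltree sketch (made-up names; assumes the ltree
contrib module/extension is installed):

  CREATE TABLE categories (
    id serial NOT NULL,
    path ltree NOT NULL,        -- e.g. 'top.science.physics'
    PRIMARY KEY (id)
  );

  CREATE INDEX categories_path_ix ON categories USING gist (path);

  -- all descendants of 'top.science' (including that node itself)
  SELECT id, path FROM categories WHERE path <@ 'top.science';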





Re: [GENERAL] Controlling complexity in queries

2011-12-15 Thread Harald Fuchs
Jay Levitt jay.lev...@gmail.com writes:

 * You want contextual queries.

 (I guess this is a special case of you need non relational features.)

 In my case, I want all queries against content to be filtered by their
 relevance to the current user. That can't go into a view, because
 views don't have parameters; I need a computed column that may be
 different every time I run the query, and depends on a piece of
 information (the current user ID) that Postgres can't know.

How about the following:

  CREATE TABLE test1 (
id serial NOT NULL,
username text NOT NULL,
value text NOT NULL,
PRIMARY KEY (id)
  );

  COPY test1 (username, value) FROM stdin DELIMITER '|';
  user1|user1_1
  user1|user1_2
  user2|user2_1
  user2|user2_2
  user2|user2_3
  \.

  CREATE VIEW test1v AS
  SELECT id, username, value
  FROM test1
  WHERE username = current_user;

Here the result of SELECT * FROM test1v depends on who issued the query.




Re: [GENERAL] Oddball data distribution giving me planner headaches

2011-12-03 Thread Harald Fuchs
Jeff Amiel becauseimj...@yahoo.com writes:

 At the moment I think the only way to work around this is
 to denormalize
 your schema a bit.

 And I feared as much.
 It's biting me in other areas as well...this unusual distribution of 
 data...certain types of customers have completely different data patterns 
 than others.  
 Back to the drawing board...thanks!

I find your table structure somewhat strange anyway.  For an ordinary
parent/child relationship a parent_id column in the customer table would
be enough.  Do you really have an m:n relationship between parents and
children?
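
A sketch of that simpler layout (names are made up):

  CREATE TABLE customer (
    id serial NOT NULL,
    parent_id int NULL REFERENCES customer (id),  -- NULL for top-level customers
    name text NOT NULL,
    PRIMARY KEY (id)
  );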




Re: [GENERAL] How to get normalized data from tekst column

2011-11-24 Thread Harald Fuchs
Andrus kobrule...@hot.ee writes:

 David,
Regular Expressions are your friend here.  If you do not know them you
 should learn them; though if you ask nicely someone may just provide you
 the solution you need.
Split-to-array and unnest may work as well.
  
 Thank you very much. I don't know regexps.
 Can you provide an example, please, for 8.1? Or maybe CASE WHEN and substring
 testing can also be used.

The query

  SELECT id, a[1] AS name, a[2] AS percent
  FROM (
    SELECT id,
           regexp_split_to_array(regexp_split_to_table(comment, ', *'), '\W+') AS a
    FROM project
  ) AS dummy

should work on every halfway recent PostgreSQL version - dunno about 8.1.




Re: [GENERAL] What about improving the rules system we have, was Re: Rules going away

2011-09-28 Thread Harald Fuchs
In article 4116.1317226...@sss.pgh.pa.us,
Tom Lane t...@sss.pgh.pa.us writes:

 Not sure this specific proposal makes any sense at all.  IMO the only
 real advantage that rules have over triggers is that they work on a
 set-operation basis not a tuple-by-tuple basis.

Isn't that what statement-level triggers are for, at least in other DB
systems?  How about telling PostgreSQL's statement-level triggers
something about the set of rows they affect?




Re: [GENERAL] Does postgresql 9.0.4 use index on PREFIX%SUFFIX queries?

2011-09-27 Thread Harald Fuchs
In article cabrt9rdxhkcxrq8wbohnikpf-cggktejwdw3q2_kxfedp4p...@mail.gmail.com,
Marti Raudsepp ma...@juffo.org writes:

 Ah, the reverse() function is not included with PostgreSQL 9.0 yet.
 This is what I use:

 CREATE FUNCTION reverse(input text) RETURNS text
 LANGUAGE plpgsql IMMUTABLE STRICT AS $$
 DECLARE
   result text = '';
   i int;
 BEGIN
   FOR i IN 1..length(input) BY 2 LOOP
 result = substr(input,i+1,1) || substr(input,i,1) || result;
   END LOOP;
   RETURN result;
 END$$;

Pavel Stehule has found a better solution for that:

CREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS $$
  SELECT string_agg(substring($1 FROM i FOR 1), '')
  FROM generate_series(length($1), 1, -1) g(i)
$$ language sql;

But the best, of course, is upgrading to 9.1.
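
Side note: to use such a function in the functional index this thread is
about, it presumably needs to be marked IMMUTABLE; a sketch with made-up
table/column names:

  CREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS $$
    SELECT string_agg(substring($1 FROM i FOR 1), '')
    FROM generate_series(length($1), 1, -1) g(i)
  $$ LANGUAGE sql IMMUTABLE;

  -- functional index supporting LIKE '%suffix' searches on mytable.col
  CREATE INDEX mytable_col_rev_ix ON mytable (reverse(col) text_pattern_ops);

  -- rewrite the suffix search as a prefix search on the reversed string
  SELECT * FROM mytable WHERE reverse(col) LIKE reverse('suffix') || '%';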




Re: [GENERAL] New feature: accumulative functions.

2011-09-26 Thread Harald Fuchs
In article 
CAFj8pRDx6JLmneV30kWNrcwzGLOSqyK-qN7T4_N37L9UPd2M=q...@mail.gmail.com,
Pavel Stehule pavel.steh...@gmail.com writes:

 2011/9/25 pasman pasmański pasma...@gmail.com:
 I found second use case. Look at expression:
 
 where left(str,n)='value'
 
 function left(str,n) increase monotonically for str and n. With this
 feature it can use index on str.
 
 Classic index needs recreating.
 

 these use cases are just theory - for example - this case should be
 solved with immutable functions

 I can use a functional index left(str, const) and use a query

 where left(str, const) = left('value', const) and left(str, n) = 'value'

 There are a theoretical cases, but these cases should be solved via
 special data type and GiST index

Unless I misunderstand you, this data type is called 'prefix_range',
available at PgFoundry.




Re: [GENERAL] Column Privileges: NULL instead of permission denied

2011-09-20 Thread Harald Fuchs
In article df0c87d105b235419e2d9e5066cccf510b7...@gcmxbe02.dac.int,
Matthew Hawn matth...@donaanacounty.org writes:

 I have a table with privileged data that is restricted using column level
 permissions.  I would like to have a single query that returns data from
 the table.  If the user has permission, it should return the data but
 return NULL if the user does not have permission.  I do not want to
 create separate queries in my application for different users.

 Ex:

 Table people:  Name, SSN

 If I try:

 Select name, ssn from people;

 I get if the user does not have permission:

 **ERROR: permission denied for relation people**

 I would like to get:

 No Permission:

  Dave, NULL
  Bill, NULL
  Steve, NULL

 Permission:

  Dave, 456
  Bill, 789
  Steve, 123

The only thing I can think of is an ugly kludge:

-- Define the following function as a user with SELECT privilege

CREATE FUNCTION doselect(IN currusr text, OUT name text, OUT ssn int)
RETURNS SETOF RECORD AS $$
SELECT name,
   CASE has_column_privilege($1, 'people', 'ssn', 'SELECT')
   WHEN true THEN ssn
   ELSE NULL END AS ssn
FROM people
$$ LANGUAGE sql SECURITY DEFINER;

CREATE VIEW people_view AS
SELECT * FROM doselect(current_user);

-- The following query will do what you want

SELECT * FROM people_view;




Re: [GENERAL] PostgreSQL 9.1.0 bug?

2011-09-16 Thread Harald Fuchs
In article 21641.1316159...@sss.pgh.pa.us,
Tom Lane t...@sss.pgh.pa.us writes:

 Harald Fuchs hari.fu...@gmail.com writes:
 I think I've found a bug either in PostgreSQL 9.1.0 or in the ip4r
 package (version 1.05).

 Good catch --- gistendscan is forgetting to free so->giststate.

Confirmed - adding a pfree(so->giststate) solves my problem.  Thanks
for the quick fix!




Re: [GENERAL] Bit datatype performance?

2011-09-15 Thread Harald Fuchs
In article CAPHN3JX1YNxnGsu3q5A0wGqMMwjXMcmu8LnZ72jepE2A=t2...@mail.gmail.com,
Antonio Vieiro anto...@antonioshome.net writes:

 Hi all,
 One of my entities 'E' may be 'tagged' with an arbitrary set of 256 tags 'T'.

 A first approach could be to add a M:N relationship between 'E' and 'T'.

 A second way to do this could be to add a BIT(256) datatype to 'E',
 setting bits to '1' if the entity is tagged with each one of the 256
 tags (i.e. using a 'bitmask' on the set of tags).

 Since querying entities 'E' with a certain set of tags 'T' must be
 very fast I was wondering if the second approach would be faster. What
 do you think?

I think the best way is to put the tags into an hstore column.  With a
GiST index on that column, access is very fast.
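
A minimal sketch (made-up names; assumes the hstore contrib module is
installed):

  CREATE TABLE entity (
    id serial NOT NULL,
    tags hstore NOT NULL DEFAULT '',
    PRIMARY KEY (id)
  );

  CREATE INDEX entity_tags_ix ON entity USING gist (tags);

  -- entities carrying both tag1 and tag7
  SELECT id FROM entity WHERE tags ?& ARRAY['tag1', 'tag7'];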




[GENERAL] PostgreSQL 9.1.0 bug?

2011-09-15 Thread Harald Fuchs
I think I've found a bug either in PostgreSQL 9.1.0 or in the ip4r
package (version 1.05).  Since the problematic behavior occurs on
different tables and on different servers, it should be relatively easy
to reproduce:

CREATE TABLE foo (
  id serial NOT NULL,
  range ip4r NOT NULL,
  PRIMARY KEY (id)
);

Now populate this table with 2M rows where the range values don't
overlap.  Ensure this by doing

ALTER TABLE foo
ADD CONSTRAINT foo_range_ix
EXCLUDE USING gist (range WITH &&);

After a while PostgreSQL will begin to eat all your main memory and then crash.




Re: [GENERAL] join between a table and function.

2011-08-16 Thread Harald Fuchs
In article cakwofmjwz3znxcj9radn4ov+krsa-133968yvag3l8m3w3z...@mail.gmail.com,
Lauri Kajan lauri.ka...@gmail.com writes:

 I have also tried:
 select
 *, getAttributes(a.id)
 from
   myTable a

 That almost works. I'll get all the fields from myTable, but only
 one field from my function of type attributes.
 myTable.id | myTable.name | getAttributes
 integer  | character| attributes
 123   | record name | (10,20)

 What is the right way of doing this?

If you want the attribute parts as extra columns, use

SELECT *, (getAttributes(a.id)).* FROM myTable a




Re: [GENERAL] hstore installed in a separate schema

2011-08-04 Thread Harald Fuchs
In article 1312401318.5199.yahoomailclas...@web120108.mail.ne1.yahoo.com,
Ioana Danes ioanasoftw...@yahoo.ca writes:

 Hi,
 I am planning to use the contrib module hstore,
 but I would like to install it in a separate schema, not public,
 and include the schema in the search_path.
 Do you know if there are any issues with this scenario?
 In the hstore.sql script I see it forces it into public:

 -- Adjust this setting to control where the objects get created.
 SET search_path = public;
 
 I did some tests for my use cases and it all works fine...

That's the way I use any contrib module because it doesn't mix my own
stuff with the module's stuff.  With an

  ALTER DATABASE mydb SET search_path = public, hstore

all that works transparently.




Re: [GENERAL] Difference between inet and cidr

2011-07-05 Thread Harald Fuchs
In article cabrt9rar2bfrxdx93h_aeqskmuchmwursfenp8itspexsws...@mail.gmail.com,
Marti Raudsepp ma...@juffo.org writes:

 Hi,
 On Tue, Jul 5, 2011 at 09:50, Yan Cheng CHEOK ycch...@yahoo.com wrote:
 The essential difference between inet and cidr data types is that inet 
 accepts values with nonzero bits to the right of the netmask, whereas cidr 
 does not.

 Say, if you have a /8 netmask, the 'cidr' type requires that all the
 24 rightmost bits are zero. inet does not have this requirement.

 E.g:
 db=# select '255.0.0.0/8'::cidr;
 255.0.0.0/8

 db=# select '255.1.0.0/8'::cidr;
 ERROR:  invalid cidr value: 255.1.0.0/8
 DETAIL:  Value has bits set to right of mask.

 And inet allows this:
 db=# select '255.1.0.0/8'::inet;
 255.1.0.0/8

 Hope that helps.

Do the inet/cidr types have any advantage over the ip4r contrib module?




Re: [GENERAL] Select from Java Strings

2011-07-04 Thread Harald Fuchs
In article 4e116e11.1030...@gmail.com,
Daron Ryan daron.r...@gmail.com writes:

 Hello David,
 This is a simplified version of my own attempt:

 SELECT *
 FROM (oxford, webster)
 WHERE NOT ( columnName = ANY (SELECT name FROM dictionaries))

 The idea is that oxford and webster come from the Java program and end up
 in the column called columnName.

If the list is not very long, you could use a VALUES expression:

SELECT g.x
FROM (VALUES ('oxford'), ('webster')) AS g(x)
WHERE g.x NOT IN (SELECT name FROM dictionaries)




Re: [GENERAL] Adding a default value to a column after it exists

2011-04-14 Thread Harald Fuchs
In article 20110413163120.gu24...@shinkuro.com,
Andrew Sullivan a...@crankycanuck.ca writes:

 On Wed, Apr 13, 2011 at 09:21:20AM -0700, Gauthier, Dave wrote:
 Is there a way to add a default value definition to an existing column?  
 Something like an alter table... alter column... default 'foo'.

 ALTER TABLE table ALTER [ COLUMN ] column SET DEFAULT expression

 (see http://www.postgresql.org/docs/9.0/interactive/sql-altertable.html)

 Note that this doesn't actually update the fields that are NULL in the
 column already.  For that, once you had the default in place, you
 could do

 UPDATE table SET column = DEFAULT WHERE column IS NULL

And you probably want to do
  ALTER TABLE table ALTER [ COLUMN ] column SET NOT NULL
after that.




Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model

2010-10-24 Thread Harald Fuchs
In article 20101022161331.gd9...@frohike.homeunix.org,
Peter Bex peter@xs4all.nl writes:

 As far as I can see, this would imply either creating views on the
 whatever for every user (or company?), or manually crafting queries
 to do the same.

Not necessarily. Consider this:

  CREATE TABLE t1 (
id serial NOT NULL,
val int NOT NULL,
usr text NOT NULL,
PRIMARY KEY (id)
  );

  COPY t1 (val, usr) FROM stdin;
  1 u1
  2 u1
  3 u2
  4 u3
  \.

  CREATE VIEW t1v (id, val) AS
  SELECT id, val
  FROM t1
  WHERE usr = current_user;

Now user u1 can only see rows 1 and 2.
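
Side note (an assumption on my part): for this to act as access control,
the application roles would get SELECT on the view only, not on the
underlying table, e.g.

  REVOKE ALL ON t1 FROM PUBLIC;
  GRANT SELECT ON t1v TO u1, u2, u3;

since the view reads t1 with its owner's privileges.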




Re: [GENERAL] Constraint: string length must be 32 chars

2010-10-17 Thread Harald Fuchs
In article 4cba2bc4.9030...@darrenduncan.net,
Darren Duncan dar...@darrenduncan.net writes:

 I would further recommend turning the above into a separate data type,
 especially if you'd otherwise be using that constraint in several
 places, like this ...

FWIW, the shatypes contrib package includes a binary md5 datatype.




Re: [GENERAL] Prefix LIKE search and indexes issue.

2010-07-25 Thread Harald Fuchs
In article aanlktims+x5bpfaxf+9_cohiaga7=b_npn=hw99kg...@mail.gmail.com,
Marcelo de Moraes Serpa celose...@gmail.com writes:

 Some good souls hinted me at the prefix extension, but
 how would I use it?

Like this:

CREATE TABLE users (
  id serial NOT NULL,
  name text NOT NULL,
  reversed_domain prefix_range NULL,
  PRIMARY KEY (id)
);

CREATE INDEX users_dom_ix ON users USING gist (reversed_domain);

SELECT id, name, reversed_domain
FROM users
WHERE reversed_domain @> 'com.app.mycompany';




Re: [GENERAL] index scan and functions

2010-07-20 Thread Harald Fuchs
In article 20100719162547.ga17...@localhost,
arno a...@renevier.net writes:

 Thanks, that's exactly what I was looking for.

No, I'd say you're looking for the ip4r package which provides
an indexable IP address range type.




Re: [GENERAL] Some insight on the proper SQL would be appreciated

2010-06-09 Thread Harald Fuchs
In article 4c0f4ba8.3040...@gmail.com,
Ognjen Blagojevic ognjen.d.blagoje...@gmail.com writes:

 Plenty of solutions here:
 http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/

This doesn't mention the incredibly powerful windowing functions of
PostgreSQL >= 8.4.0:

  SELECT username, firstname, lastname, signedup
  FROM (
  SELECT username, firstname, lastname, signedup,
 row_number() OVER (PARTITION BY username ORDER BY signedup)
  FROM mytbl
) dummy
  WHERE row_number <= 5




Re: [GENERAL] Ltree - how to sort nodes on parent node

2010-04-20 Thread Harald Fuchs
In article 59670b22-30cb-4e6e-83c8-c1d1036c9...@solfertje.student.utwente.nl,
Alban Hertroys dal...@solfertje.student.utwente.nl writes:

 2). Drop the ltree column and go with a truly recursive approach, something 
 like this:

 CREATE TABLE node (
   category    text    NOT NULL PRIMARY KEY,
   sort_order  int     NOT NULL,
   parent      text    REFERENCES tree (category)
                       ON UPDATE CASCADE
                       ON DELETE CASCADE
 );

 WITH RECURSIVE tree AS (
   SELECT *
 FROM node
WHERE parent IS NULL

   UNION ALL

   SELECT node.*
 FROM tree, node
WHERE node.parent = tree.category
ORDER BY sort_order
 )
 SELECT * FROM tree;

Here's a working version:

  WITH RECURSIVE tree (path, category, sort_order, parent) AS (
SELECT category, category, sort_order::text, parent
FROM node
WHERE parent IS NULL
  UNION ALL
SELECT t.path || '.' || n.category,
   n.category,
   t.sort_order || '.' || n.sort_order,
   n.parent
FROM tree t
JOIN node n ON n.parent = t.category
  )
  SELECT path
  FROM tree
  ORDER BY sort_order




Re: [GENERAL] Ltree - how to sort nodes on parent node

2010-04-20 Thread Harald Fuchs
In article 1f96e061-713c-4929-a7d9-278e5b608...@solfertje.student.utwente.nl,
Alban Hertroys dal...@solfertje.student.utwente.nl writes:

 On 20 Apr 2010, at 18:05, Harald Fuchs wrote:
 Here's a working version:
 
 WITH RECURSIVE tree (path, category, sort_order, parent) AS (
 SELECT category, category, sort_order::text, parent
 FROM node
 WHERE parent IS NULL
 UNION ALL
 SELECT t.path || '.' || n.category,
 n.category,
 t.sort_order || '.' || n.sort_order,
 n.parent
 FROM tree t
 JOIN node n ON n.parent = t.category
 )
 SELECT path
 FROM tree
 ORDER BY sort_order

 May be, but then you're just re-inventing ltree again.

Not quite - with proper normalization you're storing the path elements
only once and creating the ltree-style paths on the fly.

 I'm pretty sure this must be possible without adding convoluted
 things like casting sort orders to text (which can for example cause
 issues like '10' ending up between '1' and '2').

Ah, you're right.  I think _some_ convolution is still needed because
we must remember the sort order for each path element.

 Since this is 8.4 anyway (CTE's after all), can't the sorting be
 done using a windowing function or something? We have recursion now,
 there's got to be a proper solution, I just can't get my mind around
 it right now.

I don't think windowing functions will help here.  Anyway, here's a
complete example which also deals with the 1/10/2 issue you mentioned
above:

CREATE TABLE node (
  id serial NOT NULL,
  category text NOT NULL,
  sort_order int NOT NULL,
  parent int NULL REFERENCES node (id),
  PRIMARY KEY (id)
);

CREATE UNIQUE INDEX node_pc_uq ON node (parent, category);

-- Enforce unambiguous sorting
CREATE UNIQUE INDEX node_ps_uq ON node (parent, sort_order);

COPY node (id, category, sort_order, parent) FROM stdin;
1   Top         1   \N
2   Science     1   1
3   Physics     1   2
4   Chemistry   2   2
5   Biology     3   2
6   History     4   2
7   Archeology  5   2
8   Hobby       2   1
9   Fishing     1   8
10  Football    2   8
11  Swimming    3   8
12  Climbing    4   8
13  Colors      3   1
14  Black       1   13
15  Red         2   13
16  Blue        3   13
17  Gray        4   13
18  Purple      5   13
19  Brown       6   13
\.

WITH RECURSIVE tree (path, id, sort_order, parent) AS (
  SELECT category, id, ARRAY[sort_order], parent
  FROM node
  WHERE parent IS NULL
UNION ALL
  SELECT t.path || '.' || n.category, n.id,
 t.sort_order || n.sort_order,
 n.parent
  FROM tree t
  JOIN node n ON n.parent = t.id
)
SELECT path, id, sort_order, parent
FROM tree
ORDER BY sort_order;




[GENERAL] Large Object leakage

2010-04-01 Thread Harald Fuchs
I have a DB (mydb) where one table (mytbl) contains a large object
column.  The contents are managed by contrib/lo.  This breaks when I
want to copy the DB to another host where the schema is already
present with some old contents:  when I do

  pg_dump -c mydb | psql -q -h otherhost mydb -f -

pg_dump emits a DROP TABLE mytbl which leaves the old lo contents on
otherhost orphaned and, even worse, raises an exception if pg_dump
wants to create a large object with an id already present.  I thought
about a TRUNCATE TRIGGER which could make the appropriate lo_unlink
calls, but this trigger won't be called by DROP TABLE.

Any ideas?




Re: [GENERAL] plPgSQL + CDIR/INET types ...

2010-03-31 Thread Harald Fuchs
In article alpine.bsf.2.00.1003302306490.97...@hub.org,
Marc G. Fournier scra...@hub.org writes:

 Has anyone either played with, or gotten to work, a plPgSQL function
 that would take:  192.168.1.1/24 and determine the start and end IP
 from that? Or even start IP + # of IPs in the subnet?

Just install ip4r from pgfoundry, and you'll never look back at the old
cidr/inet types.




Re: [GENERAL] How to perform text merge

2010-03-29 Thread Harald Fuchs
In article 609bf3ce079445569fc0d047a5c81...@andrusnotebook,
Andrus kobrule...@hot.ee writes:

 Database column contains merge data in text column.
 Expressions are between  and  separators.
 How to replace them with database values ?

 For example, code below should return:

 Hello Tom Lane!

 How to implement textmerge procedure or other idea ?

 Andrus.

 create temp table person ( firstname text, lastname text ) on commit drop;
 insert into person values ('Tom', 'Lane');
 create temp table mergedata ( template text ) on commit drop;
 insert into mergedata values ('Hello firstname||'' ''||lastname!');

 select textmerge(template,'select * from person') from mergedata;

Here's a quick shot:

CREATE FUNCTION textmerge(tpl text, query text) RETURNS text AS $$
DECLARE
  pref text = substring(tpl FROM '(.*)');
  expr text = substring(tpl FROM '(.+)');
  post text = substring(tpl FROM '(.*)');
  tmp1 text = regexp_replace(query, E'\\*', expr);
  tmp2 text;
BEGIN
  EXECUTE tmp1 INTO tmp2;
  RETURN pref || tmp2 || post;
END;
$$ LANGUAGE plpgsql IMMUTABLE;




[GENERAL] regexp_replace puzzle

2010-03-10 Thread Harald Fuchs
I've got a problem with regexp_replace which I could reduce to the following:

  CREATE FUNCTION digest(text, text) RETURNS bytea
  LANGUAGE c IMMUTABLE STRICT
  AS '$libdir/pgcrypto', 'pg_digest';

  CREATE FUNCTION sha224enc(text) RETURNS text AS $$
  BEGIN
RAISE WARNING 'arg=»%«', $1;
RETURN encode(digest($1, 'sha224'), 'hex');
  END;
  $$ LANGUAGE plpgsql IMMUTABLE;

  CREATE TABLE t1 (
id serial NOT NULL,
val text NOT NULL,
PRIMARY KEY (id)
  );

  INSERT INTO t1 (val) VALUES ('d11');
  INSERT INTO t1 (val) VALUES ('xd22');
  INSERT INTO t1 (val) VALUES ('x d33');

  SELECT val,
 regexp_replace(val,
  '^(.*\W)?(C\d{7}|[DI]\d{6}|S\d{10})(\W.*)?$',
  '\1' || '»\2«='|| sha224enc('\2') || '\3', 'i')
  FROM t1
  WHERE val ~*
  '^(.*\W)?(C\d{7}|[DI]\d{6}|S\d{10})(\W.*)?$';

(I want to replace patterns within a string by their SHA-224 hash.)
However, when I run this example I get:

  WARNING:  arg=»\2«
    val  |                        regexp_replace
  -------+-----------------------------------------------------------------
   d11   | »d11«=da2c99be8170ce02b04ee7d90877ae9d86fea467abb973c74c708b27
   x d33 | x »d33«=da2c99be8170ce02b04ee7d90877ae9d86fea467abb973c74c708b27
  (2 rows)

i.e. the first '\2' gets properly expanded by the second paren match,
but the second '\2' doesn't get expanded.

What am I overlooking?




Re: [GENERAL] DROP column: documentation unclear

2010-03-09 Thread Harald Fuchs
In article 20100308213549.gb...@svana.org,
Martijn van Oosterhout klep...@svana.org writes:

 subsequent ... will store a null value would imply that deleted columns 
 will still take some place, while the space will be reclaimed ... would 
 suggest that new rows (insert or updates in mvcc) don't have the deleted 
 column anymore - I'm not quite sure how to interpret this.  What is pg 
 doing?

 What you're missing is that in postgres NULLs are stored as a bit in
 the header and there is no data. So in a sense NULLs take no space
 (well, one bit) which means both statements are true.

But if you already have eight nullable columns, the (maybe originally
non-null) column which has been dropped would cause the header to be
one byte larger, wouldn't it?




Re: [GENERAL] Large Objects: Sizeof and Deleting Unlinked LOs

2010-02-11 Thread Harald Fuchs
In article 4b72aeb3.4000...@selestial.com,
Howard Cole howardn...@selestial.com writes:

 Is there an SQL function to determine the size of a large object?

I'm using a pgsql helper function for that:


  CREATE FUNCTION lo_size(oid oid) RETURNS integer
  LANGUAGE plpgsql
  AS $$
  DECLARE
fd int;
res int;
i int;
  BEGIN
fd = lo_open(oid, 262144); -- INV_READ
    IF fd < 0 THEN
  RAISE EXCEPTION 'lo_open returns %', fd;
END IF;
res = lo_lseek(fd, 0, 2); -- SEEK_END
    IF res < 0 THEN
  RAISE EXCEPTION 'lo_lseek returns %', res;
END IF;
i = lo_close(fd);
    IF i < 0 THEN
  RAISE EXCEPTION 'lo_close returns %', i;
END IF;
RETURN res;
  END;
  $$;

 Also, can I safely delete all the large objects in
 pg_catalog.pg_largeobject? For example:

 select lo_unlink(loid) from (select distinct loid from
 pg_catalog.pg_largeobject) as loids where loid not in (select my_oid
 from my_only_table_that_uses_large_objects)

Did you have a look at contrib/lo?




Re: [GENERAL] Statement level triggers

2010-01-20 Thread Harald Fuchs
In article 4b5702b9.50...@postnewspapers.com.au,
Craig Ringer cr...@postnewspapers.com.au writes:

 What'd be the behavior of a (plpgsql) trigger function when called as
 a statement level trigger?
 Let's say that a statement will involve more than one row.
 The documentation (v8.4.2, 35.1. Overview of Trigger Behavior) says:
 
 Statement-level triggers do not currently have any way to examine the
 individual row(s) modified by the statement.

 It means you don't have NEW or OLD record-variables.

Other databases have NEW and/or OLD pseudo-tables for that.  My
suggestion about implementing that got turned down because, without a
primary key, you can't say which NEW and OLD rows belong to each
other.

Since tables often have a primary key I still think that this would be
an addition making statement-level triggers much more useful than they
are now.




Re: [GENERAL] regexp_matches() quantified-capturing-parentheses oddity

2009-12-08 Thread Harald Fuchs
In article 13289.1260290...@sss.pgh.pa.us,
Tom Lane t...@sss.pgh.pa.us writes:

 Julian Mehnle jul...@mehnle.net writes:
 So far, so good.  However, can someone please explain the following to me?
 wisu-dev=# SELECT regexp_matches('q...@foo@bar.zip', '([...@.]|[...@.]+)+', 
 'g');
 wisu-dev=# SELECT regexp_matches('q...@foo@bar.zip', 
 '([...@.]|[...@.]+){1,2}', 'g');
 wisu-dev=# SELECT regexp_matches('q...@foo@bar.zip', 
 '([...@.]|[...@.]+){1,3}', 'g');

 These might be a bug, but the behavior doesn't seem to me that it'd be
 terribly well defined in any case.  The function should be pulling the
 match to the parenthesized subexpression, but here that subexpression
 has got multiple matches --- which one would you expect to get?

Perl seems to always return the last one, but the last one is never just
'p' - so I also think that Julian has spotted a bug.




Re: [GENERAL] pgsql 'prefix' error

2009-11-25 Thread Harald Fuchs
In article 87tywid19x@hi-media-techno.com,
Dimitri Fontaine dfonta...@hi-media.com writes:

 The BTree opclass is not made to resist to overlapping data. Maybe in
 this case though we could say that 12 contains less elements than 1 so
 it's less than 1. Here's a test to redefine the pr_cmp() operator in
 term of that, as a patch against current CVS (which is 1.0.0). 

 Can you test with this version and maybe better data set?

Looks good.

 Note that as said earlier the indexing you need to speed up queries is
 the GiST one, but it could be you want the PK constraint noneless.

Indeed - I think it's a good thing to be able to prevent something
like

  INSERT INTO myrecords (record) VALUES ('12'), ('12');

Oh, here's another gripe: how about renaming README.txt to README.prefix
and removing TESTS.* from make install?




Re: [GENERAL] pgsql 'prefix' error

2009-11-24 Thread Harald Fuchs
In article 4b0bbc8e.6010...@indoakses-online.com,
Bino Oetomo b...@indoakses-online.com writes:

 I downloaded pgfoundry's prefix, postgresql-8.3-prefix_1.0.0-1_i386.deb
 I installed it using dpkg and ran prefix.sql.

 Create database .. named 'prefbino', and

 CREATE TABLE myrecords (
  record prefix_range NOT NULL,
  PRIMARY KEY (record)
 );


 Looks good, next
 I tried to create some records, but I got this error:
 ---START--
 prefbino=# COPY myrecords (record) FROM stdin;
 Enter data to be copied followed by a newline.
 End with a backslash and a period on a line by itself.
 1
 12
 123
 1234
 \.
 ERROR: duplicate key value violates unique constraint myrecords_pkey
 CONTEXT: COPY myrecords, line 2: 12

 ---STOP--

 Kindly please give me further enlightment

At least in prefix 1.0.0 unique indexes seem to be broken.  Just drop
the primary key and add a separate index:

CREATE INDEX myrecords_record_ix ON myrecords USING gist (record);




Re: [GENERAL] How is the right query for this condition ?

2009-11-23 Thread Harald Fuchs
In article 5a9699850911222009j272071fbi1dd0c40dfdf62...@mail.gmail.com,
Brian Modra epai...@googlemail.com writes:

 2009/11/23 Bino Oetomo b...@indoakses-online.com:
 Dear All
 
 Suppose I created a database with single table like this :
 --start--
 CREATE DATABASE bino;
 CREATE TABLE myrecords(record text);
 --end
 
 and I fill myrecords with this :
 --start--
 COPY myrecords (record) FROM stdin;
 1
 12
 123
 1234
 \.
 --end
 
 In my bash script, I have variable called 'vseek', that will be use for
 query parameter.
 How to query the table , for (i.e):
 
 a. If vseek = '127' , I want the result to be '12'
 b. if vseek = '123987' , I want the result to be '123'
 c. if vseek = '14789' , I want the result to be '1'
 
 Kindly please give me any enlightment

 You can use a plpgsql to do that e.g.

 create or replace function getMatchingRecord(vseek text)
 ...

For larger tables where an index search would be useful, check out
pgfoundry.org/projects/prefix:


CREATE TABLE myrecords (
  record prefix_range NOT NULL,
  PRIMARY KEY (record)
);

COPY myrecords (record) FROM stdin;
1
12
123
1234
\.

SELECT record
FROM myrecords
WHERE record @> '127'
ORDER BY length(record::text) DESC
LIMIT 1;

SELECT record
FROM myrecords
WHERE record @> '123987'
ORDER BY length(record::text) DESC
LIMIT 1;

SELECT record
FROM myrecords
WHERE record @> '14789'
ORDER BY length(record::text) DESC
LIMIT 1;




Re: [GENERAL] cidr data type question

2009-11-02 Thread Harald Fuchs
In article 1257149236.3426.9.ca...@localhost,
Vasiliy G Tolstov v.tols...@selfip.ru writes:

 Hello. 
 I have table with cidr data type column (table is white/black list of
 networks).

 Does it possible to query - is the some address contains in any cidr
 network in this table? (for all networks in the table do contains or
 equals
 http://www.postgresql.org/docs/8.4/interactive/functions-net.html
 )

If you mess with IP addresses, you should have a look at the ip4r
contrib module.  Since I discovered it, I've never looked back at
cidr/inet.
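
A sketch of that kind of lookup with ip4r (made-up table name; ip4r's GiST
opclass makes the containment test indexable):

  CREATE TABLE netlist (
    id serial NOT NULL,
    network ip4r NOT NULL,
    whitelisted boolean NOT NULL,
    PRIMARY KEY (id)
  );

  CREATE INDEX netlist_network_ix ON netlist USING gist (network);

  -- all list entries whose network contains the given address
  SELECT * FROM netlist WHERE network >>= '10.1.2.3'::ip4;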




Re: [GENERAL] How to match sets?

2009-09-13 Thread Harald Fuchs
In article c07f9bfd-5fc1-4b8b-ba87-c8bdc47d0...@solfertje.student.utwente.nl,
Alban Hertroys dal...@solfertje.student.utwente.nl writes:

 An example of the two sets I need to join are, at the left hand side:
  unit  | token | exponent
 ---+---+--
 m.s^-1 | m | 1
 m.s^-1 | s | -1
 m.s^-2 | m | 1
 m.s^-2 | s | -2

 And at the right hand side:
  token | exponent
 ---+--
  m | 1
  s | -2

 The goal of the query is to find which unit at the left hand side
 matches all the tokens and exponents at the right hand side, which
 would be 'm.s^-2' in the above example. The order in which the tokens
 are returned can be random, there isn't really a defined order as it
 doesn't change the meaning of a unit.

 I do have a possible solution using array_accum [1][2] on an ordered
 version (on unit,token,exponent) of these sets. It's not a pretty
 solution though, I'm not happy with it - it's a transformation (from a
 set to an array) where I feel none should be necessary. Isn't there a
 better solution?

Hm, how about a double negation, i.e. return all units except those
with a non-match?  In SQL:

SELECT t1.unit
FROM t1
EXCEPT
SELECT t1.unit
FROM t1
LEFT JOIN t2 ON t2.token = t1.token AND t2.exponent = t1.exponent
WHERE t2.token IS NULL




Re: [GENERAL] unique index for periods

2009-08-20 Thread Harald Fuchs
In article 20090820065819.ga2...@gheift.kawo1.rwth-aachen.de,
Gerhard Heift ml-postgresql-20081012-3...@gheift.de writes:

 Hello,
 I am trying to create a unique index for a (time) period, and my goal is to
 prevent two overlapping periods in a row.

 ...

 Is there another solution to solve my problem?

Have a look at http://pgfoundry.org/projects/temporal




Re: [GENERAL] Generating random unique alphanumeric IDs

2009-08-17 Thread Harald Fuchs
In article 20090816122526.gw5...@samason.me.uk,
Sam Mason s...@samason.me.uk writes:

 I've just had a look and PG does actually seem to be returning values as
 I'd expect, i.e. 0 <= n < 1.

That's what everyone would expect.  If it's really implemented like
that the documentation is wrong, isn't it?




Re: [GENERAL] Refer to another database

2009-08-04 Thread Harald Fuchs
In article 4a77c4af.2060...@gmx.de,
Andreas Kalsch andreaskal...@gmx.de writes:

 To be completely
 in context of a schema - so that I can use all tables without the
 prefix - I have to reset the search_path very often.

Why?  Just say ALTER DATABASE foo SET search_path = public, bar, baz
once and you're done.




Re: [GENERAL] Return LEFT JOINed tables when one has no matching column

2009-06-24 Thread Harald Fuchs
In article 4a425379.90...@alteeve.com,
Madison Kelly li...@alteeve.com writes:

 SELECT
   a.tbl1_name,
   b.tbl2_date,
   c.tbl3_value AS some_value
 FROM
   table_1 a
 LEFT JOIN
   table_2 b ON (a.tbl1_id=b.tbl2_tbl1_id)
 LEFT JOIN
   table_3 c ON (a.tbl1_id=c.tbl3_tbl1_id)
 WHERE
   c.tbl3_variable='some_variable'
 AND
   a.tbl1_id=123;

   I want the data from table_1 and table_2 to return and table_3 to
 return NULL when there is no matching
 c.tbl3_variable='some_variable'. Is this possible?

Move c.tbl3_variable='some_variable' from WHERE to c's ON clause.
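
A sketch of the rewritten query (same tables as above):

  SELECT
    a.tbl1_name,
    b.tbl2_date,
    c.tbl3_value AS some_value
  FROM
    table_1 a
  LEFT JOIN
    table_2 b ON (a.tbl1_id=b.tbl2_tbl1_id)
  LEFT JOIN
    table_3 c ON (a.tbl1_id=c.tbl3_tbl1_id AND c.tbl3_variable='some_variable')
  WHERE
    a.tbl1_id=123;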




Re: [GENERAL] WITH RECURSIVE clause -- all full and partial paths

2009-06-12 Thread Harald Fuchs
In article aaf543e90906120856r5219cf9cv7f13ba0d37494...@mail.gmail.com,
aryoo howar...@gmail.com writes:

 Dear list,
 In reference to the message below posted on the 'pgsql-hackers' list regarding
 'iterative' queries,
 could anyone help me write the queries that return all full and all partial
 paths from the root?

Probably you want to use the following query:

  WITH RECURSIVE subdepartment AS (
SELECT id, parent_department, name AS path
FROM department
WHERE name = 'A'
  UNION ALL
SELECT d.id, d.parent_department, sd.path || '.' || d.name
FROM department d
JOIN subdepartment sd ON sd.id = d.parent_department
  )
  SELECT id, path
  FROM subdepartment;

This returns:

 id |  path
----+---------
  1 | A
  2 | A.B
  3 | A.B.C
  4 | A.B.D
  6 | A.B.C.F
(5 rows)




Re: [GENERAL] Help with join syntax sought

2009-05-20 Thread Harald Fuchs
In article 43639.216.185.71.24.1242834374.squir...@webmail.harte-lyne.ca,
James B. Byrne byrn...@harte-lyne.ca writes:

 What I want to be able to do is to return the most recent rate for
 all unique rate-pairs, irrespective of type.  I also have the
 requirement to return the 5 most recent rates for each rate-pair,
 thus the HAVING count(*) = 1, which I thought would allow me to
 simply change the test to HAVING count(*) = 5 in that case.

The following queries seem to return what you want.

 Given this:

 CAD AUD 2009-05-19 16:15:00 NOON 1.146300
 CAD AUD 2009-05-19 20:40:00 CLSE 1.131200
 CAD AUD 2009-05-17 16:15:00 NOON 1.151300
 CAD AUD 2009-05-17 20:40:00 CLSE 1.141100
 CAD AUD 2009-05-16 16:15:00 NOON 1.143700
 CAD AUD 2009-05-16 20:40:00 CLSE 1.142100
 CAD USD 2009-05-19 16:15:00 NOON 0.864400
 CAD USD 2009-05-19 20:40:00 CLSE 0.843100

 What I want to see in the final output is

 CAD AUD 2009-05-19 20:40:00 CLSE 1.131200
 CAD USD 2009-05-19 20:40:00 CLSE 0.843100

  SELECT t1.currency_code_base AS c1, t1.currency_code_quote AS c2,
 t1.effective_from AS dt, t1.currency_exchange_type AS type,
 t1.currency_exchange_rate AS rate
  FROM currency_exchange_rates t1
  WHERE t1.effective_from = (
  SELECT max(t2.effective_from)
  FROM currency_exchange_rates t2
  WHERE t2.currency_code_base = t1.currency_code_base
AND t2.currency_code_quote = t1.currency_code_quote
)

 and if requested for the 5 most recent then I want to see this:

 CAD AUD 2009-05-19 20:40:00 CLSE 1.131200
 CAD AUD 2009-05-19 16:15:00 NOON 1.146300
 CAD AUD 2009-05-17 20:40:00 CLSE 1.141100
 CAD AUD 2009-05-17 16:15:00 NOON 1.151300
 CAD AUD 2009-05-16 20:40:00 CLSE 1.142100
 CAD USD 2009-05-19 20:40:00 CLSE 0.843100
 CAD USD 2009-05-19 16:15:00 NOON 0.864400

  SELECT t1.currency_code_base AS c1, t1.currency_code_quote AS c2,
 t1.effective_from AS dt, t1.currency_exchange_type AS type,
 t1.currency_exchange_rate AS rate
  FROM currency_exchange_rates t1
  WHERE (
  SELECT count(*)
  FROM currency_exchange_rates t2
  WHERE t2.currency_code_base = t1.currency_code_base
AND t2.currency_code_quote = t1.currency_code_quote
AND t2.effective_from >= t1.effective_from
) <= 5
  ORDER BY t1.currency_code_base, t1.currency_code_quote,
   t1.effective_from DESC

Both of them must touch all currency_code_base/currency_code_quote pairs;
maybe you can avoid that with a composite index.
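
For reference, a sketch of such a composite index (column names from the
queries above; whether it actually helps depends on the data):

  CREATE INDEX cer_pair_from_ix
  ON currency_exchange_rates (currency_code_base, currency_code_quote,
                              effective_from);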




Re: [GENERAL] [GENEAL] dynamically changing table

2009-03-31 Thread Harald Fuchs
In article 437faa9f-df2d-429e-9856-eb2026b55...@solfertje.student.utwente.nl,
Alban Hertroys dal...@solfertje.student.utwente.nl writes:

 On Mar 30, 2009, at 5:39 PM, A B wrote:
 Hi,
 In the next project I'm going to have a number of colums in my tables,
 but I don't know how many, they change. They all use integers as
 datatype though.. One day, I get 2 new columns, a week later I loose
 one column, and so on in a random pattern.


 Ignoring design implications (you got enough replies about that I
 think)...

 You could add the columns you're sure that you need and put the rest
 in an XML field.

<mantra>
If you have a problem and want to solve it using XML, you have two problems.
</mantra>

Why serialize the rest of the data into an XML field?  contrib/hstore
seems to accomplish the same, without the XML overhead.




Re: [GENERAL] Convert Arbitrary Table to Array?

2009-02-09 Thread Harald Fuchs
In article 17050.1234200...@sss.pgh.pa.us,
Tom Lane t...@sss.pgh.pa.us writes:

 Lee Hughes l...@hughesys.com writes:
 Hi, I need a function that accepts a table name and returns a 2-dimensional
 array of the table data.

 Well, in 8.3 and up there are arrays of composite types, so you can
 do something like

   select array(select mytable from mytable);

Interesting.  On 8.3.5 I tried

  CREATE TABLE mytable (
id serial NOT NULL,
mytable int NOT NULL,
PRIMARY KEY (id)
  );

  INSERT INTO mytable VALUES
(1, 10),
(2, 20),
(3, 30),
(4, 40),
(5, 50),
(6, 60);

  SELECT array(SELECT mytable FROM mytable);

and it returned

  {10,20,30,40,50,60}

Only when I renamed the second column from mytable to mytablex did I get

  {(1,10),(2,20),(3,30),(4,40),(5,50),(6,60)}

as you promised.  Is there any syntax for treating the first mytable
as a composite type name instead of a column name?




Re: [GENERAL] Array, bytea and large objects

2009-02-05 Thread Harald Fuchs
In article 4989e659.3000...@computer.org,
David Wall d.w...@computer.org writes:

 If I have an unlimited number of name-value pairs that I'd like to
 get easy access to for flexible reports, could I store these in two
 arrays (one for name, the other for value) in a table so that if I had
 10 name-value pairs or 200 name-value pairs, I could store these into
 a single row using arrays so I could retrieve all name-value pairs in
 a single SELECT from the db?

I would use the hstore contrib module for that.




Re: [GENERAL] Call volume query

2009-01-30 Thread Harald Fuchs
In article 1233269836.13476.10.ca...@ubuntu,
Mike Diehl mdi...@diehlnet.com writes:

 Hi all.
 I've encountered an SQL problem that I think is beyond my skills...

 I've got a table full of records relating to events (phone calls, in
 this case) and I need to find the largest number of events (calls)
 occurring at the same time.

 The table had a start timestamp and a duration field which contains the
 length of the call in seconds.

 I need to find out how many concurrent calls I supported, at peek
 volume.

 Can this be done in SQL?  Or do I need to write a perl script?

Try something like the following:

  CREATE TABLE calls (
id serial NOT NULL,
start timestamp(0) NOT NULL,
nsec int NOT NULL,
PRIMARY KEY (id)
  );

  COPY calls (start, nsec) FROM stdin;
  2009-01-30 10:09:00   10
  2009-01-30 10:10:00   10
  2009-01-30 10:10:02   10
  2009-01-30 10:10:04   10
  2009-01-30 10:10:06   10
  2009-01-30 10:10:08   10
  2009-01-30 10:10:10   10
  2009-01-30 10:10:12   10
  2009-01-30 10:11:00   10
  \.

  SELECT ts, count(c.id)
  FROM (
  SELECT (SELECT min(start) FROM calls) + s.a * interval '1 sec' AS ts
  FROM generate_series(0, (
 SELECT extract(epoch FROM (max(start + nsec * interval '1 sec') -
min(start)))::bigint
 FROM calls
   )) AS s(a)
) AS t
  LEFT JOIN calls c
ON t.ts BETWEEN c.start AND c.start + c.nsec * interval '1 sec'
  GROUP BY t.ts
  ORDER BY t.ts;

Here I use generate_series to create timestamp values for every second
of the table range and join them to the table itself to see how many
calls were active at this time.

You could simplify that somewhat by using the period datatype
available on PgFoundry.




Re: [GENERAL] selecting recs based on a tmp tbl vals that are wildcarded ?

2009-01-08 Thread Harald Fuchs
In article 
482e80323a35a54498b8b70ff2b8798003e5ac7...@azsmsx504.amr.corp.intel.com,
Gauthier, Dave dave.gauth...@intel.com writes:

 I have a temp table containing wildcarded strings and I want to select values
 from a different table using 'like' against all those wildcarded values.  Here's
 the example...

 create temporary table match_these (val varchar(32));

 insert into match_these (val) values ('jo%');

 insert into match_these (val) values ('%denn_');

 insert into match_these (val) values ('alt%');

 create table footable (name varchar(32));

 (insert a bunch of records)

 Now...

 select * from footable where name in (select val from match_these)

 ... won't work because 'in' implies equality.  I want something like...

 select * from footable where name like (select val from match_these)
 select * from footable where name like (select val from match_these)

Why don't you use a simple join?  Something like

  SELECT f.name
  FROM footable f
  JOIN match_these m ON f.name ~~ m.val




Re: [GENERAL] cumulative count

2008-12-04 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Carson Farmer [EMAIL PROTECTED] writes:

    date    |   user
 -----------+----------
  20050201  |  Bill
  20050210  |  Steve
  20050224  |  Sally
  20050311  |  Martha
  20050316  |  Ryan
  20050322  |  Phil
  20050330  |  William
  20050415  |  Mary
  20050428  |  Susan
  20050503  |  Jim

 and I want to run a query that returns a *count* of the number of
 users *each month*, ordered by year and *month*, with an additional
 column that is a *running total of the count*, as in:

 year  | month | count | run_count
-------+-------+-------+-----------
 2005  |  02   |   3   |     3
 2005  |  03   |   4   |     7
 2005  |  04   |   2   |     9
 2005  |  05   |   1   |    10

 I can get almost everything I want with:

 SELECT a.year, a.month, a.count, a.month_name, sum(b.count) AS total
 FROM (SELECT EXTRACT(year from added_date) AS year,
 EXTRACT(month FROM added_date) AS month,
 TO_CHAR(added_date, 'Month') AS month_name,
 COUNT(*) AS count FROM users_table GROUP BY 1, 2, 3) AS a,
 (SELECT EXTRACT(year FROM added_date) AS year,
 EXTRACT(month FROM added_date) AS month,
 TO_CHAR(added_date, 'Month') AS month_name,
 COUNT(*) AS count
 FROM users_table GROUP BY 1, 2, 3) AS b
 WHERE a.year = b.year AND a.month = b.month
 GROUP BY 1, 2, 3, 4
 ORDER BY a.year, a.month asc;

 but I can't quite figure out the running total of the count. The above
 example works right up to the end of the first year, then the values
 no longer make sense. My guess is it's something to do with my WHERE
 clause, but I can't think of a better way to do things.

Yes, your WHERE condition is the problem.  It should be
WHERE a.year > b.year OR a.year = b.year AND a.month >= b.month.

You could simplify the date logic by doing the year/month split later, e.g.

  CREATE TEMP TABLE tmp AS
  SELECT date_trunc('month', date) AS dt, count(*) AS count
  FROM users_table
  GROUP BY dt;

  SELECT extract(YEAR FROM t1.dt) AS year,
 extract(MONTH FROM t1.dt) AS month,
 t1.count,
 sum(t2.count) AS run_count
  FROM tmp t1
  LEFT JOIN tmp t2 ON t2.dt <= t1.dt
  GROUP BY year, month, t1.count
  ORDER BY year, month;

(AFAIK the WITH clause in PostgreSQL 8.4 would let you get rid of
the temp table.)
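
For reference, the same thing with a CTE instead of the temp table would
look something like this (an untested sketch):

  WITH tmp AS (
    SELECT date_trunc('month', date) AS dt, count(*) AS count
    FROM users_table
    GROUP BY dt
  )
  SELECT extract(YEAR FROM t1.dt) AS year,
         extract(MONTH FROM t1.dt) AS month,
         t1.count,
         sum(t2.count) AS run_count
  FROM tmp t1
  LEFT JOIN tmp t2 ON t2.dt <= t1.dt
  GROUP BY year, month, t1.count
  ORDER BY year, month;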




Re: [GENERAL] Redefining an existing Table Schema for Foreign Key Constraint - Question

2008-11-07 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Michelle Konzack [EMAIL PROTECTED] writes:

 Hello Harald,
 On 2008-11-03 13:41:52, Harald Fuchs wrote:
 In article [EMAIL PROTECTED],
 Brian714 [EMAIL PROTECTED] writes:
  Customers Table
  id:integer -- primary key
  first_name:varchar(50)
  last_name:varchar(50)
  cc_id:integer references Creditcards.id
  address:varchar(200)
  email:varchar(50)
  password:varchar(20)
 This is the usual 1:n relationship, but I think you got it backwards.
 There are two questions to ask:
 1. Are there customers with more than one credit card?

 This could be a problem for the above table...

 2. Are there credit cards owned by more than one customer?

 Credit cards are personal, and sometimes (in France) I need an ID card to
 prove that I am the holder...

 So how can one credit card have more than one owner?

That's exactly why I told you I think you got it backwards.
You need a cust_id column in your CreditCards table, not a cc_id
column in your Customers table.
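
A minimal sketch of that layout (details are illustrative, column list
shortened):

  CREATE TABLE customers (
    id serial NOT NULL,
    first_name varchar(50),
    last_name varchar(50),
    PRIMARY KEY (id)
  );

  CREATE TABLE creditcards (
    id serial NOT NULL,
    cust_id int NOT NULL REFERENCES customers (id),  -- one customer, many cards
    card_number varchar(20) NOT NULL,
    PRIMARY KEY (id)
  );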




Re: [GENERAL] Redefining an existing Table Schema for Foreign Key Constraint - Question

2008-11-03 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Brian714 [EMAIL PROTECTED] writes:

 Currently, the database contains thousands of records in the Customers and
 Creditcards tables. I would like to re-define the Customers table to follow
 the following schema:

 Customers Table
 id:integer -- primary key
 first_name:varchar(50)
 last_name:varchar(50)
 cc_id:integer references Creditcards.id
 address:varchar(200)
 email:varchar(50)
 password:varchar(20)

 As you can see, I would like to convert the column that used to be
 credit_card_number from the Customers table and turn it into a cc_id
 which is an integer that references the column id from the table
 Creditcards.

This is the usual 1:n relationship, but I think you got it backwards.
There are two questions to ask:
1. Are there customers with more than one credit card?
2. Are there credit cards owned by more than one customer?
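
If both answers should turn out to be yes, you'd need a separate link
table instead; a sketch (names invented):

  CREATE TABLE customer_creditcards (
    cust_id integer NOT NULL REFERENCES customers (id),
    cc_id   integer NOT NULL REFERENCES creditcards (id),
    PRIMARY KEY (cust_id, cc_id)
  );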


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query m:n-Combination

2008-10-24 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Ludwig Kniprath [EMAIL PROTECTED] writes:

 Dear list,
 I have to solve a simple Problem, explained below with some sample-Data.

 A typical M:N-constellation, rivers in one table, communities in the
 other table, m:n-join-informations (which river is running in which
 community) in a third table.

 Table rivers:
 R_ID  R_Name
 1 river_1
 2 river_2
 3 river_3
 4 river_4
 5 river_5

 Table communities :
 C_ID   C_Name
 1  community_1
 2  community_2
 3  community_3
 4  community_4
 5  community_5

 Join-table
 mn_2_r_id   mn_2_c_id
 1   1
 1   2
 1   3
 1   4
 2   1
 3   2
 3   5
 4   3
 ...

 (in real database this relation is an gis-relation with thousands of
 rivers and countries, related by spatial join, but the problem is the
 same...)

 I want to know, which river is running through communities 1,2,3 *and* 4?
 You can see the solution by just looking at the data above (only
 river_1 is running through all these countries), but how to query
 this by sql?

Probably the fastest way is to do an OR join and count the matches:

  SELECT r.r_name
  FROM rivers r
  JOIN join_table j ON j.mn2_r_id = r.r_id
  JOIN communities c ON c.c_id = j.mn2_c_id
  WHERE c.c_name IN ('community_1', 'community_2',
 'community_3', 'community_4')
  GROUP BY r.r_name
  HAVING count(*) = 4
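
If the join table can contain the same river/community pair more than
once (not unlikely with a spatial join), counting distinct communities
is the safer variant:

  HAVING count(DISTINCT c.c_id) = 4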


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] OR or IN ?

2008-10-14 Thread Harald Fuchs
In article [EMAIL PROTECTED],
A. Kretschmer [EMAIL PROTECTED] writes:

 am  Tue, dem 14.10.2008, um  8:33:21 +0200 mailte Luca Ferrari folgendes:
 Hi all,
 I've got a query with a long (50) list of ORs, like the following:
 
 SELECT colB, colC FROM table WHERE colA=X OR colA=Y OR colA=Z OR 
 
 Is there any difference in how postgresql manages the above query and the 
 following one?
 
 SELECT colB, colC FROM table WHERE colA IN (X,Y,Z,...)

 Depends on the version: (same table foo)


 8.1:
 test=*# explain select * from foo where a in (1,2,3);
  QUERY PLAN
 -
  Seq Scan on foo  (cost=0.00..47.45 rows=32 width=4)
Filter: ((a = 1) OR (a = 2) OR (a = 3))


 8.4devel:
 test=# explain select * from foo where a in (1,2,3);
  QUERY PLAN
 -
  Seq Scan on foo  (cost=0.00..43.00 rows=36 width=4)
Filter: (a = ANY ('{1,2,3}'::integer[]))


 As you can see, 8.1 rewrite the query to many ORs.

I think that OR or IN is the wrong question.  Where do those 50
values come from?  If they come from a DB operation, just include this
operation in your query.  Otherwise, you should ask yourself how many
values you might get: 50, 500, 5000?  There's a point where it's
probably more efficient to COPY all those values into an indexed
temporary table, ANALYZE it, and JOIN it into your query.
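
A rough sketch of that last variant (table, column and file names are
only placeholders):

  CREATE TEMP TABLE wanted (cola int PRIMARY KEY);

  -- server-side file; use \copy in psql for a client-side file
  COPY wanted FROM '/tmp/wanted_values.txt';

  ANALYZE wanted;

  SELECT t.colb, t.colc
  FROM mytable t
  JOIN wanted w ON w.cola = t.cola;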


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Converting string to IN query

2008-09-17 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Andrus [EMAIL PROTECTED] writes:

 I found that following query works:
 create temp table test ( test int ) on commit drop;
 insert into test values(1);
 select * from test where test = ANY ( '{1,2}' );

 Is this best solution ?

 Will it work without causing stack overflow with 8.2 server  default
 settings
 if string contains some thousands numbers ?

If you get thousands of numbers, it is safer and maybe also faster to
put them into a temporary table, analyze it, and then join it to the
table in question.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql scripting tutorials

2008-09-11 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Greg Smith [EMAIL PROTECTED] writes:

 On Tue, 9 Sep 2008, Artacus wrote:
 Can psql access environmental variables or command line params?

 $ cat test.sql
 select :TEST as input;
 $ psql -v TEST=16 -f test.sql
  input
 ---
 16
 (1 row)

Nice trick, but when I try the following variant:

  psql -v TEST=16 -c 'select :TEST as input'

I get

  ERROR:  syntax error at or near :
  LINE 1: select :TEST as input

This seems to be contrary to the psql manual page:

 These assignments are done during a very early stage of start-up...


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A challenge for the SQL gurus out there...

2008-09-07 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Uwe C. Schroeder [EMAIL PROTECTED] writes:

 or maybe not and I'm just not getting it.
 So here's the scenario:

 I have 3 tables

 forum: with primary key id
 forum_thread: again primary key id and a foreign key forum_id referencing 
 th primary key of the forum table
 forum_post: again primary key id with a forign key thread_id referencing 
 the primary key of the forum_thread table

 The forum_post table also has a field date_posted (timestamp) with an index 
 on it.


 What I need is an efficient way to create overviews (just think about a forum)
 I.e. the forum table has 3 records, one for each forum category

 I want to get a list looking like

 forum idthread_id post_id
 1 6   443
 2 9   123
 3 3   557

 The trick is, that I need the latest post (by the date posted column) for 
 each 
 category (speak forum_id). Due to the keys the forum_thread table has to be 
 involved.

 I've been thinking about this for hours now, but I just can't come up with a 
 query that will give me 3 records, one for each category showing the latest 
 post.

Try something like this:

  SELECT t1.forum_id, p1.thread_id, p1.id AS post_id, p1.date_posted
  FROM forum f1
  JOIN forum_thread t1 ON t1.forum_id = f1.id
  JOIN forum_post p1 ON p1.thread_id = t1.id
  LEFT JOIN (
  SELECT t2.forum_id, p2.thread_id, p2.date_posted
  FROM forum_thread t2
  JOIN forum_post p2 ON p2.thread_id = t2.id
      ) AS f2 ON f2.forum_id = f1.id AND f2.date_posted > p1.date_posted
  WHERE f2.forum_id IS NULL
  ORDER BY t1.forum_id


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] generate_series woes

2008-04-16 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Merlin Moncure [EMAIL PROTECTED] writes:

 On Mon, Apr 14, 2008 at 5:21 AM, Harald Fuchs [EMAIL PROTECTED] wrote:
 I think there's something sub-optimal with generate_series.
 In the following, documents is a table with more than 12 rows,
 vacuumed and analyzed before the queries.

 everything is working exactly as intended.  while it's obvious to you
 that the generate series function returns a particular number of rows
 based on your supplied inputs, it's not (yet) obvious to the planner.

Which was exactly my point.  Since generate_series is a builtin
function, the planner could theoretically know the number of rows
returned, thus choosing a better plan.

OTOH, the difference between theory and reality is in theory smaller
than in reality.

 your genser function supplies the hint the planner needs and it
 adjusts the plan.  most set returning functions (particularly
 non-immutable ones) are not so easy to determine the # of rows from
 the input parameters anyways.

Yes, of course.  I used genser just to show that there is a better plan.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] generate_series woes

2008-04-14 Thread Harald Fuchs
I think there's something sub-optimal with generate_series.
In the following, documents is a table with more than 12 rows,
vacuumed and analyzed before the queries.

  EXPLAIN ANALYZE
  SELECT count (d.id), floor (s.val / 5000)
  FROM generate_series (1::INT, 5009) AS s (val)
  LEFT JOIN documents d ON d.id = s.val
  GROUP BY 2
  ORDER BY 2;

This returns:

 Sort  (cost=4231.52..4232.02 rows=200 width=8) (actual time=41.886..41.887 
rows=2 loops=1)
   Sort Key: (floor(((s.val / 5000))::double precision))
   Sort Method:  quicksort  Memory: 25kB
   ->  HashAggregate  (cost=4219.88..4223.88 rows=200 width=8) (actual 
time=41.843..41.846 rows=2 loops=1)
 ->  Nested Loop Left Join  (cost=0.00..4214.88 rows=1000 width=8) 
(actual time=1.274..38.193 rows=5009 loops=1)
   ->  Function Scan on generate_series s  (cost=0.00..12.50 
rows=1000 width=4) (actual time=1.209..3.102 rows=5009 loops=1)
   ->  Index Scan using documents_pkey on documents d  
(cost=0.00..4.18 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=5009)
 Index Cond: (d.id = s.val)
 Total runtime: 42.218 ms

Now let's wrap generate_series into an SQL function:

  CREATE FUNCTION genser (int, int) RETURNS SETOF int AS $$
SELECT * FROM generate_series ($1, $2) AS g(x);
  $$ LANGUAGE sql;

  EXPLAIN ANALYZE
  SELECT count (d.id), floor (s.val / 5000)
  FROM genser (1::INT, 5009) AS s (val)
  LEFT JOIN documents d ON d.id = s.val
  GROUP BY 2
  ORDER BY 2;

Not surprisingly, this returns the same plan:

 Sort  (cost=4479.02..4479.52 rows=200 width=8) (actual time=43.606..43.607 
rows=2 loops=1)
   Sort Key: (floor(((s.val / 5000))::double precision))
   Sort Method:  quicksort  Memory: 25kB
   ->  HashAggregate  (cost=4467.38..4471.38 rows=200 width=8) (actual 
time=43.559..43.561 rows=2 loops=1)
 ->  Nested Loop Left Join  (cost=0.00..4462.38 rows=1000 width=8) 
(actual time=3.564..39.740 rows=5009 loops=1)
   ->  Function Scan on genser s  (cost=0.00..260.00 rows=1000 
width=4) (actual time=3.503..5.435 rows=5009 loops=1)
   ->  Index Scan using documents_pkey on documents d  
(cost=0.00..4.18 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=5009)
 Index Cond: (d.id = s.val)
 Total runtime: 44.047 ms
(9 rows)

But look what happens if we tell PostgreSQL how many rows genser
will return:

  CREATE FUNCTION genser (int, int) RETURNS SETOF int AS $$
SELECT * FROM generate_series ($1, $2) AS g(x);
  $$ LANGUAGE sql ROWS 5009;

  EXPLAIN ANALYZE
  SELECT count (d.id), floor (s.val / 5000)
  FROM genser (1::INT, 5009) AS s (val)
  LEFT JOIN documents d ON d.id = s.val
  GROUP BY 2
  ORDER BY 2;

Now we get a better plan:

 Sort  (cost=15545.54..15546.04 rows=200 width=8) (actual time=27.857..27.859 
rows=2 loops=1)
   Sort Key: (floor(((s.val / 5000))::double precision))
   Sort Method:  quicksort  Memory: 25kB
   ->  HashAggregate  (cost=15533.89..15537.89 rows=200 width=8) (actual 
time=27.817..27.819 rows=2 loops=1)
 ->  Merge Right Join  (cost=1610.15..15508.85 rows=5009 width=8) 
(actual time=7.714..24.133 rows=5009 loops=1)
   Merge Cond: (d.id = s.val)
   ->  Index Scan using documents_pkey on documents d  
(cost=0.00..13472.20 rows=125518 width=4) (actual time=0.045..6.112 rows=5010 
loops=1)
   ->  Sort  (cost=1610.15..1622.67 rows=5009 width=4) (actual 
time=7.651..9.501 rows=5009 loops=1)
 Sort Key: s.val
 Sort Method:  quicksort  Memory: 427kB
 ->  Function Scan on genser s  (cost=0.00..1302.34 
rows=5009 width=4) (actual time=3.559..5.262 rows=5009 loops=1)
 Total runtime: 28.445 ms
(12 rows)

Since generate_series is a builtin function, can't it tell how many
rows it will return?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Numbering rows by date

2008-04-06 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Andrus [EMAIL PROTECTED] writes:

 I have table
 create Document ( docdate date, docorder integer )

 I need update docorder column with numbers 1,2 in docdate date order
 Something like

 i = 1;
 UPDATE Document SET docorder = i++
   ORDER BY docdate;


 How to do this is PostgreSQL 8.2 ?

I don't think you can avoid a temp table:

CREATE TEMP TABLE tmp (
  docdate date,
  docorder serial
);

INSERT INTO tmp (docdate)
SELECT docdate
FROM documents
ORDER BY docdate;

UPDATE documents d
SET docorder = t.docorder
FROM tmp t
WHERE d.docdate = t.docdate;
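
(Starting with 8.4 a window function would let you skip the temp
table; an untested sketch, with the same assumption as above that
docdate is unique:)

  UPDATE documents d
  SET docorder = t.rn
  FROM (SELECT docdate,
               row_number() OVER (ORDER BY docdate) AS rn
        FROM documents) t
  WHERE d.docdate = t.docdate;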


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with planner choosing nested loop

2008-04-02 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Rodrigo E. De León Plicet [EMAIL PROTECTED] writes:

 On Wed, Apr 2, 2008 at 12:36 PM, Alex Solovey [EMAIL PROTECTED] wrote:
 ... I have no idea how it could be fixed.

 - CREATE INDEX xifoo ON foo(bar_id);
 - ANALYZE;
 - Retry.

A compound index
  CREATE INDEX xifoo2 ON foo (foo_a, bar_id)
might be more worthwhile.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)

2008-03-27 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Tom Lane [EMAIL PROTECTED] writes:

 I note that we can continue to have the current executables stashed in
 PREFIX/share/libexec and let the pg executable exec them.

 Not share/ surely, since these are executables, but yeah.

 This brings me to the idea that pg is a very small stupid program
 that just tries to match its first argument against a filename in
 PREFIX/libexec/postgresql.  If it finds a match it execs that program
 with the remaining args, else it fails.

Add an optional command-line argument for specifying an alternative
PREFIX, and the problem of multiple PostgreSQL versions on one host is
solved as well.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ISO something like #if 0 ... #endif for SQL code

2008-03-10 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Alban Hertroys [EMAIL PROTECTED] writes:

 I'm thinking of something like the trick of surrounding C code with
 pairs of #if 0 and #endif, which effectively comments out code,
 even when it contains /* C-style comments */.
 
 Is there some similar trick for SQL code?

 I have been playing with the idea of using cpp to pre-process such
 files, and maybe even put them in a Makefile. I don't think there's
 any reason that wouldn't be possible with SQL files. Added bonus, you
 can use macros in your SQL, for things like environment paths (for
 including other SQL files for example) etc.

If it's OK to preprocess SQL, you could also use the M4 macro
processor which comes with every decent operating system (i.e. Unix).


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are indexes blown?

2008-02-15 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Shoaib Mir [EMAIL PROTECTED] writes:

 On Fri, Feb 15, 2008 at 6:13 PM, Phoenix Kiula [EMAIL PROTECTED] wrote:



 Thanks. But I had installed from rpm. Can I just download that .so
 file and put in the lib folder for pgsql and then start using it?



 Well I would say download the source for the same version you have, copy it to
 your desktop machine, build it and then build the .so file for contrib module
 using 'make' and 'make install' once that is done copy the .so from lib
 folder of PG to your production PG box's lib folder.

But you have to ensure that you build PostgreSQL on your desktop
machine in exactly the same way as the RPM got built
(integer_datetimes etc).


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] 8.2/8.3 incompatibility

2008-02-08 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Stephan Szabo [EMAIL PROTECTED] writes:

 On Thu, 7 Feb 2008, Harald Fuchs wrote:
 This works fine in 8.2.4, but 8.3.0 rejects the ALTER TABLE with the
 following (somewhat misleading) error message:
 
 ERROR:  insert or update on table t2 violates foreign key constraint 
 t2_t1id_fk
 DETAIL:  Key (t1id)=(t1id1) is not present in table t1.

 If the types were considered not comparable, you should have gotten a
 message to that effect rather than a not present message.

Yes, this was really confusing.

 More to the point that comparison should have succeeded I think.

Well, it did succeed in 8.2.x, and I'm actually grateful that 8.3.0
noticed the sloppiness on my side.

 What do the following give?

 select * from t1 where id=CAST('t1id1' as VARCHAR(5));

This returns t1id1, as expected.

 select * from ONLY t2 fk LEFT OUTER JOIN ONLY t1 pk ON
  (pk.id = fk.t1id) WHERE pk.id IS NULL;

This returns an empty result set.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] 8.2/8.3 incompatibility

2008-02-07 Thread Harald Fuchs
I've found an incompatibility between PostgreSQL 8.2.4 and 8.3.0 which
is not clearly documented.  Here's a short example:

  CREATE TABLE t1 (
id CHAR(5) NOT NULL,
PRIMARY KEY (id)
  );

  INSERT INTO t1 (id) VALUES ('t1id1');
  INSERT INTO t1 (id) VALUES ('t1id2');
  INSERT INTO t1 (id) VALUES ('t1id3');

  CREATE TABLE t2 (
id SERIAL NOT NULL,
t1id VARCHAR(5) NOT NULL,
PRIMARY KEY (id)
  );

  INSERT INTO t2 (t1id) VALUES ('t1id1');
  INSERT INTO t2 (t1id) VALUES ('t1id2');
  INSERT INTO t2 (t1id) VALUES ('t1id3');

  ALTER TABLE t2
  ADD CONSTRAINT t2_t1id_fk
  FOREIGN KEY (t1id) REFERENCES t1 (id);

(Note the different column types.)

This works fine in 8.2.4, but 8.3.0 rejects the ALTER TABLE with the
following (somewhat misleading) error message:

  ERROR:  insert or update on table t2 violates foreign key constraint 
t2_t1id_fk
  DETAIL:  Key (t1id)=(t1id1) is not present in table t1.

Should this be documented explicitly?
Should the error message look different?


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] A select DISTINCT query? - followup Q

2008-01-28 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Phil Rhoades [EMAIL PROTECTED] writes:

 People,
 select count(*) as cnt, name from tst group by name having count(*) = 1


 This worked for my basic example but not for my actual problem - I get
 column comment must appear in the GROUP BY clause or be used in an
 aggregate function errors so I have a related question:

 With table:

 name   comment

 1      first comment
 2      second comment
 3      third comment
 3      fourth comment
 4      fifth comment
 5      sixth comment

 - how can I use something like the previous select statement but where
 the comment field does not appear in the group by clause and gives the
 following result:

 1      first comment
 2      second comment
 4      fifth comment
 5      sixth comment

If you want to select both columns, but have uniqueness over the first
only, you can use a derived table:

SELECT tbl.name, tbl.comment
FROM tbl
JOIN (SELECT name FROM tbl GROUP BY name HAVING count(*) = 1) AS t
  ON t.name = tbl.name


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Is news.postgresql.org down?

2008-01-28 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Rainer Bauer [EMAIL PROTECTED] writes:

 Hopefully it won't be down for too long as I use a newsreader to read
 the lists.

I use www.gmane.org for that.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] count(*) and bad design was: Experiences with extensibility

2008-01-10 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Chris Browne [EMAIL PROTECTED] writes:

 There may be a further optimization to be had by doing a
 per-statement trigger that counts the number of INSERTs/DELETEs done,
 so that inserting 30 tuples (in the table being tracked) leads to
 adding a single tuple with count of 30 in the summary table.

This would be nice, but at least the 8.2.4 docs say

  Statement-level triggers do not currently have any way to examine
  the individual row(s) modified by the statement.

Is this restriction removed in a later version?


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Restart a sequence regularly

2007-11-22 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Richard Huxton [EMAIL PROTECTED] writes:

 Kathy Lo wrote:
 On 11/21/07, Richard Huxton [EMAIL PROTECTED] wrote:

 You probably shouldn't attach any meaning to the numbers from a sequence
 - they're just guaranteed to be unique, nothing else.

 What you say here contradicts the following.

 Actually, the sequence is formed by 4-digit of year and 6-digit of
 sequence.

 So you *are* attaching significance to the number (by adding the
 current year to the front of it).

 Don't block users - have multiple sequences. If you define
 my_seq_2007, my_seq_2008, my_seq_2009 etc and then wrap access to them
 in a function you can EXTRACT() the year from the CURRENT_DATE and use
 that to form your per-year unique value.

Since sequences don't guarantee consecutive numbers anyway, why not just
use one sequence and prepend the year, e.g. by a view?
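
A minimal sketch of that idea (table and column names are invented):

  CREATE SEQUENCE invoice_seq;

  CREATE TABLE invoices (
    seq_no  integer NOT NULL DEFAULT nextval('invoice_seq'),
    created date    NOT NULL DEFAULT current_date,
    PRIMARY KEY (seq_no)
  );

  CREATE VIEW invoice_numbers AS
  SELECT to_char(created, 'YYYY') || lpad(seq_no::text, 6, '0') AS invoice_no,
         invoices.*
  FROM invoices;

This yields the 4-digit year followed by a 6-digit number without
blocking concurrent inserts.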


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] IP addresses

2007-11-19 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Tom Allison [EMAIL PROTECTED] writes:

 I am planning on doing a LOT of work with ip addresses and thought that the
 inet data type would be a great place to start.

Forget inet.  Check out http://pgfoundry.org/projects/ip4r/ and be happy.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Concatenate two queries - how?

2007-10-24 Thread Harald Fuchs
In article [EMAIL PROTECTED],
A. Kretschmer [EMAIL PROTECTED] writes:

 am  Wed, dem 24.10.2007, um 15:08:51 +0200 mailte Stefan Schwarzer folgendes:
 Now, I want to enable queries which display national as well as  
 regional values. I could probably work with independent queries, but  
 I think it would be cleaner and more efficient to get everything  
 into a single query.
 
 Can someone give me a hint how this would work?
 
 Thanks a lot!

 select ... UNION select ...

Apparently Stefan doesn't know about UNION, and thus he probably
doesn't know that UNION ALL is almost always preferable (it skips the
duplicate-elimination step a plain UNION has to do).
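
Something along these lines, with invented table names standing in for
Stefan's national and regional data:

  SELECT 'national' AS level, name, value FROM national_values
  UNION ALL
  SELECT 'regional' AS level, name, value FROM regional_values;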


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Duplicate records returned

2007-06-26 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Richard Huxton [EMAIL PROTECTED] writes:

 $the_sql  =  SELECT projectname, username, sum(hours);
 $the_sql .=  FROM timerecs;
 $the_sql .=  WHERE projectname = projects.projectname ;
 $the_sql .=  AND projectname = restrictions.projectname;
 $the_sql .=  AND projects.parent = 'Projects';
 $the_sql .=  AND projects.pct  100;
 $the_sql .=  AND restrictions.hidden = 5;
 $the_sql .=  AND projectname = 'Testing';  # just for tsting
 $the_sql .=  AND username = 'long';# just for testing
 $the_sql .=  AND projectname = projects.projectname ;
 $the_sql .=  GROUP BY projectname, username;
 $the_sql .=  ORDER BY projectname, username;
 $the_sql .=  ;;

 You might want to read up on HERE documents for multi-line blocks of text.

In case the above code is Perl, I think

  my $sql = q{
SELECT ...
FROM ...
WHERE ...
GROUP ...
  };

looks nicer than a here-document.

 1. You've also not put all your tables into the FROM clause:
FROM timerecs, projects, restrictions
This *should* be generating a warning of some kind
 2. You're not qualifying which column comes from which table, which
 makes it harder to see what's happening. Try:
FROM timerecs t, projects p, restrictions r
WHERE t.projectname = p.projectname
AND ...
That's called table aliasing, where you give a short name to tables.

This still mixes JOIN conditions with other result restrictions.

  SELECT ...
  FROM timerecs t
  JOIN projects p ON p.projectname = t.projectname
  ...

makes it more explicit.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Importing *huge* mysql database into pgsql

2007-03-08 Thread Harald Fuchs
In article [EMAIL PROTECTED],
.ep [EMAIL PROTECTED] writes:

 Hello,
 I would like to convert a mysql database with 5 million records and
 growing, to a pgsql database.

 All the stuff I have come across on the net has things like
 mysqldump and psql -f, which sounds like I will be sitting forever
 getting this to work.

 Is there anything else?

If you really want to convert a *huge* MySQL database (and not your
tiny 5M record thingie), I'd suggest mysqldump -T. This creates for
each table an .sql file containing just the DDL, and a .txt file
containing the data.

Then edit all .sql files:
* Fix type and index definitions etc.
* Append a COPY thistbl FROM 'thispath/thistbl.txt';

Then run all .sql files with psql, in an order dictated by foreign keys.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Postgres SQL Syntax

2007-02-06 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Jim C. [EMAIL PROTECTED] writes:

 Maybe it is and maybe it isn't.  I wouldn't know.  I'm merely the
 unfortunate soul chosen to convert this from MySQL to Postgres. :-/

 I've been working on it for a week now. I've got to say that it pains me
 to know that there is apparently no open standard in use for
 importing/exporting data from one db to another.

Regarding just the data, you could use mysqldump --tab=DIR on the
MySQL side and COPY FROM on the PostgreSQL side.

 XML would do the job, wouldn't it?

If you have a problem and try to solve it with XML, then you have two problems.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] psql substitution variables

2007-01-23 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Merlin Moncure [EMAIL PROTECTED] writes:

 can anybody think of of a way to sneak these into dollar quoted
 strings for substitution into create function?

 would i would ideally like to do is (from inside psql shell)

 \set foo 500

 create function bar() returns int as
 $$
   declare
 baz int default :foo;
 [...]

Can't you put the psql call into a shell script and use shell variable
substitution for that?


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] PostgreSQL 8.2.0 and ip4r?

2006-12-09 Thread Harald Fuchs
I would like to upgrade to PostgreSQL 8.2.0, but there's one thing
stopping me: the need for ip4r (http://pgfoundry.org/projects/ip4r).
Has anyone managed to fix that?


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL 8.2.0 and ip4r?

2006-12-09 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Martijn van Oosterhout kleptog@svana.org writes:

 On Sat, Dec 09, 2006 at 12:10:16PM +0100, Harald Fuchs wrote:
 I would like to upgrade to PostgreSQL 8.2.0, but there's one thing
 stopping me: the need for ip4r (http://pgfoundry.org/projects/ip4r).
 Has anyone managed to fix that?

 What exactly is the problem? Does it not compile or something?

Yes:

$ make USE_PGXS=1 
sed 's,MODULE_PATHNAME,$libdir/ip4r,g' ip4r.sql.in >ip4r.sql
gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline 
-Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -fpic 
-DIP4R_PGVER=8002000 -I. -I/usr/include/postgresql/server 
-I/usr/include/postgresql/internal -D_GNU_SOURCE   -c -o ip4r.o ip4r.c
ip4r.c: In function 'ip4_cast_to_cidr':
ip4r.c:627: error: 'struct anonymous' has no member named 'type'
ip4r.c: In function 'ip4r_cast_from_cidr':
ip4r.c:967: error: 'struct anonymous' has no member named 'type'
ip4r.c: In function 'ip4r_cast_to_cidr':
ip4r.c:1004: error: 'struct anonymous' has no member named 'type'
make: *** [ip4r.o] Error 1
$


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL 8.2.0 and ip4r?

2006-12-09 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Martijn van Oosterhout kleptog@svana.org writes:

 Looking at CVS, line 967 is a blank line, so I have to ask what version
 you're compiling. I notice the CVS tree got some patches two months ago
 for 8.2 but there has been no release since then. Perhaps you should
 try the latest CVS version?

This was it!

I assumed that the ip4r-1.0.tgz presented on the PgFoundry site was
the most recent version.  After getting the five files via CVS,
everything seems to work now.

Thanks for your help!


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] How to use outer join in update

2006-12-08 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Alban Hertroys [EMAIL PROTECTED] writes:

 Andrus wrote:
 In my current DBMS I can use
 
 create table t1 ( f1 int, f2 int );
 create table t2 ( f3 int, f4 int );
 update t1 set f1=t2.f3 from t1 left join t2 on  t1.f2=t2.f4

 That looks like a self-join on t1 without using an alias for the second
 instance of t1.

 I think you meant:
 update t1 set f1=t2.f3 from t2 where f2 = t2.f4

Or this one:

UPDATE t1
SET f1 = t2.f3
FROM t1 x
LEFT JOIN t2 ON x.f2 = t2.f4
WHERE x.f2 = t1.f2


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Dynamic partial index

2006-08-18 Thread Harald Fuchs
In article [EMAIL PROTECTED],
gustavo halperin [EMAIL PROTECTED] writes:

  Hello
  I'm interested in a partial index for rows that aren't older than
 6 months, something like the sentence below:
 CREATE INDEX name_for_the_index ON table
 (the_column_of_type_date)   WHERE ( the_column_of_type_date >
 (current_date - interval '6 month')::date );

 But this is not posible, I receive the next error:
 ERROR:  functions in index predicate must be marked IMMUTABLE

 So, what is the best solution for my problem?

Use a fixed date 6 months ago, and regularly recreate the index.
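
A sketch (mytable and the date literal are placeholders; the literal is
what you recompute whenever you recreate the index):

  CREATE INDEX mytable_recent_idx ON mytable (the_column_of_type_date)
  WHERE the_column_of_type_date > DATE '2006-02-18';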


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Best approach for a gap-less sequence

2006-08-14 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Jorge Godoy [EMAIL PROTECTED] writes:

 AgentM [EMAIL PROTECTED] writes:
 Since the gapless numbers are purely for the benefit of the tax people, you
 could build your db with regular sequences as primary  keys and then 
 regularly
 (or just before tax-time) insert into a table  which maps the gapless 
 sequence
 to the real primary key.

 That's also an interesting approach.  An auxiliary table like

transaction integer FK to the transactions table
transaction_nb integer  gapless sequence

 should do it.  A trigger inserting on this auxiliary table would also take
 care of everything...  If I have an after trigger I believe I wouldn't need
 any locking...  I have to think about this...  

Why putting gapless numbers into the database at all?  Just calculate them at
query time.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Best approach for a gap-less sequence

2006-08-14 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Richard Broersma Jr [EMAIL PROTECTED] writes:

 I am curious, can you calculate something like this using only sql? Or you 
 you need to employee a
 procedural language like plpsgql?

You could use something like

  SELECT (SELECT count(*) FROM tbl t2 WHERE t2.id <= t1.id), whatever
  FROM tbl t1

but correlated subqueries are slow; thus incrementing the counter in
the application would be faster for huge reports.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Best approach for a gap-less sequence

2006-08-14 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Jorge Godoy [EMAIL PROTECTED] writes:

 Harald Fuchs [EMAIL PROTECTED] writes:
 Why putting gapless numbers into the database at all?  Just calculate them at
 query time.

 And how would you retrieve the record that corresponds to invoice number
 #16355, for example?  Recalculating few records is fine, but millions of them
 everytime you need to recover some of those is something that doesn't look
 efficient to me... 

This would be

  SELECT whatever
  FROM tbl
  ORDER BY id
  LIMIT 1
  OFFSET 16355 -1

Since id is the primary key, this can use an index scan.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Best approach for a gap-less sequence

2006-08-14 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Scott Ribe [EMAIL PROTECTED] writes:

 Why putting gapless numbers into the database at all?  Just
 calculate them at query time.

 There is ABSOLUTELY NO WAY that would be acceptable for accounting or legal
 purposes. It would be the same as fabricating the numbers during an audit.

At some point in time those numbers get fabricated anyway.  As long
as you don't change the records inbetween, the technical effect would
be the same.  But you might be right that this is forbidden.  I don't
speak legalese.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] LISTEN considered dangerous

2006-08-02 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Flemming Frandsen [EMAIL PROTECTED] writes:

 I would still expect any
 reimplementation of notify messaging to honor the principle that a
 LISTEN doesn't take effect till you commit.

 Naturally, the listen should not do anything at all when followed by a
 rollback.

 However if you start listening in a transaction then you should get
 all events that have happened after the snapshot that the transaction
 represents (the start of the transaction).

Here you're contradicting yourself.  In your second paragraph you
state that LISTEN should get events unless later cancelled by a
ROLLBACK.  How should LISTEN know if its transaction will commit or
abort?


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] join on next row

2006-06-20 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Sim Zacks [EMAIL PROTECTED] writes:

 I want my query resultset to be
 Employee,EventDate(1),EventTime(1),EventType(1),EventTime(2),EventType(2)
 Where Event(2) is the first event of the employee that took place
 after the other event.

 Example
 EventID  Employee  EventDate  EventTime  EventType
 1        John      6/15/2006  7:00       A
 2        Frank     6/15/2006  7:15       B
 3        Frank     6/15/2006  7:17       C
 4        John      6/15/2006  7:20       C
 5        Frank     6/15/2006  7:25       D
 6        John      6/16/2006  7:00       A
 7        John      6/16/2006  8:30       R

 Expected Results
 John, 6/15/2006, 7:00, A, 7:20, C
 Frank, 6/15/2006, 7:15, B, 7:17, C
 Frank, 6/15/2006, 7:17, C, 7:25, D
 John, 6/16/2006, 7:00, A, 8:30, R

 To get this result set it would have to be an inner join on employee
 and date where the second event time is greater then the first. But I
 don't want the all of the records with a greater time, just the first
 event after.

You can filter the others out by an OUTER JOIN:

  SELECT e1.Employee, e1.EventDate,
 e1.EventTime, e1.EventType,
 e2.EventTime, e2.EventType
  FROM events e1
  JOIN events e2 ON e2.Employee = e1.Employee
AND e2.EventDate = e1.EventDate
AND e2.EventTime > e1.EventTime
  LEFT JOIN events e3 ON e3.Employee = e1.Employee
 AND e3.EventDate = e1.EventDate
 AND e3.EventTime > e1.EventTime
 AND e3.EventTime < e2.EventTime
  WHERE e3.EventID IS NULL
  ORDER BY e1.EventDate, e1.EventTime


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SQL ASCII encoding

2006-04-05 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Martijn van Oosterhout kleptog@svana.org writes:

 As a british user, latin9 will cover most of your needs, unless
 ofcourse someone wants to enter their name in chinese :)

Since british users don't use French OE ligatures or Euro currency
signs, even latin1 would do.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] partial resultset in java

2006-03-22 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Luckys [EMAIL PROTECTED] writes:

 I believe you should restrict number of rows that needs to be returned, or
 giving a choice to the user, although showing the total count. Even if you
 display all 20K records, no one is going to see them all,  you can even add
 one more LIKE condition to match the user's criteria.

I second that.  Whenever you connect an interactive application to a
DB backend and you don't know in advance some upper limit for the
number of rows returned by a query, append a LIMIT 1000 or something
like that.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] referential integrity without trigger

2006-02-09 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Alexander Presber [EMAIL PROTECTED] writes:

 Hello everybody,
 Assuming I want to empty and refill table A (with roughly the same
 content, preferrably in one transaction) and don't want to completely
 empty a dependent table B but still keep referential integrity after
 the commit.

 Without disabling A's on-delete-trigger B will be be emptied on
 commit, even when I inserted exactly the same data into A that I
 deleted an instant before. That is because the trigger gets called on
 commit, no matter if the deleted rows have reappeared.

 If I disable the trigger, My referential integrity is most likely
 corrupted.
 Is there a clever, general scheme to recheck  and enforce foreign
 key contraints, after the responsible triggers have been disabled and
 reenabled?

 I hope this makes sense to you.

Not quite?  Why do you use an explicit trigger for checking
referential integrity?  Can't you just use a foreign key with ON
DELETE NO ACTION DEFERRABLE INITIALLY DEFERRED?
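
A sketch of such a constraint (table and column names invented):

  ALTER TABLE b
    ADD CONSTRAINT b_a_fk
    FOREIGN KEY (a_id) REFERENCES a (id)
    ON DELETE NO ACTION
    DEFERRABLE INITIALLY DEFERRED;

With that, the check runs at COMMIT time, so deleting and re-inserting
the same rows in A within one transaction leaves B untouched.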


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Question on the use of bracket expressions in Postgres

2005-12-15 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Michael Glaesemann [EMAIL PROTECTED] writes:

 On Dec 15, 2005, at 0:29 , Jimmy Rowe wrote:

 select * from catalog where file_name like 'abc%def%.200[2-5]%';
 The following select keeps returning (0 rows).

 LIKE doesn't consider [2-5] to be a range, but rather the literal
 characters '[2-5]'. If you're looking for regex, take a look at the
 POSIX regex operator ~

 http://www.postgresql.org/docs/current/interactive/functions-
 matching.html#FUNCTIONS-POSIX-REGEXP

 See if something like file_name ~ 'abc.*def.*\.200[2-5]'

That's not quite the same because LIKE matching is anchored.
Try something like file_name ~ '^abc.*def.*\.200[2-5]$'


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] newbie design question re impact of VACUUM

2005-11-09 Thread Harald Fuchs
In article [EMAIL PROTECTED],
[EMAIL PROTECTED] [EMAIL PROTECTED] writes:

 As a background, I'll be using Postgres in part as a processing queue
 for a 40-column stream of information (~ 250 bytes/row) with a
 sustained input rate of 20 rows/sec. This queue will be processed
 periodically (every few minutes), design constraints are to (1) only
 process each row once, and (2) keep the processed rows around for a
 period of time (say a month or so).

 My first (naive?) idea was to add a boolean was_processed column to
 the table (defaulted to false) and UPDATE it to true as part of (1).
 After reading Chapter 22, though, it seems that even a minor UPDATE
 like that copies the row and requires VACUUMing.

That's true, but there might be a way to avoid it.  If your queue
elements have a timestamp, you could run your processing routine not
over elements where was_processed is false, but over elements within
some time interval, e.g. the last minute.  This would eliminate the
need for an UPDATE.
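
For example, something like this (queue and ts are placeholder names;
in practice you'd remember the upper bound of the previous run instead
of relying on now()):

  SELECT *
  FROM queue
  WHERE ts >= now() - interval '1 minute'
    AND ts <  now();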


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] a stored procedure ..with integer as the parameter

2005-10-25 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Alex Turner [EMAIL PROTECTED] writes:

  delete * from user;  select * from table where my_id=$in_value  Am
  I just smoking crack here, or does this approach have some merit? 
  The former :-) The correct defense against SQL injection is proper
  escaping, not quoting.  How about $in_value = '1''; delete from
  user'?

 This would be escaped by magic_quotes resulting in:
 select * from table where my_id='\'1\'\'; delete from user \'', which would
 result in an error, and a failed attack would it not, which would be a good
 thing?

If your magic_quotes are magic enough to not blindly surround the
argument in quotes, but also escape dangerous chars like ' inside
the argument, then you're safe.

 I tried to create this scenario, but in a trasactional environment, it
 executes, but blew the transation so the data never committed as the select
 query generated an error with the insert on the end...

... and that's exactly what it should do.  You just need to catch the error
and generate a meaningful error message.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] a stored procedure ..with integer as the parameter

2005-10-24 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Alex Turner [EMAIL PROTECTED] writes:

 As sort of a side discussion - I have postulated that quoting all incomming
 numbers as string would be an effective defense against SQL Injection style
 attacks, as magic quotes would destory any end-quote type syntax:
 in_value=1
 select * from table where my_id='$in_value';
 as an example for PHP - Postgres will silenty perform an atoi on the string to
 make it a number, but it would prevent:
 in_value=1; delete * from user;
 select * from table where my_id=$in_value
 Am I just smoking crack here, or does this approach have some merit?

The former :-)
The correct defense against SQL injection is proper escaping, not quoting.
How about $in_value = '1''; delete * from user'?


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Get postgresql workin in french...

2005-08-31 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Guy Doune [EMAIL PROTECTED] writes:

 Hi,
 I would know how to set the encoding (unicode, ASCII,
 etc.) for getting postgresql accepting my entry with
 accent an all the what the french poeple put over
 there caracter while they write...

French is covered both by Latin9 and Unicode.  Don't use Latin1 - it's
missing the French OE ligature.


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Missing numbers

2005-06-01 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Simon Riggs [EMAIL PROTECTED] writes:

 If the WHERE clause said bdocs.doc_numero  7 we would hope that this
 was applied before the join.

Stating this would change the OUTER into an INNER JOIN, and this would
imply that the order of the restrictions is irrelevant - for the
result set both conditions must be satisfied.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Missing numbers

2005-05-31 Thread Harald Fuchs
In article [EMAIL PROTECTED],
josue [EMAIL PROTECTED] writes:

 Hello list,
 I need to track down the missing check numbers in a serie, table
 contains a column for check numbers and series like this:


 dbalm=# select doc_numero,doc_ckseriesfk from bdocs where doc_cta=1
 dbalm-# and doc_tipo='CHE' order by doc_numero;
   doc_numero | doc_ckseriesfk
 +
   19200  | 856938
   19201  | 856938
   19215  | 856938
   19216  | 856938
   19219  | 856938

 Missing numbers are:
 from 19202 to 19214 and 19217,19218


 Does anyone knows a way to get that done in SQL or plpgsql, thanks in
 advance

You could use something like that:

SELECT g.num
FROM generate_series ((SELECT min(doc_numero) FROM bdocs),
  (SELECT max(doc_numero) FROM bdocs)) AS g(num)
LEFT JOIN bdocs ON bdocs.doc_numero = g.num
WHERE bdocs.doc_numero IS NULL


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] perl and insert

2005-05-17 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Rich Doughty [EMAIL PROTECTED] writes:

 On 17 May 2005, Hrishikesh Deshmukh wrote:
 Hi All,
 
 Anybody knows how to use perl dbi to read a file line by line and
 insert into db!
 The books which i have tell you exclusively on running queries.

 it depends on what you need to achieve, but a good place to start would be
 something like:

while (my $line = <FILE>)
{
   $dbh->do ('INSERT INTO table (line) VALUES (?)', undef, $line);
}

 Where FILE is your open filehandle, and $dbh is your DBI connection, and
 you've modified the SQL as necessary.

 If performance is an issue, you may want to try this (although the
 performance gains depend on database you're using)

my $st = $dbh->prepare ('INSERT INTO table (line) VALUES (?)');

while (my $line = <FILE>)
{
   $st->execute ($line);
}

If there are many lines in the file, Hrishikesh might be better off
using COPY instead of INSERT.  perldoc DBD::Pg says how to do that.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq

