Re: [SQL] Logging select statements

2003-07-09 Thread Achilleus Mantzios

Rudi, I would suggest using a proven framework
for logging.

Why don't you try to build a function in PHP
that acts like syslog?
I.e. you have:
facility (name of your app/page)
priority (usually e.g. info in your case)
action (what to do with the message: file, Apache log, pgsql insert, etc.)

Then in your calling function you specify facility, priority and your
message, and a (facility, priority, action) table is consulted to decide
which action to take.

In your message you could include, cast as ::text, the numbers you want.

For instance, if the page is named foo.php, then
the facility would be foo.php-impression or
foo.php-click.

So you only ever call a single PHP function.
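For illustration, a minimal SQL sketch of the routing and sink tables this describes (the names log_config, app_log and the sample rows are invented for the example; the PHP function would look up the action in log_config and, when it is 'pgsql', insert into app_log):

CREATE TABLE log_config (
    facility  varchar(64) NOT NULL,   -- e.g. 'foo.php-impression'
    priority  varchar(16) NOT NULL,   -- e.g. 'info'
    action    varchar(16) NOT NULL,   -- 'file', 'apache' or 'pgsql'
    PRIMARY KEY (facility, priority)
);

CREATE TABLE app_log (
    logged_at  timestamp   NOT NULL DEFAULT now(),
    facility   varchar(64) NOT NULL,
    priority   varchar(16) NOT NULL,
    message    text        NOT NULL
);

INSERT INTO log_config VALUES ('foo.php-impression', 'info', 'pgsql');
INSERT INTO app_log (facility, priority, message)
     VALUES ('foo.php-impression', 'info', 'ids: ' || 42::text);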

On Wed, 9 Jul 2003, Rudi Starcevic wrote:

> Thanks Achilleus,
> 
> I know there are a couple of ways I could do this.
> 
> In my first email I can see a scenario of 1 select plus 100 inserts.
> 
> Another may be 1 select plus 1 insert.
> For example:
> In a table of 3000 rows a user submits a query which returns 100 rows.
> I could loop through the result set, build a string of ids
> ( 1,2,5,7,8,9,44,22 etc. ) and
> make one insert of the entire string into a logging table.
> 
> At a later time, say once every 24 hours, I could select each row of
> ids and make further inserts into another
> logging table.
> E.g. I extract 1 row with a string of 100 keys and make 100 inserts into
> a second log table.
> 
> I could even then use a 'count(id), date GROUP BY date' select to add a
> single row to a further logging table
> which has one row per id with a count of all impressions/clicks for
> that day.
> 
> Thanks.
> I'm just trying to explore ways of logging hits and maximizing
> performance for the end user.
> I hope I explained all that OK and didn't ramble too much.
> 
> Cheers
> Rudi.
> 
> 
> 
> 
> 
> 
> 
> 
> Achilleus Mantzios wrote:
> 
> >That's why people who want enterprise apps
> >must use enterprise frameworks.
> >
> >In J2EE, for instance, you could use log4j,
> >which is more or less the equivalent of syslog for Java.
> >
> >See if there is a logging module for PHP.
> >
> >PgSQL has no clue who the user is.
> >I don't think delegating this logging task
> >to pgSQL is a good idea.
> >
> >Don't get me wrong,
> >I like and use PHP myself, but only when I know
> >the exact limits of how far the specific project
> >will go in the future.
> >
> >On Wed, 9 Jul 2003, Rudi Starcevic wrote:
> >
> >  
> >
> >>Hi,
> >>
> >>I have an application where users can view records in a short form with
> >>their first select
> >>and view a long form with a second select.
> >>The first view I term an impression.
> >>The second view I term a click.
> >>
> >>I'd like to log the impressions and clicks.
> >>I'm wondering which is the most efficient way to do this.
> >>
> >>I know I can do it in the application, PHP, by looping through the
> >>result set and inserting into a logging table, but
> >>am wondering if it's quicker to write a rule or trigger so that each
> >>individual select is logged into a logging table
> >>as it's selected.
> >>
> >>For example:
> >>If I have a table of 3000 rows and the user submits a query which
> >>retrieves 100 rows.
> >>In the first scenario I could loop through the 100, using a language like PHP
> >>or Perl, and make 100 inserts after the first select is complete.
> >>Thus 1 select plus 100 inserts.
> >>
> >>Can you see a way to do this all in SQL that would be better/faster/more
> >>efficient without using PHP/Perl ?
> >>
> >>Many thanks
> >>Regards
> >>Rudi.
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>---(end of broadcast)---
> >>TIP 4: Don't 'kill -9' the postmaster
> >>
> >>
> >>
> >
> >  
> >
> 
> 

-- 
==
Achilleus Mantzios
S/W Engineer
IT dept
Dynacom Tankers Mngmt
Nikis 4, Glyfada
Athens 16610
Greece
tel:+30-210-8981112
fax:+30-210-8981877
email:  achill at matrix dot gatewaynet dot com
mantzios at softlab dot ece dot ntua dot gr


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Datatype conversion help

2003-07-09 Thread Richard Huxton
On Tuesday 08 Jul 2003 10:19 pm, Yasir Malik wrote:
> Yes, Mr. Nachbaur helped me out.  Thanks.  I don't think I can do
> to_char(..., 'MM-DD-YYYY')
> because the date fields are originally stored as separate integers in my
> schema (they have to be that way).  I still can't understand why the extra
> space was added after the dash.  It just made my life more miserable.
> Yasir

You are not the only one it makes miserable. I believe it's there to provide 
compatibility with Oracle/some other db. Their function does it so ours 
should too. Very strange behaviour though.
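For what it's worth, the FM ("fill mode") prefix in the format string suppresses that padding, which I believe is the usual workaround:

SELECT to_char(9, '00');    -- gives ' 09' (space reserved for a sign)
SELECT to_char(9, 'FM00');  -- gives '09'
SELECT to_char(7, 'FM00') || '-' || to_char(25, 'FM00')
       || '-' || to_char(1994, 'FM0000');   -- gives '07-25-1994'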

-- 
  Richard Huxton

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] Logging select statements

2003-07-09 Thread Rudi Starcevic
Achilleus,

Thanks - I'll look into that.

Cheers
Regards
Rudi.

> Rudi, i would suggest using a proved framework
> for logging.
> 
> Why dont you try to build a function in PHP
> that acts as syslog.
> .i.e you have
> facility (name of your app/page)
> priority (usually e.g. info in your case)
> action (what to do with the message, file, apache log, pgsql insert 
> etc..)
> 
> Then in your calling function you specify facility,priority,your 
> message, and the previous table is consulted as to which action to take.
> 
> In your message you could put in as ::text the numbers you want.
> 
> For instance if the page is named foo.php, then
> the facility would be foo.php-impression,
> foo.php-click.
> 
> So you only call a php function.
> 
> On Wed, 9 Jul 2003, Rudi Starcevic wrote:
> 
> > Thanks Achilleus,
> > 
> > I know there's a couple of ways I could do this.
> > 
> > In my first email I can see a senario of 1 select plus 100 inserts.
> > 
> > Another may be 1 select plus 1 insert.
> > For example;
> > In a table of 3000 rows a user submits a query which returns 100 rows.
> > I could loop through the result set and build a string of id's ( 
> > 1,2,5,7,8,9,44,22 etc ) and
> > make one insert into a logging table of the entire string.
> > 
> > At a later time, say once every 24 hours, I could select each row of 
> > id's and make further inserts into another
> > logging table.
> > Eg. I extract 1 row with a string of 100 key's and make 100 inserts into 
> > a second log table.
> > 
> > I could even then use a 'count(id), date GROUP BY date' select to add a 
> > single row to a further logging table
> > which has one row per id with a count of all impressions/click's for 
> > that day.
> > 
> > Thanks
> > I'm just trying to explore way's of logging hits and maximize 
> > performance for the end user.
> > I hope I explained all that OK and didn't ramble to much.
> > 
> > Cheers
> > Rudi.
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > Achilleus Mantzios wrote:
> > 
> > >Thats why people who want entreprise apps
> > >must use enterprise frameworks.
> > >
> > >In J2EE for instance you could use LOG4J
> > >which is sorta equivalent of syslog for java.
> > >
> > >See if there is a logging module for PHP.
> > >
> > >PgSQL has no clue of who the user is.
> > >I dont think delegating this logging task
> > >to pgSQL is a good idea.
> > >
> > >Dont get me wrong,
> > >I like and use php myself, but only when i know
> > >the exact limits of how far the specific project
> > >will go in the future.
> > >
> > >On Wed, 9 Jul 2003, Rudi Starcevic wrote:
> > >
> > >  
> > >
> > >>Hi,
> > >>
> > >>I have an application where user's can view records in a short form with 
> > >>their first select
> > >>and view a long form with a second select.
> > >>The first view I term an impression.
> > >>The second view I term a click.
> > >>
> > >>I'd like to log the impression's and click's.
> > >>I'm wondering which is the most effiecient way to do this.
> > >>
> > >>I know I can do it in the application, PHP, by looping through the 
> > >>result set and inserting into a logging table but
> > >>am wondering if it quicker to write a rule or trigger so that each 
> > >>individual select is logged into a logging table
> > >>as it's selected.
> > >>
> > >>For example:
> > >>If I have a table of 3000 row's and the user submits a query which 
> > >>retrieve's 100 rows.
> > >>In the first senario I could loop through the 100, using a language PHP 
> > >>or Perl, and make 100 inserts after the first select is complete.
> > >>Thus 1 select plus 100 inserts.
> > >>
> > >>Can you see a way to do this all in SQL that would be better/faster/more 
> > >>efficient without using PHP/Perl ?
> > >>
> > >>Many thanks
> > >>Regards
> > >>Rudi.
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>---(end of broadcast)---
> > >>TIP 4: Don't 'kill -9' the postmaster
> > >>
> > >>
> > >>
> > >
> > >  
> > >
> > 
> > 
> 
> -- 
> ==
> Achilleus Mantzios
> S/W Engineer
> IT dept
> Dynacom Tankers Mngmt
> Nikis 4, Glyfada
> Athens 16610
> Greece
> tel:+30-210-8981112
> fax:+30-210-8981877
> email:  achill at matrix dot gatewaynet dot com
> mantzios at softlab dot ece dot ntua dot gr




---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] max length of sql select statement ?

2003-07-09 Thread markus brosch
On Mon, 2003-07-07 at 18:20, Rod Taylor wrote: 
> > Could be a solution?!
> > The question is - how long could the IN be?
> 
> I'm not sure about IN specifically, but I know you can do:
> SELECT * FROM table WHERE col = '<1GB long file>';
> It tends not to be friendly for RAM though :)


Hi again!

After I decided to use this statement (discussion two days ago!):
SELECT * FROM table WHERE columnX IN ('a', 'b', 'c', 'd', 'e', ... );

I got this error :-(((

java.sql.SQLException: ERROR:  Expression too complex: nesting depth
exceeds max_expr_depth = 1

I was searching the web and only found this discussion:

http://groups.google.de/groups?hl=de&lr=&ie=UTF-8&oe=UTF-8&threadm=3B2EA1E9.843D940A%40selectacast.net&rnum=1&prev=/groups%3Fhl%3Dde%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26q%3DExpression%2Btoo%2Bcomplex%253A%2Bnesting%2Bdepth%2Bexceeds%2Bmax_expr_depth%2B%253D%2B1%26btnG%3DGoogle%2BSuche

Maybe we can discuss that problem here again?! What exactly does
"max_expr_depth" mean? Thanks for any help.
Cheers Markus



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Logging select statements

2003-07-09 Thread Matthew Horoschun
Hi Rudi,

You can't trigger on a SELECT, but you could wrap your SQL in a set 
returning function...

http://techdocs.postgresql.org/guides/SetReturningFunctions

Here is a rough and ready solution:

CREATE TABLE access_log ( id int not null );

CREATE TABLE datatable (
id int not null primary key,
somedata varchar(255) not null
);
INSERT INTO datatable VALUES( 1, 'apple' );
INSERT INTO datatable VALUES( 2, 'orange' );
INSERT INTO datatable VALUES( 3, 'banana' );
CREATE OR REPLACE FUNCTION get_rows_and_log( varchar ) RETURNS SETOF 
record AS
'
	DECLARE
		r record;
	BEGIN
		FOR r IN EXECUTE ''SELECT * FROM '' || $1 LOOP
			INSERT INTO access_log VALUES( r.id );
			RETURN NEXT r;
		END LOOP;
		RETURN;
	END;
'
LANGUAGE 'plpgsql';

Now, as an example, do:

SELECT * FROM get_rows_and_log( 'datatable' ) AS data( id int, somedata 
varchar);

You'll get the data returned, and the log entries will be made.

You can put your WHERE clause in the parameter:

SELECT * FROM get_rows_and_log( 'datatable WHERE somedata LIKE ''%e''' 
) AS data( id int, somedata varchar);

Hope that is what you were after!

Cheers

Matthew.

On Wednesday, July 9, 2003, at 04:55  PM, Rudi Starcevic wrote:

Thanks Achilleus,

I know there's a couple of ways I could do this.

In my first email I can see a senario of 1 select plus 100 inserts.

Another may be 1 select plus 1 insert.
For example;
In a table of 3000 rows a user submits a query which returns 100 rows.
I could loop through the result set and build a string of id's ( 
1,2,5,7,8,9,44,22 etc ) and
make one insert into a logging table of the entire string.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Logging select statements

2003-07-09 Thread Rudi Starcevic
Matthew,

Gee thanks ..

I just read over Stephan's Set Returning Function last night ..
I was trying to see how I could use it.

> Hope that is what you were after!

Indeed it is. Your 'rough and ready' solution is a
mighty fine place to begin.

Thanks aplenty to you and Achilleus for taking the time to
look at this for me - another reason why I love PostgreSQL ! :-)

Best regards
Rudi.

> Hi Rudi,
> 
> You can't trigger on a SELECT, but you could wrap your SQL in a set 
> returning function...
> 
> http://techdocs.postgresql.org/guides/SetReturningFunctions
> 
> Here is a rough and ready solution:
> 
> CREATE TABLE access_log ( id int not null );
> 
> CREATE TABLE datatable (
>   id int not null primary key,
>   somedata varchar(255) not null
>   );
> 
> INSERT INTO datatable VALUES( 1, 'apple' );
> INSERT INTO datatable VALUES( 2, 'orange' );
> INSERT INTO datatable VALUES( 3, 'banana' );
> 
> CREATE OR REPLACE FUNCTION get_rows_and_log( varchar ) RETURNS SETOF 
> record AS
> '
>   DECLARE
>   r record;
>   BEGIN
>   FOR r IN EXECUTE ''SELECT * FROM '' || $1 LOOP
>   INSERT INTO access_log VALUES( r.id );
>   RETURN NEXT r;
>   END LOOP;
>   RETURN;
>   END;
> '
> LANGUAGE 'plpgsql';
> 
> Now, as an example, do:
> 
> SELECT * FROM get_rows_and_log( 'datatable' ) AS data( id int, 
> somedata varchar);
> 
> You'll get the data returned, and the log entries will be made.
> 
> You can put your WHERE clause in the parameter:
> 
> SELECT * FROM get_rows_and_log( 'datatable WHERE somedata LIKE 
> ''%e''' ) AS data( id int, somedata varchar);
> 
> Hope that is what you were after!
> 
> Cheers
> 
> Matthew.
> 
> On Wednesday, July 9, 2003, at 04:55  PM, Rudi Starcevic wrote:
> 
> > Thanks Achilleus,
> >
> > I know there's a couple of ways I could do this.
> >
> > In my first email I can see a senario of 1 select plus 100 inserts.
> >
> > Another may be 1 select plus 1 insert.
> > For example;
> > In a table of 3000 rows a user submits a query which returns 100 rows.
> > I could loop through the result set and build a string of id's ( 
> > 1,2,5,7,8,9,44,22 etc ) and
> > make one insert into a logging table of the entire string.




---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] max length of sql select statement ?

2003-07-09 Thread Rod Taylor
> Maybe we can discuss that problem here again?! What exactly does
> "max_expr_depth" mean? Thanks for any help.

If I'm not mistaken, max_expr_depth is used to catch runaway recursion
(view a is select * from b, view b is select * from a).

It's a tunable in postgresql.conf.  Toss a couple of 0's behind the
number and uncomment the line.
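As a sketch (the exact behaviour may depend on the PostgreSQL version), the limit can also be raised for a single session without touching postgresql.conf:

SHOW max_expr_depth;          -- current limit
SET max_expr_depth = 100000;  -- raise it for this session only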





Re: [SQL] Need help with complex query

2003-07-09 Thread Yasir Malik
Mr. Haller,
That was exactly what I was looking for.  The guys at
experts-exchange.com or any other website couldn't have come up with that answer.
I figured it out without using DISTINCT ON, but the solution was
horrendous.  I have already turned in my assignment, however.  Thank you so
much for your help.  I hope to continue to learn from professionals like you.
Thank you so much,
Yasir

On Wed, 9 Jul 2003, Christoph Haller wrote:

> Date: Wed, 09 Jul 2003 16:46:43 +0200
> From: Christoph Haller <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Re: [SQL] Need help with complex query
>
> Yasir,
> You were quite close already.
> That's exactly what DISTINCT ON was invented for (non-standard
> compliant)
> select DISTINCT ON (cust, prod) cust, prod, mn, dy, yr, quant
> from (select cust, prod, min(quant)
> from sales
> group by cust, prod) as x(c, p, q), sales
> where cust = x.c and prod = x.p and quant = x.q  order by cust, prod;
>   cust   |  prod  | mn | dy |  yr  | quant
> ---------+--------+----+----+------+-------
>  Hamming | Milk   |  3 | 15 | 1998 |    47
>  Jones   | Fruits |  3 | 31 | 2001 |    45
>  Knuth   | Bread  |  2 | 21 | 1949 |    13
>  Knuth   | Jam    | 12 | 21 | 2002 |    41
>  Knuth   | Milk   |  2 | 29 | 2000 |    12
>  Kruskal | Jelly  | 11 | 30 | 1999 |     5
> (6 rows)
> Regards, Christoph
>
> >
> > Suppose I have a relation called sales with the following schema:
> > sales-Schema = (cust, prod, mn, dy, yr, quant, st)
> >
> > An instance of the relation would look something like this:
> > cust      prod    mn  dy  yr    quant  st
> > ========  ======  ==  ==  ====  =====  ==
> > Knuth     Milk    02  29  2000  12     CA
> > Jones     Fruits  03  31  2001  45     WY
> > Knuth     Jam     12  21  2002  41     MN
> > Kruskal   Jelly   11  30  1999  12     NJ
> > Hamming   Milk    03  15  1998  47     GA
> > Knuth     Milk    02  29  2000  12     CA
> > Kruskal   Jelly   11  30  1999  5      NJ
> > Knuth     Milk    06  23  2002  12     CA
> > Knuth     Bread   02  21  1949  13     CA
> >
> > Note: The relation can have duplicate rows.
> >
> > Now, I want this query:  For each customer-product combination, find the
> > minimum quantity sold along with its respective date.  If there is more
> > than one minimum sales quantity for a customer-product combination, print
> > only one of them.  So the query should return the following:
> >
> > cust      prod    mn  dy  yr    quant
> > ========  ======  ==  ==  ====  =====
> > Jones     Fruits  03  31  2001  45
> > Knuth     Jam     12  21  2002  41
> > Hamming   Milk    03  15  1998  47
> > Kruskal   Jelly   11  30  1999  5
> > Knuth     Milk    06  23  2002  12
> > Knuth     Bread   02  21  1949  13
> >
> > I wrote the following query:
> > select cust, prod, mn, dy, yr, quant
> > from (select cust, prod, min(quant)
> >   from sales
> >   group by cust, prod) as x(c, p, q), sales
> > where cust = x.c and prod = x.p and quant = x.q;
> >
> > And I got the following relation:
> > cust      prod    mn  dy  yr    quant
> > ========  ======  ==  ==  ====  =====
> > Knuth     Milk    02  29  2000  12
> > Jones     Fruits  03  31  2001  45
> > Knuth     Jam     12  21  2002  41
> > Hamming   Milk    03  15  1998  47
> > Knuth     Milk    02  29  2000  12
> > Kruskal   Jelly   11  30  1999  5
> > Knuth     Milk    06  23  2002  12
> > Knuth     Bread   02  21  1949  13
> >
> > which is not what I want because the Knuth-Milk combination is repeated;
> > I only want one of them.  I have tried many other variations of the query,
> > but the best I've done is something like this (by selecting distinct
> > quantities out of the above table):
> >
> > cust      prod    mn  dy  yr    quant  st
> > ========  ======  ==  ==  ====  =====  ==
> > Jones     Fruits  03  31  2001  45     WY
> > Knuth     Jam     12  21  2002  41     MN
> > Hamming   Milk    03  15  1998  47     GA
> > Knuth     Milk    02  29  2000  12     CA
> > Kruskal   Jelly   11  30  1999  5      NJ
> > Knuth     Milk    06  23  2002  12     CA
> > Knuth     Bread   02  21  1949  13     CA
> >
> > Can anyone help me out?  Thanks in advance.
> >
>
>

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] max length of sql select statement ?

2003-07-09 Thread markus brosch
On Wed, 2003-07-09 at 17:45, Rod Taylor wrote:
> > Nobody has a better idea? Why is the join of a temporary table (the IN
> > parameters) and the original table so slow? Any tricks here?
> 
> Did you index and ANALYZE the temporary table?

No!

I have to run this "strange" and "long" statement a few thousand
times ... so I always have to create the temp table again and again, and
if I index the table it costs me more time.

As I am relatively new to SQL, I ask you:
What's the difference between an indexed temp (!!!) table and an
unindexed one? The main table is of course fully indexed ;-)

Cheers, Markus



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] max length of sql select statement ?

2003-07-09 Thread Rod Taylor
On Wed, 2003-07-09 at 15:51, markus brosch wrote:
> On Wed, 2003-07-09 at 17:45, Rod Taylor wrote:
> > > Nobody a better idea? Why is the join of a temporary table (the IN
> > > paramters) and the original table so slow? Any tricks here? 
> > 
> > Did you index and ANALYZE the temporary table?
> 
> No!
> 
> I have to do this "strange" and "long" statement also a view thousand
> times ... so I always have to create the temp table again and again and
> if I index the table it costs me more time. 

So... which costs more: building the index plus the fast query, or doing the
slow query?

If you have anything more than a thousand lines in the temp table, I bet
indexing it would be the better way to go.

> As far I am relativly new to SQL, I ask you:
> Where's the difference between and indexed temp (!!!) table and an
> unindexed one? The main table is of course fully indexed ;-)

One has an index, which means that other (much faster) methods may be
available for the database to use.
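A rough sketch of the suggested pattern, with invented names (big_table.col is assumed to be indexed; the temp table holds the values that would otherwise go into a huge IN list):

CREATE TEMP TABLE wanted (key_val text NOT NULL);

INSERT INTO wanted VALUES ('a');
INSERT INTO wanted VALUES ('b');
INSERT INTO wanted VALUES ('c');
-- ... or load the list in bulk with COPY

CREATE INDEX wanted_key_idx ON wanted (key_val);
ANALYZE wanted;

SELECT b.*
FROM big_table b JOIN wanted w ON w.key_val = b.col;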




Re: [SQL] Break referential integrity.

2003-07-09 Thread Jan Wieck
Stephan Szabo wrote:
On Wed, 2 Jul 2003, Rudi Starcevic wrote:

Hi,

I know that if you have a trigger and a function and then drop/replace the
function, the trigger needs
to be dropped/replaced too so that it can see the new function.
Is it the same for referential integrity on tables too?

If table B's foreign key references table A and you drop/replace table A
then the reference from table B to table A is broken and needs to be
recreated ?
In recent versions, you should not be able to drop table A without specifying
cascade, which will drop the constraint for you (and thus you'll need to
recreate it).  In older versions, I'm not 100% sure what'd happen, but you
probably should drop and recreate it for good measure.
Older versions spit out a NOTICE and dropped the constraint, so they 
kinda defaulted to CASCADE.

Jan
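To make that concrete, a small sketch (table, column and constraint names are invented for the example):

CREATE TABLE a (id int PRIMARY KEY);
CREATE TABLE b (id int PRIMARY KEY,
                a_id int REFERENCES a (id));

DROP TABLE a;          -- recent versions refuse: the constraint on b depends on a
DROP TABLE a CASCADE;  -- drops a together with the foreign-key constraint on b

CREATE TABLE a (id int PRIMARY KEY);
ALTER TABLE b ADD CONSTRAINT b_a_id_fkey
    FOREIGN KEY (a_id) REFERENCES a (id);  -- the reference must be recreated by hand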

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[SQL] Home-brewed table syncronization

2003-07-09 Thread Michael A Nachbaur
Hello everyone,

While I still have plans to do some more work on RServ, it is apparent to me 
that I need a more immediate solution.  I'm not replicating my entire 
dataset, but rather just some "summary" tables that are maintained by stored 
procedures. This means RServ is an iffy proposition at best for me right now.

So, I'm looking at synchronizing 4 tables from one master database to several 
child databases. I'm thinking of doing the following with DBD::Multiplex:

DELETE FROM TableA;
INSERT INTO TableA (..) VALUES (...);


on all the child databases, but I'm not sure what kind of impact this would 
have on my servers.  My impression is that this would hammer the indexes, and 
might blow any memory optimization out the window.  Only a few records in my 
dataset will change from time-to-time, but just the process of determining 
what is different may take more effort than simply rebuilding.

What are your thoughts?  These tables will probably only have a maximum of 
10,000 rows in total, but I'm going to have to synchronize once per minute; as 
a result, I wouldn't like this to take any longer than about 10 seconds.

-- 
/* Michael A. Nachbaur <[EMAIL PROTECTED]>
 * http://nachbaur.com/pgpkey.asc
 */

"Out," he said. People who can supply that amount of firepower don't need to 
supply verbs as well.


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Home-brewed table syncronization

2003-07-09 Thread Cliff Wells
On Wed, 2003-07-09 at 14:14, Michael A Nachbaur wrote:

> So, I'm looking at syncronizing 4 tables from one master database to several 
> child databases. I'm thinking of doing the following with DBD::Multiplex:
> 
> DELETE FROM TableA;
> INSERT INTO TableA (..) VALUES (...);
> 
> 
> on all the child databases, but I'm not sure what kind of impact this would 
> have on my servers.  My impression is that this would hammer the indexes, and 
> might blow any memory optimization out the window.  Only a few records in my 
> dataset will change from time-to-time, but just the process of determining 
> what is different may take more effort than simply rebuilding.

Keep a timestamp associated with each record.  Only update the records
with timestamps later than your last sync.
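A sketch of what that could look like, assuming a last_modified column were added (names are only for illustration; an UPDATE trigger or the application would have to maintain the column, and deletes would still need separate handling):

ALTER TABLE tablea ADD COLUMN last_modified timestamp;
ALTER TABLE tablea ALTER COLUMN last_modified SET DEFAULT now();

-- on the master: only the rows touched since the previous run
SELECT *
FROM tablea
WHERE last_modified > '2003-07-09 12:00:00';  -- time of the last successful sync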

-- 
Cliff Wells, Software Engineer
Logiplex Corporation (www.logiplex.net)
(503) 978-6726  (800) 735-0555


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] Home-brewed table syncronization

2003-07-09 Thread Michael A Nachbaur
On Wednesday 09 July 2003 02:28 pm, Cliff Wells wrote:
> On Wed, 2003-07-09 at 14:14, Michael A Nachbaur wrote:
> > So, I'm looking at syncronizing 4 tables from one master database to
> > several child databases. I'm thinking of doing the following with
> > DBD::Multiplex:
> >
> > DELETE FROM TableA;
> > INSERT INTO TableA (..) VALUES (...);
> > 
> >
> > on all the child databases, but I'm not sure what kind of impact this
> > would have on my servers.  My impression is that this would hammer the
> > indexes, and might blow any memory optimization out the window.  Only a
> > few records in my dataset will change from time-to-time, but just the
> > process of determining what is different may take more effort than simply
> > rebuilding.
>
> Keep a timestamp associated with each record.  Only update the records
> with timestamps later than your last sync.

I'm dealing with an existing database structure that, though I can change it, 
has a lot of impact on the rest of my infrastructure.  If I can find a way of 
doing this without resorting to timestamps, I'd much rather do it that way.

-- 
/* Michael A. Nachbaur <[EMAIL PROTECTED]>
 * http://nachbaur.com/pgpkey.asc
 */

"Rome wasn't burned in a day. "


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] Home-brewed table syncronization

2003-07-09 Thread Jason Earl
Michael A Nachbaur <[EMAIL PROTECTED]> writes:

> On Wednesday 09 July 2003 02:28 pm, Cliff Wells wrote:
>> On Wed, 2003-07-09 at 14:14, Michael A Nachbaur wrote:
>> > So, I'm looking at syncronizing 4 tables from one master database to
>> > several child databases. I'm thinking of doing the following with
>> > DBD::Multiplex:
>> >
>> > DELETE FROM TableA;
>> > INSERT INTO TableA (..) VALUES (...);
>> > 
>> >
>> > on all the child databases, but I'm not sure what kind of impact
>> > this would have on my servers.  My impression is that this would
>> > hammer the indexes, and might blow any memory optimization out
>> > the window.  Only a few records in my dataset will change from
>> > time-to-time, but just the process of determining what is
>> > different may take more effort than simply rebuilding.
>>
>> Keep a timestamp associated with each record.  Only update the records
>> with timestamps later than your last sync.
>
> I'm dealing with an existing database structure that, though I can
> change it, has a lot of impact on the rest of my infrastructure.  If
> I can find a way of doing this without resorting to timestamps, I'd
> much rather do it that way.

Would it be possible to add another table (changelog) that contained
the primary key of the record that has changed and a timestamp?  The
changelog table could be maintained via triggers (on update, insert,
or delete).  Your synchronization software could then be relatively
simple.  It would simply need to check your changelog table for rows
that have changed.

Jason
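A sketch of the consumer side of that idea (table and column names here are hypothetical; Raj Mathur's post later in this thread shows a complete trigger-based way to fill such a table):

SELECT relation, keyval, what
FROM changelog
WHERE changed_at > '2003-07-09 12:00:00'  -- time of the last successful sync
ORDER BY changed_at;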

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Datatype conversion help

2003-07-09 Thread Dmitry Tkach
What about lpad?

select lpad (7, 2, 0) || '-' || lpad (9, 2, '0') || '-2003';
  ?column?
------------
 07-09-2003
(1 row)

I hope, it helps...

Dima

Yasir Malik wrote:

Thank you so much!  But my problem is that when I do
to_char(mn, '00') || '-' || to_char(dy, '00') || '-' || to_char(yr,
'YYYY')
where mn, dy, and yr are ints, the output has a space after
the dash.  For example, I get
07- 25- 1994
instead of what I want:
07-25-1994
Thanks,
Yasir
On Tue, 8 Jul 2003, Richard Rowell wrote:

 

Date: 08 Jul 2003 15:21:33 -0500
From: Richard Rowell <[EMAIL PROTECTED]>
To: Yasir Malik <[EMAIL PROTECTED]>
Subject: Re: [SQL] Datatype conversion help
On Tue, 2003-07-08 at 15:07, Yasir Malik wrote:
   

I've tried to_char(in_val, '99'), and that returns a string that is two
 

select to_char(9,'00');

   

---(end of broadcast)---
TIP 8: explain analyze is your friend
 



---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [SQL] Datatype conversion help

2003-07-09 Thread Yasir Malik
I used trim and here's what I came up with:
to_date(trim(to_char(yr, 'YYYY') || trim(to_char(mn, '00')) ||
trim(to_char(dy, '00'))), 'YYYYMMDD')

Apparently to_char adds a space to the character you are casting.
Yasir

On Wed, 9 Jul 2003, Dmitry Tkach wrote:

> Date: Wed, 09 Jul 2003 18:40:37 -0400
> From: Dmitry Tkach <[EMAIL PROTECTED]>
> To: Yasir Malik <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SQL] Datatype conversion help
>
> What about lpad?
>
> select lpad (7, 2, 0) || '-' || lpad (9, 2, '0') || '-2003';
>   ?column?
> 
>  07-09-2003
> (1 row)
>
>
> I hope, it helps...
>
> Dima
>
> Yasir Malik wrote:
>
> >Thank you so much!  But my problem is that when I do
> >to_char(mn, '00') || '-' || to_char(dy, '00') || '-' || to_char(yr,
> >'')
> >
> >where mn, dy, and yr are ints, is that the output has a space after the
> >the dash.  For example, I get
> >07- 25- 1994
> >
> >instead of what I want:
> >07-25-1994
> >
> >Thanks,
> >Yasir
> >
> >On Tue, 8 Jul 2003, Richard Rowell wrote:
> >
> >
> >
> >>Date: 08 Jul 2003 15:21:33 -0500
> >>From: Richard Rowell <[EMAIL PROTECTED]>
> >>To: Yasir Malik <[EMAIL PROTECTED]>
> >>Subject: Re: [SQL] Datatype conversion help
> >>
> >>On Tue, 2003-07-08 at 15:07, Yasir Malik wrote:
> >>
> >>
> >>>I've tried to_char(in_val, '99'), and that returns a string that is two
> >>>
> >>>
> >>select to_char(9,'00');
> >>
> >>
> >>
> >
> >---(end of broadcast)---
> >TIP 8: explain analyze is your friend
> >
> >
>
>

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Datatype conversion help

2003-07-09 Thread Dmitry Tkach
Yasir Malik wrote:

I used trim and here's what I came up with:
to_date(trim(to_char(yr, '') || trim(to_char(mn, '00')) ||
trim(to_char(dy, '00'))), 'MMDD')
Apparently to_char adds a space to the charecter you are casting.
 

I know :-)
And lpad doesn't - that's why I suggested it :-)
Dima

On Wed, 9 Jul 2003, Dmitry Tkach wrote:

 

Date: Wed, 09 Jul 2003 18:40:37 -0400
From: Dmitry Tkach <[EMAIL PROTECTED]>
To: Yasir Malik <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Datatype conversion help
What about lpad?

select lpad (7, 2, 0) || '-' || lpad (9, 2, '0') || '-2003';
 ?column?

07-09-2003
(1 row)
I hope, it helps...

Dima

Yasir Malik wrote:

   

Thank you so much!  But my problem is that when I do
to_char(mn, '00') || '-' || to_char(dy, '00') || '-' || to_char(yr,
'')
where mn, dy, and yr are ints, is that the output has a space after the
the dash.  For example, I get
07- 25- 1994
instead of what I want:
07-25-1994
Thanks,
Yasir
On Tue, 8 Jul 2003, Richard Rowell wrote:



 

Date: 08 Jul 2003 15:21:33 -0500
From: Richard Rowell <[EMAIL PROTECTED]>
To: Yasir Malik <[EMAIL PROTECTED]>
Subject: Re: [SQL] Datatype conversion help
On Tue, 2003-07-08 at 15:07, Yasir Malik wrote:

   

I've tried to_char(in_val, '99'), and that returns a string that is two

 

select to_char(9,'00');



   

---(end of broadcast)---
TIP 8: explain analyze is your friend
 

   



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] Datatype conversion help

2003-07-09 Thread Yasir Malik
I will surely use your suggestion in my future programs.
Thanks,
Yasir

On Wed, 9 Jul 2003, Dmitry Tkach wrote:

> Date: Wed, 09 Jul 2003 18:51:48 -0400
> From: Dmitry Tkach <[EMAIL PROTECTED]>
> To: Yasir Malik <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED]
> Subject: Re: [SQL] Datatype conversion help
>
> Yasir Malik wrote:
>
> >I used trim and here's what I came up with:
> >to_date(trim(to_char(yr, '') || trim(to_char(mn, '00')) ||
> >trim(to_char(dy, '00'))), 'MMDD')
> >
> >Apparently to_char adds a space to the charecter you are casting.
> >
> >
> I know :-)
> And lpad doesn't - that's why I suggested it :-)
>
> Dima
>
> >
> >On Wed, 9 Jul 2003, Dmitry Tkach wrote:
> >
> >
> >
> >>Date: Wed, 09 Jul 2003 18:40:37 -0400
> >>From: Dmitry Tkach <[EMAIL PROTECTED]>
> >>To: Yasir Malik <[EMAIL PROTECTED]>
> >>Cc: [EMAIL PROTECTED]
> >>Subject: Re: [SQL] Datatype conversion help
> >>
> >>What about lpad?
> >>
> >>select lpad (7, 2, 0) || '-' || lpad (9, 2, '0') || '-2003';
> >>  ?column?
> >>
> >> 07-09-2003
> >>(1 row)
> >>
> >>
> >>I hope, it helps...
> >>
> >>Dima
> >>
> >>Yasir Malik wrote:
> >>
> >>
> >>
> >>>Thank you so much!  But my problem is that when I do
> >>>to_char(mn, '00') || '-' || to_char(dy, '00') || '-' || to_char(yr,
> >>>'')
> >>>
> >>>where mn, dy, and yr are ints, is that the output has a space after the
> >>>the dash.  For example, I get
> >>>07- 25- 1994
> >>>
> >>>instead of what I want:
> >>>07-25-1994
> >>>
> >>>Thanks,
> >>>Yasir
> >>>
> >>>On Tue, 8 Jul 2003, Richard Rowell wrote:
> >>>
> >>>
> >>>
> >>>
> >>>
> Date: 08 Jul 2003 15:21:33 -0500
> From: Richard Rowell <[EMAIL PROTECTED]>
> To: Yasir Malik <[EMAIL PROTECTED]>
> Subject: Re: [SQL] Datatype conversion help
> 
> On Tue, 2003-07-08 at 15:07, Yasir Malik wrote:
> 
> 
> 
> 
> >I've tried to_char(in_val, '99'), and that returns a string that is two
> >
> >
> >
> >
> select to_char(9,'00');
> 
> 
> 
> 
> 
> >>>---(end of broadcast)---
> >>>TIP 8: explain analyze is your friend
> >>>
> >>>
> >>>
> >>>
> >>
> >>
>
>
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
>http://www.postgresql.org/docs/faqs/FAQ.html
>

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] create view error

2003-07-09 Thread mila boldareva
Hi, Gary!

> CREATE
> create view loco_dets as
>   select * from locos l
>   left outer join 
> (select * from lclass) lc on lc.lcid = l.lclass
>   left outer join
(*)  (select lnumber from lnumbers) ln on ln.lnid = l.lid and ln.lncurrent
> = true
>   left outer join
> (select * from company) c on c.coid = lc.lcompany;
> ERROR:  No such attribute or function ln.lnid


Your subselect on the line marked (*) does not include lnid in the list of
selected fields.

I suspect you can also use constructions like

 select * from locos l left outer join lclass lc on (lc.lcid = l.lclass)

 instead of what you use:
> select * from locos l
>   left outer join 
> (select * from lclass) lc on lc.lcid = l.lclass 

in this way you automatically avoid the error that you had!

cheers,
Mila
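Putting both points together, a sketch of the view written with direct outer joins (assuming the column names from the original statement):

create view loco_dets as
  select *
  from locos l
  left outer join lclass   lc on lc.lcid = l.lclass
  left outer join lnumbers ln on ln.lnid = l.lid and ln.lncurrent = true
  left outer join company  c  on c.coid  = lc.lcompany;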


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


[SQL] help yourself by helping others

2003-07-09 Thread Ali Adams



Dear All,
 
I am new to Relational Databases and SQL and my 
background in ODBs is clouding my way to solving what seems to be a simple 
problem. I am sure many of you have met it many times.
 
OK, I have a table as follows:
 
 
ID  Machine  Date        Withdrawals
1   1        01/01/2003  1101
2   2        01/01/2003  2101
3   3        01/01/2003  3101
4   1        02/01/2003  1102
5   2        02/01/2003  2102
6   1        03/01/2003  1103
7   3        03/01/2003  3103
8   2        04/01/2003  2104
9   4        04/01/2003  4104
 
And I would like to create a monthly withdrawals
report as follows:

Machine  Day1  Day2  Day3  Day4  ..  Day31
1          11    12    13     0          0
2          21    22     0    24          0
3          31     0    33     0          0
4           0     0     0     0          0
 
Can you please help?
 
Many thanks in advance.
 
Ali

www.geocities.com/aliadams
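One common way to express this kind of per-day pivot is a conditional aggregate; this is only a sketch (it assumes the table is called withdrawals, that the date column holds real dates, and only the first few day columns are written out):

SELECT w.machine,
       sum(CASE WHEN extract(day FROM w.date) = 1 THEN w.withdrawals ELSE 0 END) AS day1,
       sum(CASE WHEN extract(day FROM w.date) = 2 THEN w.withdrawals ELSE 0 END) AS day2,
       sum(CASE WHEN extract(day FROM w.date) = 3 THEN w.withdrawals ELSE 0 END) AS day3
       -- ... and so on up to day31
FROM withdrawals w
WHERE w.date >= '2003-01-01' AND w.date < '2003-02-01'   -- the month being reported
GROUP BY w.machine
ORDER BY w.machine;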
 


[SQL] max length of sql select statement ?

2003-07-09 Thread markus brosch
Hi All!

I was searching the archive and was wondering why nobody asked this
strange question (or I've not found it?):

"What is the max allowed length of a sql statement or query?"
I want to combine hundreds or thousands of 'OR's within a select statement.
Possible or not?

cheers Markus


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] sort for ranking

2003-07-09 Thread Henshall, Stuart - TNP Southwest

Could you do something like the following:
SELECT sum_user,(SELECT count(sum_user)+1 FROM tbl_sums AS t WHERE t.sum_user>tbl_sums.sum_user) AS ranking FROM tbl_sums ORDER BY ranking

hth,
- Stuart
P.S. Sorry about format change, the disclaimer adder forces it :(


> -Original Message-
> From: Andreas Schmitz [mailto:[EMAIL PROTECTED]]
> Sent: 07 July 2003 15:14
> To: [EMAIL PROTECTED]
> Subject: [SQL] sort for ranking
> 
> 
> 
> Hello *,
> 
> I have a little problem that confuses me. We are gathering 
> values from a table 
> as a sum to insert them into another table. I also need to 
> get a ranking at 
> insert (i.e. the highest points will get first place and so on). 
> I tried to 
> involve a sequence to qualify the ranking by select at insert.
> 
> So I tried the following (smaller example)
> 
> select setval('tipp_eval_seq',1);
> select sum_user,nextval('tipp_eval_seq')-1 as ranking from 
> tbl_sums order by 
> ranking desc, user_sum asc;
> 
>  user_sum | ranking 
> ----------+---------
>        46 |      30
>        45 |      26
>        44 |      28
>        43 |      25
>        42 |       1
>        41 |       2
>        39 |       3
>        38 |      27
>        36 |      19
>        35 |      18
>        34 |      20
>        31 |      24
>        30 |      17
>        29 |      15
>        28 |      16
>        27 |      12
>        26 |      11
>        25 |      23
>        24 |      21
>        23 |      10
>        19 |      13
>        16 |       9
>        12 |       7
>        11 |       8
>        10 |      29
>         8 |       6
>         7 |       5
>         6 |      14
>         2 |       4
>         1 |      22
> (30 rows)
> 
> 
> As you can see, the sums are sorted correctly but the ranking 
> is a mess. I 
> recognized that the select seems to follow primarily the 
> internal table 
> order. Is there any way to solve this nicely? Hints and solutions are 
> appreciated.
> 
> Thanks in advance 
> 
> -Andreas
> 
> 



DISCLAIMER:The information in this message is confidential and may be legally privileged. It is intended solely for the addressee.  Access to this message by anyone else is unauthorised.  If you are not the intended recipient, any disclosure, copying, or distribution of the message, or any action or omission taken by you in reliance on it, is prohibited and may be unlawful.  Please immediately contact the sender if you have received this message in error. Thank you.





[SQL] Recursive request ...

2003-07-09 Thread Benoît Bournon
I have to write a function that returns a tree with the title and link of each
row of a table.

Recursively, each item depends on a parent item.

The goal is to organise a menu with parent dependencies.

How can this be done, and what would be fastest: C, PL/pgSQL or something else?
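As a sketch of the PL/pgSQL route (the menu table layout below, id/parent_id/title/link, is only assumed for the example), a set-returning function can walk the tree by calling itself for each child:

CREATE TABLE menu (
    id        int PRIMARY KEY,
    parent_id int REFERENCES menu (id),  -- NULL for top-level entries
    title     text NOT NULL,
    link      text NOT NULL
);

CREATE OR REPLACE FUNCTION menu_tree(int) RETURNS SETOF menu AS '
DECLARE
    node  menu%ROWTYPE;
    child menu%ROWTYPE;
BEGIN
    FOR node IN SELECT * FROM menu WHERE parent_id = $1 ORDER BY id LOOP
        RETURN NEXT node;
        FOR child IN SELECT * FROM menu_tree(node.id) LOOP
            RETURN NEXT child;   -- emit the whole subtree, depth-first
        END LOOP;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql';

-- SELECT * FROM menu_tree(1);   -- everything below menu entry 1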

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [SQL] Home-brewed table syncronization

2003-07-09 Thread Raj Mathur
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

> "Michael" == Michael A Nachbaur  writes:

Michael> On Wednesday 09 July 2003 02:28 pm, Cliff Wells wrote:
>> On Wed, 2003-07-09 at 14:14, Michael A Nachbaur wrote: > So,
>> I'm looking at syncronizing 4 tables from one master database
>> to > several child databases. I'm thinking of doing the
>> following with > DBD::Multiplex:
>> >
>> > DELETE FROM TableA; > INSERT INTO TableA (..) VALUES (...); >
>> 
>> >
>> > on all the child databases, but I'm not sure what kind of
>> impact this > would have on my servers.  My impression is that
>> this would hammer the > indexes, and might blow any memory
>> optimization out the window.  Only a > few records in my
>> dataset will change from time-to-time, but just the > process
>> of determining what is different may take more effort than
>> simply > rebuilding.
>> 
>> Keep a timestamp associated with each record.  Only update the
>> records with timestamps later than your last sync.

Michael> I'm dealing with an existing database structure that,
Michael> though I can change it, has a lot of impact on the rest
Michael> of my infrastructure.  If I can find a way of doing this
Michael> without resorting to timestamps, I'd much rather do it
Michael> that way.

Had the same issue, so I made a separate table to store
meta-information about what records have been changed in my master
tables.  Note that I do not store the actual change, just which record
was affected and the nature of the change: delete, insert or update.

The deltas table is called, strangely enough, ``delta'', and here's
the code that manages it (for a table called pdetail; keys: package,
pool, timeslot):

- -- Table pdetail

create or replace function pdetail_update_delta()
returns opaque
as '
declare
mykeyval varchar(1024) ;
upd char(1) ;

begin
if TG_OP = ''UPDATE'' then
upd := ''U'' ;
mykeyval := OLD.package || ''|'' || OLD.pool || ''|''
|| OLD.timeslot ;
end if ;
if TG_OP = ''INSERT'' then
upd := ''I'' ;
mykeyval := NEW.package || ''|'' || NEW.pool || ''|''
|| NEW.timeslot ;
end if ;
if TG_OP = ''DELETE'' then
upd := ''D'' ;
mykeyval := OLD.package || ''|'' || OLD.pool || ''|''
|| OLD.timeslot ;
execute ''delete from delta where relation=''
|| quote_literal(TG_RELNAME) || '' and keyval=''
|| quote_literal(mykeyval) || '';'' ;
end if ;
insert into delta ( relation , keyval , timestamp , what )
values ( ''pdetail'' , mykeyval , now () , upd ) ;
if TG_OP = ''UPDATE'' or TG_OP = ''INSERT'' then
return NEW ;
end if ;
if TG_OP = ''DELETE'' then
return OLD ;
end if ;
end ;
' language plpgsql ;
create trigger pdetail_update_delta_trigger
after update on pdetail
for each row
execute procedure pdetail_update_delta() ;
create trigger pdetail_insert_delta_trigger
after insert on pdetail
for each row
execute procedure pdetail_update_delta() ;
create trigger pdetail_delete_delta_trigger
before delete on pdetail
for each row
execute procedure pdetail_update_delta() ;

Table delta itself looks like this:

create table delta
(
relationvarchar(32) , -- Table name to which update was made
keyval  varchar(1024) , -- Key value of the updated record
timestamp   timestamp without time zone default now() , -- When
whatchar(1)
check (what = 'U' or what = 'D' or what = 'I') ,

primary key ( relation , keyval , timestamp )
) ;

Not much experienced with PgSQL, so would appreciate any tips the
masters can give for improving the plpgsql code.  However, it works
for me as it is at the moment.

You are free to use this code under the terms of the GNU GPL.

Regards,

- -- Raju
- -- 
Raj Mathur[EMAIL PROTECTED]  http://kandalaya.org/
   GPG: 78D4 FC67 367F 40E2 0DD5  0FEF C968 D0EF CC68 D17F
  It is the mind that moves
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.0.7 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.6 and Gnu Privacy Guard 

iD8DBQE/DN78yWjQ78xo0X8RAsmXAJ4k1cq7mFiRxUb6EGO0R81MVfAWfgCfdGxN
K7g2SsvUAPedg7RH86OZcTY=
=JkN/
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[SQL] substr_count

2003-07-09 Thread Tom Rochester
Hey all,

I would like to achieve something along the lines of:

SELECT field FROM table WHERE field ILIKE '$searchterm' ORDER BY 
substr_count(field, '$searchterm');

Of course the substr_count function does not exist. Is there any way to 
do this?

I had a thought that char_count(replace(field, !$searchterm, '')) might do 
the job, but replace doesn't allow for 'replace everything NOT matching the string';

Any Ideas?
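One workaround I'm aware of gets at the count indirectly, by stripping the search term with replace() and comparing lengths (a sketch only; $searchterm stands for the application-supplied value and the table name is invented):

SELECT field
FROM some_table
WHERE field ILIKE '%' || '$searchterm' || '%'
ORDER BY (length(field) - length(replace(field, '$searchterm', '')))
         / length('$searchterm') DESC;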

Thanks in advance,

--
Tom Rochester
Systems Developer/Administrator
CHU Underwriting Agencies Pty Ltd
Level 5, 1 Northcliff Street, 
Milsons Point  NSW  2060

 02 8923 5346
 02 8923 5363
   +61416152792
 [EMAIL PROTECTED]
 http://www.chu.com.au/
"The information contained in this email and any attached files is strictly private and confidential. The intended addressee should read this email only. If the recipient of this message is not the intended addressee, please call CHU Underwriting Agencies Pty Ltd on +61 1300 361 263 and promptly delete this email and any attachments.

The intended recipient of this email may only use, reproduce, disclose or distribute the information contained in this email and any attached files with CHU's permission. If you are not the intended addressee, you are strictly prohibited from using, reproducing, disclosing or distributing the information contained in this email and any attached files.

CHU advises that this email and any attached files should be scanned to detect viruses. CHU accepts no liability for loss or damage (whether caused by negligence or not) resulting from the use of any attached files."



---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] trigger proceedures in sql

2003-07-09 Thread adivi



Hi,

Can trigger procedures (procedures to be executed from within a trigger)
not be written in SQL?

I was looking for examples and can find procedures in C only.

Regards, adivi
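For what it's worth, a trigger function cannot be written in plain SQL (it has to be declared RETURNS TRIGGER, which the SQL language cannot do), but PL/pgSQL works and is much easier than C. A minimal sketch with an invented table:

CREATE TABLE items (
    id      int PRIMARY KEY,
    name    text NOT NULL,
    changed timestamp
);

CREATE OR REPLACE FUNCTION items_stamp() RETURNS TRIGGER AS '
BEGIN
    NEW.changed := now();   -- stamp the row on every insert/update
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER items_stamp_trg
    BEFORE INSERT OR UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_stamp();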


Re: [SQL] Datatype conversion help

2003-07-09 Thread listrec
I tried

select
to_date(substring(to_char(yr,'0009'),2,4)||substring(to_char(mn,'09'),2,2)||
substring(to_char(dy,'09'),2,4),'YYYYMMDD');

which works fine

Detlef

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dmitry Tkach
Sent: Thursday, 10 July 2003 00:52
To: Yasir Malik
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Datatype conversion help


Yasir Malik wrote:

>I used trim and here's what I came up with:
>to_date(trim(to_char(yr, '') || trim(to_char(mn, '00')) ||
>trim(to_char(dy, '00'))), 'MMDD')
>
>Apparently to_char adds a space to the charecter you are casting.
>
>
I know :-)
And lpad doesn't - that's why I suggested it :-)

Dima

>
>On Wed, 9 Jul 2003, Dmitry Tkach wrote:
>
>
>
>>Date: Wed, 09 Jul 2003 18:40:37 -0400
>>From: Dmitry Tkach <[EMAIL PROTECTED]>
>>To: Yasir Malik <[EMAIL PROTECTED]>
>>Cc: [EMAIL PROTECTED]
>>Subject: Re: [SQL] Datatype conversion help
>>
>>What about lpad?
>>
>>select lpad (7, 2, 0) || '-' || lpad (9, 2, '0') || '-2003';
>>  ?column?
>>
>> 07-09-2003
>>(1 row)
>>
>>
>>I hope, it helps...
>>
>>Dima
>>
>>Yasir Malik wrote:
>>
>>
>>
>>>Thank you so much!  But my problem is that when I do
>>>to_char(mn, '00') || '-' || to_char(dy, '00') || '-' || to_char(yr,
>>>'')
>>>
>>>where mn, dy, and yr are ints, is that the output has a space after the
>>>the dash.  For example, I get
>>>07- 25- 1994
>>>
>>>instead of what I want:
>>>07-25-1994
>>>
>>>Thanks,
>>>Yasir
>>>
>>>On Tue, 8 Jul 2003, Richard Rowell wrote:
>>>
>>>
>>>
>>>
>>>
Date: 08 Jul 2003 15:21:33 -0500
From: Richard Rowell <[EMAIL PROTECTED]>
To: Yasir Malik <[EMAIL PROTECTED]>
Subject: Re: [SQL] Datatype conversion help

On Tue, 2003-07-08 at 15:07, Yasir Malik wrote:




>I've tried to_char(in_val, '99'), and that returns a string that is two
>
>
>
>
select to_char(9,'00');





>>>---(end of broadcast)---
>>>TIP 8: explain analyze is your friend
>>>
>>>
>>>
>>>
>>
>>



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])