Re: [GENERAL] Trigger vs web service

2011-04-05 Thread Jorge Godoy
If I were Yoda, I would say "The answer you seek is... it depends".

If I were you, I would test both solutions and check which one performs
better and has the least impact on your environment.

For example, if you have no logic at all in the database, I would code the
splitting in the frontend.  If you already have code in the database, then
checking how this specific function performs there is also worth a shot.
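A minimal sketch of option 1, as a BEFORE INSERT trigger (table, column and
delimiter names are assumptions, not from your message):

CREATE OR REPLACE FUNCTION split_payload() RETURNS trigger AS $$
BEGIN
    -- hypothetical layout: one raw payload column plus two extracted fields
    NEW.field_a := split_part(NEW.payload, ';', 1);
    NEW.field_b := split_part(NEW.payload, ';', 2);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER raw_data_split
    BEFORE INSERT ON raw_data
    FOR EACH ROW EXECUTE PROCEDURE split_payload();

Measure that against the web service doing the same split before the INSERT.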

--
Jorge Godoy 


On Tue, Apr 5, 2011 at 09:16, Marc-André Goderre wrote:

>
>
> I receive a long string (about one per second) that contains a lot of
> information.  For the moment it is inserted directly into the database (one
> column).
>
> I have to parse the whole string every time I need information from it.
>
>
>
> Now I want to split the string and save the information in different fields.
>
> I have 2 solutions and would like to have your opinion on them.
>
>
>
> 1- Program a trigger function that detects the original insert, splits the
> string and fills the other fields.
>
> 2- Program a web service that receives the string, splits it and inserts
> the information into the db.
>
>
>
> Which one is faster (in terms of performance)?
>
>
>
> Thanks
>
>
>
> Marc-Andre Goderre
>
> TI Analyst
>
>
>
>
>


Re: [GENERAL] How to generate unique invoice numbers foreach day

2011-01-15 Thread Jorge Godoy
Why would you do that?

You can always reset the sequence at the end of the day.
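A minimal sketch of that, assuming a single shared sequence that a scheduled
job resets once per day (all names are made up):

CREATE SEQUENCE invoice_day_seq;

-- yymmdd prefix plus the per-day counter
CREATE OR REPLACE FUNCTION next_invoice_number(d date) RETURNS text AS $$
    SELECT to_char($1, 'YYMMDD') || nextval('invoice_day_seq')::text;
$$ LANGUAGE sql;

-- run once per day (e.g. from cron) to restart the counter at 1
SELECT setval('invoice_day_seq', 1, false);

Note that sequence values are not rolled back, so an aborted insert can leave
a gap in the numbering.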

--
Jorge Godoy 


On Sat, Jan 15, 2011 at 17:09, Andrus Moor  wrote:

>  There are 365 days in a year.
> Do you really think pre-creating a sequence for every day of every year is
> the best solution?
>
> Andrus.
>
> - Original Message -
> *From:* Jorge Godoy 
> *To:* Andrus Moor 
> *Cc:* pgsql-general@postgresql.org
> *Sent:* Saturday, January 15, 2011 8:41 PM
> *Subject:* ***SPAM*** Re: [GENERAL] How to generate unique invoice numbers
> foreach day
>
> Use a sequence.
>
> --
> Jorge Godoy 
>
>
> 2011/1/15 Andrus Moor 
>
>> Invoice numbers have the format  yymmddn
>>
>> where n is a sequence number within the day, starting at 1 every day.
>>
>> command
>>
>> SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7),
>> '^[0-9]*'),'')::int),0)+1
>> FROM invoice
>> where date= ?invoicedate
>>
>> is used to get the next free invoice number when a new invoice is saved.
>>
>> If multiple invoices are saved concurrently from different processes, they
>> will probably get the same number.
>>
>> How can I get a unique invoice number for a given day in 8.1+ when multiple
>> users create new invoices?
>>
>> Andrus.
>>
>>
>
>


Re: [GENERAL] How to generate unique invoice numbers for each day

2011-01-15 Thread Jorge Godoy
Use a sequence.

--
Jorge Godoy 


2011/1/15 Andrus Moor 

> Invoice numbers have the format  yymmddn
>
> where n is a sequence number within the day, starting at 1 every day.
>
> command
>
> SELECT COALESCE(MAX(nullif(substring( substring(tasudok from 7),
> '^[0-9]*'),'')::int),0)+1
> FROM invoice
> where date= ?invoicedate
>
> is used to get the next free invoice number when a new invoice is saved.
>
> If multiple invoices are saved concurrently from different processes, they
> will probably get the same number.
>
> How can I get a unique invoice number for a given day in 8.1+ when multiple
> users create new invoices?
>
> Andrus.
>
>


Re: [GENERAL] Copy From suggestion

2010-12-20 Thread Jorge Godoy
With OpenOffice.org that 65K limit goes away as well...

I don't know why it is still like that today for MS Office...  It is almost
2011 and they still think 64K is enough? :-)

--
Jorge Godoy 


On Mon, Dec 20, 2010 at 11:49, Mark Watson wrote:

>  Thanks, Adrian,
>
> I’ll try a windows compile of pgloader sometime during the holidays. It’s
> true that I already have a solution (export <= 65000 row chunks, import into
> Excel, export via Excel puts quotes around the text columns), but something
> faster and more efficient would really help in this case.
>
> -Mark
>  --
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On behalf of* Adrian Klaver
> *Sent:* 18 December 2010 18:05
> *To:* pgsql-general@postgresql.org
> *Cc:* Mark Watson
> *Subject:* Re: [GENERAL] Copy From suggestion
>
>
>
> On Friday 17 December 2010 7:46:12 am Mark Watson wrote:
> > Hello all,
> > Firstly, I apologise if this is not the correct list for this subject.
> > Lately, I've been working on a data conversion, importing into Postgres
> > using Copy From. The text file I'm copying from is produced from an
> ancient
> > program and produces either a tab or semi-colon delimited file. One file
> > contains about 1.8M rows and has a 'comments' column. The exporting
> > program, which I am forced to use, does not surround this column with
> > quotes and this column contains cr/lf characters, which I must deal with
> > (and have dealt with) before I can import the file via Copy. Hence to my
> > suggestion: I was envisioning a parameter DELIMITER_COUNT which, if one
> was
> > 100% confident that all columns are accounted for in the input file,
> could
> > be used to alleviate the need to deal with cr/lf's in varchar and text
> > columns. i.e., if copy loaded a line with fewer delimiters than
> > delimiter_count, the next line from the text file would be read and the
> > assignment of columns would continue for the current row/column.
> > Just curious as to the thoughts out there.
> > Thanks to all for this excellent product, and a merry Christmas/holiday
> > period to all.
> >
> > Mark Watson
>
> A suggestion: give pgloader a look:
> http://pgloader.projects.postgresql.org/
>
> If I am following you it might already have the solution to the multi-line
> problem. In particular read the History section of the docs.
>
>
> Thanks,
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>
>


Re: [GENERAL] Linux

2010-11-04 Thread Jorge Godoy
The choice depends more on what you want / need to have than what people
think you want / need.

If your corporation requires a support agreement, go either with Red Hat or
with SuSE (Novell).

If possible, have at least one of each of the above for a while -- one or
two years -- and see what is better in your environment.

I am more inclined to use SuSE (SLES), as I have had OpenSuSE on my laptop for
years now.


--
Jorge Godoy 


On Thu, Nov 4, 2010 at 13:00, Michael Gould <
mgo...@intermodalsoftwaresolutions.net> wrote:

> I know that this is probably a "religion" issue but we are looking to move
> Postgres to a Linux server.  We currently have a Windows 2008 R2 active
> directory and all of the other servers are virtualized via VMWare ESXi.  One
> of the reasons is that we want to use a 64 bit Postgres server and the UUID
> processing contrib module does not provide a 64 bit version for Windows.  I
> would also assume that the database, when properly tuned, will probably run
> faster in a *nix environment.
>
> What should I look at in certain distributions, and why?  It appears from
> what I read that Ubuntu is a good desktop but not a server distribution.
>
>
>
> Best Regards
> --
> Michael Gould, Managing Partner
> Intermodal Software Solutions, LLC
> 904.226.0978
> 904.592.5250 fax
>


Re: [GENERAL] Good candidate query for window syntax?

2010-09-10 Thread Jorge Godoy
Have you checked the OVERLAPS operator in the documentation?

http://www.postgresql.org/docs/8.4/interactive/functions-datetime.html
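Just to show how it reads, a sketch against the demo table from your message
(this only counts, per row, how many rows overlap it -- every row overlaps
itself, so the count includes the row):

SELECT d1.id, count(*) AS concurrent
FROM demo d1
JOIN demo d2
  ON (d1.start_time, d1.start_time + d1.duration * interval '1 second')
     OVERLAPS
     (d2.start_time, d2.start_time + d2.duration * interval '1 second')
GROUP BY d1.id;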



--
Jorge Godoy 


On Fri, Sep 10, 2010 at 10:03, Ketema Harris  wrote:

> Hello,  I have a table defined as:
>
> CREATE TABLE demo
> (
>    id serial PRIMARY KEY,
>    start_time timestamp without time zone,
>    duration integer
> );
>
> A sample data set I am working with is:
>
>  start_time  | duration |   end_time
> -+--+-
>  2006-08-28 16:55:11 |   94 | 2006-08-28 16:56:45
>  2006-08-28 16:56:00 |   63 | 2006-08-28 16:57:03
>  2006-08-28 16:56:02 |   25 | 2006-08-28 16:56:27
>  2006-08-28 16:56:20 |   11 | 2006-08-28 16:56:31
>  2006-08-28 16:56:20 |   76 | 2006-08-28 16:57:36
>  2006-08-28 16:56:29 |   67 | 2006-08-28 16:57:36
>  2006-08-28 16:56:45 |   21 | 2006-08-28 16:57:06
>  2006-08-28 16:56:50 |   44 | 2006-08-28 16:57:34
>  2006-08-28 16:56:50 |   36 | 2006-08-28 16:57:26
>  2006-08-28 16:56:53 |   26 | 2006-08-28 16:57:19
>  2006-08-28 16:56:57 |   55 | 2006-08-28 16:57:52
>  2006-08-28 16:57:28 |1 | 2006-08-28 16:57:29
>  2006-08-28 16:57:42 |   17 | 2006-08-28 16:57:59
>  2006-08-28 16:57:46 |   28 | 2006-08-28 16:58:14
>  2006-08-28 16:58:25 |   51 | 2006-08-28 16:59:16
>  2006-08-28 16:58:31 |   20 | 2006-08-28 16:58:51
>  2006-08-28 16:58:35 |   27 | 2006-08-28 16:59:02
>
> generated by the query:
> SELECT start_time, duration, to_timestamp((extract(epoch from start_time) +
> duration))::timestamp as end_time
> FROM demo
> ORDER BY start_time, duration, 3;
>
> My goal is: To find the maximum number of concurrent rows over an arbitrary
> interval.  Concurrent is defined as overlapping in their duration.  Example
> from the set above: Assume the desired interval is one day.  Rows 1 and 2
> are concurrent because row 2's start_time is within the duration of row 1.
>  If you go through the set the max concurrency is 5 (this is a guess because I
> did it visually and may have miscounted). I took a scan of how I tried to
> solve it manually and attached the image.  I tried using timelines to
> visualize the start, duration, and end of each row then looked for where
> they overlapped.
>
>
>
> My desired output set would be:
>
> max_concurrency | interval   (in this case grouped by day)
> +-
>5   |   2006-08-28
>
>
> if the interval for this set were different, say 30 minutes, then I would
> expect to see something like:
> max_concurrency | interval
> +
>0   |   2006-08-28 00:00:00 - 2006-08-28 00:29:59
>0   |   2006-08-28 00:30:00 - 2006-08-28 00:59:59
>0   |   2006-08-28 01:00:00 - 2006-08-28 01:29:59
>...continues.
>0   |   2006-08-28 16:00:00 - 2006-08-28 16:29:59
>5   |   2006-08-28 16:30:00 - 2006-08-28 16:59:59
>
> I think that a query that involves a window could be used to solve this
> question as the documentation says:
> "A window function call represents the application of an aggregate-like
> function over some portion of the rows selected by a query...the window
> function is able to scan all the rows that would be part of the current
> row's group according to the grouping specification"
> I am hoping that someone with more experience could help devise a way to do
> this with a query.  Thanks in advance.
>
>
>


Re: [GENERAL] Cannot access various Postgres sites

2010-04-17 Thread Jorge Godoy
I could read the release notes for 9.0 back then and still can now.  This is
from two different countries (Brazil first and now the US), so it is an
indication that you might be having network issues.

Can you please try changing your DNS servers from whatever you have now to
Google's?

Make 8.8.4.4 primary and 8.8.8.8 secondary and try again.

You can also try using OpenDNS servers.

--
Jorge Godoy 


On Sat, Apr 17, 2010 at 11:57, Eric Ridge  wrote:

> I'm only subscribed to -general and -hackers, so if this message should go
> to a different list, please feel free to forward it along, but I've been
> unable to get to a couple of the Postgres websites for quite awhile.
>
> Back on March 20 Bruce Momjian posted a link in -hackers to the 9.0 release
> notes (http://developer.postgresql.org/pgdocs/postgres/release-9-0.html).
>  It blew my mind that so many were able to discuss that page because
> developer.postgresql.org didn't respond then, and it doesn't respond now.
>  :(  I'd still like to read that page.
>
> Today I wanted to peek inside the JDBC driver sources before I join their
> mailing list and ask a bunch of dumb questions, but jdbc.postgresql.org
> isn't responding either.
>
> I've also seen intermittent connectivity issues with planet.postgresql.org,
> but it seems to be working today.
>
> Is it just my network or is something up with these hosts?
>
> eric
>


Re: [GENERAL] General Question about database -- redirection

2010-04-15 Thread Jorge Godoy
create database db_1 template db_2;

This will create a new DB_1 using DB_2 as a template.  Otherwise, you can
simply change your code to connect to DB_2 instead of DB_1.

--
Jorge Godoy 


On Thu, Apr 15, 2010 at 14:49, akp geek  wrote:

> dear all -
>
>  I am not supposed to ask this.  But the question is: I have 2
> databases, DB_1 and DB_2.  I have a script that loads a few tables in DB_1.  The
> 2 databases are identical.  In case database DB_1 is dropped, is there
> any command that I can issue to use DB_2?
>
>
> Regards
>
>


Re: [GENERAL] serial columns with replication/cluster

2010-02-04 Thread Jorge Godoy
Sean,

take a look at the UUID type.  It might suit you better than serial.

Changing the increment also works, but it puts a lot of restrictions on you
(such as planning to prevent collisions, having to change the increment on
several nodes when adding a new node, etc.).
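A minimal sketch, assuming the contrib uuid-ossp module is installed (table
name invented):

CREATE TABLE orders (
    id        uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    placed_at timestamptz NOT NULL DEFAULT now()
);

Random UUIDs need no coordination between nodes, which is what makes them
attractive for multi-master setups.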


Regards,
--
Jorge Godoy 


On Wed, Feb 3, 2010 at 23:21, Sean Hsien  wrote:

> Dear all,
>
> At work we're considering using postgres in a cluster. But I am
> wondering what implications does multi-master write replication have
> on auto-incrementing serial columns?
> E.g. Do we need to fiddle with the increment amount depending on the
> number of nodes we have?
>
> --
> Thanks and regards,
> Sean
>
>


Re: [GENERAL] Shall I apply normalization in the following case?

2010-02-03 Thread Jorge Godoy
You can always normalize and not use an artificial key.

You'd end up with:


measurement (normalization)
===========================
id | value | measurement_unit | measurement_type
-------------------------------------------------
1    0.23    mm                 width
2    0.38    mm                 width
2    0.72    mm                 width


measurement_unit
================
value
-----
mm
m
cm
in
cm²
m³


measurement_type
================
value
-----
width
area
depth
volume



And so on.  You'd benefit from a normalized structure, you'd have
constraints checking for valid units and types, and you wouldn't need a join
to get the resulting information.
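A minimal DDL sketch of that layout (column types are assumptions):

CREATE TABLE measurement_unit (value text PRIMARY KEY);
CREATE TABLE measurement_type (value text PRIMARY KEY);

CREATE TABLE measurement (
    id               serial PRIMARY KEY,
    value            numeric NOT NULL,
    measurement_unit text NOT NULL REFERENCES measurement_unit (value),
    measurement_type text NOT NULL REFERENCES measurement_type (value)
);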



--
Jorge Godoy 


On Wed, Feb 3, 2010 at 23:20, Yan Cheng Cheok  wrote:

>
> For example, for the following table,
>
>
> measurement (without normalization)
> ====================================
> id | value | measurement_unit | measurement_type
> -------------------------------------------------
> 1    0.23    mm                 width
> 2    0.38    mm                 width
> 2    0.72    mm                 width
>
>
> If I normalize to the following format, I will encounter several problems
> compared to the table without normalization
>
>
>
> measurement (normalization)
> ===========================
> id | value | measurement_unit_id | measurement_type_id
> -------------------------------------------------------
> 1    0.23    1                     1
> 2    0.38    1                     1
> 2    0.72    1                     1
>
>
> measurement_unit_id
> ===
> id | value
> --
> 1  | mm
>
>
> measurement_type_id
> ===
> id | value
> --
> 1  | width
>
>
> (1) When rows grow to a few million in the measurement table, the join on
> the normalized tables is *much* slower compared to the non-normalized
> table.
>
> For one of the most frequently used queries, to retrieve "value",
> "measurement_unit" and "measurement_type", I need to join measurement +
> measurement_unit_id + measurement_type_id.
>
> For the non-normalized table, I do NOT need a join.
>
> Right now, I cannot justify why I need to use normalization.  I am afraid I
> am missing several important points when I turn to the un-normalized
> solution.  Please advise if you see that I am missing something important.
>
> Thanks
> Yan Cheng CHEOK
>
>
>
>
>
>


Re: [GENERAL] Experience with many schemas vs many databases

2009-11-15 Thread Jorge Godoy
Frank,

I had the same question a while ago, and another thing that made me think
was the amount of data per user.

In the end, I decided on using a single DB and a single schema and adding a
column (customer_id) to split everything by customer.

I then added an index on that column and my code became simpler and fast
enough.

This also allowed me to work with some other aggregates that provided very
useful "global" statistics.

--
Jorge Godoy 


On Sun, Nov 15, 2009 at 04:28, undisclosed user
wrote:

> Hello everyone,
>
> I have hit a wall on completing a solution I am working on.  Originally, the
> app used a db per user (on MyISAM)... the solution did not fare so well in
> reliability and performance.  I have been increasingly interested in Postgres
> lately.
>
> Currently, I have about 30-35k users/databases.  The general table layout is
> the same... only the data is different.  I don't need to share data across
> databases.  Very similar to a multi-tenant design.
>
> Here are a few questions I have:
>
> 1. Could postgres support this many DBs? Are there any weird things that
> happen when postgres is used this way?
> 2. Is the schema method better? Performance, maintainability, backups,
> vacuum? Weird issues?
>
>
> Any insight is greatly appreciated.
>
> Thanks.
> Frank
>


[GENERAL] Using WITH queries on VIEWs

2009-08-30 Thread Jorge Godoy
Hi!


Is it possible to use WITH queries (
http://www.postgresql.org/docs/8.4/interactive/queries-with.html) on VIEWs?

I have a rather complex view that I could optimize with it...
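Something along these lines is what I have in mind (table and column names
invented):

CREATE VIEW account_totals AS
    WITH sums AS (
        SELECT account_id, sum(amount) AS total
        FROM ledger
        GROUP BY account_id
    )
    SELECT account_id, total
    FROM sums
    WHERE total > 0;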


Regards,
--
Jorge Godoy 


Re: [GENERAL] storing repeating dates / events

2008-09-06 Thread Jorge Godoy
On Saturday 06 September 2008 14:34:22 [EMAIL PROTECTED] wrote:
> Yes, I am trying to build an alerting system where events will repeat
> weekly/monthly/annually.
>
> What is the best way to store this kind of information in postgres?

Use timestamps.  And model your application with them.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>






Re: [GENERAL] storing repeating dates / events

2008-09-06 Thread Jorge Godoy
On Saturday 06 September 2008 14:12:49 [EMAIL PROTECTED] wrote:
> Hi
> I want to store dates / events, for example birthdays (or May 5th), that
> repeat every year.
> What is the best way to do this in postgres?
> If I use a timestamp it is going to use the current year... how do I do
> this?

Along with the timestamp, store a boolean that indicates whether the event
should consider the year or not.

But you can surely design it better, especially for things that repeat on
intervals other than yearly.
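A minimal sketch of that idea (names invented):

CREATE TABLE event (
    id             serial PRIMARY KEY,
    happens_on     date NOT NULL,
    repeats_yearly boolean NOT NULL DEFAULT false,
    description    text
);

-- events falling on today's date, with or without the year
SELECT *
FROM event
WHERE (repeats_yearly
       AND extract(month FROM happens_on) = extract(month FROM current_date)
       AND extract(day   FROM happens_on) = extract(day   FROM current_date))
   OR happens_on = current_date;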

-- 
Jorge Godoy  <[EMAIL PROTECTED]>






Re: [GENERAL] upgrading from 8.1.4 to 8.3.3

2008-08-28 Thread Jorge Godoy
On Thursday 28 August 2008 06:53:28 Joao Ferreira gmail wrote:
> Hello all,
>
> I'm getting this error after installing pg-8.3.3 on a test system which
> had 8.1.4
>
> shell> su postgres -c "/usr/bin/postmaster -D /var/pgsql/data"
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.1,
> which is not compatible with this version 8.3.3.
>
>
> I guess I should get a pg_dumpall before upgrading and put the data back
> after the upgrade.
>
> Is this correct ?
>
> Is there a more apropriate approach ?

The dump & restore approach for major version upgrades is the appropriate, 
correct and documented way to upgrade.

Check the docs and it is there.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>






Re: [GENERAL] Probably been asked a hundred times before.

2008-06-24 Thread Jorge Godoy
On Tuesday 24 June 2008 11:30:14 David Siebert wrote:
> Which distro is best for running a Postgres server?
> I just installed OpenSuse and downloaded and compiled the latest version
> of Postgres.  It isn't that big of a hassle but I noticed that almost
> none of the big distros keep all that up to date with Postgres as far as
> what they have in their repositories.
>
> I was wondering if anybody has made an Postgres centric distro?  Just a
> nice stripped down server Distro that is ideal for running a database
> server on?
>
> More just curious since I already got mine installed and compiled.

I'm running OpenSuSE 11.0 and I have PostgreSQL 8.3.1 right from the 
installation DVD.

I know it isn't 8.3.3, but they couldn't package and test everything until the 
release date.

There are packagers for Fedora here...  But I would rather compile PG myself 
than switch to Fedora just because of a PostgreSQL package.

Generating new RPMs / updating existing ones isn't so hard.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>






Re: [GENERAL] Database design questions

2008-06-18 Thread Jorge Godoy
On Wednesday 18 June 2008 05:43:25 David wrote:
> Hi list.
>
> There are some database design-related issues I've pondered about for some
> time.
>
> But first:
>
> * Is this the correct list to ask these questions on?
>
> * Should I split this into separate threads instead of 1 thread for
> all my questions?

I would submit all of the questions in separate messages.  It is tiresome to 
read everything, you'll loose a lot of context after one or two messages 
levels or reply and people won't read the email because of its size.

Regards,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>






Re: [GENERAL] Source RPM for 8.3.3?

2008-06-14 Thread Jorge Godoy
On Saturday 14 June 2008 10:33:15 Devrim GÜNDÜZ wrote:
> Hi,
>
> On Fri, 2008-06-13 at 10:24 -0700, Kevin Regan wrote:
> > I was browsing the postgresql download site, but I wasn't able to find
> > the source RPM for 8.3.3.  Is it available on the site?
>
> I just finished uploading 8.3.3 SRPMs. They will be on
> ftp.postgresql.org in a few hours, after sync.
>
> Also you can grab them from here:
>
> http://yum.pgsqlrpms.org/srpms/8.3/
>
> Regards,

Devrim,

have you ever thought about using OpenSuSE's build service?  Your RPMs are 
more complete -- i.e. they include the scripting languages as well -- than the 
ones available for OpenSuSE...

And the build service already creates packages for Fedora and other 
distributions.

https://build.opensuse.org/

Here's a list of supported distributions:


Distribution: openSUSE Factory, openSUSE 10.3, openSUSE 10.2, SUSE Linux 10.1,
SLE 10, SUSE Linux 10.0, SUSE Linux 9.3, SLES 9, CentOS 5, RHEL 4, RHEL 5,
Fedora 6 with Extras, Fedora 7 with Extras, Fedora 8 with Extras,
Mandriva 2006, Mandriva 2007, Mandriva 2008

It lacks Fedora 9, though...  But I believe that it wouldn't be hard to have
it added to the list, especially to have more people using those "standard"
RPM packages for all the distributions above.


Regards,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>





Re: [GENERAL] Check if table or View exists

2008-03-31 Thread Jorge Godoy
On Monday 31 March 2008 22:35:01 Jeff Williams wrote:
> I am developing an application where I will have a default table and/or
> view for a select statement for my application.  If a client has a special
> requirement I would like to have an alternate table of the same name in a
> different schema with the changed structure.
>
> Schema A
>Default table - coretable.foo
>
> Schema B
>Client specific table client.foo
>
> How can I check if a table or alternate view exists in schema B.
>
> ie
>
> if client.foo exists // check for a view or table
>   // my code will use client.foo table
> else
>   // my code will use coretable.foo
> endif
>
> Also, is it possible to do the check via a function/view in postgres and not
> have to rely on my application to do the check?  If so, how?

I'd give both the same name and put the client schema first in the search
path.  Then you just run your query and it will follow the search path order
and give you what you want without having to check anything.
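A minimal sketch of that, using the schema names from your message:

SET search_path TO client, coretable, public;

-- an unqualified reference now resolves to client.foo when it exists,
-- and falls back to coretable.foo otherwise
SELECT * FROM foo;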

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Using tables in other PostGreSQL database

2008-03-27 Thread Jorge Godoy
On Thursday 27 March 2008 08:29:04 Pettis, Barry wrote:
> An addon...  Being self-schooled in databases, to me this seems to be a
> kludge.  If you work in a large company environment, the odds that
> someone somewhere is already storing or collecting data that you need
> (by this I mean base data) are probably pretty high.  So why, if
> PostGre is so old/established, does the ability to share information
> between databases have to be done through an add-on?
>
> So let me give an example to help clarify.
> 1.  I work in a manufacturing environment
> 2.  Our product can have 150 to 450 different / unique process steps
> 3.  We have a description of each process step
> 4.  So with a product we can look at it's flow and see the descriptions
> of each step
>
> Now say person A pulls this information on a daily basis and then
> summarizes the product manufacturing information and creates a table
> that has say the total number of process modules ( aka group of steps ),
> the total number of steps, the total number of a particular type of
> step.
>
> Now let's say that another person NEEDS that very information in a query
> or table in their own database.  Are you saying that each person needs
> to generate this?  To me the sharing of information seems to be so basic
> that within a given postgres server, as long as you have access to a
> given database, you should be able to use the data stored there.  And
> that ability should be a rudimentary ability, not an addon.
>
> That is the reason why I don't have the ability to install addons onto the
> database.

It sounds to me like your company could make good use of a DBA to organize
all that.

Users should just use the data, not plan the database and keep multiple copies 
of information around.

One person designing all this would be able to organize the information, keep 
its integrity, safety / secrecy and while doing all that also provide the 
people using the information a better way to get it.

If everyone is creating their own database, then getting access to the
information isn't the biggest problem.  Guaranteeing that all reports are
generated from the same information is -- imagine sales reporting something
from last month while marketing is doing the same for this month and
manufacturing is interested in the history for the same month but comparing
it to the last three years' history.  A big mess...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Connect to postgres from a dynamic IP

2008-03-03 Thread Jorge Godoy
On Monday 03 March 2008 13:17:03 you wrote:
>
> My understanding is no password is sent in the clear with md5 per:
>
> http://www.postgresql.org/docs/8.3/interactive/auth-methods.html#AUTH-PASSW
>ORD

But the MD5 hash is.  This page states that the password can't be directly 
sniffed, but one can still get the hash of the password and perform a 
dictionary attack against it on a local copy (i.e., without ever trying to 
connect to the server).

After a successful attack then one can connect directly to the server as if 
the password was known to him/her.

Encrypting the channel -- be it with SSL or SSH, for example -- will prevent
the sniffer from being able to capture the hash, so your password will be
safer.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Connect to postgres from a dynamic IP

2008-03-03 Thread Jorge Godoy
On Monday 03 March 2008 08:08:36 Raymond O'Donnell wrote:
> On 03/03/2008 11:01, dfx wrote:
> > The question il: Is there a method to avoid to insert the addesses of
> > the clients in the pg_hba.conf and to allow connections from internet
> > with security assured only by username and password?
>
> Yes, that's what people have been explaining: you insert a line
> something like:
>
>host  [database]   [user]   0.0.0.0/0   md5

But make it "hostssl" instead of "host", to require some cryptography in the 
channel used, specially to authenticate the connection.

Opening your access to everyone without crypto sounds like something you don't 
want to do.  Specially if users can change their own passwords... 


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Connect to postgres from a dynamic IP

2008-03-03 Thread Jorge Godoy
On Monday 03 March 2008 07:01:17 dfx wrote:

> Is it possible to connect to a PostgreSQL server from a client with a dynamic
> IP (or from an unknown IP)?
>
> How I have to configure pg_hba.conf (and/or eventually my router, where the
> server is located)?

You'll have to make PostgreSQL accept connections from ANY IP in the possible 
range assigned to the networks where your customers are.  There are IP ranges 
for specific countries available...  You can also allow access from anywhere 
in the world.

I'd also recommend that you require the use of some cryptography such as SSL 
for that.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] need some help on figuring out how to write a query

2008-02-23 Thread Jorge Godoy
On Friday 22 February 2008 13:10:20 Justin wrote:
> I'm not after small code, I'm after maintainable code, where the date
> range for this report would always be changing.  So if I can get it into
> one select statement I would only have to pass in some variables and it
> would create the moving average.  Plus, what if the sales people decide
> they want to change the moving average from 10 weeks to 5 weeks or change it
> to 15 weeks?  People drive me nuts with "I want it to do this or that".
> Of course they have no idea how complicated it sometimes is to get what
> they want.

Then just add more parameters to your function.  Instead of just start date 
also include an optional period, in weeks, days, hours, whatever you think is 
the better granularity for this.

So, in your interface, add the period and make "10" the default value. 



-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] need some help on figuring out how to write a query

2008-02-22 Thread Jorge Godoy
On Thursday 21 February 2008 18:37:47 Justin wrote:

> Now I could write a function to do this or do it in a C++ program that
> creates a query with all kinds of unions.  I'm wondering if there is a way
> to create this in a single select statement?
> I can't think of a way to do it...

Why do you need it in one query?  Think of maintainability, not code size.

Solve the problem in parts, calculating it for one week -- or ten, you can use 
the interval type -- and then moving on...

The function would look like: 

WHILE start_date + '10 weeks'::interval < today:
    SELECT INTO weeks_avg avg(value) FROM table
        WHERE date BETWEEN start_date AND start_date + '10 weeks'::interval;
    start_date := start_date + '1 week'::interval;
END WHILE;


Start from that and you'll have it done.  (Of course, above is pseudo code and 
untested...)
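A runnable version of that sketch could look like this (table and column
names are assumptions):

CREATE OR REPLACE FUNCTION moving_avg(p_start date, p_period interval)
RETURNS SETOF numeric AS $$
DECLARE
    cur_start date := p_start;
    weeks_avg numeric;
BEGIN
    WHILE cur_start + p_period < current_date LOOP
        SELECT avg(value) INTO weeks_avg
          FROM sales
         WHERE sale_date BETWEEN cur_start AND cur_start + p_period;
        RETURN NEXT weeks_avg;
        cur_start := cur_start + interval '1 week';
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- e.g.: SELECT * FROM moving_avg('2008-01-01', '10 weeks');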

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] PostgreSQL 8.3.0 RPMs are available for download

2008-02-22 Thread Jorge Godoy
On Thursday 21 February 2008 14:33:03 Julio Cesar Leiva wrote:
> I found the RPMs for postgresql8.3 suse here
>
> http://software.opensuse.org/search
>
> just type postgresql and choose your O.S

But then, there are no PL and contrib and whatever packages to go with it.  It 
is still just the client and server, without the bells and whistles :-)

If you use other packages that depend on PG you can't use these.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] longest prefix match

2008-02-20 Thread Jorge Godoy
On Wednesday 20 February 2008 05:55:07 Dragan Zubac wrote:
>
> Anybody got any ideas/experiences/links for a 'longest prefix match'
> solution in PostgreSQL?
> Basically, put some telephone prefixes in some kind of trie, and be able
> to perform fast lookups?

Prefix or suffix?  

For a prefix you can use "SELECT number FROM table WHERE number LIKE '123%'".

For a suffix you move the "%" to the beginning of the string, but then you lose
the ability to use indices.  (Unfortunately, using suffixes is really
interesting for caller IDs since you don't always receive the area code,
country code, etc.)
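A sketch of the prefix case (table name invented); with a non-C locale the
text_pattern_ops opclass lets a b-tree index serve anchored LIKE scans:

CREATE INDEX number_prefix_idx ON phone_numbers (number text_pattern_ops);

SELECT number FROM phone_numbers WHERE number LIKE '123%';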

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL]

2008-02-20 Thread Jorge Godoy
On Wednesday 20 February 2008 03:56:37 Scott Marlowe wrote:
> On Feb 19, 2008 11:39 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> http://www.postgresql.org/docs/8.3/static/plpgsql-cursors.html
> section 38.7.3.5.  But then I go to the index page for plpgsql at
> http://www.postgresql.org/docs/8.3/static/plpgsql.html and there are
> only entries for 38.7.5.1 through 3...
>
> Is there some problem with the doc rendering going on here?  Pic of
> what I'm seeing attached.

I believe that to save "space" just two levels of the index are being shown.  
Maybe the titles should allow guessing the contents better...  Or maybe some 
items should be promoted to an upper level :-)

It is very uncommon to go all the way down to the deepest level of an index in a summary.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] the feasibility of sending email from stored procedure in Postgres

2008-02-17 Thread Jorge Godoy
On Friday 15 February 2008 12:36:37 Adam Rich wrote:
> > I would instead queue messages (or suitable information about them) in
> > a table, and have a process outside PostgreSQL periodically poll for them
>
> Why poll when you can wait?
>
> http://www.postgresql.org/docs/8.2/interactive/sql-notify.html

To use cron and prevent system resources being allocated when not needed?

To prevent another external system from keeping a connection open to the
database?

To guarantee that if the external system crashed somehow during one batch 
execution it would still keep sending emails when it was activated again?


I can see notify being useful to GUI clients that need to update some 
information in near real time.  Even for web applications it is easier to 
code a "poll" and a refresh using AJAX to prevent full screen redraws. 

I don't see LISTEN/NOTIFY as a useful tool for sending emails, though.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Universal libpq.a ?

2008-02-05 Thread Jorge Godoy
On Tuesday 05 February 2008 13:27:08 Larry Rosenman wrote:
> On Tue, 5 Feb 2008, Dave Page wrote:
> >
> > Yeah, but can you wrap it all up into a patch to the build system that
> > Tom would approve of? :)
>
> I knew that was coming, and it's on my list :)

The hardest part: "that Tom would approve" :-)


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] [pgsql-advocacy] PostgreSQL Certification

2008-02-04 Thread Jorge Godoy
On Monday 04 February 2008 14:38:31 Csaba Nagy wrote:

> Why don't you go ahead and create those special lists and make general
> collect all of them ? Some sort of hierarchy of lists... if doable at
> all, that could make everybody happy...

That's an excellent idea. 


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] PGSQL ERROR: FATAL: terminating connection due to administrator command

2008-02-04 Thread Jorge Godoy
On Monday 04 February 2008 08:42:16 [EMAIL PROTECTED] wrote:
> I have a PostgreSQL 8.2.6 on FreeBSD 6.3 (but the described behavior
> appears during several earlier minor versions as well) - which powers a
> php based web application. What I experience is the message (below)
> which appears during the first 5-7 clicks after the database has been
> cleanly imported (dropped and imported) -
>
> PGSQL ERROR: FATAL: terminating connection due to administrator command
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
>
> The message does not appear on every click. It can appear on the 1st,
> then not on the 2nd, then on the 3rd and so on. After few more clicks
> everything goes well until the next import.
>
> I could not find anything meaningful in the logs...
>
> Has somebody else experienced anything similar?

Have you checked your code and seen what is really happening on these clicks?  
I had some cases where looking for nonexistent information or trying to
operate on nonexistent data crashed the specific backend process that
was serving my application.

Enable query logging and see what is happening.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] [OT] "advanced" database design (long)

2008-02-04 Thread Jorge Godoy
On Monday 04 February 2008 07:03:47 Dawid Kuroczko wrote:
> On Feb 4, 2008 5:14 AM, Alex Turner <[EMAIL PROTECTED]> wrote:
> > I"m not a database expert, but wouldn't
>
> [...]
>
> > give you a lot less  pages to load than building a table with say 90
> > columns in it that are all null, which would result in better rather than
> > worse performance?
>
> Well, but PostgreSQL's NULLs occupy almost no space, or rather a bit of
> space -- one bit, exactly. ;-)  I am pretty sure that, storage-wise,
> NULLs are more efficient.

I'd say 1 byte every 8 NULLs instead of 1 bit.  If you only have 1 NULL, it 
will cost you 1 byte (not 1 bit).  If you have 9, it will cost you 2 bytes 
(not 9 bits).

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Better alternative for Primary Key then serial??

2007-12-13 Thread Jorge Godoy
On Wednesday 12 December 2007 03:42:55 pilzner wrote:
>
> Does stuff like this cause any aches and pains to developers out there, or
> do I just need to get in a new mindset??? Also, is there a way to be sure
> the primary key is *ONLY* ever given a value by serial, and not subject to
> updates???

Shouldn't the definition of a primary key be an immutable thing that is unique 
to the row?  If you change it, then it is not immutable anymore...

You can also have triggers to prevent PK updates...  But I wouldn't go this 
route.

Why do you need to update the PKs of your data?


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Creating Aggregate functions in PLpgSQL

2007-12-13 Thread Jorge Godoy
On Wednesday 12 December 2007 11:57:48 Richard Broersma Jr wrote:

> There is a power distribution hierarchy that they would like to analyze.
>
> 1) the average pF of all motors in a Motor Control Center (MCC).
> 2) the average pF of all MCCs that are fed from a Load Center (LC).
> 3) the average pF of all LCs that are fed from the facility main feed.
>
> The pF is measured between 0 and 1, 1 being purely effective power and 0
> being purely reactive power.  The EEs want to identify MCCs or LCs that may
> need capacitor banks to help offset the effect of the reactive power
> inherent in the inductive load of motors.

Actually pF is measured from -1 to 1.

There is also the question of what type of reactive power (inductive or
capacitive) you have in your facility.  Since you are working with motors you
are worried about the inductive type, but it would be nice to be ready before
they tell you that they want to do some pF correction on the facility and you
need to have that counted as well.




-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Hijack!

2007-12-12 Thread Jorge Godoy
On Wednesday 12 December 2007 10:39:32 Alvaro Herrera wrote:
>
> It's not just you.  Much as I am annoyed by top-posting, I am much more
> so by people who top-post at the bottom.  Hey, did I say something
> stupid?  No -- think about it.  These guys do exactly the same thing as
> top-posters, except it is much worse because the actual text they wrote
> is harder to find.

The worst thing is people who bottom-post on top-posted messages...  Can you
see the mess?


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Hijack!

2007-12-12 Thread Jorge Godoy
On Tuesday 11 December 2007 15:47:27 Joshua D. Drake wrote:
> On Tue, 11 Dec 2007 17:37:27 +
>
> Gregory Stark <[EMAIL PROTECTED]> wrote:
> > "Gregory Williamson" <[EMAIL PROTECTED]> writes:
> > > Confidentiality Notice: This e-mail message, including any
> > > attachments, is for the sole use of the intended recipient(s) and
> > > may contain confidential and privileged information and must be
> > > protected in accordance with those provisions. Any unauthorized
> > > review, use, disclosure or distribution is prohibited. If you are
> > > not the intended recipient, please contact the sender by reply
> > > e-mail and destroy all copies of the original message.
> >
> > FWIW this would be another item on the netiquette FAQ.
>
> O.k. but the above is *not* user controlled. I think the community
> needs to suck it up and live with that.

And the good thing is that the indiscriminate use of those disclaimers tends
to make them void when really needed.  After all, if there are lots of
legitimate and intended posts to public mailing lists, who would guess that
something that is there shouldn't be? :-)

Always a good reference: http://www.goldmark.org/jeff/stupid-disclaimers/


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Recommendations for a datasync scenario ?

2007-12-02 Thread Jorge Godoy
On Friday 30 November 2007 05:02:25 Aarni Ruuhimäki wrote:
>
> I followed the recent thread about 'replication in Postgres' but still any
> info on experience of similar circumstances and pointers / comments /
> recommendations are more than welcome.

Your problem is not one where replication would help so much.  You can design
your solution so that you *always* feed a local database and this local
database syncs, from time to time, with a central database.

What to use for syncs?  Some script you can write easily.

All entries should have the location as part of their PK, so that there are no 
clashes and you don't need to rely on sequences and the correct value being 
used everywhere.

It is the same kind of problem that we have in supermarkets and POSs: the POS
has to sell even if the connection to the server is down.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] plpython array support

2007-11-19 Thread Jorge Godoy
On Monday 19 November 2007 19:29:51 Sean Davis wrote:
> Just a simple question--does plpythonu (8.3beta) have support for
> arrays?  I don't see a specific mention of it in the docs, so I
> suppose not.

Arrays have worked for a long time now.  I've been using them since 8.1, for
sure, but I think that on 7.4 I already had code with arrays inside the
database...

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] view management

2007-11-18 Thread Jorge Godoy
On Friday 16 November 2007 18:57:24 Ed L. wrote:
>
> I often have need for views that reference views that reference
> views, and so on.  When I need to make a small update to one of
> the views, I am faced with having to drop and recreate all
> dependent views even if the driving change just adds another
> column to the view, for example.  I might have to drop and
> recreate many tens of views just to make a change to a single
> view.  What a PITA.  How do others manage this?

I have the same problem.

DB2 does have some kind of path invalidation where you have to recreate the 
path to make your views usable again.

So, it is possible to work with views like "SELECT * FROM table" to leverage 
table changes and things like that.

If PostgreSQL had a similar feature to "block" the usage of one or more views
until another command was issued, to avoid that need of dropping and
recreating all dependent objects, it would be great.

A suboptimal alternative would be allowing the number of columns to be greater
than it was before -- i.e., adding new columns to the view -- but blocking name
changes for those columns and the removal of columns.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?

2007-11-11 Thread Jorge Godoy
On Sunday 11 November 2007 04:51:20 rihad wrote:
> pre-hydration, too. This all resulted in much effort not directly
> related to the business logic of my app, but rather on overriding
> Symfony's way of doing everyday web-programming tasks (like form
> validation, results listing, editing). Now I don't really want to work
> around further design inefficiencies of Symfony/Propel by trying
> updatable views. Really frustrating. Easier to just forgo any
> web-framework and write quality code yourself instead, like
> phpclasses.org's Manuel Lemos once said in his article... That said,
> Symfony 1.1-DEV/Doctrine begin to look promising.

I hope it works in the next release then...  It looks like this ORM is so 
broken that ...  I can't say how it was chosen for your project.

Anyway, you are trying to solve problems from your ORM inside your database,
which has no problems.  This makes things hard.  And they get even harder when
you refuse to use advanced features of the database that would help solve
the problems...

I use ORMs in Python and I don't have this problem.  I work with views, 
functions, big tables, etc. and performance gets better and better every time 
I optimize *any* of the two sides: application or database.  It is always an 
enhancement, as if I was coding things directly by hand.



-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] any way for ORDER BY x to imply NULLS FIRST in 8.3?

2007-11-08 Thread Jorge Godoy
On Wednesday 07 November 2007 13:54:32 rihad wrote:
>
> May I, as an outsider, comment? :) I really think of ASC NULLS FIRST
> (and DESC NULLS LAST) as the way to go. Imagine a last_login column that
> sorts users that have not logged in as the most recently logged in,
> which is not very intuitive. I vote for sort_nulls_first defaulting to
> false in order not to break bc.

But then, when ordering by login date, you should use COALESCE and infinity 
for them 
(http://www.postgresql.org/docs/8.2/interactive/datatype-datetime.html).
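A minimal sketch of that (column names invented): users who never logged in
then sort as if they had logged in infinitely long ago.

SELECT username, last_login
  FROM users
 ORDER BY COALESCE(last_login, '-infinity'::timestamp) DESC;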



-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Populating large DB from Perl script

2007-11-01 Thread Jorge Godoy
On Thursday 01 November 2007 16:57:36 Kynn Jones wrote:
>
> But it occurred to me that this is a generic enough problem, and that
> I'm probably re-inventing a thoroughly invented wheel.  Are there
> standard techniques or resources or Pg capabilities to deal with this
> sort of situation?

You can restore the database without the constraints and then add them back
after you have restored the last table.

You can also use pg_dump / pg_restore / psql to do that.

You can also use COPY.

When you want to keep the referential integrity checks in place from the
beginning, you have to respect the order in which you restore your data.
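A sketch of the load-first, constrain-later order (names and path invented):

COPY line_item FROM '/tmp/line_item.csv' WITH CSV;

ALTER TABLE line_item
    ADD CONSTRAINT line_item_order_fk
    FOREIGN KEY (order_id) REFERENCES orders (id);

The ALTER TABLE validates all rows in one pass instead of checking row by row
during the load.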

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] print command in sproc

2007-10-29 Thread Jorge Godoy
On Tuesday 23 October 2007 07:17:53 Goboxe wrote:
> Hi,
>
> What is the equivalent MSSQL 'print' command in pg sproc?

What does the MSSQL 'print' command print?

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Postgres SQL Client for Suse Linux

2007-10-29 Thread Jorge Godoy
On Tuesday 23 October 2007 09:03:23 yogesh wrote:
> Hello Friends,
>
> I have installed postgres on Suse Linux using YAST-->Software.
> I have to check and manipulate the tables and check the data... is
> there any client application that works on Suse
> Linux?

Sure!

A lot of them.  They include psql, OpenOffice.org, gda, ODBC and others.  If
you can be more specific it will be easier to help you.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] select count() out of memory

2007-10-25 Thread Jorge Godoy
On Thursday 25 October 2007 13:36:09 [EMAIL PROTECTED] wrote:
>
> Regarding dumps and restore; the system will always be offline during
> those operations and it will be so for several days, because a new project
> might start at another location in the world, so the travelling there
> takes time. In the mean time, all admin tasks can be performed without
> problems, even backup operations that take 3 days.

This sounds a lot like oil exploration...  Data gathered from sensors is 
usually a few TBs, explosions have definite intervals, interference between 
sensors, etc.

Putting the data inside the DB fast is part of the solution; getting it out
fast to be processed / analyzed is another part.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] running postgres

2007-10-18 Thread Jorge Godoy
On Thursday 18 October 2007 14:57:27 brian wrote:
> Ken Johansson wrote:
> > Hello,
> >
> > Sorry everyone for the stupid rant.  Won't happen again.  And thanks for
> > offering assistance.  I seem to have it running and once I have a valid
> > question I'll post it.
> >
> > Sorry again..
> >
> > Ken
>
> We'll put it down as "thinking out loud" then. It happens. ;-)

OK...  But was it harder or easier than MS SQL Server? :-)



-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Crosstab Problems

2007-10-18 Thread Jorge Godoy
On Thursday 18 October 2007 16:37:59 Joe Conway wrote:
> Tom Lane wrote:
> > so it's trying to pstrdup a null result from SPI_getvalue.
> >
> > Obviously it shouldn't crash, but I'm not sure what it *should* do in
> > this case.  Joe?
>
> The row is pretty useless without a rowid in this context -- it seems
> like the best thing to do would be to skip those rows entirely. Of
> course you could argue I suppose that it ought to throw an ERROR and
> bail out entirely. Maybe a good compromise would be to skip the row but
> throw a NOTICE?

If I were using it and having this problem I'd rather have an ERROR.  It isn't
uncommon for people not to look at their logs, and it isn't uncommon for them
to just run commands from some language using a database adapter that might not
surface the NOTICE output.  The ERROR wouldn't pass unnoticed.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Suggestions for Remote Procedure Calls from PG, please?

2007-10-18 Thread Jorge Godoy
On Thursday 18 October 2007 01:44:33 Bret Schuhmacher wrote:
>
> Thanks for the reply, Tom.  I was thinking I could have my remote
> process send a message back to PG via XMLBlaster, too.  XMLBlaster is
> a MOM-like message-queuing app that guarantees delivery to
> subscribers. (www.xmlblaster.org).  The problem, as you stated,
> though, is transactional integrity :-(.  Hmmm, I'll see about the
> to-do queue idea.

You can try implementing a queue, Bret.  Make it a FIFO queue and poll from it
regularly.  Then make your transaction insert a record into that queue and take
your action based on that.

A table as simple as:

id          SERIAL,  -- you can have a routine to reset this when empty
table_name  TEXT,    -- includes schema ("table" alone is a reserved word)
primary_key TEXT     -- to allow for numeric and text PKs

would allow you to retrieve the row that has been changed and take your action 
based on that.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] Design Question (Time Series Data)

2007-10-04 Thread Jorge Godoy
On Thursday 04 October 2007 06:20:19 Pavel Stehule wrote:
>
> I had a good experience with the 2nd variant.  PostgreSQL needs 24 bytes for
> the head of every row, so it isn't very effective to store one field per
> row.  You can simply do the transformation between array and table now.

But then you'll make all SQL operations complex, you will have problems using
aggregates, etc.  For example, with a normalized design one can query the
average value of a specific series using simple commands, and given the use of
indices this could be highly optimized.  Now, using an array, he'll be doing
a seqscan on every row because he needs to find whether there was a value for
the given series, then he'll need to extract those values and finally calculate
the average (I know you can select an element of the array, but it won't be
easy on the planner or the loop to calculate the average because they'll need
to do all that on every row).

I'd use the same solution that he was going to: normalized table including a 
timestamp (with TZ because of daylight saving times...), a column with a FK 
to a series table and the value itself.  Index the first two columns (if 
you're searching using the value as a parameter, then index it as well) and 
this would be the basis of my design for this specific condition.
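A minimal DDL sketch of that design (names invented):

CREATE TABLE series (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE observation (
    observed_at timestamptz NOT NULL,
    series_id   integer NOT NULL REFERENCES series (id),
    value       numeric NOT NULL
);

CREATE INDEX observation_time_series_idx ON observation (observed_at, series_id);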

Having good statistics and tuning autovacuum will also help a lot on handling 
new inserts and deletes.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>




Re: [GENERAL] COPY for .CSV files problem

2007-10-01 Thread Jorge Godoy
On Monday 01 October 2007 05:20:52 pere roca wrote:
>   Hi everybody,
>   I want to enter a .CSV file using COPY comand and plpgsql. It enters
> lat,lon and some data. In the CSV data there is no field (such as
> "user_name" or current_time) that allow distinguish future queries for
> different users (ex: select x,y from table where user_name=z; after
> entering lat,lon I would like to update the_geom in  another table using
> the lat,lon of ONLY this user).
>
>   I could update this kind of data ("user_name") using php but next user
> could not enter his data by COPY (it only lets fill the destination table
> if ALL the fields are the same as the CSV or text file).
>   So, I have a problem. I thought that may be I can reference/select data
> of a SINGLE USER using parameters like default_time without need to insert
> this data as a field in the table (otherwise, problems with COPY). Maybe
> can I use catalog?

COPY won't allow you to process your data.  You either write something to 
insert record by record or you change your data to include the needed 
information before using COPY.

I would change the input file to include the user name.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postgresql and SSL

2007-09-20 Thread Jorge Godoy
On Thursday 20 September 2007 11:41:00 Tom Lane wrote:
> "Albe Laurenz" <[EMAIL PROTECTED]> writes:
> > Jorge Godoy wrote:
> >> Even though one can require connections using only SSL on the
> >> server side, I don't see a method (in pg_hba.conf) that
> >> would allow clients with SSL certificates.
> >
> > Nor do I.
>
> If you mean *require* clients to have certificates, that's not
> determined by pg_hba.conf, it's determined by whether you provide
> a root.crt file.  See
> http://www.postgresql.org/docs/8.2/static/ssl-tcp.html

Thank you!  Complemented with 
http://www.postgresql.org/docs/8.2/static/libpq-ssl.html  this is exactly 
what we were guessing the OP asked for...

I'll have to check whether the libraries I use support that.  It would be much more 
interesting to change certificates once a year than to hardcode passwords in 
code...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql and SSL

2007-09-20 Thread Jorge Godoy
On Thursday 20 September 2007 05:15:39 Albe Laurenz wrote:
>
> What does pg_hba.conf look like?

I don't know if I misunderstood him, but I thought he was willing to have SSL 
on both sides, i.e., both the client and the server identify themselves 
through SSL certificates.

Even though one can require connections using only SSL on the server side, I 
don't see a method (in pg_hba.conf) that would allow clients with SSL 
certificates.

All I have here is:

# METHOD can be "trust", "reject", "md5", "crypt", "password",
# "krb5", "ident", "pam" or "ldap".  Note that "password" sends passwords
# in clear text; "md5" is preferred since it sends encrypted passwords.


At least, this is what I understand by "client certificate"...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Partial index with regexp not working

2007-09-12 Thread Jorge Godoy
On Wednesday 12 September 2007 09:34:55 Richard Huxton wrote:

> To be honest, I'd probably just have a separate column "uid_type", set
> it when creating the user and then just have a partial index WHERE
> uid_type='IP'

Or have a separate column with the user ID and have "(anonymous)" when you 
have the IP address on the table (maybe even NULL, it all depends on what you 
want). 
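
Just to illustrate both ideas (untested; the table name is made up, uid / 
uid_type are from Richard's suggestion):

-- a type column plus a partial index:
CREATE INDEX users_ip_idx ON users (uid) WHERE uid_type = 'IP';

-- or a nullable user_name column, NULL (or '(anonymous)') for IP-only rows:
CREATE INDEX users_anon_idx ON users (uid) WHERE user_name IS NULL;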


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ON UPDATE trigger question

2007-09-12 Thread Jorge Godoy
On Wednesday 12 September 2007 15:56:13 Josh Trutwin wrote:
> If I create an ON UPDATE trigger run on each row after update, does
> the trigger fire only on rows affected by the update or for all rows?
>
> For example:
>
> CREATE TRIGGER my_update_trigger
>AFTER UPDATE ON my_table
>FOR EACH ROW
>EXECUTE PROCEDURE my_update_proc;
>
> UPDATE my_table SET my_val = my_val * 2;
>
> Will the trigger fire on rows that have NULL for my_val?

I haven't tested what you asked (you can do that easily), but if this is your 
concern and if you have a huge table you might want to add a WHERE 
clause: "WHERE my_val IS NOT NULL".  


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Suse RPM's

2007-08-03 Thread Jorge Godoy
On Friday 03 August 2007 10:04:27 Devrim GÜNDÜZ wrote:
> Complain to Reinhard, he is CC'ed to this e-mail.
>
> ftp://ftp.suse.com/pub/projects/postgresql/
>
> has only 8.2.0...
>
> BTW, I have promised to build SuSE RPMs some time ago; however Reinhard
> said that they will keep the packages up2date.
>
> Reinhard, if SuSE is busy and/or is not willing to keep the packages
> updated, I can also build SuSE RPMs for the community...
>
> Kind regards,

Having updated packages would be great!  Especially if they would be updated 
for, e.g., OpenSuSE 10.1 and OpenSuSE 10.2 (both already released) and not 
just for OpenSuSE 10.3 (still under "development").

Having packages for some of the older releases makes using both PostgreSQL and 
OpenSuSE easier (for PostgreSQL it doesn't make much difference since we can 
compile things by hand, but it would definitely draw more attention to 
OpenSuSE...  I had a few cases where it had to be turned down because of the 
lack of updates -- not security related, of course -- to "older" releases).

-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Suse RPM's

2007-08-03 Thread Jorge Godoy
On Friday 03 August 2007 07:38:19 Chris Coleman wrote:
> Hi,
>
> Does anyone know of anywhere to get Suse 10.1 RPMs of recent (8.2 and
> 8.3) versions of postgres?
>
> The postgres website only has fredora and redhat ones listed, and using
> rpmfind.net I can only find 8.0.13 ones for 10.0.

I usually get a .src.rpm, the tarball and run rpmbuild -bb.  It works fine and 
I have new packages very fast.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Accent-insensitive search

2007-07-12 Thread Jorge Godoy
On Monday 09 July 2007 18:33:49 turbovince wrote:
> Hello, I would like to perform some accent-insensitive searches on my
> database, which means that a select query with condition, say, WHERE
> NAME = 'HELLÔ' would return records where name is 'HELLO' as well.
>
> My data is encoded in Unicode (UTF8) and therefore I cannot use
> Postgre's to_ascii() trick to achieve accent-insensitive searches.
>
> Is there any way I could perform such searches with an UTF8 encoding ?

Check the translate() documentation.  It isn't the best thing in the world to 
have to use it and code things by yourself, but it works.
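
Something along these lines, for example (untested; the table and column are 
made up, and you would probably wrap the translate() call in a small function 
so the character lists live in a single place):

SELECT *
  FROM people
 WHERE translate(name,    'ÁÀÂÃÄÉÈÊËÍÌÎÏÓÒÔÕÖÚÙÛÜÇ', 'AAAAAEEEEIIIIOOOOOUUUUC')
     = translate('HELLÔ', 'ÁÀÂÃÄÉÈÊËÍÌÎÏÓÒÔÕÖÚÙÛÜÇ', 'AAAAAEEEEIIIIOOOOOUUUUC');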

-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Can't change working directory to C:/Documents and Settings in Windows

2007-07-01 Thread Jorge Godoy
On Sunday 01 July 2007 22:25:24 Casey Crosbie wrote:
> Jorge,
>
> Thanks for the suggestion. But unfortunately, I tried both
>   \cd "C:/Document~1" and just \cd C:/"Document~1" and neither worked.

Sorry.  It should be up to 8 chars: "Docume~1" or some variation like that 
(I've seen ~2 due to some unknown reason).  This looks like a Windows 
problem with finding directories with spaces in their names.  The same happens 
with diacriticals...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Can't change working directory to C:/Documents and Settings in Windows

2007-07-01 Thread Jorge Godoy
On Sunday 01 July 2007 21:51:08 Casey Crosbie wrote:
> Andrej,
>
> Thanks for the suggestion. As you said, I tried enclosing "Documents and
> Setting" and even "C:/Documents and Settings" and neither worked. Please
> let me know if you have any other ideas.

This looks like that old Windows bug.  Try using "Document~1" as the 
directory name.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Geographic data sources, queries and questions

2007-05-23 Thread Jorge Godoy
"Chuck D." <[EMAIL PROTECTED]> writes:

> 1) The first is this.  I have 3 tables.  Country, state and city.  Country 
> has 
> a country_id to identify a country, state has a state_id and country_id to 
> identify a state, and city has a city_id, state_id and country_id (for easy 
> reference) to identify it.  I then have a table for users that stores their 
> city, state and country ID's along with other info about them.

I don't believe this is good design.  You'll have to have a trigger or
something to verify that the country_id+state_id on the city table are
exactly equal to the country_id+state_id on the state table.  If you
don't, you might end up with something like (using US city names...)
"country: USA -> state: NY" on the state table but "country: Zimbabwe ->
state: NY -> city: New York" on the city table.

It isn't a problem of "any country and any state" on the city table, but
a problem of "this state inside that particular country".  I'd drop the
country column from the city table.

> My problem came recently when I questioned the integrity of the data and 
> needed to make some changes.  I thought to myself that maybe storing the ID 
> wasn't as good as storing the ISO or FIPS 2 letter abbreviation.  The only 
> problem the abbreviation could changed at some point by the regulating bodies 
>  
> and all rows in all tables would need to be updated.

You have integrity problems because you denormalized your model too much
and tried to attach the same information in two different places without
requiring those to be equal.

> The question is, for the purposes of querying or searching is it better to 
> store and search a 2 byte integer that is indexed for country or state ID's, 
> or is it better to store and search a 2 byte CHAR abbreviation?

It all depends: surrogate primary keys or ... :-)  (old flame starter)


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Views- Advantages and Disadvantages

2007-05-10 Thread Jorge Godoy
Ron Johnson <[EMAIL PROTECTED]> writes:

> On 05/10/07 21:24, Tom Lane wrote:
>> "Leif B. Kristensen" <[EMAIL PROTECTED]> writes:
>>> Would it be reasonable to suggest that later versions of PostgreSQL 
>>> could examine if a function changes data, and quietly marks a function 
>>> as 'stable' if it doesn't?
>> 
>> My instinctive CS-major reply to that is "only if you've found a
>> solution to the halting problem".  However, it's possible that we could
>> detect this case for a useful subset of real-world functions ... not
>> sure offhand what could be covered.
>
> If there are no INSERT, UPDATE or DELETE statements in the function?

And all functions called from inside the one being run as well
(recursive condition, of course)...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>


pgpijw7CEq76Z.pgp
Description: PGP signature


Re: [GENERAL] Converting time to float

2007-04-27 Thread Jorge Godoy
Richard Huxton <[EMAIL PROTECTED]> writes:

>> SELECT '00:10:00'::TIME / '1:00:00'::TIME;  -- Answer is: 
>> 0.1667
>
> But it doesn't make any sense to divide one time by another, does it? Are you
> sure it's not intervals you want?

It "doesn't matter", I just used time because I thought it might be
easier or exist something for the same type and that the concept was
easier to understand with it when compared to interval.  If we take the
physical sense of the data, then interval is a more correct type (and is
the one used, but since I could convert from one to the other freely, I
didn't bother with details on my previous message). :-)

The 1 hour time / interval is a reference and I need to get the value of
a time parameter in base 10 instead of base 60.  This is why I'm doing
this operation.

> Anyway, try something like this:
>
> SELECT extract(epoch from ('14:02:04'::time)) / extract(epoch from
> ('01:00:00'::time));
>  ?column?
> --
>  14.03444
>
> The "epoch" is in seconds (from midnight 1970-01-01 for timestamps) so gives
> you the result you want.

It does.  And is prettier. ;-)

The old code was


CREATE OR REPLACE FUNCTION aux.f_v_measured_time_base10(
 p_measured_time INTERVAL, OUT o_measured_time_base10 FLOAT) AS $_$
DECLARE
BEGIN
o_measured_time_base10:=EXTRACT(HOUR FROM p_measured_time);
o_measured_time_base10:=o_measured_time_base10 + 
(EXTRACT(MINUTE FROM p_measured_time) / 60.0);
o_measured_time_base10:=o_measured_time_base10 + 
(EXTRACT(SECOND FROM p_measured_time) / 3600.0);
END;
$_$ LANGUAGE plpgsql IMMUTABLE STRICT;


The new one is

CREATE OR REPLACE FUNCTION aux.f_v_measured_time_base10(
 p_measured_time INTERVAL, OUT o_measured_time_base10 FLOAT) AS $_$
DECLARE
BEGIN
o_measured_time_base10:=(EXTRACT(EPOCH FROM p_measured_time) / 
         EXTRACT(EPOCH FROM '1:00:00'::INTERVAL));
END;
$_$ LANGUAGE plpgsql IMMUTABLE STRICT;



Thanks for your help.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Converting time to float

2007-04-27 Thread Jorge Godoy

Hi!


I am needing to convert from TIME type to floats and taking 1h as the
unit, so what I'd like is something that would allow me doing operations
like: 

SELECT '00:10:00'::TIME / '1:00:00'::TIME;  -- Answer is: 0.1667
SELECT '00:30:00'::TIME / '1:00:00'::TIME;  -- Answer is: 0.5
SELECT '01:10:00'::TIME / '1:00:00'::TIME;  -- Answer is: 1.1667

or

SELECT '00:10:00'::TIME / '1 hour'::INTERVAL;  -- Answer is: 
0.1667
SELECT '00:30:00'::TIME / '1 hour'::INTERVAL;  -- Answer is: 0.5
SELECT '01:10:00'::TIME / '1 hour'::INTERVAL;  -- Answer is: 
1.1667

i.e., I want to see how many times one time value fits in another.  I'll
have times with just seconds, minutes and seconds and I might have times
with hours, minutes and seconds to make this fit.


My first idea is converting each part of the time individually by dividing
it by 1.0 (hours), by 60.0 (minutes) or by 3600.0 (for seconds) and then
adding it all up...  Of course I won't do the division for hours, and I'm
using floats here to force a float division instead of an integer
division. 


Any hints or a better recipe? :-)

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-25 Thread Jorge Godoy
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> Hello all,
>
> I know I may be asking too much, but I have a very limited C/C++ (as well as
> PostgreSQL internal architecture) knowledge. I've tried compiling the C
> source code Manuel sent as a PostgreSQL loadable module on Visual Studio
> .NET 2003 (C++) without success (lots of missing identifiers, "int Datum"
> redefinition and other things I didn't really understood). The comments in
> the Postgres 8.0 manual didn't help much.
>
> If anyone could put me on the right direction on how to write/build C/C++
> PostgreSQL on the Windows platform (specifically Windows XP) I would be
> grateful as I really need this thing working as soon as possible.
>
> Thanks,
>
> Marcelo.

If you really need this, shouldn't you consider hiring someone who
works professionally with PostgreSQL?

They'd certainly do the work for you and you'd accomplish your target
by the due date.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-24 Thread Jorge Godoy
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> @Richard: I've thought about having one DB user for each APP user. However,
> a coworker told me that it would infeasible to do that on the web
> enviroment, specifically for J2EE where a DB connection pool is used, so I
> gave up on that.

Why?  You can always issue "SET SESSION AUTHORIZATION <username>" before anything.  It
works with connection pools and you can even enhance your application
security with a second layer inside the database itself.  DB passwords
don't need to be known by users since they won't connect directly and
your connection will be made with a privileged user. 
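
For example (sketch only; "app_user" is a made-up role name):

-- right after taking a connection from the pool:
SET SESSION AUTHORIZATION app_user;

-- ... run the request's queries as that role ...

-- before giving the connection back to the pool:
RESET SESSION AUTHORIZATION;

The pooled connection itself has to be opened by a role that is allowed to
switch (a superuser, typically), which is the privileged user I mentioned above.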

> @Jorge: Is this "connection id" you say equivalent to the "applicationid"
> mentioned in the ibm db2 article? If so, how could I get this data through
> my application?

It all depends on what you want to make it unique.  A single backend
process is dedicated to each connection, so if you get its
PID you'll be done.  Each time a user accesses the database you insert
or update a record with his PID and then you make your triggers work
with that.  There will be more logic, but you get the idea.

Another option is using the transaction ID or something that always
change.

You just need a unique value that lasts for a connection and isn't
shared with any other user connected at the same time you are.

There are a lot of functions that you can use.  You just have to be sure
when you want the information and what information you need.  Take a
look at the ones available in pg_catalog for your specific PostgreSQL
version. 
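
As an untested sketch (names made up), using the backend PID:

CREATE TABLE app_session (
    backend_pid INTEGER PRIMARY KEY,   -- pg_backend_pid() of the connection
    app_user    TEXT NOT NULL
);

-- when the application starts handling a request on a pooled connection:
DELETE FROM app_session WHERE backend_pid = pg_backend_pid();
INSERT INTO app_session VALUES (pg_backend_pid(), 'marcelo');

-- and inside the audit triggers:
-- SELECT app_user FROM app_session WHERE backend_pid = pg_backend_pid();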

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Audit-trail engine: getting the application's layer user_id

2007-04-24 Thread Jorge Godoy
"Marcelo de Moraes Serpa" <[EMAIL PROTECTED]> writes:

> I forgot to add the link to the article I've mentioned:
>
> http://www-128.ibm.com/developerworks/db2/library/techarticle/0302stolze/0302stolze.html#section2b
>
> This is what I'd like to do on PostgreSQL,

So, translating it to a simpler example:

You want your function to get the connection ID it is using, tie
it to the current user ID in your application, and then have
all your tables use a trigger to retrieve the user name from the
auxiliary table that maps "connection ID -> user", right?

That's what's in that page: a UDF (user defined function) named
getapplicationid() that will return the user login / name / whatever and
triggers.

What is preventing you from writing that?  What is your doubt with
regard to how to create that feature in your database?



-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] where clause help

2007-04-23 Thread Jorge Godoy
Ketema <[EMAIL PROTECTED]> writes:

> Man so simple!  is your solution the same as:
>
> num_provisioned < num_products AND (num_open_issues + num_provisioned
> + num_canceled) < num_prods
>
> which is what i finally came up with

This can be simplified to "num_open_issues + num_provisioned +
num_canceled < num_prods", without the AND and the other statement. 

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Passing arrays to stored procedures

2007-04-20 Thread Jorge Godoy
William Garrison <[EMAIL PROTECTED]> writes:

> WHERE customerid = ANY($1);
> Results in the error:
> ERROR: op ANY/ALL (array) requires array on right side
>
> I tried casting the character string to an array afterward:
>
> WHERE customerid = ANY($1::bytea);
> which results in:
> ERROR: cannot cast type character varying to bytea

You meant array or bytea?


neo=# select '{1, 2, 3}'::int[];
  int4   
-----
 {1,2,3}
(1 record)

neo=# 
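
So the parameter can be passed (or cast) as a real array, e.g. (guessing at the
table name):

SELECT * FROM customers WHERE customerid = ANY ('{1,2,3}'::int[]);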


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] unique constraint on 2 columns

2007-04-20 Thread Jorge Godoy
Jonathan Vanasco <[EMAIL PROTECTED]> writes:

> I need a certain unique constraint in pg that i can't figure out.
>
> Given:
>
>   create table test_a (
>   id serial ,
>   name_1 varchar(32) ,
>   name_2 varchar(32)
>   );
>
> I need name_1 and name_2 to both be unique so that:
>   name_1 never appears in name_1 or name_2
>   name_2 never appears in name_2 or name_1
>
>
> a standard 2 column unique index / constraint will not accomplish this.

But a trigger will...  
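
A rough, untested sketch of such a trigger (it would still need locking or
serialization if concurrent inserts are a concern):

CREATE OR REPLACE FUNCTION test_a_names_unique() RETURNS trigger AS $_$
BEGIN
    IF EXISTS (SELECT 1 FROM test_a
                WHERE (name_1 IN (NEW.name_1, NEW.name_2)
                    OR name_2 IN (NEW.name_1, NEW.name_2))
                  AND id <> NEW.id) THEN
        RAISE EXCEPTION 'value already used in name_1 or name_2';
    END IF;
    RETURN NEW;
END;
$_$ LANGUAGE plpgsql;

CREATE TRIGGER test_a_names_unique_trg
    BEFORE INSERT OR UPDATE ON test_a
    FOR EACH ROW EXECUTE PROCEDURE test_a_names_unique();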

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] programmatic way to fetch latest release for a given major.minor version

2007-04-10 Thread Jorge Godoy
Listmail <[EMAIL PROTECTED]> writes:

>>> Yeah yeah, but terminology aside, having 2 or three digits in each
>>> attribute is just wrong!
>>
>> Terminology aside, why? The unit is "8.1" not "8" and "1". It makes no
>> sense to say you're on version 8, in the given context, so why should the
>> XML data pretend there is?
>>
>> //Magnus
>
>   Just pretend that :
>
>   - version = a tuple of integers (a, b, c, ...)
>   - major = (a, b)
>   - minor = (c, ...)
>
>   Besides, that is sortable (unlike strings where 15 < 2) :

But then, floats are as sortable as integers and 8.3 < 15.1...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-03-27 Thread Jorge Godoy
"Merlin Moncure" <[EMAIL PROTECTED]> writes:

> On 3/27/07, Erik Jones <[EMAIL PROTECTED]> wrote:
>> Not having looked myself, this is as much a question as a suggestion, but
>> are there not postgres dba training seminars/courses you could recommend
>> they send their dba's to?
>
> There are some classes out there but in my opinion your best bet (from
> point of view of looking for good talent) is to get people that found
> their way to postgresql themselves.  In that sense you want to hook up
> with people from the mailing lists or develop contacts from within the
> community.  So, training classes are useful for beefing up on
> knowledge and learning new tricks, but  postgresql dbas are born, not
> made :)

I have the same opinion.  Just look around and see how many "certified
somethings" are out there and how many of them *really* know the product, its
details, how to work with it.

Certifications don't even certify minimum knowledge.  They are like the tests
we do in school: they show how we are feeling and what we "know" (or
memorized the night before) at the instant of the test.  Some people even
cheat on tests (not that I'm saying it is done or is common with certification
tests...).

So, if I have a good memory that retains information for a week, I'll excel in
certification tests.  But then, what happens after that week?

I'm against certifications for any product.  They just don't reflect
reality. 


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: Fwd: [GENERAL] Approximate join on timestamps

2007-03-20 Thread Jorge Godoy
"Rhys Stewart" <[EMAIL PROTECTED]> writes:

> had a similar problem a while back. so i made and abs_time function:
>
> CREATE OR REPLACE FUNCTION abs_time(interval)
>  RETURNS interval AS
> $BODY$
> BEGIN
> if
>$1 < '00:00:00'::interval
> then
>return ($1 * -1)::interval;
> else
>return $1;
> END IF;
> END;
> $BODY$
>  LANGUAGE 'plpgsql' VOLATILE;

I believe that you can declare this IMMUTABLE.  For a given interval it will
always return the same value, so you can benefit from some optimization.

http://www.postgresql.org/docs/8.2/interactive/xfunc-volatility.html


An IMMUTABLE function cannot modify the database and is guaranteed to
return the same results given the same arguments forever. This
category allows the optimizer to pre-evaluate the function when a
query calls it with constant arguments. For example, a query like
SELECT ... WHERE x = 2 + 2 can be simplified on sight to SELECT
... WHERE x = 4, because the function underlying the integer addition
operator is marked IMMUTABLE.



-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Approximate join on timestamps

2007-03-20 Thread Jorge Godoy
"Phil Endecott" <[EMAIL PROTECTED]> writes:

> I have two tables containing chronological data, and I want to join them using
> the timestamps.  The challenge is that the timestamps only match 
> approximately.
>
> My first attempt was something like
>
>   t1 join t2 on (abs(t1.t-t2.t)<'1 min'::interval)
>
> Of course there is no "abs" for intervals, and I couldn't think of anything
> better than this
>
>   t1 join t2 on (t1.t-t2.t<'1 min'::interval and t2.t-t1.t<'1 min'::interval)
>
> What indexes could I add to make this moderately efficient?
>
> But that query isn't really good enough.  There is no single "epsillon" value
> that works for this data set.  I really want to find the closest match.
>
> I feel that it ought to be possible to step through the two tables in
> timestamp order matching up elements.  Is there any way to express this is 
> SQL?
>
> (One detail is that the left table has fewer rows than the right table, and I
> want one output row for each row in the left table.)
>
> Many thanks for any suggestions.

Untested, but what about something like a function that does (pseudocode below):


  select min(t1.t) where t1.t >= ref_time  -- call it above_t1
  select max(t1.t) where t1.t <  ref_time  -- call it below_t1
  if (above_t1 - ref_time) <= (ref_time - below_t1) then
     return above_t1
  else
     return below_t1


to find out the nearest time with regards to t1 when compared to a reference
time that should be the time you're looking for.

Do the same for t2... 


I haven't checked the docs if there's something that already makes your life
easier :-)
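
Or, staying in plain SQL, something like this correlated subquery might already
do it (untested, and it scans t2 once per row of t1, so it is only reasonable
for smallish tables):

SELECT t1.*,
       (SELECT t2.t
          FROM t2
         ORDER BY abs(extract(epoch FROM t2.t - t1.t))
         LIMIT 1) AS nearest_t2
  FROM t1;

This keeps one output row per row of the left table, as you wanted.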


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Design / Implementation problem

2007-03-19 Thread Jorge Godoy
"Filip Rembiałkowski" <[EMAIL PROTECTED]> writes:

> hmmm. just a general notice:
>
> A customer loyalty program, which expires earned points, not to let
> the customer "win" anything valuable?
> If I were your client, I wouldn't be happy with this.

On the other hand, having the possibility is better than having nothing...
This is commonly done to "force" the customer to buy more and more often.  You have
12 months to earn 3000 points.  If you have 2850 points, then you'll consider
buying things to earn 150 more points and win something...  But if you don't
have any incentive, then why would you care about buying something "now"?

This is very common with miles for flights.  If you fly often, you get
upgrades, discounts, etc.  If you don't, then you pay the fare as everybody
else.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Design / Implementation problem

2007-03-18 Thread Jorge Godoy
Naz Gassiep <[EMAIL PROTECTED]> writes:

> that calculating the point
> balance on the fly is not an unfeasibly heavy duty calculation to be done at
> every page view?

One alternative to calculating it every time is calculating it once a day.  If
there is already a calculation for today, then just read the calculated value.
Otherwise calculate it. 

If the user earns more points, make an entry in the raw table (for the
expiration process) and increment today's points.  Do the same for points
spent. 



-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Is This A Set Based Solution?

2007-03-16 Thread Jorge Godoy
Tino Wildenhain <[EMAIL PROTECTED]> writes:

> Show me a user which really clicks on 1000 or more checkboxes on a
> webpage or similar ;)
> I'd think around 20 values is plenty.

On the other hand, show me a page with 1000 or more checkboxes to be clicked
at once and I'll show you a developer / designer who needs a new career... :-)

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pg_dumpall and version confusion

2007-03-16 Thread Jorge Godoy
"Dave Page" <[EMAIL PROTECTED]> writes:

>> --- Original Message ---
>> From: "Joshua D. Drake" <[EMAIL PROTECTED]>
>> To: Dave Page <[EMAIL PROTECTED]>
>> Sent: 15/03/07, 23:51:18
>> Subject: Re: [GENERAL] pg_dumpall and version confusion
>> 
>> If you want that, create a wrapper program that calls to different
>> statically compiled versions of pg_dump.
>
> I can't even begin to imagine how difficult that would be on Windows!

As difficult as a new ".bat" file? 

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Multiple TRIGGERs and OLD versus NEW "records"

2007-03-14 Thread Jorge Godoy
Tom Lane <[EMAIL PROTECTED]> writes:

> Jorge Godoy <[EMAIL PROTECTED]> writes:
>> If I change some column -- e.g. fill in some automatic calculated column or a
>> timestamp column --, when I run the second trigger will OLD be the data 
>> stored
>> at the database already or will it be affected by the NEW data returned from
>> the previous trigger?
>
> OLD is the data actually in the database.  NEW is the currently proposed row.
> So prior triggers can change what you see as NEW, but not OLD.
>
>   regards, tom lane

Thanks, Tom.  It's as I supposed.  I was going to test it tonight when I get
to my office, but you saved me from that :-)



-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Multiple TRIGGERs and OLD versus NEW "records"

2007-03-14 Thread Jorge Godoy

Hi!


(Consider these UPDATE triggers)


I can't test this right now since I don't have a test database handy,
but I was thinking about what happens to OLD and NEW when running multiple
triggers.

For example, for the first trigger OLD is the equivalent to the data that is
stored at the database already, NEW is the data I'm sending to the database.
If I change some column -- e.g. fill in some automatic calculated column or a
timestamp column --, when I run the second trigger will OLD be the data stored
at the database already or will it be affected by the NEW data returned from
the previous trigger?

I know that NEW propagates, but does OLD change?  I suppose not and I can't
confirm that now...  If anyone has the answer, sharing would be nice.  If I
don't get any answer, I'll test and post the results later. :-)



TIA,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] OT: Canadian Tax Database

2007-03-10 Thread Jorge Godoy
omar <[EMAIL PROTECTED]> writes:

> I'm curious what people think about the following statement considering the
> database typing talk being brought up here.  My experience is that more times
> than not I have to put data validation in my client code even when it's
> available on the server, if for no other reason that users don't understand
> what foreign key violation, etc messages mean.  It begs the question of
> whether it's really necessary on the server or not.  SQLite seems to take the
> position that it isn't since there is no referential integrity and the
> following.  To be honest, there's a lot of power in the ability to view
> everything as a string, with of course proper data validation.

I believe that data validation is essential on the server side.  The ideal
situation to me is something like data validation on the server, with errors /
exceptions being raised and then caught by the client code, which will translate
them into a suitable message.

Data validation on the client side helps with simple input and reduces
the average number of roundtrips needed to get the data stored, but
shouldn't be the only validation done.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Omar Eljumaily <[EMAIL PROTECTED]> writes:

> But you're always returning Monday, right?  Your grouping will be correct, but
> to get the actual truncation date, you have to subtract back.
>
> select (date_trunc('week', '2007-03-07'::date + 5)::date-5);
> select (date_trunc('week', '2007-03-06'::date + 5)::date-5);
> select (date_trunc('week', '2007-03-08'::date + 5)::date-5);

Indeed.  This gives the correct result.  So, we can replace the '5' with:

 7 - ('desired dow' - 1)

Rewriting the above queries, then:

# select (date_trunc('week', '2007-03-07'::date + (7 - (3 - 1)))::date - (7 - (3 - 1)));
  ?column?  
------------
 2007-03-07
(1 row)

# select (date_trunc('week', '2007-03-06'::date + (7 - (3 - 1)))::date - (7 - (3 - 1)));
  ?column?  
------------
 2007-02-28
(1 row)

# select (date_trunc('week', '2007-03-08'::date + (7 - (3 - 1)))::date - (7 - (3 - 1)));
  ?column?  
------------
 2007-03-07
(1 row)



Parameterizing the desired day shouldn't be hard. ;-)


We subtract one from the desired day because date_trunc('week') returns a
Monday, and date_part('dow') for a Monday is 1 (Mondays being considered the
first day of the week). 
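
A minimal, untested sketch of the parameterized version (start_dow follows the
same 0=Sunday .. 6=Saturday convention as date_part('dow')):

CREATE OR REPLACE FUNCTION week_start(d DATE, start_dow INT) RETURNS DATE AS $_$
    SELECT (date_trunc('week', $1 + (7 - ($2 - 1)))::date - (7 - ($2 - 1)));
$_$ LANGUAGE sql IMMUTABLE STRICT;

-- week_start('2007-03-08', 3) should give 2007-03-07, a Wednesday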



Thanks, Omar.  This makes the function easier to write.  I hope it also solves
your problem.



Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Bruno Wolff III <[EMAIL PROTECTED]> writes:

> On Fri, Mar 09, 2007 at 23:07:26 -0300,
>   Jorge Godoy <[EMAIL PROTECTED]> wrote:
>> 
>> But how to get the date if the first day of the week is a Wednesday?  This
>> example is like the ones I've sent with separate queries that needed to be
>> combined -- in a function, probably -- to get the desired result. 
>
> If you want to group on weeks that start on Wednesdays add 5.

I believe you either missed my post with several queries showing what I wanted
or you didn't understand the point.

If I run this query:

   select date_trunc('week', '2007-03-08'::date + 5);

it fails even for that date.  The correct answer would be 2007-03-07 and not
2007-03-12.  I want the first day of the week to be Wednesday and hence I want
the Wednesday of the week the date is in.  (Wednesday was arbitrarily chosen;
it could be Thursday, Tuesday, Friday, etc.)


> postgres=# select date_trunc('week', '2007-03-07'::date + 5);
>date_trunc
> 
>  2007-03-12 00:00:00-05
> (1 row)

This should be 2007-03-07 since 2007-03-07 *is* a Wednesday and that's when
the week starts. 

> postgres=# select date_trunc('week', '2007-03-06'::date + 5);
>date_trunc
> 
>  2007-03-05 00:00:00-06
> (1 row)

This should be 2007-02-28 since this is the first day of the week for the week
that starts on Wednesday 2007-02-28 and ends on 2007-03-06. 

> postgres=# select date_trunc('week', '2007-03-08'::date + 5);
>date_trunc
> 
>  2007-03-12 00:00:00-05
> (1 row)

This should return the same date as the first query (2007-03-07).  2007-03-12
is a Monday, and weeks should always start on Wednesday in my arbitrary
example. 


This is why I can't envision a simple query for that but it is easy with a
function.

Again, the function should do something like:

   - make the date calculation (e.g. add some interval or nothing at all...)

   - get the resulting 'dow'

   - if it is > than the arbitrary day that was determined to be the first
 day of the week (Wednesday on my example), then return
 date_trunc('week') + 2 days (2 for moving from Monday to Wednesday,
 for different first days the shift should be different)

   - if it is < than the arbitrary day that was determined to be the first
 day of the week (Wednesday, again), then return date_trunc('week') -
 5 days (-5 for moving from Monday to the previous Wednesday)


The result, when asked for the first day, should always be the Wednesday that is
equal to the date or that occurred right before it.  It is the same idea that
is implemented today returning Monday, but instead of Monday I want another
day, which in my posts happened to be exemplified by Wednesday.



I don't want you to spend your time on this.  It was just a question that got
answered indirectly with a "there's no way to do that without using a
function" due to the complexity above and the lack of such a feature in
PostgreSQL.  It is simple to have it as a function, though.

I don't know any RDBMS that implements that.  All of them require some
operations to get the desired result.



Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Bruno Wolff III <[EMAIL PROTECTED]> writes:

> On Fri, Mar 09, 2007 at 20:13:11 -0300,
>   Jorge Godoy <[EMAIL PROTECTED]> wrote:
>> Bruno Wolff III <[EMAIL PROTECTED]> writes:
>> 
>> > No, it has to be inside the function so that the modular arithmetic is
>> > applied to it.
>> 
>> Then there's the error I've shown from your command.  Can you give me a
>> working one?  This was with PostgreSQL 8.2.3.
>
> postgres=# select date_trunc('week', current_date + 1);
>date_trunc
> 
>  2007-03-05 00:00:00-06
> (1 row)
>
> It turns out DOW isn't available for date_trunc. You can probably use
> extract to get what you want. You probably should check that it works
> at DST transitions, since the date value is cast to a timestamp and
> if DST transitions happen at  in your time zone, you might get an
> unexpected answer.
>
> postgres=# select extract(dow from current_date + 1);
>  date_part
> ---
>  6
> (1 row)

But how to get the date if the first day of the week is a Wednesday?  This
example is like the ones I've sent with separate queries that needed to be
combined -- in a function, probably -- to get the desired result. 

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
"Ted Byers" <[EMAIL PROTECTED]> writes:

> I  don't buy the suggestion that server side code is less error prone that
> client side code, but be that as it may, we're talking about a function that
> has one line of code.  And given what you just said, you don't want the day of
> the week, you want a function that returns the week of the year.  This can be
> had from the same Perl functions I mentioned before, with a minor alteration
> in how you call it.  my suggestion would be to create that one line function
> that invokes the relevant Perl function, which can then be invoked in your
> select statement (presumably with a group clause to avoid mixing data from
> different years).  It should take about ten to fifteen minutes to write and
> test?


There's no need to use Perl. 

neo=# select extract('week' from now());
 date_part 
---
10
(1 row)

neo=# 


Today is a day in the tenth week of the year.




-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Omar Eljumaily <[EMAIL PROTECTED]> writes:

> Ted, my reason for asking the question that I believe precipitated this thread
> was that I wanted a single sql statement that aggregated time data by week.
> Yes, I could do the aggregation subsequently in my own client side code, but
> it's easier and less error prone to have it done by the server.

If you work closer to the data you have more efficiency.  To do what you want
you can write a function using plpgsql -- I've posted some ideas -- and that
is not hard at all.

But if there were something to make this easier to write, it would be great ;-)

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
"Ted Byers" <[EMAIL PROTECTED]> writes:

> Out of curiosity, why does the database need to know this, or to be able to
> calculate it?  There are lots of things that would be useful to me, if the

It was a curiosity.  But it would make working with some dates easier.  I've
given some examples but if you really want I may search for the messages and
repost them for you. 

> RDBMS I'm using at the time supported them (particularly certain statistical
> functions - ANOVA, MANOVA, nonlinear least squares regression, time series
> analysis, &c.), but given that I can readily obtain these from other software
> I use, and can if necessary put the requisite code in a middleware component,
> I would rather have the PostgreSQL developer's focus on issues central to

You can have those using R and plR inside the database. ;-)

> having a good DB, such as ANSI standard compliance for SQL, or robust pooling,
> &c. and just leave me a mechanism for calling functions that are external to
> the database for the extra stuff I need.  I would prefer a suite of
> applications that each does one thing well than a single application that does
> a mediocre job on everything it allegedly supports. What would be 'nice' and
> what is practical are often very different things. I know what you're after is
> simple, but remember the good folk responsible for PostgreSQL have only finite
> time available to work on it, and thus, when they're making choices about
> priorities, I'd rather they ignore even simple ancillary stuff and focus on
> what really matters.

If I have to do calculations with dates inside the database, the last thing
I'd want to do is retrieve part of the data, go to some external code, come
back to the database and so on.

If there were something inside the database then I'd really like to know about it and
use it.  I don't see how it would be worse than other non-ANSI
extensions that are already available.

> I just recently finished a project in which the data processing needed
> information similar to what you're after, but instead of doing it in the
> database, we opted to do it in the Perl script I wrote that fed data to the
> database.  In fact, it wasn't so much the day of the week that mattered to the

There's no feeding here.  Imagine that I'm filtering huge selects to be
processed externally.  I wouldn't like to get millions of rows instead of
hundreds or a few thousand of them.

> processing algorithm but the resulting dates for the immediately preceding
> business day and the immediately following business day.  It was those dates
> we fed to the database rather than the weekday.  There are several Perl
> packages (see CPAN) supporting this kind of calculation.  These are generally

I know Perl.  I have already taught it for IBM... ;-)

> outstanding (and would probably be useful if you want to create your own
> stored function implemented in Perl), but you may have to customize them by
> providing additional configuration information such as timezone and statutory
> and religious holidays if you need to determine business days in addition to
> just the day of the week.  the day of the week can be obtained in Perl with a
> single function call!

As in several other languages.  Even in plpgsql, with simple calculations like
I've shown.  As I said, writing a function for that is simple enough and I
just wanted to know if there was anything that could be done by the database.

I never asked for any new implementation.

> I just took a quick break to read about the date functions available within
> PostgreSQL, and while apparently nice, you have much greater flexibility, and
> many more functions, in these Perl packages I mentioned.  If you just want a
> function call, I'd suggest you create a function that just dispatches a call
> to the Perl function that best meets your needs.  In a sense, you are not
> really rolling your own.  You're just dispatching the call to a function in a
> Perl package.

And to do that you have to write a function...


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Bruno Wolff III <[EMAIL PROTECTED]> writes:

> No, it has to be inside the function so that the modular arithmetic is
> applied to it.

Then there's the error I've shown from your command.  Can you give me a
working one?  This was with PostgreSQL 8.2.3.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Bruno Wolff III <[EMAIL PROTECTED]> writes:

> On Fri, Mar 09, 2007 at 14:59:35 -0300,
>   Jorge Godoy <[EMAIL PROTECTED]> wrote:
>> It is not hard to calculate, as you can see... but it would be nice if
>> "date_trunc('week', date)" could do that directly.  Even if it became
>> "date_trunc('week', date, 4)" or "date_trunc('week', date, 'Wednesday')" it
>> would be nice...  :-)  And that is what I was trying to ask ;-)
>
> Use date_trunc('week', current_day + 1) and date_trunc('dow', current_day + 1)
> to have a one day offset from the standard first day of the week. 


I believe there's more to it than that...  Probably the "+1" should be outside the
date_trunc, anyway.  It might help, but I still see the need to do
calculations...  Especially if today were a Tuesday...



neo=# select date_trunc('dow', current_date + 1);
ERROR:  timestamp with time zone units "dow" not recognized
neo=# select date_part('dow', current_date + 1);
 date_part 
---
 6
(1 row)

neo=# select date_trunc('week', current_date + 1);
   date_trunc   
--------
 2007-03-05 00:00:00-03
(1 row)

neo=# select date_trunc('week', current_date);
   date_trunc   
------------------------
 2007-03-05 00:00:00-03
(1 row)

neo=# 




-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Alvaro Herrera <[EMAIL PROTECTED]> writes:

> Jorge Godoy escribió:
>
>> Just to repeat my question:
>> 
>> (I don't want to write a function, I can do that pretty easily...  And I was
>> asking if there existed some feature on the database that...  It's just a
>> curiosity)
>> 
>>   Given a date X it would return me the first day of the week so that I can
>>   make this first day an arbitrary day, e.g. Friday or Wednesday.
>
> When you say "it would return", what's the "it"?

The function that came with the database, the feature, the something. :-)

> I wasn't proposing to use any function, just putting a simple expression
> in the SELECT's result list (and maybe the GROUP BY, etc).

So I'm blind on how to do that.  Maybe some "CASE"?


Here's what I was asking for (Sunday=0, Saturday=6, to remember ;-)):


testdb=# select current_date;
date

 2007-03-09
(1 row)

testdb=# select current_date + '3 weeks'::interval;
  ?column?   
-
 2007-03-30 00:00:00
(1 row)

testdb=# select date_trunc('week', current_date + '3 weeks'::interval);
 date_trunc  
-
 2007-03-26 00:00:00
(1 row)

testdb=# select date_part('dow', date_trunc('week', current_date + '3 
weeks'::interval));
 date_part 
---
 1
(1 row)

testdb=# 



This is the standard behavior.  It returns the first Monday.  Now, if I had
the week starting on Wednesdays, I should get 2007-03-28 instead of
2007-03-26.

I can check in a function whether the returned date is before or after my
desired week-start day (Wednesday, for example): if date_part('dow',
date) is bigger than it, return the value for Monday + 2 days; if it is lower,
return Monday - 5 days. 

For example, again:


testdb=# select date_part('dow', current_date + '3 weeks'::interval);
 date_part 
---
 5
(1 row)

testdb=# select date_trunc('week', current_date + '3 weeks'::interval) + '2 
days'::interval;
  ?column?   
-
 2007-03-28 00:00:00
(1 row)

testdb=# 


That would be the "first day" of the week in three weeks from now, with weeks
starting on Wednesdays. 

If I had asked for this 3 days ago:


testdb=# select date_trunc('week', current_date - '3 days'::interval + '3 
weeks'::interval);
 date_trunc  
-
 2007-03-26 00:00:00
(1 row)

testdb=# select date_part('dow', current_date - '3 days'::interval + '3 
weeks'::interval);
 date_part 
---
 3
(1 row)

testdb=# select date_trunc('week', current_date - '3 days'::interval + '3 
weeks'::interval) - '5 days'::interval;
  ?column?   
-
 2007-03-21 00:00:00
(1 row)

testdb=# 


Then if it was Tuesday, the week three weeks from now would have started on
Wednesday, 2007-03-21.


It is not hard to calculate, as you can see... but it would be nice if
"date_trunc('week', date)" could do that directly.  Even if it became
"date_trunc('week', date, 4)" or "date_trunc('week', date, 'Wednesday')" it
would be nice...  :-)  And that is what I was trying to ask ;-)



Thanks for your attention, Alvaro. :-)


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Setting week starting day

2007-03-09 Thread Jorge Godoy
Bruno Wolff III <[EMAIL PROTECTED]> writes:

> On Thu, Mar 08, 2007 at 20:32:22 -0300,
>   Jorge Godoy <[EMAIL PROTECTED]> wrote:
>> Alvaro Herrera <[EMAIL PROTECTED]> writes:
>> 
>> As I said, it is easy with a function. :-)  I was just curious to see if we
>> had something like Oracle's NEXT_DAY function or something like what I
>> described (SET BOW=4; -- makes Thursday the first day of week):
>
> If you are actually using "date" you can get the effect you want by adding
> a constant integer to the date in the date_trunc function. That seems
> pretty easy.


I couldn't see where to specify that integer.  Or, if it is to be added to the
date, something that calculates it automatically.

http://www.postgresql.org/docs/8.2/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

Even adding an integer I'd still have to write the verifications (such as the ones I
mention below for Oracle's NEXT_DAY()) to get the desired result.


Just to repeat my question:

(I don't want to write a function, I can do that pretty easily...  And I was
asking if there existed some feature on the database that...  It's just a
curiosity)

  Given a date X it would return me the first day of the week so that I can
  make this first day an arbitrary day, e.g. Friday or Wednesday.


Oracle's NEXT_DAY() gets closer to that, but would still require a few
operations (checking whether the returned date is before or after the given date
and, if after, subtracting one week from the returned value -- kind of a
"PREVIOUS_DATE()"...).


With a function I could make it easily, but then I'd have to wrap all
calculations with that...  It was just something to make life easier.  From
the answers I'm getting I see that there's no way to do that without a
function and that I'm not missing any feature on PG with regards to that ;-)

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] one-to-one schema design question and ORM

2007-03-09 Thread Jorge Godoy
Rick Schumeyer <[EMAIL PROTECTED]> writes:

> I can think of two ways to do this:
>
> 1) a 1-1 relationship where the user table contains a FK to the employee
> table.  Since not all users will be employees, the FK will sometimes be null.
> In rails, the user class would "belong_to employee" while employee "has_one
> user".
>
> 2) Create a link table that has FKs to both the user and employee table.  This
> make sense because I'm not sure that the concept of "there might be a linked
> employee" belongs in the user table.  This moves it to a separate table
> designed for that purpose.  But then again, it may just be a needless extra
> table.
>
> Would you prefer one solution over the other?

It all depends on what you'll be doing and how often.  Remember that ORMs
usually "select *", so you might end up using more memory / resources than
you'd be willing to.

If the RoR mapper can do lazy loading, then this might not be too bad...

Anyway, you might also add the extra table to make it a place to gather more
information that will be relevant to your system only.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Beginner's Questions

2007-03-09 Thread Jorge Godoy
Don Lavelle <[EMAIL PROTECTED]> writes:

> My database is quite small (only 13 lucky tables, though that may expand a
> little) and will not hold a great amount of data.  (There  will be at most
> records in the thousands for the single-user or tens  of thousands for the
> multi-user.)  I will either use Java or C++ for  the project.  I would run
> PostgreSQL as a child process.

What do you mean by "a child process"?  PostgreSQL is run as a server and then
you connect to it (either through TCP or using sockets).  Or were you talking
about something you'll do in your code?

> Is PostgreSQL overkill for such a project?  My other choices are to go with a
> flat-file format or to use an embedded SQL server.  The  reason to go with
> PostgreSQL are that I don't have to write as much  code, it's known to be
> reliable for what I'm doing, and it's known to  be reliable for what I might
> be doing.  The reason to not go with  PostgreSQL is that it might be too much
> for a modest personal  computer; I don't know that the computers running this
> will even have  XP.  (I'm not a MS Windows user, myself, unless I have to be.
> My  development boxes are a 1.1 GHz Athlon with 512 RAM with XP and a
> dual-processor G4 with MacOS 10.4.)
>
> Are there ballpark requirements for what such a database will need to run?

You have the required hardware.  I have PostgreSQL running under worse conditions
and performing very well.


-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Setting week starting day

2007-03-08 Thread Jorge Godoy
Alvaro Herrera <[EMAIL PROTECTED]> writes:

> Jorge Godoy escribió:
>
>> I mean, if I wanted to do the above but instead of Sunday or Monday as the
>> starting day I'd like using Fridays or Wednesdays...
>> 
>> Is it possible?  Writing a new function shouldn't be too hard -- it's a 
>> matter
>> of truncating the week on a day and shifting the date forward or backward --,
>> but something like a "SET bow=5" (to make the API consistent with the 'dow'
>> that already exists) would be really great!
>
> Is it not just a matter of adding a constant and then taking modulo 7?

As I said, it is easy with a function. :-)  I was just curious to see if we
had something like Oracle's NEXT_DAY function or something like what I
described (SET BOW=4; -- makes Thursday the first day of week):


NEXT_DAY 

Syntax

NEXT_DAY(d, char)
Purpose
Returns the date of the first weekday named by char that is later than the
date d. The argument char must be a day of the week in the date language of
your session, either the full name or the abbreviation. The minimum number of
letters required is the number of letters in the abbreviated version. Any
characters immediately following the valid abbreviation are ignored. The
return value has the same hours, minutes, and seconds component as the
argument d. 


Example

This example returns the date of the next Tuesday after March 15, 1998. 

SELECT NEXT_DAY('15-MAR-98','TUESDAY') "NEXT DAY"
 FROM DUAL;
 
NEXT DAY
---------
16-MAR-98 



So, I'd have something like: "SELECT NEXT_DAY(now()+'5 weeks'::INTERVAL,
'THURSDAY');" to give me the next Thursday 5 weeks from now. 


Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Setting week starting day (was: Re: [GENERAL] Tabulate data incrementally)

2007-03-08 Thread Jorge Godoy
Richard Huxton  writes:

> Omar Eljumaily wrote:
>> I want to tabulate time data on a weekly basis, but my data is entered on a
>> daily basis.
>>
>> create table time_data
>> {
>>employee varchar(10),
>>_date date,
>>job varchar(10),
>>amount
>> }
>>
>> So I want to tabulate with a single sql command.  Is that possible?
>
> Try one of these:
>
> => SELECT date_trunc('week',now());
>date_trunc
> 
>  2007-03-05 00:00:00+00
>
> => SELECT extract(week from now());
>  date_part
> ---
> 10


Hi!


I'm hijacking this thread a bit...  Is it possible to dynamically specify the
day of the week on which the week starts?

I mean, if I wanted to do the above but, instead of Sunday or Monday as the
starting day, I'd like to use Fridays or Wednesdays...

Is it possible?  Writing a new function shouldn't be too hard -- it's a matter
of truncating the week on a given day and shifting the date forward or backward
-- but something like a "SET bow=5" (to keep the API consistent with the 'dow'
that already exists) would be really great!
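
For what it's worth, a quick sketch of such a function (week_trunc is just a
made-up name, and bow uses the same 0 = Sunday ... 6 = Saturday numbering as
EXTRACT(dow ...)):

CREATE OR REPLACE FUNCTION week_trunc(d date, bow integer) RETURNS date AS $$
    -- most recent occurrence of weekday $2 on or before $1, i.e. the start
    -- of the week when weeks are taken to begin on that weekday
    SELECT $1 - ((EXTRACT(dow FROM $1)::integer - $2 + 7) % 7);
$$ LANGUAGE sql IMMUTABLE;

SELECT week_trunc(current_date, 3);  -- start of the current Wednesday-based week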


Why do that?  Imagine an accounting office where all activities must be closed
and values summed up every Wednesday.  Or a company that tracks its activities
weekly and considers the week to end on Thursdays (so that it can send invoices
on Friday).

Being able to compute "the first day of the 'week' 5 weeks from now" in
situations like these would make things easier to code. :-)


-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] Determine users and roles

2007-03-08 Thread Jorge Godoy
RPK <[EMAIL PROTECTED]> writes:

> I want to determine the usernames, privileges and roles of the users that are
> created for a particular database. I am using PGSQL 8.2.3 on Windows XP. I
> want to run SQL query for this.

In PostgreSQL users aren't created per database but per cluster.

Run "psql -E" from your command prompt and issue "\du"; the -E option makes
psql echo the catalog queries behind each backslash command, so you can reuse
them as plain SQL.


Be seeing you,
-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] No buffer space available

2007-03-08 Thread Jorge Godoy
"Nik" <[EMAIL PROTECTED]> writes:

> Now I wonder why is it that I started getting WSANOBUFS errors from
> Windows Socket System if this same setup with the same machines has
> been operational for over a year? The query size did not change and
> the number of connections to the server did not change. Is there any
> good reason for this error to start showing up?

Did your Windows change?  (Updates, service packs, etc.)



-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] Query timing

2007-03-04 Thread Jorge Godoy
Naz Gassiep <[EMAIL PROTECTED]> writes:

> Using EXPLAIN ANALYZE I can get the execution time of a query. Is there a
> command I can use to get the execution time without the planning information?
> I just need to time lots of queries that have complex plans and it'd be easier
> if I didn't have pages and pages of planning info between tries. The queries
> themselves are one line each, but refer to views, which make the plans pretty
> complex. Being able to see several times in one terminal window would be
> easier.

Write the queries in a file, run them with psql -f and grep for the desired
lines only.
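
For instance (yourdb and queries.sql are just placeholder names; with EXPLAIN
ANALYZE the timing is on the "Total runtime" line at the end of each plan):

    psql -d yourdb -f queries.sql | grep 'Total runtime'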

-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] date format

2007-03-04 Thread Jorge Godoy
Garry Saddington <[EMAIL PROTECTED]> writes:

> I have 'datestyle ISO,DMY' set in postgresql.conf but the date output is 
> still 
> rendered in the format (y,m,d) . How can I change this behaviour? 

ISO means y-m-d...  The ISO 8601 style is year-month-day by definition, so the
output you are seeing is exactly what 'ISO' asks for; the DMY part only
controls how ambiguous input dates are interpreted.
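
If what you're after is day-month-year output, switch to a non-ISO output style
or format the date explicitly -- roughly:

SET datestyle TO 'SQL, DMY';
SELECT current_date;                         -- now prints as DD/MM/YYYY
SELECT to_char(current_date, 'DD/MM/YYYY');  -- explicit, independent of datestyle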

-- 
Jorge Godoy  <[EMAIL PROTECTED]>



Re: [GENERAL] rpm containing pgdump

2007-03-01 Thread Jorge Godoy
"Frank Church" <[EMAIL PROTECTED]> writes:

> Which of the postgresql rpms contains pgdump. I have downloaded
> postgresql-server and postgresql-libs and pgdump is not included.
> Which rpm contains it?

On my SuSE box it is the 'postgresql' RPM.  (Note the binary is named pg_dump,
with an underscore.)
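
You can also ask rpm directly -- for example (the package file pattern below is
just a placeholder for whatever you downloaded):

    rpm -qlp postgresql-*.rpm | grep pg_dump   # list files in a downloaded rpm
    rpm -qf `which pg_dump`                    # once installed: which package owns it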

-- 
Jorge Godoy  <[EMAIL PROTECTED]>


