Re: [GENERAL] TEXT field's length limit in PostgreSQL

2001-10-18 Thread Dr. Evil

> I know that my question may seem trivial, but unfortunately I haven't found
> any understandable info on this topic. It's about length limit of the TEXT
> field in PostgreSQL. I've heard that the TEXT fields in psql are limited to
> 4096 bytes but it's just unbelievable for me. So if I've got text of more
> than 4096 bytes size, for example article - do I really have to put it in
> the LOB field and use these terrific lo-functions to access it ?? Please, if
> anyone can help me - write to [EMAIL PROTECTED] .

Under PG 7.0 there was an 8k limit on the size of rows, so you had to
use LOBs for anything bigger than that, but that limit is gone in PG 7.1.
I'm not sure what the maximum size of TEXT is, but it's much, much bigger
than 4096 bytes.  Certainly you could store entire magazine articles
in it.

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[GENERAL] drop column

2001-10-18 Thread Peter Pilsl

Is there a trick to drop a column in a table? Or do I need to
recreate the table and drop the old one?

thnx,
peter
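[Archive note: PostgreSQL 7.x has no ALTER TABLE ... DROP COLUMN, so the usual answer was the recreate-and-rename workaround. A minimal sketch follows; table and column names are hypothetical, and indexes, constraints, and permissions must be recreated by hand.]

```sql
BEGIN;
-- copy everything except the unwanted column
CREATE TABLE mytable_new AS SELECT keep_col1, keep_col2 FROM mytable;
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
COMMIT;
```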

-- 
mag. peter pilsl

phone: +43 676 3574035
fax  : +43 676 3546512
email: [EMAIL PROTECTED]
sms  : [EMAIL PROTECTED]

pgp-key available




[GENERAL] creating a custom server

2001-10-18 Thread Victor Ivanov

Hi!

First, sorry if this is off topic.  Second, WARNING: broken english
ahead :)

Is it possible to create a 'custom' server which can accept normal
connections from normal clients, but use a different language (or just
'proxy' the queries to the real server)?  A friend of mine told me this
is a feature of some commercial SQL server.  What I actually want to do
is to add another layer between the client and the server.

For example, a client connects and issues a query like 'show sessions'.
My server checks if the client has permission (according to a table in
postgres) and then 'translates' the query to 'select * from sessions' or
something else.  The result is then sent back to the client (possibly
modified).

Another way to do this is to get the communication library and use
it in my application...

Any ideas/hints/etc will be appreciated.

-- 
Players win and winners play
Have a lucky day




Re: [GENERAL] yet another db corruption issue (more info)

2001-10-18 Thread Jon Lapham


Indeed, the problem has been tracked down to hardware.  Bad RAM.  I was (with
the kind help of some list members) able to rebuild the database without any
data loss.  

Basically, what I had to do was update 'pg_class->relnatts' from 19 to 18. 
This single change allowed me to actually work with the fraction table, but
without the ability to access the missing 'editdate' attribute.

Then I tried to add a new column to 'fraction' using "ALTER TABLE fraction
ADD editdate ...", but this barfed, saying that the "editdate" column already exists.

So I then ran "ALTER TABLE fraction RENAME editdate TO editdate2". This
command succeeded, but I still couldn't see the missing attribute in \d
fraction.  So I re-updated pg_class and set relnatts back to 19.  Now I could
see and access the 'editdate2' attribute!  A simple rename command put
everything back together again.
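[Archive note: the repair sequence described above, reconstructed as SQL. This is a sketch only; it is direct catalog surgery, and should not be attempted on a healthy database.]

```sql
UPDATE pg_class SET relnatts = 18 WHERE relname = 'fraction';
-- ALTER TABLE fraction ADD editdate ... failed ("column already exists"),
-- but a rename of the hidden column worked:
ALTER TABLE fraction RENAME editdate TO editdate2;
UPDATE pg_class SET relnatts = 19 WHERE relname = 'fraction';
ALTER TABLE fraction RENAME editdate2 TO editdate;
```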

Now I can sleep.


-- 
-**-*-*---*-*---*-*---*-*-*-*---*-*---*-*-*-*-*---
 Jon Lapham 
 Extracta Moléculas Naturais, Rio de Janeiro, Brasil
 email: [EMAIL PROTECTED]  web: http://www.extracta.com.br/
***-*--**---***---







[GENERAL] Backup Postgre Windows to Linux

2001-10-18 Thread Flávio Brito

Hi ALL

I imported a database from Access 2000 into PostgreSQL for Windows, but now I can't
back up my database to export it to PostgreSQL for Linux.
I tried to use pg_dumpall > db.sql, but it froze on Windows 2000 for more than an
hour. My database is 6 MB.

Can I back up my database directory on Windows (using ZIP) and put it on Linux?

Thanks

Flávio Brito
Rio de Janeiro
Brasil




Re: [GENERAL] COMMIT after an ERROR?

2001-10-18 Thread J C Lawrence

On Sat, 13 Oct 2001 07:13:25 -0700 (PDT) 
Stephan Szabo <[EMAIL PROTECTED]> wrote:

> Well, that's difficult.  The spec is very hard to understand about
> that.  I believe we determined that our behavior was very very
> close to compliant, but that it does not match the standard
> reading of that section.  

My reading of the spec came out as:

  The spec in regard to exact behaviour at COMMIT in the presence of
  errors is, in spec language, "undefined".

  The current PostgreSQL behaviour doesn't violate the spec, but is
  also one of many possible mutually contradictory behaviours (such
  as not rolling back on error) that don't appear to violate the
  spec.

Ah well, it's not like inserting explicit ROLLBACKs is that hard.
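[Archive note: a minimal sketch of the explicit-ROLLBACK pattern, with a hypothetical table name. Once any statement in a transaction fails, PostgreSQL aborts the whole transaction, and an explicit ROLLBACK lets you start a fresh one.]

```sql
BEGIN;
INSERT INTO t VALUES (1);   -- suppose this statement errors
ROLLBACK;                   -- abandon the aborted transaction
BEGIN;                      -- and start over
INSERT INTO t VALUES (2);
COMMIT;
```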

-- 
J C Lawrence
-(*)Satan, oscillate my metallic sonatas.
[EMAIL PROTECTED]   He lived as a devil, eh?
http://www.kanga.nu/~claw/  Evil is a name of a foeman, as I live.




Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit

2001-10-18 Thread Jochem van Dieten

Bruce Momjian wrote:

>>Bruce Momjian writes:
>>
>>
Break the SQL code that has been implemented for prior versions??
 Bummer ;((.

>>>Yes, but we don't follow the MySQL behavior, which we copied when we
>>>added LIMIT.  Seems we should agree with their implementation.
>>>
>>Isn't it much worse to not follow PostgreSQL behavior than to not follow
>>MySQL behavior?
>>
> 
> Another idea:  because our historical Limit #,# differs from MySQL, one
> idea is to disable LIMIT #,# completely and instead print an error
> stating they have to use LIMIT # OFFSET #.  Although that would break
both MySQL and old PostgreSQL queries, it would not generate incorrect
> results.


I would say the relevant behaviour is neither the one that MySQL 
historically uses nor the one that PostgreSQL historically uses, but the 
one that is specified in the relevant standards. Since nobody has brought 
this up yet, I presume these standards leave the implementation of LIMIT 
open (I tried to google it myself, but couldn't find anything definite).
Is that correct, or do any of the SQL standards specify a behaviour?

Jochem





Re: [GENERAL] To Postgres Devs : Wouldn't changing the select limit

2001-10-18 Thread Alexey Borzov

Greetings, Bruce!

At 18.10.2001, 02:34, you wrote:

>> Isn't it much worse to not follow PostgreSQL behavior than to not follow
>> MySQL behavior?

BM> Another idea:  because our historical Limit #,# differs from MySQL, one
BM> idea is to disable LIMIT #,# completely and instead print an error
BM> stating they have to use LIMIT # OFFSET #.  Although that would break
BM> both MySQL and old PostgreSQL queries, it would not generate incorrect
BM> results.

It doesn't seem like a good idea. The best solution, IMHO, would
be to introduce an optional "MySQL-compatibility mode" for LIMIT in 7.2.
Later, LIMIT #,# can be marked deprecated in favour of
LIMIT # OFFSET #.
But please, don't *break* things; while this change may make life
easier for some people migrating from MySQL, far more people would
be pissed off...
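[Archive note: for reference, the ambiguity under discussion, with a hypothetical table name.]

```sql
SELECT * FROM t LIMIT 20, 10;
-- MySQL reads this as: skip 20 rows, return 10.
-- Historical PostgreSQL reportedly read it the other way: return 20 rows, skip 10.

-- The unambiguous spelling:
SELECT * FROM t LIMIT 10 OFFSET 20;
```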

-- 
Yours, Alexey V. Borzov, Webmaster of RDW.ru






[GENERAL] Need for protocol description

2001-10-18 Thread Dinesh Parikh



Hi All
Hi All,
I want a protocol description for PostgreSQL 7.1: that is, how Postgres
listens for requests on its default port, 5432. If one wants to write a
program (in C, Java, etc.) that executes queries directly, by sending data
to the server socket on port 5432, how can this task be achieved? Can anybody
tell me where I can find this.

Thanks
Regards 
Dinesh Parikh


Re: [GENERAL] TEXT field's length limit in PostgreSQL

2001-10-18 Thread Brett W. McCoy

On Thu, 18 Oct 2001, Michał 'Samuel' Modestowicz wrote:

> I know that my question may seem trivial, but unfortunately I haven't found
> any understandable info on this topic. It's about length limit of the TEXT
> field in PostgreSQL. I've heard that the TEXT fields in psql are limited to
> 4096 bytes but it's just unbelievable for me. So if I've got text of more
> than 4096 bytes size, for example article - do I really have to put it in
> the LOB field and use these terrific lo-functions to access it ?? Please, if
> anyone can help me - write to [EMAIL PROTECTED] .

The limitation is gone in 7.1.  According to the documentation, the length
is unlimited.  Realistically, I think sizes of around 1 GB are the practical
limit.

-- Brett
  http://www.chapelperilous.net/

Even moderation ought not to be practiced to excess.





[GENERAL] newbie authentication/automated backup (pg_dumpall) questions

2001-10-18 Thread Alan

Hi everyone.

Just got postgres 7.1.3 (debian unstable) going after an upgrade from
7.0.x.  I have things *mostly* working now, with a few questions
regarding authentication.

What I'd like to have is the following two situations dealt with:

 - unsupervised backups using something like pg_dumpall that can run
from cron either as root or the postgres user (su -c "pg_dumpall...")
 - access to the database through web apps such as message boards or
similar using the Pg module from a webserver 

In 7.0 you could run pg_dumpall as the postgres user, so cron took care
of backups very nicely; from the webserver, running as a different
user (www-data), I used Pg::connectdb(...) and passed the postgresql
user/pass (the shell username/password, that is).  No one without 
postgres shell account access could access the database, which is fine by
me.  This all worked fine.

Now 7.1 is here and I'm lost :(  I've never done any real "user
management" using postgres other than setting a password in the shell
for the postgres user.

Currently my situation is this: 

/etc/postgres/pg_hba.conf

local all crypt
local all 127.0.0.1 255.0.0.0 ident sameuser

With this I can set up a cgi with the line:
Pg::connectdb("dbname=$database user=$dbuser password=$dbpass");

And properly connect via my webserver user (www-data) to postgres just
dandy.

However, what I can't do is automated backups :(  In fact, I can't seem
to run pg_dumpall at all!

-
postgres@master:~$ pg_dumpall 
--
-- pg_dumpall (7.1.3)  
--
\connect template1
DELETE FROM pg_shadow WHERE usesysid <> (SELECT datdba FROM pg_database
WHERE datname = 'template0');

Password: 
psql: Password authentication failed for user 'postgres'

DELETE FROM pg_group;

Password: [password]
Password: [password]

--
-- Database ufies
--
\connect template1 postgres
CREATE DATABASE "ufies" WITH TEMPLATE = template0 ENCODING =
'SQL_ASCII';
\connect ufies postgres
Connection to database 'ufies' failed.
fe_sendauth: no password supplied

pg_dump failed on ufies, exiting
postgres@master:~$ 
-

Note that above I only put in the password the second and third time, not
the first time (ufies is the name of the main db BTW).

It was suggested to me on IRC that passing -h 127.0.0.1 would solve my
problems, but I get:

-
postgres@master:~$ pg_dumpall -h 127.0.0.1
--
-- pg_dumpall (7.1.3)  -h 127.0.0.1 
--
\connect template1
DELETE FROM pg_shadow WHERE usesysid <> (SELECT datdba FROM pg_database
WHERE datname = 'template0');

psql: Peer authentication failed for user 'postgres'

DELETE FROM pg_group;

psql: Peer authentication failed for user 'postgres'
psql: Peer authentication failed for user 'postgres'
postgres@master:~$ 
-


I've looked through the manuals and list archives, but I couldn't find
anything similar to this :(  If anyone has any advice (even which FM to
read :) I'd certainly appreciate it!

TIA

Alan


-- 
Arcterex <[EMAIL PROTECTED]>   -==-   http://arcterex.net
"I used to herd dairy cows. Now I herd lusers. Apart from the isolation, I
think I preferred the cows. They were better conversation, easier to milk, and
if they annoyed me enough, I could shoot them and eat them." -Rodger Donaldson




Re: [GENERAL] [Off-topic, Meta-question] Problems with psql-general

2001-10-18 Thread Marc G. Fournier

On Thu, 18 Oct 2001, David Pirotte wrote:

> Allan Engelhardt wrote:
> >
> > Sorry for the offtopic nature, but:
> >
> > 1. Am I the only one who is having problems with the listserver not
> > honouring the NOMAIL option?  I prefer to read the list on news and am a
> > little annoyed with the volume of e-mails I'm suddenly getting.  It used
> > to work fine...
> >
> > 2. The list (comp.databases.postgresql.general) seems to have been removed
> > from several downstream NNTP servers.  Anybody know why, and, again, am I
> > the only one with this problem?
> >
> > Comments by email <[EMAIL PROTECTED]> will be appreciated.
> >
> > Allan.
>
> same problems
>
> i posted both to this list before and to
>
>    subscribe-nomail [EMAIL PROTECTED]
>   Illegal list "[EMAIL PROTECTED]".
>   No valid commands processed.

get rid of the @postgresql.org part ...






[GENERAL] Please help

2001-10-18 Thread Mihai Gheorghiu

Last night postmaster crashed while running a lengthy query.
The message I found was that a log file grew too big.
Postmaster was down. I tried to restart it, but it would not start. I restarted
the system, but it would not start either.
Manual start with a log file:
FATAL 1:  configuration file `postgresql.conf' has wrong permissions
I checked, and all files in /var/lib/pgsql/data have 777 !!!
I changed the permissions of postgresql.conf to 600, and the new message I get
is:
DEBUG:  database system was shut down at 2001-10-17 18:56:30 EDT
DEBUG:  open(logfile 19 seg 73) failed: No such file or directory
DEBUG:  Invalid primary checkPoint record
DEBUG:  open(logfile 19 seg 73) failed: No such file or directory
DEBUG:  Invalid secondary checkPoint record
FATAL 2:  Unable to locate a valid CheckPoint record
/usr/bin/postmaster: Startup proc 1221 exited with status 512 - abort
What can I do?
Thank you all.





Re: [GENERAL] Getting OID after Insert

2001-10-18 Thread Martijn van Oosterhout

On Thu, Oct 18, 2001 at 10:29:09AM -0400, Bruce Cota wrote:
> Thank you!
> 
> Hopefully that getlastoid refers to the last oid
> created in this session or transaction?  I can't find it
> mentioned in the docs anywhere.

last insert, iirc.

> Nextval and curval aren't specific to the transaction or
> session, right?  so that would be problematic  in a
> multi-user environment.

Sure they are. currval returns the last value generated by nextval in the
current session. This is a production database system, and not having it
multiuser-safe would be stupid, no?

So you can do stuff like:

insert into invoice (customer,date,etc...);
insert into item (invoice_id, ...) values ( currval('invoice_id_seq'), ... );

etc...

multiuser safe. very cool. Remember, OIDs are not really guaranteed to be
unique...
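[Archive note: a fuller, self-contained sketch of the same currval pattern; all names are hypothetical.]

```sql
CREATE SEQUENCE invoice_id_seq;
CREATE TABLE invoice (id int DEFAULT nextval('invoice_id_seq'), customer text);
CREATE TABLE item (invoice_id int, descr text);

INSERT INTO invoice (customer) VALUES ('ACME');
-- currval() sees the value nextval() just generated in *this* session,
-- regardless of concurrent inserts by other sessions:
INSERT INTO item (invoice_id, descr) VALUES (currval('invoice_id_seq'), 'widget');
```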

-- 
Martijn van Oosterhout <[EMAIL PROTECTED]>
http://svana.org/kleptog/
> Magnetism, electricity and motion are like a three-for-two special offer:
> if you have two of them, the third one comes free.




[GENERAL] Accessing PostgreSQL through Apache and PHP4 on Linux

2001-10-18 Thread Holt, Jack C.

I am running PostgreSQL 7.1.3, RedHat 7.1 (kernel 2.4.2-2), and PHP 4.0.4pl1.
When I try to run the PHP page found below, I get an error that says:

Fatal error: Call to undefined function: pg_connect()
in /var/www/html/psolver.php on line 10

Additionally, even though pgsql.so IS in /usr/lib/php4 (and php.ini tells PHP
to look there for extensions), I get the following error on the call to dl():

Warning: Unable to load dynamic library '/usr/lib/php4/pgsql.so' - libpq.so:
cannot load shared object file: No such file or directory in
/var/www/html/psolver.php on line 7

Code snippet:

---

<?php
  dl("pgsql.so");

  // connect to database
  $conn = pg_connect("","","","psolver");
  if (!$conn) {
    print("Couldn't connect to psolver");
    exit;
  }
?>








Re: [GENERAL] TEXT field's length limit in PostgreSQL

2001-10-18 Thread Holt, Jack C.

Take a look at www.postgresql.org/docs/index.php?datatype-character.html. Be
sure to take a look at the comments at the bottom of the page. The
documentation agrees with you, but Peter Eisentraut gives a different take.

 -Original Message-
From:   Brett W. McCoy [mailto:[EMAIL PROTECTED]] 
Sent:   Thursday, October 18, 2001 6:50 AM
To: Michal 'Samuel' Modestowicz
Cc: [EMAIL PROTECTED]
Subject:Re: [GENERAL] TEXT field's length limit in PostgreSQL

On Thu, 18 Oct 2001, Michał 'Samuel' Modestowicz wrote:

> I know that my question may seem trivial, but unfortunately I haven't found
> any understandable info on this topic. It's about length limit of the TEXT
> field in PostgreSQL. I've heard that the TEXT fields in psql are limited to
> 4096 bytes but it's just unbelievable for me. So if I've got text of more
> than 4096 bytes size, for example article - do I really have to put it in
> the LOB field and use these terrific lo-functions to access it ?? Please, if
> anyone can help me - write to [EMAIL PROTECTED] .

The limitation is gone in 7.1.  According to the documentation, the length
is unlimited.  Realistically, I think sizes of around 1 GB are the practical
limit.

-- Brett
  http://www.chapelperilous.net/

Even moderation ought not to be practiced to excess.





Re: [GENERAL] Getting OID after Insert

2001-10-18 Thread Tom Lane

Bruce Cota <[EMAIL PROTECTED]> writes:
> Is there a way, in SQL, to access the oid of the row created
> by an immediately preceding insert statement?

If you are writing a psql script, recent psql versions maintain
a LASTOID variable:

regression=# insert into int4_tbl default values;
INSERT 3357467 1
regression=# select :LASTOID;
 ?column?
--
  3357467
(1 row)

regression=#

regards, tom lane




Re: [GENERAL] FTI Queries and Explain (long)

2001-10-18 Thread Gordan Bobic

[Broken SQL instead of performance issue fixed]

It would appear that when I define the index on the FTI table (string and 
oid) to be unique (which makes sense, since there is little point in having 
duplicate rows in this case), a lot of inserts fail where they shouldn't. I 
am guessing that if the insert into the look-up FTI table fails, the insert 
into the master table fails as well.

I can understand that this might be useful for matches where the number of 
occurrences is important, but in this particular application, that is not the 
case. Before I go and look into modifying the fti function code for my 
specific purpose, it would be nice to have confirmation of this behaviour - 
otherwise it may take me a while to find what I'm looking for. ;-)

Another question - there are (as often happens) multiple ways of doing what I 
want in SQL, but I am not sure which is the fastest and most efficient way of 
doing it (in theory at least).

I want to do a multi-criterion search on the same field (the FTI indexed 
one), and have separate AND, NOT and OR search terms.
AND = "terms that must occur in the text"
OR = "terms of which at least one has to occur in the text"
NOT = "terms which must not occur in the text"

Initially, before FTI, I used a big ILIKE query which worked reasonably well. 

I should point out that my test bed machine for this is a Pentium 100 MHz 
with 128 MB of RAM and an IDE disk. My database is expected to be around 
50K-100K records, and about 100-200 MB on disk in PostgreSQL files (that's 
what the disk consumption of the vacuumed database was before FTI).

Using the same example data set as before, the query was something like:

SELECT  *
FROM    Jobs
WHERE   (
        Description ILIKE '%AndTerm1%'  AND
        Description ILIKE '%AndTerm2%'  AND
        Description ILIKE '%AndTerm3%'
        ...
        )
        AND
        (
        Description ILIKE '%OrTerm1%'   OR
        Description ILIKE '%OrTerm2%'   OR
        Description ILIKE '%OrTerm3%'
        ...
        )
        AND
        (
        Description NOT ILIKE '%NotTerm1%'   AND
        Description NOT ILIKE '%NotTerm2%'   AND
        Description NOT ILIKE '%NotTerm3%'
        ...
        )

This usually returned the required data within 30 seconds or so after, 
obviously, doing a sequential search through the database due to the 
non-anchored ILIKE match.

After implementing FTI, the insertion speed has gone through the floor (as 
expected), but the select speed doesn't seem to be that much greater, even 
when using the index (string, oid) on the FTI look-up table. On simple 
queries that only require one or two terms there is a big speed improvement, 
but on queries with three or more terms, the improvement is not that great.

The queries typically return 10 - 200 rows (give or take a bit, depending on 
the specific query).

The queries I am using at the moment to replace the above ILIKE solution are 
in the form

SELECT  Jobs.*
FROM    Jobs,
        Jobs_Description_FTI
WHERE   Jobs_Description_FTI.string = $And/Or/NotTerms[$i]  AND
        Jobs_Description_FTI.id = Jobs.oid

The AND queries are INTERSECTed together, the OR queries are UNIONed together, 
both results are UNIONed, and then the NOT queries are EXCEPTed.
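[Archive note: that combination, spelled out as one query following the description above; the search terms are hypothetical.]

```sql
((SELECT id FROM Jobs_Description_FTI WHERE string = 'andterm1'
  INTERSECT
  SELECT id FROM Jobs_Description_FTI WHERE string = 'andterm2')
 UNION
 (SELECT id FROM Jobs_Description_FTI WHERE string = 'orterm1'
  UNION
  SELECT id FROM Jobs_Description_FTI WHERE string = 'orterm2'))
EXCEPT
SELECT id FROM Jobs_Description_FTI WHERE string = 'notterm1';
```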

In some cases, this has yielded a significant improvement in performance, as 
Tom suggested (thanks for that, it was much appreciated). Sometimes, however, 
things go the other way.

To cut a long story short, I seem to have tracked the problem down to a 
certain situation.

If there are, say, 10K records in the master table, there are about 4M records 
in the lookup table. This in itself isn't an issue. Queries that return small 
numbers of records, e.g.

SELECT  count(*)
FROM    Jobs_Description_FTI
WHERE   string = 'linux'

(returns ~300 rows)

happen more or less instantaneously.

However, a very similar query such as:

SELECT  count(*)
FROM    Jobs_Description_FTI
WHERE   string = 'nt'

(returns ~30K rows)

takes around two-three minutes.

I tried doing a

SELECT  count(*)
FROM    Jobs
WHERE   Description ILIKE '%nt%'

(returns 11K records out of 12K)

and that only takes about 10 seconds or so.

SELECT  count(*)
FROM    Jobs
WHERE   Description ILIKE '% nt %'

returns ~800 records out of 12K, which is much more reasonable.

Ideally, that should be

SELECT  count(*)
FROM    Jobs
WHERE   Description ~* '.*[^a-z]nt[^a-z].*'

or something like that, which yields a similar number of records to the 
previous query, but is slower.

I am fully aware that this is fairly normal under the circumstances, but I 
need a way of defeating this performance issue. The only way I can see of 
doing that at the moment is to:

1) Modify the FTI function to split the text field only at non-alphanumeric 
characters, and only return whole words, rather than substrings of words.

2) Allow the insert into master table to s

Re: [GENERAL] Large Files?

2001-10-18 Thread Tom Lane

"Andrew Crouch" <[EMAIL PROTECTED]> writes:
> -rw---  1 postgres  dbadmin  122880 Oct 18 13:28 1269   <-- What is this file

pg_log, as you could find out with an inquiry against pg_class:

test71=# select relname from pg_class where relfilenode = 1269;
 relname
-
 pg_log
(1 row)

test71=#

No, you can't delete it or shorten it, except by dump/initdb/reload.

7.2 will have a better approach to pg_log maintenance ...

regards, tom lane




Re: [GENERAL] creating a custom server

2001-10-18 Thread Keary Suska

Off the top of my head, I don't recall any specific 'middleware'
implementations that do this exactly, but it wouldn't be entirely difficult
to make one of your own. If you are familiar with Perl or Python, making a
parallel processing network server is not rocket science, and you have
complete control over the desired protocol.

Keary Suska
Esoteritech, Inc.
"Leveraging Open Source for a better Internet"

> From: Victor Ivanov <[EMAIL PROTECTED]>
> Date: Thu, 18 Oct 2001 12:17:00 +0300
> To: [EMAIL PROTECTED]
> Subject: [GENERAL] creating a custom server
> 
> Hi!
> 
> First, sorry if this is off topic.  Second, WARNING: broken english
> ahead :)
> 
> Is it possible to create a 'custom' server which can accept normal
> connections from normal clients, but use different language (or just
> 'proxy' the queries to the real server)?  A friend of mine told me this
> is a feature of some commercial SQL server.  What I actually want to do
> is to add another layer between the client and the server.
> 
> For example, a client connects and issues a query like 'show sessions'.
> My server checks if the client has permission (according to a table in
> postgres) and then 'translates' the query to 'select * from sessions' or
> something else.  The result is then sent back to the client (possibly
> modified).
> 
> Another way to do this is to get the communication library and use
> it in my application...
> 
> Any ideas/hints/etc will be appreciated.
> 
> -- 
> Players win and winners play
> Have a lucky day
> 





Re: [GENERAL] Backup Postgre Windows to Linux

2001-10-18 Thread Joel Burton

On Thu, 18 Oct 2001, Flávio Brito wrote:

> Hi ALL
>
> I imported a database from Access 2000 into PostgreSQL for Windows, but now I can't
> back up my database to export it to PostgreSQL for Linux.
> I tried to use pg_dumpall > db.sql, but it froze on Windows 2000 for more than an
> hour. My database is 6 MB.
>
> Can I back up my database directory on Windows (using ZIP) and put it on Linux?

Might want to try pg_dump (rather than dumpall), perhaps even using the
-t option to pick just one table. This could let us find out if you can
dump *anything* or not.

-- 

Joel BURTON  |  [EMAIL PROTECTED]  |  joelburton.com  |  aim: wjoelburton
Independent Knowledge Management Consultant





[GENERAL] FTI - unique look-up records, and no sub-words?

2001-10-18 Thread Gordan Bobic

Hi.

After looking again at the other email I've sent earlier, I realized that it 
goes on for far too long, so I'll try to summarize my question more briefly.

1) How can FTI be made to NOT break up words into sub-strings? Most of those 
are likely to be useless in my particular application. In fact, this feature 
is why my FTI queries run slower on certain strings than sequential ILIKE 
scans.

1.1) How do I make FTI only separate the text string into words on 
non-alphanumeric characters (i.e. [!a-z|!A-Z|!0-9] - I think...). Is that 
what it already does?

2) How can I make inserts work when enforcing unique records in the FTI 
table? There is hardly any need for duplicates in the FTI table...

At the moment, the search for a particular string can take significantly 
longer with FTI than using the plain ILIKE. FTI also returns 30K matches in 
the FTI table for only 10K of records in the master table, which is clearly 
not very useful. Doing a DISTINCT on the FTI table with that string will 
yield 10K records, indicating that there are about 3 times more records in 
the FTI table which are just duplicates taking up space.

I am NOT prepared to capitulate and use something like MS SQL instead. How can I 
get this to work with PostgreSQL? Who maintains the FTI contrib?

Kind regards.

Gordan
