Re: [GENERAL] Recovery from another installation

2008-05-18 Thread Camilo Sperberg
Postgres 2.5??? Wow ... that's an old version... Consider an upgrade...

From what I know, it shouldn't be a problem, because it's basically the same
version on both machines... It would be another story if it were from 2.5 to
2.6 (in that case, you should use pg_dumpall) ... But I really don't know
whether everything will go fine with such an old version ...

Greetings ;) (and consider an upgrade :P)

On Sun, May 18, 2008 at 1:44 AM, dfx <[EMAIL PROTECTED]> wrote:

> Dear Sirs,
>
> My question is whether it is possible to recover a set of databases from the
> 'data' directory of a different machine.
>
> I have a machine with Postgres 2.5 and Windows 2000 that crashed (the OS does
> not start) but whose filesystem is OK, and another machine (with XP Pro and
> Postgres also 2.5). Can I copy the 'data' directory of the first machine
> (W2000) to the second machine? What do I have to do?
>
> Thank you.
>
> Domenico
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Mailed by:
UnReAl4U - unreal4u
ICQ #: 54472056
www: http://www.chw.net/


[GENERAL] pg_standby gets stuck on a smaller wal file

2008-05-18 Thread Vlad Kosilov
I've been having a problem with a pgsql-8.2.5 master/slave warm standby 
replication setup where the master node occasionally generates a WAL file 
smaller than the expected 16MB. pg_standby on the slave gets stuck on such 
short files, and replication halts from that moment on. We have to do 
pg_start_backup / rsync of data to the slave / pg_stop_backup and restart the 
slave in order to recover. The database I'm replicating is write-mostly.


This is the process on the slave that is waiting on the undersized log file:
postgres 14277 11857  0 03:34 ?        00:00:00 sh -c /usr/local/pgsql/bin/pg_standby -d -k 255 -r 7 -s 10 -w 0 -t /usr/local/recovery_control/quit_recovery /usr/local/wal_archive 000103400020 pg_xlog/RECOVERYXLOG 2>> /usr/local/pgsql/data/standby.log

postgres 14278 14277  0 03:34 ?        00:00:00 /usr/local/pgsql/bin/pg_standby -d -k 255 -r 7 -s 10 -w 0 -t /usr/local/recovery_control/quit_recovery /usr/local/wal_archive 000103400020 pg_xlog/RECOVERYXLOG


Here is a sample listing of the wal_archive directory on the slave; note that 
the size of the expected WAL file 000103400020 is less than 16MB:


...
-rw------- 1 postgres postgres 16777216 May 17 03:19 000103400017
-rw------- 1 postgres postgres 16777216 May 17 03:21 000103400018
-rw------- 1 postgres postgres 16777216 May 17 03:22 000103400019
-rw------- 1 postgres postgres 16777216 May 17 03:24 00010340001A
-rw------- 1 postgres postgres 16777216 May 17 03:26 00010340001B
-rw------- 1 postgres postgres 16777216 May 17 03:27 00010340001C
-rw------- 1 postgres postgres 16777216 May 17 03:29 00010340001D
-rw------- 1 postgres postgres 16777216 May 17 03:30 00010340001E
-rw------- 1 postgres postgres 16777216 May 17 03:32 00010340001F

-rw------- 1 postgres postgres 13746176 May 17 03:34 000103400020

-rw------- 1 postgres postgres 16777216 May 17 03:35 000103400021
-rw------- 1 postgres postgres 16777216 May 17 03:37 000103400022
-rw------- 1 postgres postgres 16777216 May 17 03:38 000103400023
-rw------- 1 postgres postgres 16777216 May 17 03:40 000103400024
-rw------- 1 postgres postgres 16777216 May 17 03:41 000103400025
-rw------- 1 postgres postgres 16777216 May 17 03:43 000103400026
-rw------- 1 postgres postgres 16777216 May 17 03:45 000103400027

...

Skipping a bit further, I see there is at least one other instance where a 
WAL file is shorter than the normal 16MB:


-rw------- 1 postgres postgres 16777216 May 17 05:42 00010340006F
-rw------- 1 postgres postgres 16777216 May 17 05:44 000103400070
-rw------- 1 postgres postgres 16777216 May 17 05:46 000103400071
-rw------- 1 postgres postgres 16777216 May 17 05:47 000103400072

-rw------- 1 postgres postgres    16384 May 17 05:50 000103400073

-rw------- 1 postgres postgres 16777216 May 17 05:51 000103400074
-rw------- 1 postgres postgres 16777216 May 17 05:52 000103400075
-rw------- 1 postgres postgres 16777216 May 17 05:54 000103400076


Why would a master node create a WAL file smaller than the normal segment 
size, and how can this be avoided? I need a reliable replication mechanism, 
even at the cost of a longer recovery on the standby.


master's postgresql.conf:
..
fsync = on
wal_sync_method = open_sync
wal_buffers = 128
checkpoint_segments = 64
archive_command = 'test ! -f /usr/local/wal_archive_local/%f && cp %p /usr/local/wal_archive_local/%f'


Archive files are then moved from the master to the standby every other minute:

rsync -aq --remove-sent-files /usr/local/wal_archive_local/ 
slave::wal_archive/


slave's recovery.conf:
restore_command = '/usr/local/pgsql/bin/pg_standby -d -k 255 -r 7 -s 10 -w 0 -t /usr/local/recovery_control/quit_recovery /usr/local/wal_archive %f %p 2>> /usr/local/pgsql/data/standby.log'

Both servers are identical Dell PE1950s with 4 SAS HDs on hardware RAID 1+0, 
running:
2.6.18-8.1.8.el5 #1 SMP Tue Jul 10 06:39:17 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux


Thanks much in advance
V.

--

Vladimir (Vlad) Kosilov
Senior Systems Administrator
Contigo Systems Inc.
604.683.3106 (phone)
604.648.9886 (fax)
[EMAIL PROTECTED]
www.contigo.com
 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 on-demand dictionary loading & using functions in tsearch2

2008-05-18 Thread Teodor Sigaev
>  * Considering the dictionary is loaded separately for each session, does
> this also imply that each running backend has a separate dictionary
> stored in memory?

Yes.


> As for downsides, I only really see two:
>  * Tracking updates of dictionaries - but it's reasonable to believe
> that new connections get opened more often than the dictionary gets
> updated. Also, this might easily be solved by stat()-ing the dictionary
> file before starting up a session, and only having the server reload it if
> there's a noticed change.
>
>  * Possibly complicated to implement?

Keeping the dictionary up to date is the most difficult part here. The 
configuration of a dictionary can be changed by an ALTER command, so the 
parent process (and all currently running backends) would need to get that 
information in order to reload the dictionary.



> As for my second question, is it possible to use functions in tsearch2?
> For example, writing my own stemmer in PL/pgSQL or in C as a postgres
> function.

Yes, of course, you can develop your own dictionary (or dictionaries) and 
parser. But only in C, because they are critical for performance.
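
(For reference, with the in-core text search in 8.3 a custom C dictionary 
would be hooked up with SQL roughly like this - a sketch only: the dictionary, 
template, and function names are placeholders, and the C functions themselves 
still have to be written and compiled into a shared library:)

-- register the C entry points (assumed to live in mydict.so)
CREATE FUNCTION my_dict_init(internal) RETURNS internal
    AS 'mydict', 'my_dict_init' LANGUAGE C;
CREATE FUNCTION my_dict_lexize(internal, internal, internal, internal)
    RETURNS internal
    AS 'mydict', 'my_dict_lexize' LANGUAGE C;

-- wrap them in a template, then create a dictionary based on it
CREATE TEXT SEARCH TEMPLATE my_template (
    INIT   = my_dict_init,
    LEXIZE = my_dict_lexize
);
CREATE TEXT SEARCH DICTIONARY my_dict (
    TEMPLATE = my_template
);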

--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql MVCC, Read Committed Isolation Level and taking "snapshot"

2008-05-18 Thread Ivan Sergio Borgonovo
On Fri, 16 May 2008 09:55:56 -0400
Andrew Sullivan <[EMAIL PROTECTED]> wrote:

> On Fri, May 16, 2008 at 09:06:11AM +0200, Ivan Sergio Borgonovo
> wrote:
> > Is
> > BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE
> > what I'm looking for?

> Yes.

Perfect, thanks.

What is the effect of having nested functions all declared
SERIALIZABLE?

What if just the outermost is declared SERIALIZABLE?

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgresql service stops every few minutes on Vista Biz

2008-05-18 Thread alisazegar

Hi All,

I'm having difficulties with Postgres lately. The Postgres service (for pgsql
8.2) kept stopping every few minutes.
Then I read a note somewhere saying that there is a bug in 8.2 which causes
the problem.
I switched to 8.3 and have the same problem.

What I have found so far is that the last event before stopping is
LOG: loaded library "$libdir/plugins/plugin_debugger.dll", which I
see in the Windows application event log.


Any hint or idea to solve this problem is appreciated.

Regards,

Ali
-- 
View this message in context: 
http://www.nabble.com/Postgresql-service-stops-every-few-minutes-on-Vista-Biz-tp17300359p17300359.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql service stops every few minutes on Vista Biz

2008-05-18 Thread Craig Ringer

alisazegar wrote:

> Hi All,
>
> I'm having difficulties with Postgres lately. The Postgres service (for pgsql
> 8.2) kept stopping every few minutes.
> Then I read a note somewhere saying that there is a bug in 8.2 which causes
> the problem.
> I switched to 8.3 and have the same problem.

It's running happily on my Vista Business laptop. Well, as happily as 
anything runs on Windows ;-)

> What I have found so far is that the last event before stopping is
> LOG: loaded library "$libdir/plugins/plugin_debugger.dll", which I
> see in the Windows application event log.

That's normal and appears in my logs as well.

Try having a look at the logs in the database directory. By default, 
that'll be:


%PROGRAMFILES%\PostgreSQL\8.3\data\pg_log

If you don't see anything useful, consider increasing the log level by 
editing postgresql.conf and changing log_min_messages from:


  #log_min_messages = notice

to

  log_min_messages = debug1

Also, check to see if all postgresql processes are terminating or 
whether it's just the master.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] updating a row referenced by a foreign key

2008-05-18 Thread Michael P. Soulier

Hi,

I have to update a row of data that is referenced via a foreign key 
constraint from another table.


I cannot change the id of the referenced record because it is referenced 
as a foreign key from another table.


Is there a simple way to update the row, and cascade the change to any 
and all rows referencing this row as a foreign key? I searched on 
cascade in the postgres docs and while I found references to deletion, I 
didn't find one for updates.


Thanks,
Mike
--
Michael P. Soulier <[EMAIL PROTECTED]>, 613-592-2122 x2522
"Any intelligent fool can make things bigger and more complex... It
takes a touch of genius - and a lot of courage to move in the opposite
direction." --Albert Einstein

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] updating a row referenced by a foreign key

2008-05-18 Thread Craig Ringer

Michael P. Soulier wrote:

> Is there a simple way to update the row, and cascade the change to any
> and all rows referencing this row as a foreign key? I searched on
> cascade in the postgres docs and while I found references to deletion, I
> didn't find one for updates.

http://www.postgresql.org/docs/current/static/ddl-constraints.html#DDL-CONSTRAINTS-FK

"Analogous to ON DELETE there is also ON UPDATE which is invoked when a 
referenced column is changed (updated). The possible actions are the same."
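
For example (the table and column names here are made up for illustration):

CREATE TABLE parent (
    id integer PRIMARY KEY
);

CREATE TABLE child (
    parent_id integer REFERENCES parent (id)
        ON UPDATE CASCADE
        ON DELETE CASCADE
);

-- changing the referenced key now propagates to child.parent_id
UPDATE parent SET id = 2 WHERE id = 1;

For an existing constraint you would drop it and re-add it with ALTER TABLE 
... DROP CONSTRAINT / ADD FOREIGN KEY ... ON UPDATE CASCADE.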


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 on-demand dictionary loading & using functions in tsearch2

2008-05-18 Thread Teodor Sigaev



> Hmm, good point; I presume "accept the fact that settings change won't
> propagate to other backends until reconnect" would not be acceptable
> behavior, even if documented along with the relevant configuration option?

I suppose so. That was one of the reasons to move tsearch into core, and it 
would be too regrettable to lose that feature again.


>> As for my second question, is it possible to use functions in
>> tsearch2? For example, writing my own stemmer in PL/pgSQL or in C as
>> a postgres function.
>
> I've had something different in mind. Considering there are already
> facilities to use functions, be it PL/pgSQL, PL/Python or C, why not
> just use those with the condition that the function must accept
> some-arguments and return some-result? Or would using this, even while
> using C as the language used for the actual parser, slow things down too?

The API to the dictionary and parser intentionally uses complex (and nested) 
C structures to reduce overhead. While parsing text, postgres makes two calls 
to the parser per word (one call where the parser returns the word, a second 
where it returns the word delimiter - a space is a lexeme too, although it's 
not subject to indexing) and one call to the dictionary per word. So, if your 
language can work with C structures then you can use that language with 
tsearch, with a greater or lesser performance penalty. PL/pgSQL doesn't have 
this capability.


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Link tables from different databases

2008-05-18 Thread Shane Ambler

Reuben D. Budiardja wrote:

> Hello,
> Suppose I have two tables db1.table1 and db2.table2. Is there any way I can
> actually have db2.table2 be a link to db1.table1? What I mean is similar to a
> symlink in a Unix filesystem (db2.table2 is a symlink to db1.table1).
>
> I want it so that every operation on db1.table1 (i.e. INSERT, UPDATE, DELETE)
> also happens to db2.table2, but similarly I want it so that an application can
> do those operations on db2.table2 while it actually happens on db1.table1
> behind the scenes.
>
> Is there anything like this in postgresql? I am using version 7.4 by the way.
>
> Thanks in advance for any help.
> RDB


You will get people saying that this is bad design and they will be 
right. You should think about your design before going in this direction.


Does this same data belong to both db's? Can the two db's be merged into 
one, using different schemas to separate the data instead of different db's?



There isn't a simple way (like a simple symlink) but dblink in the 
contrib folder may help.


To automate the process you will need to add triggers to pass along all 
the changes but I think you will find that being able to update the 
table in both db's will cause you some headaches (think endless loops).


You may want to have one db with the table and use functions to access 
the data from the other db.
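
As a rough illustration of the dblink approach (the connection string, table, 
and columns below are placeholders, and dblink must be installed from contrib 
first):

-- read db1.table1 from a session connected to db2
SELECT *
  FROM dblink('dbname=db1 host=localhost',
              'SELECT id, val FROM table1')
       AS t(id integer, val text);

-- writes can be pushed through dblink_exec
SELECT dblink_exec('dbname=db1 host=localhost',
                   'INSERT INTO table1 (id, val) VALUES (1, ''hello'')');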



--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 on-demand dictionary loading & using functions in tsearch2

2008-05-18 Thread iSteve

Teodor Sigaev wrote:

>> As for downsides, I only really see two:
>>  * Tracking updates of dictionaries - but it's reasonable to believe
>> that new connections get opened more often than the dictionary gets
>> updated. Also, this might easily be solved by stat()-ing the
>> dictionary file before starting up a session, and only having the server
>> reload it if there's a noticed change.
>>
>>  * Possibly complicated to implement?
>
> Keeping the dictionary up to date is the most difficult part here. The
> configuration of a dictionary can be changed by an ALTER command, so the
> parent process (and all currently running backends) would need to get that
> information in order to reload the dictionary.

Hmm, good point; I presume "accept the fact that settings change won't 
propagate to other backends until reconnect" would not be acceptable 
behavior, even if documented along with the relevant configuration option?


>> As for my second question, is it possible to use functions in
>> tsearch2? For example, writing my own stemmer in PL/pgSQL or in C as a
>> postgres function.
>
> Yes, of course, you can develop your own dictionary (or dictionaries) and
> parser. But only in C, because they are critical for performance.

I've had something different in mind. Considering there are already 
facilities to use functions, be it PL/pgSQL, PL/Python or C, why not 
just use those with the condition that the function must accept 
some-arguments and return some-result? Or would using this, even while 
using C as the language used for the actual parser, slow things down too?


Best regards,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql service stops every few minutes on Vista Biz

2008-05-18 Thread Tom Lane
alisazegar <[EMAIL PROTECTED]> writes:
> I'm having difficulties with Postgres lately. Postgres service for (pgsql
> 8.2 )kept stopping every few minutes.
> Then I read a note somewhere saying that there is a bug in 8.2 which causing
> the problem.
> I switched to 8.3 and have the same problem. 

> What I found so far is that the last event before stopping is  "loaded
> library LOG: loaded library "$libdir/plugins/plugin_debugger.dll" which I
> see it in windows logs (events) for applications.

Does the problem go away if you remove that from the preload_libraries
setting?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 on-demand dictionary loading & using functions in tsearch2

2008-05-18 Thread Tom Lane
Teodor Sigaev <[EMAIL PROTECTED]> writes:
>> Hmm, good point; I presume "accept the fact that settings change won't 
>> propagate to other backends until reconnect" would not be acceptable 
>> behavior, even if documented along with the relevant configuration option?

> I suppose so. That was one of the reasons to move tsearch into core, and it
> would be too regrettable to lose that feature again.

The whole idea is pretty much a nonstarter anyway, because the only
place to "preload" dictionaries would be in the postmaster process.
That will never fly for several reasons:

1. The postmaster doesn't have database access and so couldn't read
the dictionary definition.

2. If it did have database access, it wouldn't know which database
of the cluster to look in.

3. The point of the postmaster process is to be so simple as to be
nearly crashproof.  We would never accept a patch that made it execute
dictionary-loading code, because of the likely impact on system
reliability.  (This is in fact the reason behind point #1.)

4. The approach would never work anyway on Windows, since that platform
does fork/exec not just fork.


What I think *is* worth doing is spending some time on making dictionary
loading go faster.  Has any focused profiling and optimization effort
been spent on that code at all?  One idea is to precompile the
text-based config files into some format designed to load fast.
(But it'd be premature to adopt that idea without some profiling
results to prove where the bottleneck is.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgresql MVCC, Read Committed Isolation Level and taking "snapshot"

2008-05-18 Thread Douglas McNaught
On Sun, May 18, 2008 at 8:04 AM, Ivan Sergio Borgonovo
<[EMAIL PROTECTED]> wrote:

> What is the effect of having nested functions all declared
> SERIALIZABLE?
>
> What if just the outermost is declared SERIALIZABLE?

SERIALIZABLE applies to the entire transaction, not to individual
function calls.
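
In other words (a minimal sketch - outer_fn stands in for whatever your 
outermost function is):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- must precede any query
SELECT outer_fn();  -- this call, and every function it calls in turn,
                    -- runs under the one serializable snapshot
COMMIT;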

-Doug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tsearch2 on-demand dictionary loading & using functions in tsearch2

2008-05-18 Thread Craig Ringer

Tom Lane wrote:

> What I think *is* worth doing is spending some time on making dictionary
> loading go faster.


This is probably a stupid question, but: with PostgreSQL's use of shared 
memory, is it possible to load dictionaries into a small reserved shm 
area when the first backend starts, then use the preloaded copy in 
subsequent backends?


That way the postmaster doesn't have to do any risky work.

Anything that reduces backend startup costs and per-backend unshared 
memory would have to be a good thing.


I've found it useful in the past to share resources with an mmap()ped 
file, too, especially if I want write protection from some or all 
processes. If the postmaster forked a process to generate the mmap()able 
compiled dictionary files on startup then it'd be pretty safe from any 
misbehaviour of the dictionary compiling process.


Then again, I can't say I've personally noticed the cost of loading 
tsearch2 dictionaries.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Link tables from different databases

2008-05-18 Thread Reece Hart
> I want it so that every operation on db1.table1 (ie. INSERT, UPDATE, DELETE) 
> also happens to db2.table2, but similarly I want it that application can do 
> those operations on db2.table2 but actually it does it on db1.table1 behind 
> the scene.

If the tables could be in the same database but in different schemas,
you should be able to arrange a view in schema2 (called "table2") with
rules that operate on table1 in schema1. See the manual for a discussion of
rules and views.
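
Very roughly, something like this (assuming a hypothetical layout of 
schema1.table1(id integer primary key, val text) - adjust the column lists to 
your table):

CREATE VIEW schema2.table2 AS
    SELECT id, val FROM schema1.table1;

CREATE RULE table2_ins AS ON INSERT TO schema2.table2
    DO INSTEAD INSERT INTO schema1.table1 (id, val) VALUES (NEW.id, NEW.val);

CREATE RULE table2_upd AS ON UPDATE TO schema2.table2
    DO INSTEAD UPDATE schema1.table1 SET val = NEW.val WHERE id = OLD.id;

CREATE RULE table2_del AS ON DELETE TO schema2.table2
    DO INSTEAD DELETE FROM schema1.table1 WHERE id = OLD.id;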

If you really want these tables in separate *databases* (same machine or
not), then you probably want dblink, a stiff drink, and a long-hard
think about the prudence of such a design.

-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] triggers: dynamic references to fields in NEW and OLD?

2008-05-18 Thread Klint Gore

[see below]

Kerri Reno wrote:

Vance,

I missed your earlier post, so I may be misunderstanding the 
situation, but I think you could do this more easily in plpython, 
because TD['new'] and TD['old'] are dictionaries, and you can traverse 
the dictionaries like this:


for k, v in TD['new'].items():
    if tblfld == k:
        plpy.notice('%s' % v)

This probably looks like gibberish if you're not used to python, but 
if you'd like more help, email me back (with your original post) and 
I'll get back to you next week.


Kerri

On 5/15/08, *Vance Maverick* <[EMAIL PROTECTED]> wrote:


Thanks!  Your solution clearly works, but it requires the shared function to
enumerate all possible column names.  In my real case, there are 8-10
distinct names, so that's a bit ugly, but it works.

-Original Message-
If you just need which table triggered the function then TG_TABLE_NAME may
be simpler than passing parameters.

Something like this will probably work for you (replace the raise notice
with whatever you have to do):

create or replace function atest() returns trigger as $$
declare
   avalue int;
   tblfld text;
begin
   tblfld := tg_argv[0];
   if tblfld = 'aa' then
      avalue := new.aa;
   else
      if tblfld = 'bb' then
         avalue := new.bb;
      end if;
   end if;
   raise notice '%', avalue;
   return new;
end;
$$ language plpgsql;

klint.


Agree with Kerri - do it in one of the languages other than plpgsql.

Plpgsql can't do the for loop as simply as other languages.  There's no 
way to walk a record structure (new) as a collection/array and pull out 
the item you are interested in.


You could possibly cheat by putting new into a temp table and then 
executing a select on it.  Performance will probably be bad.


 create temp table newblah as select new.*;
 execute 'select new. ' || tg_argv[0] || '::text' ||
  ' from newblah new '   into newval;
 execute 'drop table newblah';
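
Fleshed out into a complete trigger function, that cheat might look roughly 
like this (the function name and the commented-out trigger are made up for 
illustration, and as noted above, performance will probably be poor):

create or replace function atest_dyn() returns trigger as $$
declare
   newval text;
begin
   -- copy the NEW row into a temp table so it can be queried dynamically
   create temp table newblah as select new.*;
   execute 'select new.' || quote_ident(tg_argv[0]) || '::text from newblah new'
      into newval;
   execute 'drop table newblah';
   raise notice 'value of % is %', tg_argv[0], newval;
   return new;
end;
$$ language plpgsql;

-- attached with the column name as the trigger argument, e.g.:
-- create trigger atest_dyn_trg before insert or update on some_table
--    for each row execute procedure atest_dyn('aa');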

There probably is a function in the plpgsql internals that will pull a 
named field out of a record but I have no idea what it is or if it's 
exposed so that it can be called.  Maybe someone who knows about the 
internals of plpgsql could comment - is there a function like 
getfieldfromrecord(record,text)?


klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Seeking rows with \

2008-05-18 Thread Francisco Reyes

Is there a way to search for rows with the character \?

I tried variants of like and ~.

create table mytable (f1 text);
insert into mytable values ('Test row 1');
insert into mytable values (E'Test row 2 \\');

select * from mytable where f1 like E'%\\%'; <-- returned nothing
select * from mytable where f1 ~ '\'; <-- waiting for single quote
select * from mytable where f1 ~ E'\\'; <-- Error

And a few more variants... with no valid results.

Any suggestions?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Seeking rows with \

2008-05-18 Thread Gurjeet Singh
On Mon, May 19, 2008 at 7:56 AM, Francisco Reyes <[EMAIL PROTECTED]>
wrote:

> Is there a way to search for rows with the character \?
>
> I tried variants of like and  ~.
>
> create table mytable (f1 text);
> insert into mytable values ('Test row 1');
> insert into mytable values (E'Test row 2 \\');
>
> select * from mytable where f1 like E'%\\%'; <-- returned nothing
> select * from mytable where f1 ~ '\'; <-- waiting for single quote
> select * from mytable where f1 ~ E'\\'; <-- Error
>
> And a few more variants.. with no valid reults.
>
> Any suggestions?
>
>
>
It is well documented. Quoting from
http://www.postgresql.org/docs/8.3/interactive/functions-matching.html#FUNCTIONS-LIKE

Thus, writing a pattern that actually matches a literal backslash means
writing four backslashes in the statement.
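
That is, something like this, using the test table from the original post 
(this assumes the historical behaviour where backslashes in ordinary string 
literals act as escapes, i.e. standard_conforming_strings = off):

select * from mytable where f1 like '%\\\\%';
-- the string parser reduces \\\\ to \\, and LIKE then treats \\ as a literal \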

Best regards,
-- 
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB http://www.enterprisedb.com

Mail sent from my BlackLaptop device


Re: [GENERAL] Seeking rows with \

2008-05-18 Thread Francisco Reyes

Gurjeet Singh writes:

> Thus, writing a pattern that actually matches a literal backslash means
> writing four backslashes in the statement.

Thanks.

select * from mytable where f1 like '%\\\\%';
Worked.


For the archives..

After reading section 4.1.2.2, I also found that these other ones work as well:

select * from mytable where f1 ~ $$\\$$;
select * from mytable where f1 like $$%\\%$$;

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general