Re: [GENERAL] ERROR: could not access file $libdir/xxid: No such file or directory

2009-08-22 Thread Jorge Daine Quiambao
Craig Ringer, 

I tried scanning slony1_funcs.dll for dependencies, but for some reason it 
cannot find POSTGRES.EXE; the file appears encrypted when I try to open it, so I 
can't set the directory. I even tried copying POSTGRES.EXE into the lib folder 
where slony1_funcs.dll resides, but of course I still get the same error.

Can this really be a factor in Slony throwing that error?

 I keep getting "ERROR: could not access file $libdir/xxid: No such file or
 directory" whenever I create a new cluster. I've checked the pg directory and
 the xxid files are in the share folder.

 The no such file complaint might refer to some library needed by the
 xxid DLL, rather than that DLL itself.  On Linux I'd suggest using ldd
 to check xxid's dependencies, but I dunno what incantation to use on
 Windows.

Dependency Walker (depends.exe) from www.dependencywalker.com (free).

Yet another tool the OS and the Windows standard dev tools fail to include.

--
Craig Ringer




Re: [GENERAL] ERROR: could not access file $libdir/xxid: No such file or directory

2009-08-22 Thread Jorge Daine Quiambao
Hi Scott,

I tried your suggestion of uninstalling the antivirus, but that didn't work either. 
I also tried the same on an XP machine, but the error is the same. 

Thanks!

Jorge



Whoops, I meant: if you're on Windows, uninstall any antivirus, then see
if the problem goes away.

Too early in the morning apparently.

On Thu, Aug 20, 2009 at 6:03 AM, Jorge Daine
Quiambaocyb3rjo...@yahoo.com wrote:

 Yes, that's how I did it: uninstall everything first, then install. I use
 Windows Vista.


 Hi,

 I keep getting "ERROR: could not access file $libdir/xxid: No such
 file or directory" whenever I create a new cluster. I've checked the pg
 directory and the xxid files are in the share folder.

 I've installed PG 8.4 and Slony-I 2.0.2-1 properly using the
 one-click installer and Stack Builder. I've checked Options... in pgAdmin, and
 the Slony-I path is the /share directory under the Pg installation. I had a
 previous installation of Pg 8.3, but I uninstalled it properly using the
 bundled uninstaller.

 After doing all this I'm still getting the error; any help will be highly
 appreciated!

 If you're on Windows, uninstall it (don't just disable it) and see if the
 problem goes away.






Re: [GENERAL] ERROR: could not access file $libdir/xxid: No such file or directory

2009-08-22 Thread Craig Ringer


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Could not open relation XXX: No such file or directory

2009-08-22 Thread Alan Millington
On 19/08/2009 6:38 PM, Craig Ringer wrote:

 Got a virus scanner installed? If so, remove it (do not just disable it) and
 see if you can reproduce the problem. Ditto anti-spyware software.

 You should also `chkdsk' your file system(s) and use a SMART diagnostic tool to
 test your hard disk (assuming it's a single ATA disk).

chkdsk reported that the disc is clean.
Since installing Postgres in early 2007 I have been running it together with 
McAfee with no problem. A few days ago McAfee was deinstalled and Kaspersky 
installed in its place, so Kaspersky appeared to be a suspect.
 
However, on looking at the matter again, I am now almost certain that I caused 
the problem myself. I have a Python function which (as a workaround to a 
problem which exists in Python 2.4, the version to which Postgres 8.1.4 is 
tied) executes a chdir. It appears that once this has happened, the current 
Postgres session is no longer able to find any new data files, though evidently 
it is still able to use those that it has located previously. If you can 
confirm that Postgres does indeed rely on the current working directory to 
locate its data files, the problem is solved.
 
Moral: never underestimate the stupidity of the people who post the questions 
(in this case, me)! No doubt this provides one example of why Python is deemed 
unsafe.
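Alan's chdir hazard is easy to guard against in the embedded function itself: save the working directory before the workaround runs and restore it afterwards. A minimal Python sketch (the workaround function and the /tmp path are made up for illustration, not Alan's actual code):

```python
import os
from contextlib import contextmanager

@contextmanager
def preserve_cwd():
    """Run a block that may chdir, then restore the original working directory."""
    saved = os.getcwd()
    try:
        yield
    finally:
        os.chdir(saved)

# Hypothetical stand-in for a workaround that has to chdir somewhere:
def workaround_needing_chdir(tmpdir):
    os.chdir(tmpdir)  # without the guard, this would leave the process in tmpdir

before = os.getcwd()
with preserve_cwd():
    workaround_needing_chdir("/tmp")
assert os.getcwd() == before  # relative paths resolved later are unaffected
```

With the guard in place, anything in the server process that resolves relative paths after the function returns still sees the original working directory.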
 



[GENERAL] Getting listed on Community Guide to PostgreSQL GUI Tools

2009-08-22 Thread Thomas Kellerer

Hi,

I was going through the list of applications at 
http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools

and was wondering whom I should contact to get my application listed there as well. 


It is a Java-based SQL tool (http://www.sql-workbench.net) and supports 
PostgreSQL (as a matter of fact, I do most of the DBMS-independent development 
against my local PG database).


I have also seen that some of the listed applications don't seem to be active 
any longer (PGAccess, Xpg, pginhaler). Wouldn't it make sense to clean up a bit 
there as well?

Cheers
Thomas




Re: [GENERAL] bytea corruption?

2009-08-22 Thread Daniel Verite
Nathan Jahnke wrote:

 good catch - it's because i'm used to working in plperlu.
 unfortunately commenting out those lines makes no difference for this
 particular data (that i linked in my original email); it's still
 corrupted:

Don't remove both: remove only the custom decoding.

It's different for the encoding step. It can also be removed, but in this
case you need to tell DBD::Pg that your data is binary, like this:

$insert_sth->bind_param(1, $data, { pg_type => DBD::Pg::PG_BYTEA });
$insert_sth->execute();

(and have $data be raw binary, no custom encoding).
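The corruption mechanism Daniel is describing (binary data silently passing through a text encode/decode step) can be reproduced outside DBD::Pg entirely. A rough Python sketch of the symptom, not the driver's actual code path:

```python
import hashlib

# Arbitrary binary payload containing bytes that are invalid as UTF-8 text.
data = bytes(range(256))

# Correct path: the raw bytes go in and come back unchanged (what binding
# the parameter as bytea achieves in the DBD::Pg case).
assert hashlib.md5(data).hexdigest() == hashlib.md5(bytes(data)).hexdigest()

# Broken path: somewhere along the way the bytes are treated as text.
# Decoding with replacement and re-encoding is lossy, so the checksums
# diverge -- the same "md5 != md5" symptom seen in this thread.
mangled = data.decode("utf-8", errors="replace").encode("utf-8")
assert hashlib.md5(mangled).hexdigest() != hashlib.md5(data).hexdigest()
```

The fix is always the same: keep the payload as raw bytes end to end and tell the driver it is binary, rather than letting any layer apply a text encoding.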

-- 
Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



Re: [GENERAL] bytea corruption?

2009-08-22 Thread Nathan Jahnke
wrong reply-address; please disregard the last message from me.


thanks for your help. unfortunately i'm still getting corruption on
this particular data (available at
http://nate.quandra.org/data.bin.0.702601051229191 ) even with these
changes:

# ./bytea.pl
Argument DBD::Pg::PG_BYTEA isn't numeric in subroutine entry at
./bytea.pl line 18.
37652cf91fb8d5e41d3a90ea3a22ea61 != ce3fc63b88993af73fb360c70b7ec965

things work fine if i make the data 123abc:

# ./bytea.pl
Argument DBD::Pg::PG_BYTEA isn't numeric in subroutine entry at
./bytea.pl line 18.
a906449d5769fa7361d7ecc6aa3f6d28 = a906449d5769fa7361d7ecc6aa3f6d28

below is my script as it stands now:

#!/usr/bin/perl -w

use DBI;
use Digest::MD5 qw(md5 md5_hex md5_base64);

my $fh;
open( $fh, '/tmp/data.bin.0.702601051229191' ) or die $!;
binmode $fh;
my $data = do { local( $/ ); <$fh> };
close($fh);

#$data = '123abc';

my $connection = DBI->connect_cached("dbi:Pg:dbname=testdb;port=5432",
"root", "", {RaiseError=>1});

my $insert_sth = $connection->prepare('insert into testtable (data)
values (?) returning id');
$insert_sth->bind_param(1, $data, { pg_type => DBD::Pg::PG_BYTEA });
$insert_sth->execute();
my $ref = $insert_sth->fetchrow_hashref;
my $id = $ref->{id};

my $getall_sth = $connection->prepare('select * from testtable where id=?');
$getall_sth->execute($id);
my $newref = $getall_sth->fetchrow_hashref;
my $newdata = $newref->{data};

print md5_hex($data).' ';
print '!' if md5_hex($data) ne md5_hex($newdata);
print '= '.md5_hex($newdata);

print "\n";

--


nathan


On Sat, Aug 22, 2009 at 9:17 AM, Daniel Veritedan...@manitou-mail.org wrote:
        Nathan Jahnke wrote:

 good catch - it's because i'm used to working in plperlu.
 unfortunately commenting out those lines makes no difference for this
 particular data (that i linked in my original email); it's still
 corrupted:

 Don't remove both: remove only the custom decoding.

 It's different for the encoding step. It can also be removed, but in this
 case you need to tell DBD::Pg that your data is binary, like this:

 $insert_sth->bind_param(1, $data, { pg_type => DBD::Pg::PG_BYTEA });
 $insert_sth->execute();

 (and have $data be raw binary, no custom encoding).

 --
 Daniel
 PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org




Re: [GENERAL] bytea corruption?

2009-08-22 Thread Daniel Verite
Nathan Jahnke wrote:

 thanks for your help. unfortunately i'm still getting corruption on
 this particular data (available at
 http://nate.quandra.org/data.bin.0.702601051229191 ) even with these
 changes:
 
 # ./bytea.pl
 Argument DBD::Pg::PG_BYTEA isn't numeric in subroutine entry at
 ./bytea.pl line 18.
 37652cf91fb8d5e41d3a90ea3a22ea61 != ce3fc63b88993af73fb360c70b7ec965

Ah, you also need to add
use DBD::Pg;
at the beginning of the script for DBD::Pg::PG_BYTEA to be properly
evaluated.

Best regards,
-- 
 Daniel
 PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org



[GENERAL] How to simulate crashes of PostgreSQL?

2009-08-22 Thread Sergey Samokhin
Hello!

To make my client application tolerant of PostgreSQL failures, I first
need to be able to simulate them in a safe manner (a hard reset isn't the
solution I'm looking for :)

Is there a way to disconnect all the clients as if the server has
crashed? It should look like a real crash from the client's point of
view.

Is kill what everyone uses for this purpose?

Thanks.

-- 
Sergey Samokhin



[GENERAL] Multiple table entries?

2009-08-22 Thread Jeff Ross

Hi,

I recently upgraded to 8.4 and everything went great.  All databases are 
working as they are supposed to, no problems seen.


Today, however, I did a \d on a database and was surprised to see sets 
of 5 identical table entries for each one that is supposed to be there. 
Here are the databases in question (used to store Apache web logs). 
Several other of my databases (but not all) show the same 
five-entries-per-table symptom.


_postgre...@heinlein:/backup/postgresql $ psql pglogd
psql (8.4.0)
Type "help" for help.

pglogd=# \d+
                            List of relations
 Schema |     Name     | Type  |    Owner    |  Size   | Description
--------+--------------+-------+-------------+---------+-------------
 public | full_entries | table | _postgresql | 528 kB  |
 public | full_entries | table | _postgresql | 528 kB  |
 public | full_entries | table | _postgresql | 528 kB  |
 public | full_entries | table | _postgresql | 528 kB  |
 public | full_entries | table | _postgresql | 528 kB  |
 public | full_temp    | table | jross       | 1904 kB |
 public | full_temp    | table | jross       | 1904 kB |
 public | full_temp    | table | jross       | 1904 kB |
 public | full_temp    | table | jross       | 1904 kB |
 public | full_temp    | table | jross       | 1904 kB |
 public | log_entries  | table | _postgresql | 1648 kB |
 public | log_entries  | table | _postgresql | 1648 kB |
 public | log_entries  | table | _postgresql | 1648 kB |
 public | log_entries  | table | _postgresql | 1648 kB |
 public | log_entries  | table | _postgresql | 1648 kB |
 public | page_hits    | table | _postgresql | 5296 kB |
 public | page_hits    | table | _postgresql | 5296 kB |
 public | page_hits    | table | _postgresql | 5296 kB |
 public | page_hits    | table | _postgresql | 5296 kB |
 public | page_hits    | table | _postgresql | 5296 kB |
 public | total_hits   | table | _postgresql | 16 kB   |
 public | total_hits   | table | _postgresql | 16 kB   |
 public | total_hits   | table | _postgresql | 16 kB   |
 public | total_hits   | table | _postgresql | 16 kB   |
 public | total_hits   | table | _postgresql | 16 kB   |
(25 rows)

\l correctly lists the individual databases, and when I looked at 
yesterday's dump file I saw only one CREATE TABLE statement per table, not 5.


I browsed through the system catalogs but haven't found anything yet 
that sheds light on this.


System is running OpenBSD -current.

Thanks,

Jeff Ross





Re: [GENERAL] How to simulate crashes of PostgreSQL?

2009-08-22 Thread Ray Stell
On Sat, Aug 22, 2009 at 01:03:43PM -0700, Sergey Samokhin wrote:
 Is there a way to disconnect all the clients as if the server has
 crashed? It should look like a real crash from the client's point of
 view.

ifconfig ethx down ?



Re: [GENERAL] How to simulate crashes of PostgreSQL?

2009-08-22 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE- 
Hash: RIPEMD160


 Is there a way to disconnect all the clients as if the server has
 crashed? It should look like a real crash from the client's point of
 view.

 ifconfig ethx down ?

Or even:

iptables -I INPUT -p tcp --dport 5432 -j DROP

Keep in mind that both of those are simulating network failures, not
a server crash. But network failures are something your application
should handle gracefully too. :) To make something look like a real
crash, you should do a real crash. In this case, kill -9 the backend(s).

A server crash is a pretty rare event in the Postgres world, so I
would not spend too many cycles on this...
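For what it's worth, the kill -9 approach is easy to drive from a test harness. A minimal Python sketch, with a throwaway sleep process standing in for a backend (process handling only; nothing here is PostgreSQL-specific):

```python
import signal
import subprocess

# Stand-in for a backend: any long-running child process.
victim = subprocess.Popen(["sleep", "60"])

# Simulate a hard crash: SIGKILL gives the process no chance to clean up,
# unlike SIGTERM, so connected clients see an abrupt connection loss.
victim.send_signal(signal.SIGKILL)
returncode = victim.wait()

# A SIGKILL-ed child reports -SIGKILL (-9) as its return code on POSIX.
assert returncode == -signal.SIGKILL
```

In a real test you would point this at the PID of the backend serving your client's connection and then verify that the client reconnects and retries cleanly.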

- --
Greg Sabino Mullane g...@turnstep.com
End Point Corporation
PGP Key: 0x14964AC8 200908221849
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAkqQd2sACgkQvJuQZxSWSsg6TwCfXMZ/GNi33qc2TyMa4uf1asw8
vVcAn3bUUZMP+cmSNEd5EABH/09gLeE/
=Uowh
-END PGP SIGNATURE-




Re: [GENERAL] join from array or cursor

2009-08-22 Thread John DeSoi


On Aug 21, 2009, at 9:22 AM, Greg Stark wrote:


Of course immediately upon hitting send I did think of a way:

SELECT (r).*
 FROM (SELECT (SELECT x FROM x WHERE a=id) AS r
 FROM unnest(array[1,2]) AS arr(id)
  ) AS subq;


Thanks to all for the interesting insights and discussion. Where in  
the docs can I learn about writing queries like that :).


While it avoids the sort that my method requires, it appears to be almost 5 times  
slower (about 4000 keys in the cursor, Postgres 8.4.0):


EXPLAIN ANALYZE SELECT (r).*
  FROM (SELECT (SELECT work FROM work WHERE dbid=id) AS r
          FROM cursor_pk('c1') AS arr(id)
       ) AS subq;

Function Scan on cursor_pk arr  (cost=0.00..116011.72 rows=1000 width=4) (actual time=13.561..249.916 rows=4308 loops=1)
  SubPlan 1
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.003..0.003 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 2
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 3
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 4
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 5
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 6
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 7
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 8
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 9
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 10
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 11
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 12
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 13
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
  SubPlan 14
    ->  Index Scan using work_pkey on work  (cost=0.00..8.27 rows=1 width=32) (actual time=0.002..0.002 rows=1 loops=4308)
          Index Cond: (dbid = $0)
Total runtime: 250.739 ms



EXPLAIN ANALYZE SELECT * FROM cursor_pk('c1') c LEFT JOIN work ON (c.pk = work.dbid) order by c.idx;

Sort  (cost=771.23..773.73 rows=1000 width=375) (actual time=36.058..38.392 rows=4308 loops=1)
  Sort Key: c.idx
  Sort Method:  external merge  Disk: 1656kB
  ->  Merge Right Join  (cost=309.83..721.40 rows=1000 width=375) (actual time=15.447..22.293 rows=4308 loops=1)
        Merge Cond: (work.dbid = c.pk)
        ->  Index Scan using work_pkey on work  (cost=0.00..385.80 rows=4308 width=367) (actual time=0.020..2.078 rows=4308 loops=1)
        ->  Sort  (cost=309.83..312.33 rows=1000 width=8) (actual time=15.420..15.946 rows=4308 loops=1)
              Sort Key: c.pk
              Sort Method:  quicksort  Memory: 297kB
              ->  Function Scan on cursor_pk_order c  (cost=0.00..260.00 rows=1000 width=8) (actual time=12.672..13.073 rows=4308 loops=1)
Total runtime: 51.886 ms


Thanks for any further suggestions.



John DeSoi, Ph.D.







Re: [GENERAL] Multiple table entries?

2009-08-22 Thread Greg Stark
On Sat, Aug 22, 2009 at 9:31 PM, Jeff Rossjr...@wykids.org wrote:
 Hi,

 I recently upgraded to 8.4 and everything went great.  All databases are
 working as they are supposed to, no problems seen.

 Today, however, I did a \d on a database and was surprised to see sets of 5
 identical table entries for each one that is supposed to be there.

Ugh.

What does

select xmin,xmax,ctid,oid,* from pg_class

return?

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [GENERAL] Multiple table entries?

2009-08-22 Thread Greg Stark
On Sat, Aug 22, 2009 at 9:31 PM, Jeff Rossjr...@wykids.org wrote:
 I browsed through the system catalogs but haven't found anything yet that
 can shine some light on this.

Actually, I wonder if this isn't more likely to show the problem -- it
would explain why *all* your tables are showing up with duplicates
rather than just one.

select xmin,xmax,ctid,oid,* from pg_namespace


-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [GENERAL] join from array or cursor

2009-08-22 Thread Greg Stark
On Sun, Aug 23, 2009 at 1:30 AM, John DeSoide...@pgedit.com wrote:
 While it avoids the sort of my method, it appears to be almost 5 times
 slower (about 4000 keys in the cursor, Postgres 8.4.0):


 Function Scan on cursor_pk arr  (cost=0.00..116011.72 rows=1000 width=4)
 (actual time=13.561..249.916 rows=4308 loops=1)
  SubPlan 1
  SubPlan 2
  SubPlan 3
  ...

Ugh, I guess using a subquery didn't work around the problem of the
(r).* getting expanded into multiple columns. This is starting to be a
more annoying limitation than I realized.

This also means when we do things like

select (x).* from (select bt_page_items(...))

or

select (h).* from (select  heap_page_items(...))

It's actually calling bt_page_items() repeatedly, once for every
column in the output record?  Bleagh.

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [GENERAL] Multiple table entries?

2009-08-22 Thread Jeff Ross

Greg Stark wrote:

On Sat, Aug 22, 2009 at 9:31 PM, Jeff Rossjr...@wykids.org wrote:
  

Hi,

I recently upgraded to 8.4 and everything went great.  All databases are
working as they are supposed to, no problems seen.

Today, however, I did a \d on a database and was surprised to see sets of 5
identical table entries for each one that is supposed to be there.



Ugh.

What does

select xmin,xmax,ctid,oid,* from pg_class

return?

  

216 rows worth, but I only see one entry per table, not 5.

For easier reading, I put the output at

   http://openvistas.net/pg_class_query.html

Thanks!

Jeff




Re: [GENERAL] Multiple table entries?

2009-08-22 Thread Jeff Ross

Greg Stark wrote:

On Sat, Aug 22, 2009 at 9:31 PM, Jeff Rossjr...@wykids.org wrote:
  

I browsed through the system catalogs but haven't found anything yet that
can shine some light on this.



Actually, I wonder if this isn't more likely to show the problem -- it
would explain why *all* your tables are showing up with duplicates
rather than just one.

select xmin,xmax,ctid,oid,* from pg_namespace


  

http://openvistas.net/pg_namespace_query.html

Jeff



Re: [GENERAL] Multiple table entries?

2009-08-22 Thread Greg Stark
On Sun, Aug 23, 2009 at 4:06 AM, Jeff Rossjr...@wykids.org wrote:
 Greg Stark wrote:

 Actually, I wonder if this isn't more likely to show the problem -- it
 would explain why *all* your tables are showing up with duplicates
 rather than just one.

 select xmin,xmax,ctid,oid,* from pg_namespace

 http://openvistas.net/pg_namespace_query.html

Yeah, that's a problem. Would you be able to load the pageinspect
contrib module and run a query?

select (h).* from (select
heap_page_items(get_raw_page('pg_namespace',0)) as h from p) as x;



-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [GENERAL] Improving Full text performance

2009-08-22 Thread xaviergxf
If I strip all HTML tags and filter more stop words, will the search
be more accurate? Right now my full-text stats return entries like "font"
(from font tags, I guess) and other garbage.
And if I do that, will it improve the speed of my search?
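For what it's worth, tag stripping needs nothing beyond a standard library. A quick sketch in Python (the original setup is PHP, so this only illustrates the idea):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collect only text nodes, dropping tags such as <font> that would
    otherwise pollute the full-text statistics."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)

def strip_tags(html):
    p = TagStripper()
    p.feed(html)
    p.close()
    return p.text()

print(strip_tags('<font color="red">hello</font> <b>world</b>'))  # → hello world
```

Whether this speeds up the search depends on how much of the indexed text was tag noise; it should at least stop "font" and friends from dominating the statistics.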

Thanks!

PS: I cannot use other tools like mnoGoSearch, Lucene, etc., because I
have no root password for my server.

On 22 ago, 02:20, o...@sai.msu.su (Oleg Bartunov) wrote:
 On Fri, 21 Aug 2009, xaviergxf wrote:
  Hi,

  I'm using php and full text on postgresql 8.3 for indexing html
  descriptions. I have no access to the postgresql server, since I use a
  shared hosting service.
     To improve search and performance, I want to do the following:

  Strip all html tags, then use my php script to remove more stop words
  (because I can't edit the stop words file on the server).

  My question: does what I'm planning have any side effects? Any
  suggestions?

 You shouldn't bother to strip all html tags; just create your own text search
 configuration, which indexes only what you want. Read the documentation for
 details.

         Regards,
                 Oleg
 _
 Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
 Sternberg Astronomical Institute, Moscow University, Russia
 Internet: o...@sai.msu.su,http://www.sai.msu.su/~megera/
 phone: +007(495)939-16-83, +007(495)939-23-83





Re: [GENERAL] Multiple table entries?

2009-08-22 Thread Greg Stark
On Sun, Aug 23, 2009 at 4:40 AM, Greg Starkgsst...@mit.edu wrote:
 On Sun, Aug 23, 2009 at 4:06 AM, Jeff Rossjr...@wykids.org wrote:
 Greg Stark wrote:

 Yeah, that's a problem. Would you be able to load the pageinspect
 contrib module and run a query?

 select (h).* from (select
 heap_page_items(get_raw_page('pg_namespace',0)) as h from p) as x;


Also, do you have the WAL log files going back to the database
creation? If so *please* preserve them! You're the second person to
report similar symptoms so it's starting to look like there may be a
serious bug here. And the nature of the problem is such that I think
it may require someone sifting through the xlog WAL files to see what
sequence of events happened.

The WAL files are in a subdirectory of the database root called
pg_xlog. If you still have one named 0001 then you
have all of them. Please, if possible, copy them to a backup
directory.

-- 
greg
http://mit.edu/~gsstark/resume.pdf

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general