[GENERAL] = ANY (SELECT ..) and type casts, what's going on here?

2011-06-15 Thread Russell Smith
Hi,

Is anybody able to explain the following behaviour?

Server is an 8.4.7 RHEL5 build.  It also happens with the 8.4.8 Ubuntu x64 package.

mr-russ=# SELECT 'BU'::varchar = ANY ((select '{BU,CI}'::varchar[]));
ERROR:  operator does not exist: character varying = character varying[]
LINE 1: SELECT 'BU'::varchar = ANY ((select '{BU,CI}'::varchar[]));
 ^
HINT:  No operator matches the given name and argument type(s). You might need 
to add explicit type casts.
mr-russ=# SELECT 'BU'::varchar = ANY ((select '{BU,CI}'::varchar[])::varchar[]);
 ?column?
--
 t
(1 row)

mr-russ=#


What I don't understand is what happens to the single SELECT's type.  Is it 
because the sub-select returns a row?  The error doesn't match what I would 
expect.
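For reference, the distinction is between the two forms of ANY: the sub-select form compares the left-hand value against each row the subquery returns (here a single row whose type is varchar[]), while the array form compares against each element of an array value. A sketch:

```sql
-- Array form of ANY: compares 'BU' against each ELEMENT of the array.
SELECT 'BU'::varchar = ANY ('{BU,CI}'::varchar[]);                      -- t

-- Subquery form of ANY: compares 'BU' against each ROW of the subquery.
-- Each row here is a whole varchar[], hence "varchar = varchar[]":
-- SELECT 'BU'::varchar = ANY (SELECT '{BU,CI}'::varchar[]);            -- ERROR

-- The explicit cast turns the parenthesised sub-select into a scalar
-- subquery expression of type varchar[], so the array form applies:
SELECT 'BU'::varchar = ANY ((SELECT '{BU,CI}'::varchar[])::varchar[]);  -- t
```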

Thanks

Russell

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No PL/PHP ? Any reason?

2010-06-23 Thread Russell Smith
On 23/06/10 02:16, Joshua D. Drake wrote:
 On Tue, 2010-06-22 at 13:51 +, Greg Sabino Mullane wrote:
   
 Is there any technical obstacle to anyone creating PL/PHP? I am 
 curious as to why it doesn't already exist.
   
 Obviously we need to improve our documentation. What led you to 
 believe it does not exist? As pointed out downthread, it does 
 exist (if not maintained).
 
 It is maintained. We address items as they come in. Is it currently
 being developed for new features? No.
   
 * Not in core
 
 True. Check the archives; there were long discussions as to why it won't
 work. Basically the build path of PHP isn't really compatible with the
 build path of PostgreSQL.

   
 * PHP is not as stable, mature, secure, or well designed as Perl/Tcl/Python.
 
 No, it is just more popular, more widely used, and has a larger community.

 (Oh: And remember, I am a python guy)

   

The biggest obstacle to more widespread use is packaging.  There are no
installable packages for pl/php.  It would make a world of difference
to have packages hosted as part of the yum repository; Debian/Ubuntu
packages would be better still for me.  If it were easy for people to
install, it would be easy for people to use.  It's a lot of work to
build from source, especially with the strange dependency handling;
part of the reason I haven't built it from source is that it isn't
easy.  For adoption in the enterprise, the compile-from-source
requirement kills pl/php as an option, and I have developers who'd like
to use it.  If you try to argue for building something from source at
my work, there's not a chance, but there is at least a chance of
installing a package from a different repository.  A package is seen as
much easier to back out and manage, particularly if you want support
from your vendor.

At one point I did try to make a debian package for pl/php, but I wasn't
experienced enough at either debian packaging or the pl/php build
procedure to make it all hang together.  I still firmly believe there
would be more adoption if it were packaged.

Regards

Russell



Re: [GENERAL] A maybe-bug?

2010-01-07 Thread Russell Smith
Tom Lane wrote:
 Vincenzo Romano vincenzo.rom...@notorand.it writes:
   
 I tried this:
 tmp1=# CREATE DOMAIN real as numeric;
 [ and got confused between this domain and the built-in real ]
 

   
 It looks to me like this is a bug, and the documentation also seems to 
 confirm it:
 The domain name must be unique among the types and domains existing
 in its schema.
 

 No bug.  The REAL keyword is treated as an alias for pg_catalog.float4,
 but even if it weren't an alias, it would exist in schema pg_catalog.
 Thus there is not a name conflict with your domain, which (I suppose)
 was created in the public schema.

   regards, tom lane

   
Really, I can't see how quoting an identifier should change the
behaviour or the resolved type.  In the non-quoted version it uses
pg_catalog.float4, and in the quoted version it uses the domain.  This
doesn't seem right at all.  I would have expected it to reject both the
t1 and t2 inserts, on the basis that real and "real" both map to
pg_catalog.float4.  The documentation indicates that pg_catalog is
always searched first.  If it is, the name should always resolve to the
pg_catalog version rather than the domain in public.
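A minimal sketch of the behaviour being discussed (the t1/t2 names follow the post; the key point, per Tom's explanation, is that unquoted real is a keyword handled before any search_path lookup):

```sql
CREATE DOMAIN real AS numeric;  -- actually creates public."real";
                                -- the built-in pg_catalog type is untouched

CREATE TABLE t1 (v real);       -- unquoted: REAL is a keyword alias,
                                -- resolved directly to pg_catalog.float4
CREATE TABLE t2 (v "real");     -- quoted: an ordinary identifier, resolved
                                -- via the search_path to the public domain
```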

Russell



Re: [GENERAL] Why are there pg_dump errors in my Apache httpd error_log ? (Postgres 8.3.7 on CentOS 5)

2009-11-03 Thread Russell Smith
Aleksey Tsalolikhin wrote:
 Hi.  I just found pg_dump errors in my Apache httpd log and am really
 confused.   Has anybody seen this before?

 My syslog.conf does not mention the httpd error_log.

 How did the errors get in there?

 # grep pg_dump /var/log/httpd/error_log
 pg_dump: [archiver] could not open output file: Permission denied
 pg_dump: [archiver] could not open output file: Permission denied
 pg_dump: [archiver] could not open output file: Permission denied
 pg_dump: [archiver] could not open output file: Permission denied
 pg_dump: server version: X.X.X; pg_dump version: Y.Y.Y
 pg_dump: aborting because of version mismatch  (Use the -i option to
 proceed anyway.)
   
These are definitely outputs from phpPgAdmin.  When you run an export in
that application, it uses pg_dump if it can find it.  It looks like it is
finding a pg_dump whose version differs from the server it's dumping from. 
I can't remember under what circumstances it writes files to disk, but
it's trying to do that and being denied permission.

Regards

Russell



Re: [GENERAL] can someone help me to make a sql more pretty and more concise?

2008-11-13 Thread Russell Smith
Yi Zhao wrote:
 I want to select some columns (a, b) from the table with a specified
 condition, which I can do like this: 
 select a, b from mytable where id = (select id from temptable where
 tname = 'df' ) and stype = 'def' and range = 'afk'
   
How about:

SELECT a, b, count(1), sum(c)
FROM mytable
WHERE id = (SELECT id FROM temptable WHERE tname = 'df')
  AND stype = 'def' AND range = 'afk'
GROUP BY 1, 2;
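Note the GROUP BY rewrite returns a count and sum per (a, b) group, whereas the original query returned the totals over the whole filtered set next to every row. On PostgreSQL 8.4 or later (which postdates this thread), a CTE keeps the filter in one place while preserving the original semantics; a sketch using the column names from the post:

```sql
WITH filtered AS (
    SELECT a, b, c
    FROM mytable
    WHERE id = (SELECT id FROM temptable WHERE tname = 'df')
      AND stype = 'def' AND range = 'afk'
)
SELECT f.a, f.b, agg.cnt, agg.total
FROM filtered AS f
     CROSS JOIN (SELECT count(1) AS cnt, sum(c) AS total
                 FROM filtered) AS agg;
```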

Russell.
 but, I want the result contains a sum(c) and a count value extra,
 so, I use the sql below:
 select a, b,
 (select count(1) from mytable where id = 
   ( 
 select id from temptable where tname = 'df'
   ) and stype = 'def' and range = 'afk'  
 ),  
 (select sum(c) from mytable where id =
   ( 
 select id from temptable where tname = 'df'
   ) and stype = 'def' and range = 'afk'
 )   
   from mytable where id = ( 
 select id from temptable where tname = 'df'
   ) and stype = 'def' and range = 'afk';

 can someone help me make the sql statement above prettier and more 
 concise?


   




Re: [GENERAL] ODBC driver crash

2008-09-30 Thread Russell Smith
Craig Ringer wrote:
 Hi
 The crash occurs whether a file, system, or user DSN is being used.
 I can reproduce this on two different machines. It happens with or
 without SSL in use. It affects any Access 2007 database with a
 PostgreSQL ODBC connection in use, including a newly created blank
 database with a freshly linked table.
   
[snip]
 I'm at a bit of a loss. I'm used to debugging problems on platforms with
  luxuries like symbol names in binaries, or at least my own code on
 win32 where I can build it for debugging.

 Anybody else seeing, or seen, similar issues?

   
The closest I've seen with PHP, with some of the same symptoms, is
http://archives.postgresql.org/pgsql-bugs/2008-06/msg00143.php  That bug
only occurs when SSL is enabled, so it may not be related, but it
wouldn't surprise me if it were.

Russell.



Re: [GENERAL] libpq block allocated before my malloc handler inits?

2008-06-29 Thread Russell Smith
rob wrote:
 I am trying to build a small program with libpq as the interface to a
 PostgreSQL database.  I am using the most current version.  My program
 uses malloc and free hooks to manage memory without having to request
 memory from the system all the time.  I expected that the init
 function (__malloc_initialize_hook)  would run before anything else,
 but after opening a number of connections to the database with
 PQconnectdb, my program blows up because of a free which refers to a
 block of memory that wasn't allocated using my malloc function.  My
 program runs without a hitch if I comment out the PQconnectdb function
 calls.
I've recently run into a bug involving OpenSSL, libpq, and another
library that also uses SSL.  Do you get the same crash if you explicitly
disable SSL in the connection string?  sslmode=disable.

Thanks

Russell



Re: [GENERAL] postgre vs MySQL

2008-03-14 Thread Russell Smith

Dave Page wrote:

On Fri, Mar 14, 2008 at 5:07 PM, David Wall [EMAIL PROTECTED] wrote:
  

  I imagine you can get round the second one by building your software
  so it supports PostgreSQL as well - that way you don't 'require
  customers to install MySQL'.
 
 Well, I'm not sure how they'd even know you were doing this, but as a
 commercial company, I'd suggest you not follow that advice, since the
 code would not work without installing MySQL.  Yes, they could install PG
 instead, and if they did, MySQL would have no problem.  But if you use
 MySQL, then clearly it's required, and a commercial license would be
 required (though perhaps at least you'd put the legal obligation on the
 end customer).



Huh? I'm suggesting that you write your code to be
database-independent such that it is the user's choice what DBMS he
uses. That way you aren't 'requiring them to install MySQL'. MySQL
cannot hold you liable if a customer chooses to use your closed source
Java/JDBC app with their DBMS if you didn't require it.

  
Yes, that is MySQL's licensing angle.  I have spoken numerous times to 
MySQL staff about it.  What ended up happening for my software 
development was that it became a waste of time to support both MySQL and 
PostgreSQL, so I moved solely to PostgreSQL, which didn't have any of 
those restrictions.  That is how I got into PostgreSQL in the first 
place.  Now I use MySQL only when I have to, because PostgreSQL does the 
job for me and I'm used to writing SQL, plpgsql and the like for it.


Russell Smith



Re: [GENERAL] postgre vs MySQL

2008-03-12 Thread Russell Smith

Scott Marlowe wrote:

On Tue, Mar 11, 2008 at 7:33 PM, Justin [EMAIL PROTECTED] wrote:
  

 I view updates/patches of any kind like this: if it ain't broke, don't fix it.
I normally update computers with security patches only after I prove they
don't destroy installs.



But that's just it.  When a postgresql update comes out, it is
precisely because the database IS broken.  Bugs that might eat your
data or allow an attacker to get into your database are the kinds of
fixes, and really the only kind, that go into production pgsql
releases.  I too wait a day or two to test it on a staging server, but
I've never had a pgsql update blow back in my face, and I've done an
awful lot of them.
  

So you missed 8.1.7 then, or at the very least weren't using the affected
features?  You also didn't hit the stats collector issue in 8.2.3; 8.2.4
took quite some time to come out.  And remember the policy violation when
8.0 came out: we replaced the buffer expiry algorithm in a patch release.


PostgreSQL is not perfect, but as you can see from the problems with 
8.1.7, the next update was released very quickly.  Sometimes I fear we 
pump up our reputation a little too far on reliability and perfectly 
patched releases.  The real key is the response when things go wrong, 
because things will go wrong at some point.  I think we need to be 
careful, because the higher the pedestal we put ourselves on, the bigger 
the fall.


Regards

Russell




Re: [GENERAL] Constraint violations don't report the value that violates

2008-02-07 Thread Russell Smith

Tom Lane wrote:
Foreign keys give a value that is failing for the foreign key, is there 
a reason that other constraints don't do the same thing?



It's not very well-defined, for instance what would you report for

CHECK (some_black_box_function(a, b, c+d))

FKs are constrained by the spec syntax to only involve simple column
values, but this is not the case for either CHECK or unique indexes.

Unique partial indexes would make things even more interesting.

regards, tom lane
  
I would have thought that the inputs would be well defined: in the 
example, a, b, c and d.  The same would hold for any partial index.


So instead of this report, where test3_z_check is the black-box check 
above:
ERROR:  new row for relation test3 violates check constraint 
test3_z_check

STATEMENT:  insert into test3 select g from generate_series(-1,1) as g;

You would get an error like this:
ERROR:  new row for relation test3 violates check constraint 
test3_z_check

DETAIL: Input (a) = 'fred', (b) = 2, (c) = 6, (d) = -1
STATEMENT:  insert into test3 select g from generate_series(-1,1) as g;

The data must have been inserted with specific values in order to violate 
the check.  I can't see how the input for a check or unique constraint 
could be anything other than a specific set of values.


some_black_box_function must be immutable and can only take column data 
elements from the parent table.  Or am I missing something about how 
complex the indexes I can create may be?

Regards

Russell Smith



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Constraint violations don't report the value that violates

2008-02-06 Thread Russell Smith

Hi,

I would find it useful if check constraints and unique constraints 
reported the value which is violating the constraint.


Foreign keys give a value that is failing for the foreign key, is there 
a reason that other constraints don't do the same thing?


example psql session from 8.3beta4;

# create table test (x integer ,primary key (x));
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
test_pkey for table test

CREATE TABLE
# insert into test values (1);
INSERT 0 1
# insert into test values (1);
ERROR:  duplicate key value violates unique constraint test_pkey
STATEMENT:  insert into test values (1);
# create table test2 (y integer references test(x));
CREATE TABLE
# insert into test2 values (2);
ERROR:  insert or update on table test2 violates foreign key 
constraint test2_y_fkey

DETAIL:  Key (y)=(2) is not present in table test.
STATEMENT:  insert into test2 values (2);
# create table test3 (z integer check (z > 0));
CREATE TABLE
# insert into test3 values (-1);
ERROR:  new row for relation test3 violates check constraint 
test3_z_check

STATEMENT:  insert into test3 values (-1);
# insert into test3 select g from generate_series(-1,1) as g;
ERROR:  new row for relation test3 violates check constraint 
test3_z_check

STATEMENT:  insert into test3 select g from generate_series(-1,1) as g;
# insert into test2 select g from generate_series(-1,1) as g;
ERROR:  insert or update on table test2 violates foreign key 
constraint test2_y_fkey

DETAIL:  Key (y)=(-1) is not present in table test.
STATEMENT:  insert into test2 select g from generate_series(-1,1) as g;


Notice that the foreign key case always reports the value that is 
violating; none of the other cases do.  If all cases reported the 
offending value, it would greatly assist bulk load / INSERT INTO ... 
SELECT type queries.


Is this possible, or difficult?  Or has nobody had the inclination?

Thanks

Russell Smith






Re: [GENERAL] ATTN: Clodaldo was Performance problem. Could it be related to 8.3-beta4?

2008-01-19 Thread Russell Smith

Tom Lane wrote:

Clodoaldo [EMAIL PROTECTED] writes:
  

2008/1/16, Tom Lane [EMAIL PROTECTED]:


I don't know of any reason to think that insertion is slower in 8.3
than it was in 8.2, and no one else has reported anything of the sort.
  


  

The old server reproduces the behavior of the new one.



Okay, Clodoaldo kindly gave me access to his old server, and after
nearly a full day of poking at it I think I've figured out what is going
on.  Recall that the problem query is

insert into usuarios (
  data,
  usuario,
  pontos,
  wus
  )
  select
(select data_serial from data_serial) as data,
ui.usuario_serial as usuario,
sum(pontos) as pontos,
sum(wus) as wus
  from usuarios_temp as ut inner join usuarios_indice as ui
on ut.usuario = ui.usuario_nome and ut.n_time = ui.n_time
  group by data, ui.usuario_serial
  ;

for which both 8.2 and 8.3 select a plan along the lines of

Subquery Scan "*SELECT*"  (cost=318139.26..342283.02 rows=877955 width=20)
  ->  HashAggregate  (cost=318139.26..331308.58 rows=877955 width=12)
        InitPlan
          ->  Seq Scan on data_serial  (cost=0.00..1.01 rows=1 width=4)
        ->  Merge Join  (cost=101944.33..261142.53 rows=5699572 width=12)
              Merge Cond: ((ut.n_time = ui.n_time) AND ((ut.usuario)::text = "inner".?column4?))
              ->  Index Scan using usuarios_temp_ndx on usuarios_temp ut  (cost=0.00..55038.92 rows=883729 width=26)
              ->  Sort  (cost=101944.33..104139.22 rows=877955 width=22)
                    Sort Key: ui.n_time, (ui.usuario_nome)::text
                    ->  Seq Scan on usuarios_indice ui  (cost=0.00..15273.55 rows=877955 width=22)

and the target table has non-unique indexes on data and usuario
(which are both simple integer columns, no surprises there).

I first tried running this query with usuarios initially empty, and
both 8.2 and 8.3 did fine.  However, in the real scenario that Clodoaldo
is worried about, there's somewhere north of 135 million entries in
usuarios to begin with, and in that case 8.3 falls off a cliff while
8.2 doesn't --- the INSERT query takes about 400 seconds in 8.2 while
I gave up at 2h20m with 8.3.  Why is that?  Well, it turns out that 8.3
is thrashing a lot harder than 8.2 is.  Each index on usuarios is
about 2.3GB (the same in both versions) and the server has only 2GB RAM,
so it's not real surprising that the working set might be more than RAM;
but why is 8.3 worse than 8.2?

You can see from the query that it's inserting the same constant data
value into every new row, and if I understand the context correctly this
value will be higher than all prior entries in the usuarios table.  So
all the new entries in the data index are at the right-hand edge of
the index, and only a fairly small footprint is being touched at any
instant.  strace'ing confirms that neither 8.2 nor 8.3 do much I/O at
all on that index.  It's the index on the usuario column that is
thrashing.

The difference comes from the fact that the HashAggregate step --- which
is being done on hashing columns (data, usuario) --- is effectively
reading out in hash-value order for usuario, meaning that that is the
order in which we make index insertions.  8.2 had an extremely chintzy
hash function for integers --- basically just return the negative of the
integer value --- while 8.3 takes it seriously and produces a nicely
randomized hash value.  This means that the usuario values are returned
in a relatively well ordered fashion in 8.2 and a nearly totally random
one in 8.3.  I captured the output of the SELECT in both 8.2 and 8.3;
attached are plots showing the usuario values against row number.  From
this we can see that 8.2 has a working set that is a relatively small
part of the index at any instant, whereas 8.3 has the entire index as
working set ... and it doesn't fit into RAM.  Ergo, lots of disk seek
delays.

I don't think we want to back off the improved hashing functions in 8.3
--- in most scenarios they should lead to significantly better
performance.  But in this particular context they hurt.

A possibly usable workaround for now is set enable_hashagg = off
to force a GroupAggregate plan, which will deliver the values sorted
by (data, usuario) rather than by their hash values.  This helps both
versions, bringing the runtime down to something like 250 seconds,
because the index on usuario then has complete locality of access.
Alternatively, doubling the server's RAM would probably make the problem
go away (for awhile, until the index reaches 4GB).

In the long run, for queries inserting many rows it might be interesting
to accumulate all the entries intended for a btree index and sort them
before inserting.  Not sure about possible downsides of that.
  


If you add an ORDER BY to the query, do you get different results?
The timing may be better or worse than what you got with hashagg off,
but it seems slightly less brute-force than turning hashagg off.
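For concreteness, the suggested variant would look like this (query text taken from the analysis above; whether the extra sort actually beats disabling hashagg would need measuring):

```sql
INSERT INTO usuarios (data, usuario, pontos, wus)
SELECT (SELECT data_serial FROM data_serial) AS data,
       ui.usuario_serial AS usuario,
       sum(pontos) AS pontos,
       sum(wus) AS wus
FROM usuarios_temp AS ut
     INNER JOIN usuarios_indice AS ui
        ON ut.usuario = ui.usuario_nome AND ut.n_time = ui.n_time
GROUP BY data, ui.usuario_serial
ORDER BY usuario;  -- sort the aggregate output so insertions into the
                   -- usuario index are localised rather than hash-ordered
```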

Regards

Russell Smith

Re: [GENERAL] raw data into table process

2007-08-22 Thread Russell Smith

novice wrote:

I am trying to record the following entries into a table.  I'm curious
to know if there's an efficient/effective way of doing this?  This
entries come from an ancient datalogger (note: separated by space and
uses YY/MM/DD format to record date)

Plain file sample.dat

3665   OK   BS 07/08/16 07:28
3665   CC   BS 07/08/16 07:29
3665   CS   BS 07/08/16 07:29
3665   CS   BS 07/08/16 07:29
4532   OK   BS 07/08/16 07:34
4004   OK   BS 07/08/16 07:51
3991   OK   BS 07/08/16 07:54


This is the table that I'm adding the entries to

CREATE TABLE maintenance
(
  maintenance_id SERIAL PRIMARY KEY,
  meter_id integer,
  status text,
  inspector text,
  inspection_date timestamp with time zone
)



If you're on unix, why not use its tools first?
awk '{print $1 "\t" $2 "\t" $3 "\t" $4 " " $5}' sample.dat > sample.tab

-- Begin SQL script
CREATE TEMP TABLE maintenance_tmp (
  meter_id integer,
  status text,
  inspector text,
  inspection_date timestamp with time zone
);

SET datestyle='ymd';
\copy maintenance_tmp FROM sample.tab
INSERT INTO maintenance (meter_id, status, inspector, inspection_date)
  SELECT DISTINCT meter_id, status, inspector, inspection_date FROM 
 maintenance_tmp ORDER BY inspection_date;

ANALYZE maintenance;

-- End SQL Script
[snip]


Any thoughts and suggestions welcome.







Re: [GENERAL] Working between Windows and Unix

2007-04-02 Thread Russell Smith

Erick Papadakis wrote:

Hello:

I am working on Windows and Unix with Postgresql (newbie, so please be
kind). I use phppgadmin on both platforms. Windows is my local
machine. Linux (CentOS as usual) is the cpanel thing that my hosting
provider offers.

Basically this setup works well and I am learning the ropes. But the
problem is with exporting data and structure. Phppgadmin does not
allow exporting to SQL of the database on Windows platform. I see this
error:


--QUOTE
<br />
<b>Notice</b>:  Undefined offset:  1 in
<b>D:\Code\phppg\dbexport.php</b> on line <b>75</b><br />
<br />
<b>Notice</b>:  Undefined offset:  1 in
<b>D:\Code\phppg\dbexport.php</b> on line <b>78</b><br />
Dumping of complex table and schema names on Windows is not supported.
--END QUOTE

This usually means the output of pg_dump -i is not working correctly.
Have you got the correct path for pg_dump on your server?
Is pg_dump available on the server where phpPgAdmin is running?


You will need to edit conf/config.inc.php and set the correct path for 
pg_dump; once you've done that, you should get the correct output.
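For illustration, the setting in question looks roughly like this (the key name is phpPgAdmin's pg_dump_path; the Windows path shown is only an assumed example):

```php
// conf/config.inc.php -- point phpPgAdmin at the pg_dump binary
$conf['servers'][0]['pg_dump_path'] =
    'C:\\Program Files\\PostgreSQL\\8.2\\bin\\pg_dump.exe';
```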



Failing this, what else can I do? How can I simply take all my data
and structure from here and put it up?

I also want this method to be fast because I have a table with about
2.5 million rows (I know it is nothing compared to other discussions
that people post here, but it is a lot for my machine).

Many thanks!







Re: [HACKERS] [GENERAL] Autovacuum Improvements

2007-01-19 Thread Russell Smith

Darcy Buskermolen wrote:

[snip]

Another thought, is it at all possible to do a partial vacuum?  ie spend the 
next 30 minutes vacuuming foo table, and update the fsm with what hew have 
learned over the 30 mins, even if we have not done a full table scan ?
  

There was a proposal for this, but it was dropped on 2 grounds.
1. A partial vacuum would mean that parts of the table are missed; the 
user might never vacuum certain parts, and transaction-ID wraparound 
would eventually get you.  You might also see other performance issues 
from the neglected parts of the table.


2. Index cleanup is the most expensive part of vacuum, so doing a 
partial vacuum actually means more I/O, as you have to do index cleanup 
more often.


If we are talking about autovacuum, item 1 isn't so much of an issue, 
as you can just make autovacuum remember which parts of the table it has 
vacuumed.  This becomes really powerful once you have a dead space map.


Item 2 will still be an issue.  But if you define "partial" as either 
filling maintenance_work_mem or finishing the table, you are not 
increasing I/O at all, since once maintenance_work_mem is full you have 
to clean up all the indexes anyway.  This is really more like VACUUM 
SINGLE, but the same principle applies.


I believe all planning really needs to consider how a dead space map 
will affect what vacuum does in the future.



A strange idea that I haven't researched: given that vacuum can't be run 
inside a transaction block, it may be possible at a certain point to 
commit the current transaction and start another one.  There has been 
much chat, and now a TODO item, about allowing multiple vacuums so that 
small tables are not starved.  But if a big table has a long-running 
vacuum, the vacuum of the small table won't be effective anyway, will 
it?  If the vacuum of a big table were done in multiple transactions, 
you could reduce the effect of the long-running vacuum.  I'm not sure 
how this affects the rest of the system, though.


Russell Smith


  




Re: [GENERAL] Problems With VIEWS

2007-01-11 Thread Russell Smith

Bernd Helmle wrote:


On Thu, 11 Jan 2007 06:36:34 -0800 (PST), Richard Broersma Jr [EMAIL 
PROTECTED] wrote:

  

Either way.  I like to create sql files with all of the DDL for creating
the view and rules.
Overtime, if I need to change my view or reconfigure the rules, I can edit
my sql file and then
call it up in psql using \e view_def.sql

I use the CREATE OR REPLACE VIEW syntax to achieve this.




Additional note: REPLACE doesn't work if you are going to change the
list/type/name of your view columns. In 8.2 and above you could use 
DROP VIEW IF EXISTS combined with a following CREATE VIEW, instead.
  


Does anybody know the reason why this is the case?  I can change all those 
things for a table without dropping it; why can't I do the same for a view?

Bernd



  





Re: [GENERAL] Remove duplicate rows

2007-01-11 Thread Russell Smith

Jiří Němec wrote:

Hello,

I need to remove duplicate rows from a subquery but order the results
by a column that is not selected. There are logically two solutions,
but neither works.

SELECT DISTINCT sub.foo FROM (SELECT ...) AS sub ORDER BY sub.bar
ERROR:  for SELECT DISTINCT, ORDER BY expressions must appear in select list
  

I'm not sure here, so I'll leave it alone.

SELECT sub.foo FROM (SELECT ...) AS sub GROUP BY sub.foo ORDER BY sub.bar
ERROR: column sub.bar must appear in the GROUP BY clause or be used
in an aggregate function
  

The problem here is that you are not really asking a meaningful question:

foo  bar
  1    1
  1    2

Now, you are selecting foo but want to order by bar.  What decision 
should be made about which value of bar to pick, so you can order on it?
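One way to make the question well-defined is to pick a specific bar per foo with PostgreSQL's DISTINCT ON, then order by it; a sketch, with (SELECT ...) standing in for the original subquery:

```sql
SELECT dedup.foo
FROM (SELECT DISTINCT ON (foo) foo, bar
      FROM (SELECT ...) AS sub   -- the poster's subquery
      ORDER BY foo, bar          -- keeps the smallest bar for each foo
     ) AS dedup
ORDER BY dedup.bar;
```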


Regards

Russell Smith

Does anybody know how to remove duplicate rows from a subquery and order
these results by a column what is not selected but exists in a subquery?

Thanks for any advice,

J.N.




  





Re: [GENERAL] Autovacuum Improvements

2006-12-20 Thread Russell Smith

Alvaro Herrera wrote:

Matthew O'Connor wrote:
  

Glen Parker wrote:


If it isn't there somewhere already, I would ask to add:

4) Expose all information used by autovacuum to form its decisions.
  
You could argue that this is already there, although not easy to get at 
I suppose.  But all table threshold settings are available either in the 
pg_autovacuum relation or the defaults via GUC variables, that plus a 
little math will get the information autovacuum uses to form its decisions.



No, we currently don't expose the number of dead tuples which autovacuum
uses.

  
5) Expose a very easy way to discover autovacuum's opinion about a 
particular table, for example table_needs_vacuum(oid), ignoring any 
time constraints that may be in place.
  
This might be a nice feature however in the presence of the much talked 
about but not yet developed maintenance window concept, I'm not sure how 
 this should work.  That is, during business hours the table doesn't 
need vacuuming, but it will when the evening maintenance window opens up.



I intend to work on the maintenance window idea for 8.3.  I'm not sure
if I'll be able to introduce the worker process stuff in there as well.
I actually haven't done much design on the stuff so I can't say.

  
What does a maintenance window mean?  I am slightly fearful that, as 
other improvements to vacuum are made, its meaning will change.


There has been discussion about a bitmap of dirty pages in a relation 
for vacuum to clean.  Does that affect what maintenance means?  E.g. 
does it mean the whole-relation scan for XID wraparound may only happen 
during the maintenance window?  Does it mean we don't vacuum at all 
outside the maintenance window?  Or do we just use a different set of 
thresholds during maintenance?


Further to this, there was a patch a long time ago for partial vacuum, 
which only vacuumed part of the relation.  It was rejected on the 
grounds that it wouldn't help, as the index cleanup is the expensive 
part.  My view is that with a very large table you should be able to 
vacuum until you fill maintenance_work_mem; at that point you are forced 
to process the indexes anyway, so it is a good time to stop vacuuming.  
You would effectively have to restart the process later.  This may also 
change the meaning of the maintenance window: again, perhaps only full 
relation scans during maintenance time, or possibly something else 
entirely.


I am unsure of the long-term goal of the maintenance window.  I 
understand it's to provide a time when vacuum can be more aggressive on 
the system, but I do not know what that means in light of other 
improvements such as those listed above.  Coming up with a maintenance 
window mechanism that just uses a separate set of thresholds is one 
option, but is that the best thing to do?  Some clarity here from others 
would probably help.  I also think we need to consider the big picture 
of where vacuum is going before inventing a mechanism that may not mean 
anything come 8.4.



Now, if you (Matthew, or Glen as well!) were to work on that it'll be
appreciated ;-) and we could team up.
  


I am happy to try and put in some design thought and discussion with 
others to come up with something that will work well.


Regards

Russell Smith

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] need help with plpgsql execute insert

2006-12-20 Thread Russell Smith

[EMAIL PROTECTED] wrote:

I am trying to loop through some data and then run insert some of the
resulting data into a new table.  I can create the function but when I run
it i get the error:

ERROR: query SELECT  'INSERT INTO payment (
id,amount,accepted_date,  company_id ,  date ,
patient_responsible_party_id ,  patient_contact_responsible_party_id , 
insurer_network_responsible_party_id,  type,   status)
values (%,%,%,%,%,%,%,%,%,%)',  $1 , 0.0,  $2  ,  $3 ,  $4  ,  $5  ,  $6 
,  $7 , 'Other', 'ACCEPTED' returned 11 columns

SQL state: 42601
Context: PL/pgSQL function add_missing_slrps line 20 at execute statement

I don't understand what the returned 11 columns means.  I am inserting
10 and i counted and it all matches.


Here is my function i deleted some of the sql.  The EXECUTE 'INSERT INTO
is where the errors starts

CREATE OR REPLACE FUNCTION add_missing_slrps() RETURNS integer AS $$
DECLARE
data RECORD;
paymentId int;
BEGIN
 RAISE NOTICE 'Start loop...';

FOR data IN select slra.company_id, slra.create_date,
slra.service_line_responsibility_id,
slr.insurance_policy_responsible_party_id,
slr.patient_responsible_party_id,
slr.patient_contact_responsible_party_id,
insurer_service_center.insurer_network_id
from
.
.
.
.
.
.
.
.
.
LOOP
-- Now data has one record
EXECUTE 'select nextval(''seq_payment'') ' into paymentId;

EXECUTE 'INSERT INTO payment (
id,amount,accepted_date,  company_id ,  date ,
patient_responsible_party_id   patient_contact_responsible_party_id ,
insurer_network_responsible_party_id,  type,   status)
values (%,%,%,%,%,%,%,%,%,%)', paymentId, 0.0,  data.create_date ,
data.company_id,  data.create_date , data.patient_responsible_party_id ,
data.patient_contact_responsible_party_id ,  data.insurer_network_id, 
'Other', 'ACCEPTED';



END LOOP;

 RAISE NOTICE 'Done loop .';
RETURN 1;
END;
$$ LANGUAGE plpgsql;
select add_missing_slrps() ;


I assumed using the '%' symbol will automatically use the real value. 
Like if it is a date it will handle it like a java prepared statement.  Am

I wrong?
  
I believe you are wrong.  The EXECUTE is being given 11 expressions when it 
expects a single query string; the '%' placeholders are never substituted.  
I think you need to form your execute query by concatenation, like;


EXECUTE 'INSERT INTO payment (
id, amount, accepted_date,  company_id ,  date ,
patient_responsible_party_id ,  patient_contact_responsible_party_id ,
insurer_network_responsible_party_id,  type,   status)
values (' || quote_literal(paymentId) || ',' || ...

Something of that fashion.  (Note quote_literal, not quote_ident: these 
are values, not identifiers.)
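A fuller sketch of the concatenation approach, using quote_literal() for the date values (column and variable names are taken from the function above; since only values vary here, a plain INSERT without EXECUTE would also work and handle quoting for you):

```sql
EXECUTE 'INSERT INTO payment (
        id, amount, accepted_date, company_id, date,
        patient_responsible_party_id, patient_contact_responsible_party_id,
        insurer_network_responsible_party_id, type, status)
    VALUES (' || paymentId
        || ', 0.0, '
        || quote_literal(data.create_date)                || ', '
        || data.company_id                                || ', '
        || quote_literal(data.create_date)                || ', '
        || data.patient_responsible_party_id              || ', '
        || data.patient_contact_responsible_party_id      || ', '
        || data.insurer_network_id
        || ', ' || quote_literal('Other')
        || ', ' || quote_literal('ACCEPTED') || ')';
```

One caveat: concatenating a NULL value makes the whole query string NULL, so any nullable column needs a COALESCE or explicit NULL handling.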



I have tried all kinds of things but I truly have no idea what the problem
is.
thanks






---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] VACUUM and transactions in different databases

2006-12-07 Thread Russell Smith

Cornelia Boenigk wrote:

Hi Ragnar


could not duplicate this.


I also cannot reproduce the hanging VACUUM FULL.  The problem remains 
that the dead tuples cannot be removed.



[snip]


I am interested in this.  As one database cannot talk to another database 
in a transactional way, a long running transaction in one database should 
not affect the vacuuming of another database.  From my limited 
understanding, VACUUM takes the lowest open transaction number and only 
cleans up tuples with XIDs lower than that.  The reason I believe 
it has to be cluster wide is because the shared catalogs might be 
affected.  Do shared catalogs follow MVCC or ACID strictly?  I don't 
know, but I assume they don't follow both, given my reading of the list.


So if shared catalogs are the problem, what happens if you just vacuum 
the relevant table public.dummy1 and not the whole database, does the 
vacuum remove all the tuples that are dead?


Is it possible to add logic for lazy vacuum that takes the lowest XID in 
our database when not vacuuming shared catalogs?  This may already be 
the case, I don't know.  Just putting forward suggestions.


Russell Smith

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] fatal error on 8.1 server

2006-12-02 Thread Russell Smith

Tom Lane wrote:

Alvaro Herrera [EMAIL PROTECTED] writes:
  

Tony Caduto wrote:

I did not have autovacuum turned on and I usually do a vacuumdb -z -a -f 
-q  each night but this one slipped through the cracks :-(
  


  

Strange -- autovacuum should have started an automatic database-wide
vacuum on that database, even if disabled.



We only added that in 8.2, no?  8.1 autovacuum would have forced the
vacuum to occur, but only if it was enabled in postgresql.conf.
  
And in 8.2 it's only a per table vacuum that is required.  Is that 
correct too?

regards, tom lane




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] pg_hba.conf

2006-11-19 Thread Russell Smith

Tom Allison wrote:

Ran into a mystery that I can't seem to figure out


I want to authenticate using SSL for all external IP addresses that I 
have in my subnet.  I also want to be able to authenticate via non-SSL 
for localhost (not unix socket).


I thought something like this would work:

host   allall127.0.0.1/32 md5
hostsslallall192.168.0.1/24   md5

But I have a localhost client that can't log in because it keeps 
trying to authenticate via SSL.


What am I doing wrong?  It seems simple enough.

What command are you typing?

#nonssl
postgres$ psql -h localhost postgres
#ssl
postgres$ psql -h 192.168.1.1 postgres
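One thing worth knowing: libpq clients attempt SSL first when the server offers it, and a plain "host" line matches both SSL and non-SSL connections, with pg_hba.conf evaluated top to bottom, first match wins.  If the goal is to force non-SSL on loopback, a hostnossl line can do it.  A sketch, reusing the addresses from the example above (with the subnet written in its conventional zero-host form):

```
# pg_hba.conf is matched first-match-wins, top to bottom
hostnossl  all  all  127.0.0.1/32      md5
hostssl    all  all  192.168.0.0/24    md5
```

With this, a localhost client is never offered SSL, while the rest of the subnet must use it.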








---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Eliminating bad characters from a database for upgrading

2006-11-16 Thread Russell Smith

Gregory S. Williamson wrote:

Dear list,

I have been banging my head against a problem for a few days now, and although 
I am making progress it is painfully slow, and I am hoping that some one out 
there can steer me in a better way.

I've got a medium size database (7.4), about 64 gigs of data, about 1/3rd of 
which is in one table, which has ~32 million rows (22 gigs when dumped). This 
largish table has about 20 different columns of varchar or text.

There are some records that have illegal characters in them, according to 
postgres 8.1.5, which imposes stricter standards on UTF encoding.

I've been using copy to dump the big table to disk, then try to load it into my new table. 
When it fails, I use split to break the file into managable chunks and then use 
vi to find the offending line, then figure out the column. Then I use 
something like:

create table bad_char_gids as select gid from parcels where position('Ñ' in 
s_street) > 0;

And so create a table with the ids of the bad records; and then use replace to 
either replace or eliminate the offending characters from that column. This 
example got 5001 records, but often it is one record in the whole DB will have 
some other offending character. I fix the problem in the loaddata as well, and 
continue.

The problem is that there are errors in quite a few of the columns (but only a 
few tens of thousands of records), and the offending characters are all quite 
different (wierd diacritics and characters, upper and lower case). And so this 
is a very slow process.

Is there any way to get a list of records, even if done repeatedly for each 
column, that would let me find the offending records in 7.4 which have any 
invalid UTF chars? I am feeling stupid for not seeing one ... I can find any 
individual bad character, but I want to find them all at once, if possible.
  
Try converting the dump file's encoding to UTF-8.  Before 8.1 you could 
insert invalid characters into the DB because it accepted other 
encodings, and it will dump those other encodings as well.  For example, 
converting something with Windows characters in it:


iconv -f WINDOWS-1251 -t UTF-8 dump_file > converted_dump_file

Then import the converted file.  You may need to try a couple of 
different input encodings if you aren't sure what encoding was used when 
inserting data into the DB.


Russell.


TIA,

Greg Williamson
DBA
GlobeXplorer LLC





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] planer picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Russell Smith
.

Regards

Russell Smith

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] Changing the number and type of columns in a view

2006-11-09 Thread Russell Smith

Hi General,

Is there a reason why it is not possible to redefine a view with a 
different number of columns or a different column type? 

It's been possible to change the datatypes of a table, and the column 
numbers for a long time.  What are the restrictions on making this 
possible for views.


I know you can drop an recreate the view, but if the view has dependent 
objects it becomes more difficult.


I'm currently running 8.1 when I say it's not possible.

Thanks

Russell Smith

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ERROR: tuple concurrently updated

2006-11-05 Thread Russell Smith

Tom Lane wrote:

Russell Smith [EMAIL PROTECTED] writes:
  

I got this error the other day, I was under the impression that vacuum could 
get a concurrently updated tuple.  I could be wrong.  It is possible for 
somebody to quickly explain this situation?  Message follows;



  

vacuumdb: vacuuming of table school.person in database sqlfilter failed: 
ERROR:  tuple concurrently updated



Was this a VACUUM ANALYZE, and if so might there have been another
ANALYZE running concurrently on that table?  If so, this is just a
reflection of concurrent attempts to update the same pg_statistic
row.  It's harmless since the ANALYZE that didn't fail presumably
stored pretty nearly the same results.  There is some interlocking
to prevent the error in CVS HEAD, though.
  
Thanks Tom, that makes a lot on sense.  There is an analyze in that.  
Plus 8.1 autovac probably got its hand in at the same time.  I didn't 
expect it to give the error about the vacuumed table if pg_statistic was 
the table having the concurrent update.  I will remember that for the 
future.  Thanks again for the fast and concise response.


Russell Smith

regards, tom lane




---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] ERROR: tuple concurrently updated

2006-11-04 Thread Russell Smith

Hello,

I got this error the other day, I was under the impression that vacuum could 
get a concurrently updated tuple.  I could be wrong.  It is possible for 
somebody to quickly explain this situation?  Message follows;

vacuumdb: vacuuming of table school.person in database sqlfilter failed: 
ERROR:  tuple concurrently updated


Thanks

Russell Smith


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] php postgresql

2006-03-02 Thread Russell Smith

Mary Adel wrote:
I am wondering how I can call a stored procedure from PHP.
If anyone can help it would be great for me.



Very small code snippet.

$sql = "SELECT my_func('para')";
$Result = pg_query($sql);
$Value = pg_fetch_result($Result, 0, 0);
...

Regards

Russell Smith

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Which indexes does a query use?

2006-02-26 Thread Russell Smith

Chris Velevitch wrote:

In pg v7.4.5, I have this query:-

select *
from activities
where activity_user_id = 2 and activity_type = 1 and activity_ts < now()
order by activity_ts desc
limit 1;

where activity_user_id is a non-unique index and activity_ts is a
non-unique index.

When I explain it, I get:-

Limit  (cost=46.33..46.34 rows=1 width=58)
  -  Sort  (cost=46.33..46.34 rows=1 width=58)
Sort Key: activity_ts
-  Seq Scan on activities  (cost=0.00..46.32 rows=1 width=58)
  Filter: ((activity_user_id = 2) AND (activity_type = 1)
AND ((activity_ts)::timestamp with time zone < now()))

If I'm reading this right, it's telling me that it is NOT using any indexes.

Clearly, this is wrong. I would have expected that index on
activity_user_id would have been used to help find all the records
efficiently.


Not necessarily.  How many rows are there in the table at the moment? 
If pg uses an index, it first has to fetch the index page, then fetch the 
heap page.  So if you have a small number of blocks in the heap it's 
actually cheaper to just scan the heap.  I would guess the heap is small, 
given that the seq scan only finds one row, and finds it in 46.32 
cost units.  The row width isn't high either, which means you get good 
block packing.  Probably 80-100 rows per block.


If you post explain analyze output instead of explain, and possibly the 
number of rows in that table, we might be able to help further, but that is 
my best guess from the information given.
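Once the table grows enough for an index to pay off, the one most likely to help this query is a composite index covering both filter columns and the sort key, so the planner can satisfy the WHERE clause and the ORDER BY from a single index scan.  A sketch (the index name is made up):

```sql
CREATE INDEX activities_user_type_ts
    ON activities (activity_user_id, activity_type, activity_ts);

EXPLAIN ANALYZE
SELECT *
FROM activities
WHERE activity_user_id = 2
  AND activity_type = 1
  AND activity_ts < now()
ORDER BY activity_ts DESC
LIMIT 1;
```

Note that EXPLAIN ANALYZE actually executes the query, so the timings it reports are real rather than estimates.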


Regards

Russell Smith



Chris
--
Chris Velevitch
Manager - Sydney Flash Platform Developers Group
www.flashdev.org.au






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Object ownership in a new database

2006-02-13 Thread Russell Smith

Hello all,

When you create a new database, not all objects in that database are 
owned by the database owner.  Now some of those may need to be owned by 
a superuser, eg C functions.  However should other things such as the 
public schema or other general objects be owned by the database owner, 
or the user who created them in the template database?


To create a db with the public schema owned by postgres, just:

$ createdb -h 172.17.72.1 -U postgres -O non_superuser owner_test;

$ psql -h 172.17.72.1 -U postgres owner_test;
owner_test=# \dn
List of schemas
Name|  Owner
+--
 information_schema | postgres
 pg_catalog | postgres
 pg_toast   | postgres
 public | postgres
(4 rows)

owner_test=# \q


Now everything is owned by postgres.

Is this the correct and desired behaviour, or is the behaviour expected 
to be different?  I expected the public schema to be owned by non_superuser.
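For what it's worth, ownership can be handed over after the fact; a superuser connected to the new database can run (names from the example above):

```sql
ALTER SCHEMA public OWNER TO non_superuser;
```

That works around the behaviour, though it doesn't answer whether createdb -O should do it automatically.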


Any comments welcome.

Russell Smith

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Old problem needs solution

2005-06-03 Thread Russell Smith
On Fri, 3 Jun 2005 12:38 am, Tom Lane wrote:
 Gerald D. Anderson [EMAIL PROTECTED] writes:
  So, the plot thickens.  Is there somewhere I can go tweak a few bytes to 
  make it think it's 7.3?
 
 No.  That's not what you want anyway; you want a late 7.4 build, just
 one without the hierarchical-queries patch.  I dunno enough about Gentoo
 to say how you get rid of a patch you don't want, but if it's anything
 like RPMs, you can just dike the patch out of the specfile and rebuild.
 
USE=-pg-hier emerge -vp =postgresql-7.4.7-r2

Will do the job on gentoo.  It's not enabled by default unless somebody has put
pg-hier somewhere in the use flags.

Regards

Russell Smith



regards, tom lane
 
 
 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Old problem needs solution

2005-06-03 Thread Russell Smith
On Sat, 4 Jun 2005 09:25 am, Alvaro Herrera wrote:
 On Fri, Jun 03, 2005 at 05:55:36PM -0500, Gerald D. Anderson wrote:
[snip]
 
 I guess your build with the use flag wasn't successful.  I think you
 have two choices:
 
 1. really build with the patch installed, and dump your data using that

Given the number of reports we have had about this specific bug on the lists,
I have taken the time to submit a bug directly to the Gentoo project.  Hopefully
they can fix it properly and we will not see this happening again.

For reference the bug is at:
http://bugs.gentoo.org/show_bug.cgi?id=94965

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] bulk loading of bplus index tree

2005-05-26 Thread Russell Smith
On Thu, 26 May 2005 06:06 pm, Surabhi Ahuja wrote:
 
 I have heard about bulk loading algorithm for indexes..
 for eg. if u have values like 1, 2,3,4,5, etc...till a very large number.
 in case of simple mechanism of indexing, the values will be inserted one by 
 one for eg..1 then 2 and so on
 however in bulk loading ..the mechanism of building the index (bplus)tree is 
 quite diffreent and very fast ezpecially if u consider a very large number of 
 values.
 
 My question is : is this algorith implemented by postgreSQL. If yes please 
 tell in what cases can i make use of it.

Bulk loading for B+Trees is implemented in PostgreSQL.  It is used on index 
creation, or reindex.

I don't believe it's used in other places, but others may have more to say.
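So the way to take advantage of it on an already-populated table is to build (or rebuild) the index after loading the data; both statements below go through the bulk-build path rather than inserting keys one at a time (table and index names are illustrative):

```sql
CREATE INDEX items_val_idx ON items (val);  -- bulk build over existing rows
REINDEX INDEX items_val_idx;                -- rebuild via the same path
```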

Regards

Russell Smith.

 Thank You
 Regards
 Surabhi Ahuja
 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] (Ideas) pg_dump in a production environment

2005-05-24 Thread Russell Smith
On Tue, 24 May 2005 02:12 pm, Tom Lane wrote:
 Thomas F. O'Connell [EMAIL PROTECTED] writes:
  I'd like to use pg_dump to grab a live backup and, based on the  
  documentation, this would seem to be a realistic possibility. When I  
  try, though, during business hours, when people are frequently  
  logging in and otherwise using the application, the application  
  becomes almost unusable (to the point where logins take on the order  
  of minutes).
 
 The pg_dump sources contain some comments about throttling the rate
 at which data is pulled from the server, with a statement that this
 idea was discussed during July 2000 and eventually dropped.  Perhaps
 you can think of a better implementation.

A brief look at the code suggests a couple of possibilities for fixing problems.
There seem to be at least two different issues here from the user point of view.

Issue 1: Large Tables cause server slowdown
---
There are two dump cases in my short reading of the pg_dump code.  Case 1 is
the copy dump, which is done as one command; the server does most of the work.
Case 2 is the INSERT type dump, where the pg_dump client does most of the work
creating the INSERT statements.  Case 2 is done with a cursor, and it would be
easy to insert a fixed delay sleep after a certain number of records has been
dumped.  I'm sure we could work out the average size of a tuple in case 2, and
even pause after a certain amount of data has been transferred.

I am unsure about how to attack Case 1; as mentioned, it is handled mostly in
the backend code, which we don't really control.  If it could be declared as a
CURSOR, you could use the same principle as Case 2.  The current throttling
suggestions are all based on time.  I think that a data/counter based solution
would be less intense on the system.  When the counter is reached, just do a
usleep for the throttle time.


Issue 2: Full Backups of large amount of data saturate disk I/O (Many tables 
make it slow)
---
If the backup dump is large, and given all files will be sequentially scanned 
during the backed,
the server IO is going to be pushed to the limit.  A pause between dumping 
tables seems a
simple possibility to reduce the ongoing IO load on the server to allow for a 
period where other
requests can be served.  This would result in a bursty type performance 
improvement.  In environments
with large numbers of tables of a reasonable size, this could give a benefit.


---
In releases prior to 8.0, a wait after a certain amount of data would possibly
not evict high use data, as the wait time would mean the frequently used data
would have been accessed again; instead you would evict the seqscan data you
requested for the previous part of the dump.  In post 8.0, or 8.1 with the
clock sweep, it's possibly the same situation with regard to the delays, but
you could possibly process larger amounts of data before the sleep, as you
would keep recycling the same buffers.  You would use the sleep to reduce disk
I/O more than to reduce cache eviction.

The problem with timed waits during any backup is that the database cannot be
vacuumed for longer.  In some limited circumstances (like mine), if you have a
long running transaction that blocks vacuuming of certain small high update
tables, you lose performance as the tables bloat, and can only fix it with a
vacuum full.

Both of these suggestions may be totally bogus.  So I suppose I'm asking for
feedback on them to see if they would be worthwhile implementing.

Regards

Russell Smith

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] pgplsh on postgresql 8.0.2

2005-05-13 Thread Russell Smith
On Fri, 13 May 2005 06:39 pm, [EMAIL PROTECTED] wrote:
 Hi,
 I migrated my db from postgresql 7.4.6 to 8.0.2 but I'm not able to make plsh
 functions working.
 I installed plsh ver. 1.0-7.3 after having installed postgresql 8.0.2 rpms for
 fedora core 2.
 I used the same, succesfull, install procedure I used on postgresql 7.4.6.
 
 Every time I try to run a plsh function (ie. postmasterstarttime()) I get the
 following error message:
 
 sano_gis=# select postmasterstarttime();
 server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.
 The connection to the server was lost. Attempting reset: Succeeded.
 sano_gis=#
 
 Do you know if plsh ver. 1.0-7.3 is compatible with postgresql 8.0.2 ?
 Is there any solution or alternative?

I have had an issue getting plsh working with 8.0.x, even after editing some
of the source code.  I contacted Peter E about it and he said he would look into
it.  I think you can run exec from both plperlu and plphpu.  You may want to
look into those to see if you can get what you want.  They may even provide
more flexibility.

Regards

Russell Smith
 
 Thank you very much,
 Paolo Mattioli
 
 
 
 
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] ORDER BY options (how to order data as AAA, aaa, BBB, bbb, ... ZZZ, zzz)

2005-05-10 Thread Russell Smith
On Tue, 10 May 2005 07:41 pm, Julian Legeny wrote:
 Hello,
 
I have following problem:

 But I would like to sort all data as following:
 
NAME
 ---
AAA
aaa
BBB
bbb
CCC
ccc
 
 
 How can I write sql command (or set up ORDER BY options) for selecting that?
 
SELECT * FROM MY_TABLE ORDER BY lower(NAME), NAME

The second NAME is to ensure that AAA comes before aaa; otherwise the relative 
order of the two is not determined.
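A quick demonstration with throwaway data (note that the tie-break between 'AAA' and 'aaa' depends on the database's collation; under the C locale the uppercase form sorts first):

```sql
CREATE TEMP TABLE my_table (name text);
INSERT INTO my_table VALUES ('bbb'), ('AAA'), ('CCC'), ('aaa'), ('BBB'), ('ccc');
SELECT name FROM my_table ORDER BY lower(name), name;
```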

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Samba server Error

2005-05-09 Thread Russell Smith
On Mon, 9 May 2005 08:09 pm, ElayaRaja S wrote:
 Hi,

Wrong Mailing list maybe?  This is a PostgreSQL mailing list.

Regards

Russell Smith
   I am using Redhat Linux 9 and I configure the samba server with
 workgoup of my local
   area network. when i tried to connect it was getting the error message as 
   
   \\myserver is not accessible. You might not have permission to use
 this netowrk resource.
 Contact the administrator of this server to find out if you have
 access permissions.
 
 The nework path was not found.
 
 /etc/samba/smb.conf
 
 
  workgroup = myworkgroupname
 
 [root]
 comment = Root Directory
 path = /root
 public = yes
 browseable = yes
 writeable = yes
 create mode = 0664
 directory mode = 0775
 
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] plphp1.1 make fails

2005-05-06 Thread Russell Smith
On Fri, 6 May 2005 07:50 pm, CSN wrote:
 I followed the new instructions (patch, autoconf,
 configure --with-php=/usr/lib/apache/libphp5.so, etc.)
 and get this error when running 'make':
 
 make[3]: Entering directory
 `/usr/local/src/postgresql-8.0.2/src/pl/plphp'
 patching file php.h
It couldn't find your php.h
/usr/include/php/main/php.h

That is the standard location for it.  If you don't have it, maybe you installed
from a binary package, and you may need php-devel to be able to patch your setup.
I have no idea what distribution or OS you are using, so this is purely a guess.

 Hunk #1 FAILED at 291.
 1 out of 1 hunk FAILED -- saving rejects to file
 php.h.rej
 make[3]: *** [php.h] Error 1
 make[3]: *** Deleting file `php.h'
 make[3]: Leaving directory
 `/usr/local/src/postgresql-8.0.2/src/pl/plphp'
 make[2]: *** [all] Error 2
 make[2]: Leaving directory
 `/usr/local/src/postgresql-8.0.2/src/pl'
 make[1]: *** [all] Error 2
 make[1]: Leaving directory
 `/usr/local/src/postgresql-8.0.2/src'
 make: *** [all] Error 2
 
 Know what the problem is?
 
 Thanks,
 CSN
 
 
 
   
 
 
 
 

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Can't compile plphp

2005-05-01 Thread Russell Smith
On Mon, 2 May 2005 06:05 am, CSN wrote:
 I'm following the directions here:
 http://plphp.commandprompt.com/documentation.html
 
 (Note that they differ from the directions included in
 plphp-7.4.x.tar.bz2, which I've also tried.)
 
 Using Postgres 7.4.7 and PHP 5.0.4. When I do this:
 
The new version of plphp does not operate on 7.4 or any of the 7.x series.
It only works for 8.0 and above.

 7. Build and install your plphp.so library.
   cd src/pl/plphp
   make  make install
 
 I get this error:
 cp @php_path@/main/php.h ./
 cp: cannot stat [EMAIL PROTECTED]@/main/php.h': No such file
 or directory
 make: *** [patch] Error 1
 
 Know what the problem is?
 
You need to set those variables in the makefile manually.  There is currently
no configure setup to make them correct for you.  If you read the plphp mailing
list, I have made a post about how to compile the new version.  It includes my
sample makefile, and the steps I went through.

Regards

Russell Smith
 Thanks,
 CSN
 
 
 
 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] oid wraparound

2005-04-26 Thread Russell Smith
On Tue, 26 Apr 2005 07:24 pm, Hubert Fröhlich wrote:
 Hi list,
 
 some time ago, there was a discussion about oid wraparound. See 
 http://archives.postgresql.org/pgsql-general/2002-10/msg00561.php .
 
 Those days, we had PostgreSQL 7.1 and 7.2, and we had to be careful 
 oids approaching 2^32 (2.14 billion)
 
 Now, we have 8.0. What does the situation look like? Where do I have to 
 be careful:
 
 OID  2billion? 4billion?

 What about the danger of TID wraparounds? (databases are VACUUMed regularly)

With 8.0 you only need to make sure you do database wide vacuums every 1
billion transactions or so.  If you do that, then there is no problem when the
XID (Transaction ID) wraps around.  PostgreSQL will know which transactions
were in the past, and which were in the future.
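A rough way to keep an eye on this is to check each database's transaction age; a database-wide vacuum advances datfrozenxid, pulling the age back down (catalog column names as I recall them for the 8.0-era catalogs, so treat this as a sketch):

```sql
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```

If xid_age for any database creeps toward a billion, it is overdue for a database-wide vacuum.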

Regards

Russell Smith. 
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] pg_dump fails

2005-04-19 Thread Russell Smith
On Tue, 19 Apr 2005 11:53 pm, Lorenzo Thurman wrote:
 Thanks for the reply. I've tried recompiling with my install build 
 settings, but no luck. I've posted a message on the Gentoo forums. 
 Hopefully they will have an answer. If they do, I'll post back here for 
 future reference.
 
I read your post in the forums.  And as Tom suggested, it has nothing to do
with pg_dump; you need to remerge postgresql at the very least, and with C and
USE flags you understand.

The usual Gentoo causes come to mind first.  Are the USE flags set correctly?
What are they?

What are your GCC flags?  I see a lot of Gentoo users who turn on just about
every compiler flag without actually knowing what it does, or how it affects
things.  Are your CFLAGS conservative?

I've been using PostgreSQL on Gentoo for both 7.4, and 8.0 from beta to 8.0.2,
with no problems.  But then I always set my CFLAGS to something conservative
like CFLAGS=-march=i586 -mcpu=i586 -O2 -pipe.  Yes, it may seem conservative
by Gentoo standards, but I don't get broken software.  Always check the extra
patches applied to the default distribution if you ever have trouble, to weed
out problems.  And never build with USE flags you don't understand the
implications of, especially package specific ones.

I understand Tom's frustration, as Red Hat is in business and ships quality
checked software, while Gentoo is run by a community group, in which I think
many of the packagers are not tied to the projects they are packaging.  But I
also think there is often fault with the Gentoo user pushing his system a
little too hard for speed, without considering stability or even
understanding it.

Regards

Russell Smith.

 On Apr 19, 2005, at 1:01 AM, Tom Lane wrote:
 
  Lorenzo Thurman [EMAIL PROTECTED] writes:
  I'm trying that right now. I think there may be mis-match in the build
  settings between upgrades of postgresql. The USE settings may be at
  fault:
 
- - pg-hier: Enables recursive queries like Oracle's 
  'CONNECT
  BY' feature.
 
  [ rolls eyes... ]  Yup, that's Gentoo all right: throw in random 
  patches
  that have been rejected by the upstream developers.  Now that I think
  about it, this failure is exactly what that patch is known to cause,
  because it makes an incompatible change in Query structures and hence
  in on-disk view rule representation.
 
  I think these may have been changed since the original install.
 
  Go back to your prior setting, or even better stop using Gentoo's
  hacked-up version.  I'm not sure why we even bother to answer support
  requests from Gentoo users, when what they are using is not our
  software but some randomly-modified variant.  I wonder what other
  brokennesses Gentoo may be including ...
 
  (Just for the record: I work for Red Hat, which has a rather different
  notion of the level of reliability it wants to ship.  So take my 
  opinion
  with the appropriate grain of salt.  But I'm a mite ticked off at the
  moment --- you're not the first person to have been bitten by this,
  and you likely won't be the last, and I think it's entirely because
  Gentoo has such a low quality standard for the patches they ship.)
 
 regards, tom lane
 
 
 
 
 Tech/Library Combo Lab Manager
 Northwestern University
 Office Tech MG49
 mailto:[EMAIL PROTECTED]
 voice: 847-467-6565
 pager: 847-536-0094
 
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
http://archives.postgresql.org
 
 

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-05 Thread Russell Smith
On Tue, 5 Apr 2005 06:01 am, Joshua D. Drake wrote:
 Tom Lane wrote:
 
 Andrew Dunstan [EMAIL PROTECTED] writes:
 
 ... If there are no license or build issues I'm in favor.
 
 
 Peter has pointed out that the problem of circular dependencies is a
 showstopper for integrating plPHP.  The build order has to be
  Postgres
  PHP (since its existing DB support requires Postgres to build)
  plPHP
 so putting #1 and #3 into the same package is a no go.  Which is too
 bad, but I see no good way around it.
 
 O.k. I am confused here. You do not need PHP DB support for plPHP. You only
 need the php.so (once were done anyway). Which means that as long as PHP
 is installed it will work, just like plperl or plpython.
 
 The ONLY reason you would build PHP separately is if your stock installed
 PHP didn't have a feature enabled that you want. This has nothing at all
 to do with plPHP.
 
The issue also includes the fact that you can't install libpq without having 
postgresql
installed.  If you could do that, the circular dependency wouldn't exist.

Some systems build PostgreSQL support into PHP; given that is the case, what
Tom says is correct.  First you would have to force PostgreSQL to be installed
without PL/PHP, then install PHP with PostgreSQL support, then install PL/PHP.

OR

Install php without postgresql support
Install postgresql with pl/php
Rebuild php with postgresql support (Unless you only want it available in the 
db)

I may be a bad man for suggesting it...  But is it possible to ship libpq as a
separate tarball that you can compile without the PostgreSQL server?

Regards

Russell Smith

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Copression

2005-03-20 Thread Russell Smith
On Mon, 21 Mar 2005 02:50 pm, Bruce Momjian wrote:
 Stanislaw Tristan wrote:
  Is it possible to compress traffic between server and client while the
  server returns query results?
  It's very relevant for dial-up users.
  What is the solution?
There is always the possibility of using SSH to tunnel the connection.  You get 
encryption and compression.
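
As a sketch (host, port and database names here are illustrative), ssh's -C
flag enables compression and -L forwards a local port to the server's
PostgreSQL port:

```shell
# Forward local port 6543 to PostgreSQL (5432) on the server,
# compressing all tunnelled traffic (-C); -N runs no remote command.
ssh -C -N -L 6543:localhost:5432 user@db.example.com &

# Connect through the compressed tunnel:
psql -h localhost -p 6543 mydb
```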

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Using sequence name depending on other column

2005-03-20 Thread Russell Smith
On Tue, 15 Mar 2005 08:39 pm, Andrus wrote:
   I have table containing different types of documents (type A, B and C).
  
   Each document type must have separate sequential ID starting at 1
  
   ID of first inserted record of type A must be set to 1
   ID of first inserted record of type B must be also set to 1
   ID of second record of type A must be set to 2
   etc.
 
  If you are happy with the fact that a sequence may leave a hole in
  the numbers, you are probably best not to set a default value for an
  integer, or big integer.  Then run a before trigger for each row.  That
  trigger will assign a value to the column based on the value given for
  the type.
 
 Russell,
 
 thank you.
 I'm new to Postgres.
 Is there any sample of how to write such a trigger?
 

CREATE FUNCTION seq_trig() RETURNS trigger
AS $$BEGIN

IF NEW.type = 'A' THEN
  NEW.sequence := nextval('a');
ELSIF NEW.type = 'B' THEN
  NEW.sequence := nextval('b');
END IF;

RETURN NEW;
END$$
LANGUAGE plpgsql;


Something like that may work; 'a' and 'b' here are the names of the two
sequences, which must be created beforehand.
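
A minimal setup sketch around that trigger function; the table name, column
names and sequence names ('a', 'b') are illustrative only:

```sql
-- Hypothetical schema for the per-type sequences.
CREATE SEQUENCE a;
CREATE SEQUENCE b;

CREATE TABLE documents (
    sequence integer,          -- filled in by the trigger
    type     char(1) NOT NULL, -- 'A', 'B', ...
    body     text
);

CREATE TRIGGER documents_seq
    BEFORE INSERT ON documents
    FOR EACH ROW EXECUTE PROCEDURE seq_trig();

-- The first 'A' row gets sequence 1, and the first 'B' row also gets 1.
INSERT INTO documents (type, body) VALUES ('A', 'first A');
INSERT INTO documents (type, body) VALUES ('B', 'first B');
```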

 Before inserting each row it should set document id from sequence
 corresponding to insertable document type.
 
 Andrus. 
 
 
 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 
 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] sql question

2005-03-13 Thread Russell Smith
On Fri, 11 Mar 2005 11:36 pm, Steven Verhoeven wrote:
 Hi all
 
 My table definition :
 
id | fref  | mref
 --+---+--
   1   | 23|   25
   2   | 24|   28
   3   | 25|   31
   4   | 26|   34
 
 My problem :
 i need a query that results in this :
 
id |ref
 --+--
   1   | 23  
   1   | 25
   2   | 24
   2   | 28
   3   | 25   
   3   | 31
   4   | 26   
   4   | 34
 
SELECT id, fref as ref FROM table
UNION ALL
SELECT id, mref as ref FROM table;

Should do the trick.
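
To get exactly the ordering shown in the question, an ORDER BY can be applied
to the whole union ("table" here stands in for the real table name):

```sql
SELECT id, fref AS ref FROM "table"
UNION ALL
SELECT id, mref AS ref FROM "table"
ORDER BY id, ref;
```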

 
 Do I need a crosstab-query ?
 Who can help me ?
 
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Using sequence name depending on other column

2005-03-12 Thread Russell Smith
On Sun, 13 Mar 2005 02:59 pm, Bruno Wolff III wrote:
 On Sat, Mar 12, 2005 at 23:05:41 +0200,
   Andrus Moor [EMAIL PROTECTED] wrote:
  I have table containing different types of documents (type A, B and C).
  
  Each document type must have separate sequential ID starting at 1
  
  ID of first inserted record of type A must be set to 1
  ID of first inserted record of type B must be also set to 1
  ID of second record of type A must be set to 2
  etc.
 
If you are happy with the fact that a sequence may leave a hole in
the numbers, you are probably best not to set a default value for an
integer, or big integer.  Then run a before trigger for each row.  That
trigger will assign a value to the column based on the value given for
the type.

Regards

Russell Smith

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Problem with inherited table, can you help?...

2005-03-10 Thread Russell Smith
On Fri, 11 Mar 2005 03:39 am, Michael Fuhr wrote:
 On Thu, Mar 10, 2005 at 01:31:21AM -0800, Net Virtual Mailing Lists wrote:

[snip]

 
 Some have suggested that PostgreSQL should use a weaker lock on the
 referenced key, but that hasn't been implemented yet.
 

Are there actually any problems with only getting an AccessShareLock?

Regards

Russell Smith

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Website Documentation

2005-02-12 Thread Russell Smith
Dear all,

There does not seem to be the latest version of the PostgreSQL documentation
online.

The release notes for 8.0 and 7.4 only go to versions 8.0.0 and 7.4.6.

Where can I find the changes made from 7.4.6 to 7.4.7, and from 8.0.0 to 8.0.1?

Should the site be updated?

Thanks

Russell Smith

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Website Documentation

2005-02-12 Thread Russell Smith
On Sun, 13 Feb 2005 01:22 pm, Bruce Momjian wrote:
 Russell Smith wrote:
  Dear all,
  
  There does not seem to be the latest version of the PostgreSQL 
  documentation online.
  
  The release notes for 8.0 and 7.4 only go to version 8.0.0 and 7.4.6.
  
  Where can I find the changes made from 7.4.6 - 7.4.7, and 8.0.0 - 8.0.1?
 
 If you want the changes to the server between releases see the Release
 notes in the documentation.  8.0.1 release notes contain all changes
 back to the 1.0 release.
 
I am aware of this, but the 8.0.1 release notes are not online.  I assume we
are saying I have to download them to view them.  That may be correct now, but
in the past it doesn't seem to have been the case.

http://www.postgresql.org/docs/8.0/interactive/release.html

has all releases, including 7.4.6 and the changes in 7.4.5, but nothing
appears to be available for the most recent releases (eg 7.4.7, 7.3.9 and
8.0.1).  I know they may have been security-only releases, but I assumed that
the release notes would say what security bugs those versions fixed.

But I could be all wrong about this too.

Regards

Russell Smith

 If you want to know the documentation changes between releases, we don't
 track that except via CVS logs.
 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Website Documentation

2005-02-12 Thread Russell Smith
On Sun, 13 Feb 2005 01:45 pm, Tom Lane wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  Russell Smith wrote:
  The release notes for 8.0 and 7.4 only go to version 8.0.0 and 7.4.6.
 
  If you want the changes to the server between releases see the Release
  notes in the documentation.
 
 I think his point is that those notes aren't up on the website.  We
 should make a regular practice of updating the on-line docs to match
 each minor release, even if nothing changed except the release notes
 (which is often not the case, anyway).

What Tom has understood is what I meant.  Also thanks to Bruno for the link
to the developer docs, where the 8.0.1 release information is.

Regards

Russell Smith.
 
regards, tom lane
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster
 
 

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Sorting when * is the initial character

2005-02-08 Thread Russell Smith
On Tue, 8 Feb 2005 01:10 pm, CoL wrote:
 hi,
 
 Berend Tober wrote, On 2/7/2005 22:20:
  I encountered what looks like unusually sorting behavior, and I'm wondering 
  if
  anyone can tell me if this is supposted to happen (and then if so, why) or 
  if
  this is a bug:
  
  
  SELECT * FROM sample_table ORDER BY 1;
  
  account_id,account_name
  100,First account
  110,Second account
  *115,Fifth account
  120,Third account
  *125,Fourth account
  
  I would expect to see
  
  account_id,account_name
  *115,Fifth account
  *125,Fourth account
  100,First account
  110,Second account
  120,Third account

With 8.0.0, C locale and an SQL_ASCII database, I get the expected output.

Regards

Russell Smith

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Calculating a moving average (Coding style)

2005-01-24 Thread Russell Smith
On Mon, 24 Jan 2005 08:32 pm, Alban Hertroys wrote:
 [EMAIL PROTECTED] wrote:
  CREATE OR REPLACE FUNCTION get_bar_avg() RETURNS TRIGGER AS '
  DECLARE
  bar_record RECORD;
  x INTEGER;
  y DOUBLE PRECISION := 0;
  BEGIN
   IF TG_OP = ''INSERT'' THEN
y := y + NEW.bar;
 ...
RETURN NEW;
   ELSIF TG_OP = ''DELETE'' THEN
 x := 0;
 ...
 RETURN OLD;
ELSE
 y := y + NEW.bar;
 ...
 RETURN NEW;
END IF;
  END;
  ' LANGUAGE plpgsql;
 
 I see people do this from time to time. Just out of curiosity, is this 
 considered good coding style, or is it considered lazyness? I'm not 
 sure what to think of it.
 
 If I would have written this, there would have been 3 triggers w/o the 
 check on TG_OP. Is there an important drawback to doing so? Is there any 
 document on preferred coding style in PL/PGSQL?
 
 Yes, I'm a bit of a purist...
 
Given you have to define a function for each trigger, my view is: why write
more functions?

Along with this, as a C programmer I would do a few more IF tests in a
function rather than write another one.  I find that triggers like this are
one functional block and all go together; then when you update the function,
it's all in one place.

Others may have better reasons for why they do it the way they do, but those
are mine.

Regards

Russell Smith.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] change natural column order

2004-11-30 Thread Russell Smith
Regarding the natural order of columns:

We can already delete a column from the middle of a table, and change the type
of a column to something totally different, eg text -> int.  So why can we not
move the order of the columns around, with each row written to disk in the new
arrangement?  Or, more accurately, why is it not possible to add a new column
anywhere other than at the end of the list?  It's probably more complicated
than I think, as that's usually the case.

We don't need logical and physical mapping, probably just the ability to
insert a column not on the end.

Sorry if this comment is in the wrong place; I've been following the General
and Hackers discussions and decided to post now after deleting the other
posts.

Regards

Russell Smith.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] OID's

2004-11-16 Thread Russell Smith
On Tue, 16 Nov 2004 08:01 pm, Joolz wrote:
 Michael Glaesemann zei:
 
  OIDS are a system level implementation. They are no longer required
  (you can make tables without OIDS) and they may go away someday.
 
 Out of curiosiry: how will we handle blobs once the OID's are gone?
 
I would guess bytea would become the standard for blob use.  The size is
limited to about 1G compressed, but I would guess most people don't store 2G
files in their DB at the moment, or have that much RAM to be able to handle a
value that big.

 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly
 
 


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Error connecting using pgadmin from different computer !!

2004-11-10 Thread Russell Smith
On Thu, 11 Nov 2004 03:31 am, Goutam Paruchuri wrote:
 Hello,
 
 I get an error in my log when connecting to postgres server on Windows.
 Postgres version : 8.0.0-beta4
 
 LOG TEXT 
 
 2004-11-10 11:22:47 LOG:  invalid entry in file C:/Program
 Files/PostgreSQL/8.0.0-beta4/data/pg_hba.conf at line 64, token
 192.168.2.1/254
 2004-11-10 11:22:47 FATAL:  missing or erroneous pg_hba.conf file
 2004-11-10 11:22:47 HINT:  See server log for details.
 
 Configuration file pg_hba.conf
 
 hostall all  192.168.2.1/254 md5
/254 is not a valid CIDR prefix length.  /0 to /32 are valid; did you mean /31?
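
For example, assuming the intent was to allow the whole 192.168.2.x network
(adjust the prefix to the actual subnet), the pg_hba.conf entries might read:

```
# TYPE  DATABASE  USER  CIDR-ADDRESS     METHOD
host    all       all   192.168.2.0/24   md5
host    all       all   127.0.0.1/32     md5
```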

 hostall all 127.0.0.1/32  md5
 
 Any ideas how to fix it ??
 
 Thanks !
 Goutam
 
 
 
 
 Confidentiality Notice
 The information contained in this e-mail is confidential and intended for use 
 only by the person(s) or organization listed in the address. If you have 
 received this communication in error, please contact the sender at O'Neil  
 Associates, Inc., immediately. Any copying, dissemination, or distribution of 
 this communication, other than by the intended recipient, is strictly 
 prohibited.
 
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster
 
 

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] index not always used when selecting on a date field

2004-11-07 Thread Russell Smith
On Mon, 8 Nov 2004 07:56 am, Miquel van Smoorenburg wrote:
 I have a database with a btree index on the 'removed' field,
 which is of type 'date'. However it isn't being used:
 
 techdb2= explain select * from lines where removed < CURRENT_DATE;
  QUERY PLAN
 
  Seq Scan on lines  (cost=0.00..243.47 rows=2189 width=324)
    Filter: (removed < ('now'::text)::date)
 (2 rows)
 
 Now the weird thing is that if I select a range it is being used:
 
 techdb2= explain select * from lines where removed < CURRENT_DATE and 
 removed > '-01-01';
  QUERY PLAN
 
  Index Scan using lines_removed_idx on lines  (cost=0.00..120.56 rows=33 
 width=324)
    Index Cond: ((removed < ('now'::text)::date) AND (removed > 
 '-01-01'::date))
 (2 rows)
 
 Why is this?
 
 (Tested with both 7.3.2 and 7.4.6)
 
 Mike.
 
now() and CURRENT_DATE are not constants, and cannot be planned as constants,
so the planner cannot use an index for them.

This has been covered on the list a number of times.  Until a solution is at
hand, you can either use constants instead of now(), or create an immutable
function that returns now().  However, if you PREPARE those queries, you will
not get a new value for now() each time you run the query.

Such a function fits in a category between STABLE and IMMUTABLE, of which
there is currently no type.
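
A sketch of the immutable-function workaround mentioned above.  The function
name is made up, and declaring it IMMUTABLE deliberately misleads the planner
(current_date is not truly immutable), so use it with care:

```sql
-- Wrapper marked IMMUTABLE so the planner folds it to a constant
-- and can consider the index on "removed".
CREATE FUNCTION immutable_now() RETURNS date AS
    'SELECT current_date'
LANGUAGE sql IMMUTABLE;

-- e.g.  SELECT * FROM lines WHERE removed < immutable_now();
```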

Regards

Russell Smith

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] QMail

2004-10-30 Thread Russell Smith
On Thu, 28 Oct 2004 04:14 am, Eric wrote:
 Is there something to interface postgreSQL with QMail to store mails in
 pgsql instead of using mbox or maildir?
 
 Or maybe it's not a good idea to do that?
 
 I think there is some adavantages...

also look at http://www.dbmail.org/
 
 
 
 ---(end of broadcast)---
 TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly
 
 

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] adding missing FROM-clause

2004-10-30 Thread Russell Smith
On Sat, 30 Oct 2004 01:42 am, C G wrote:
 Dear All,
 
 I have a simple join query
 
 SELECT c1 FROM t1
 INNER JOIN
 t2 ON t2.c2 = t1.c2 WHERE t3.c3= t2.c3;
 
Instead:

SELECT c1 FROM t3, t1 INNER JOIN t2 ON t2.c2 = t1.c2 WHERE t3.c3 = t2.c3;

OR

SELECT c1 FROM t1 INNER JOIN t2 ON t2.c2 = t1.c2 JOIN t3 ON t3.c3 = t2.c3;

Either way t3 is listed explicitly, so no FROM-clause entry has to be added
for you.

 Which gives the expected result but I get the message
 NOTICE:  adding missing FROM-clause entry for table t3
 
 How do I get rid of this NOTICE, i.e. how should I construct my select 
 query.
 
 Many thanks
 
 Colin
 
 _
 Want to block unwanted pop-ups? Download the free MSN Toolbar now!  
 http://toolbar.msn.co.uk/
 
 
 ---(end of broadcast)---
 TIP 8: explain analyze is your friend
 
 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Resource temporarily unavailable

2004-10-30 Thread Russell Smith
 
 max_connections = 400
 
 www3:/etc/postgresql# ulimit -a
 max user processes(-u) 256

 im totaly bogus, any ideas ?
 

Well, you're allowing postgres to have 400 connections, which is at least 400
processes, if not a few more.  And you are ulimiting the postgres user to 256
processes, so the kernel says it is out of process space for that user.
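
To line the two limits up, either lower max_connections or raise the postgres
user's process limit; the values and file paths below are illustrative and
distribution-dependent:

```shell
# Check the current per-user process limit (run as the postgres user):
ulimit -u

# Raise it persistently, e.g. via /etc/security/limits.conf:
#   postgres  soft  nproc  512
#   postgres  hard  nproc  512

# Or lower PostgreSQL's ceiling in postgresql.conf instead:
#   max_connections = 200
```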

Regards

Russell Smith

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Repeated VACUUM reports same number of deleted rows

2004-10-01 Thread Russell Smith
PostgreSQL 7.4.5

The following VACUUMs were run within a couple of minutes of each other
completing.  The table concerned has not been changed in a long time (a
month).

There were open transactions at the time the vacuum was run.  These were
created by PHP running with PostgreSQL; at least that is what I have managed
to find so far, and disabling PHP in Apache removes them.

postgres  7588  0.0  0.4 50324 3168 ?SSep28   0:00 postgres: postgres 
sqlfilter 10.0.0.5 idle in transaction
postgres  7589  0.0  0.4 50324 3172 ?SSep28   0:00 postgres: postgres 
sqlfilter 10.0.0.5 idle in transaction

Should VACUUM report the rows as deleted or say they could not be deleted?
Why does it report the same information for three runs in a row?

Or is something else going on that I don't understand.

Regards

Russell Smith


sqlfilter=# vacuum verbose filter.access_log_big
sqlfilter-# ;
INFO:  vacuuming filter.access_log_big
INFO:  index access_log_url now contains 5159204 row versions in 74984 pages
DETAIL:  21455 index pages have been deleted, 2 are currently reusable.
CPU 5.16s/0.69u sec elapsed 650.85 sec.
INFO:  index access_log_whenwho now contains 5159204 row versions in 58292 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 5.39s/0.68u sec elapsed 615.43 sec.
INFO:  index access_log_time now contains 5159204 row versions in 38063 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 3.34s/0.46u sec elapsed 259.59 sec.
INFO:  index accesslogbig_domain now contains 5159204 row versions in 25501 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 2.17s/0.24u sec elapsed 56.53 sec.
INFO:  access_log_big: found 0 removable, 5159204 nonremovable row versions in 
175418 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 19262 unused item pointers.
0 pages are entirely empty.
CPU 21.06s/2.60u sec elapsed 1662.62 sec.
VACUUM
Time: 1662627.077 ms
sqlfilter=# vacuum verbose filter.access_log_big;
INFO:  vacuuming filter.access_log_big
INFO:  index access_log_url now contains 5159204 row versions in 74984 pages
DETAIL:  21455 index pages have been deleted, 2 are currently reusable.
CPU 5.57s/0.64u sec elapsed 655.85 sec.
INFO:  index access_log_whenwho now contains 5159204 row versions in 58292 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 5.54s/0.52u sec elapsed 589.95 sec.
INFO:  index access_log_time now contains 5159204 row versions in 38063 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 3.30s/0.38u sec elapsed 262.43 sec.
INFO:  index accesslogbig_domain now contains 5159204 row versions in 25501 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 2.37s/0.20u sec elapsed 54.98 sec.
INFO:  access_log_big: found 0 removable, 5159204 nonremovable row versions in 
175418 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 19262 unused item pointers.
0 pages are entirely empty.
CPU 21.20s/2.36u sec elapsed 1647.25 sec.
VACUUM
Time: 1647292.681 ms
sqlfilter=# commit;
WARNING:  there is no transaction in progress
COMMIT
Time: 47.537 ms
sqlfilter=# vacuum verbose filter.access_log_big;
INFO:  vacuuming filter.access_log_big
INFO:  index access_log_url now contains 5159204 row versions in 74984 pages
DETAIL:  21455 index pages have been deleted, 2 are currently reusable.
CPU 4.95s/0.68u sec elapsed 648.90 sec.
INFO:  index access_log_whenwho now contains 5159204 row versions in 58292 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 5.29s/0.68u sec elapsed 605.39 sec.
INFO:  index access_log_time now contains 5159204 row versions in 38063 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 3.58s/0.42u sec elapsed 250.08 sec.
INFO:  index accesslogbig_domain now contains 5159204 row versions in 25501 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 2.13s/0.29u sec elapsed 55.05 sec.
INFO:  access_log_big: found 0 removable, 5159204 nonremovable row versions in 
175418 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 19262 unused item pointers.
0 pages are entirely empty.
CPU 20.89s/2.58u sec elapsed 1658.31 sec.
VACUUM
Time: 1658431.580 ms

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Repeated VACUUM reports same number of deleted rows

2004-10-01 Thread Russell Smith
On Sat, 2 Oct 2004 12:42 am, Tom Lane wrote:
 Russell Smith [EMAIL PROTECTED] writes:
  Should VACUUM report the rows as deleted or say they could not be deleted?
  Why does it report the same information for three runs in a row?
 
 I see no pending deletions in that vacuum output:
 
  DETAIL:  0 dead row versions cannot be removed yet.
 
INFO:  vacuuming filter.access_log_big
INFO:  index access_log_url now contains 5159204 row versions in 74984 pages
DETAIL:  21455 index pages have been deleted, 2 are currently reusable.

I expected that index info not to be the same for each vacuum run.  Why are
21455 index pages reported as deleted on runs 1, 2 and 3?

 so I'm not sure why you would expect the output to change.
 
regards, tom lane
 
 ---(end of broadcast)---
 TIP 6: Have you searched our list archives?
 
http://archives.postgresql.org
 
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Controlling order of evaluation?

2004-09-28 Thread Russell Smith
On Wed, 29 Sep 2004 08:16 am, Jerry LeVan wrote:
 I have an srf sql function annual_report(year) that
 has 14 columns: a category, 12 month numeric columns, and a total 
 numeric column.
 
 The function finds monthly totals for each category (which is the 
 target of order by)
 and the grand total for the specified year.
 
 I have another sql function annual_profit_loss(year) that summarizes 
 the total
 for each month and the grand total for each year. ( There is a text 
 column that
 serves a label.) It generates a summary for the annual_report function 
 in essence.
 
 I have hoped that
 
 select * from annual_report(2003)
 union
 select * from annual_profit_loss(2003)
Order by title_column = 'Grand Total', month

That will put all rows whose label is not 'Grand Total' at the top, sorted by
month, and then the 'Grand Total' rows, sorted by month.  Something like that.


 
 would print the last select last ;( but it inserts the last selection
 alphabetically into the rows of the annual_report depending on the label
 field... I suppose I could use a label zzGrand Totals, but that just
 does not look right.
 
 Is there any way I can force the last select to appear last?
 
 Jerry
 
 
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster
 
 

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Error Message Importing Into PostgreSQL (Using phpPgAdmin)

2004-08-31 Thread Russell Smith
On Wed, 1 Sep 2004 12:53 pm, Daniel Secomb wrote:
 Hi,
 
 I'm using PostgreSQL 7.3.4 with phpPgAdmin 2.4.2 and the .sql file I'm
 trying to import came from a dump from a server running PostgreSQL 7.0.3.
 
I'd suggest a phpPgAdmin upgrade; 3.4 is the latest.

 I just got this error message trying to import the PostgreSQL database file
 into phpPgAdmin. The error message was as follows:
 
 Error -
 /usr/local/psa/admin/htdocs/domains/databases/phpPgAdmin/db_readdump.php --
 Line: 33
 PostgreSQL said: ERROR: parser: parse error at or near \ at character 1
 Your query:
 \connect - postgres
 CREATE SEQUENCE acls_bannerid_seq start 1 increment 1 maxvalue 2147483647
 minvalue 1 cache 1 ;
 
This is designed to be sent to psql, which understands \connect.
phpPgAdmin does not.  I would suggest restoring using psql.

Regards

Russell Smith

---(end of broadcast)---
TIP 8: explain analyze is your friend