Re: [GENERAL] Recovering data via raw table and field separators

2007-12-17 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  We used to have the C define MAKE_EXPIRED_TUPLES_VISIBLE that would
  make deleted rows visible, but it seems it was removed in this commit as
  part of a restructuring:
 
 It was removed because it was utterly useless.

It worked in some cases so I don't think it was useless.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Need to find out which process is hitting hda

2007-12-17 Thread Ow Mun Heng

On Sun, 2007-12-16 at 16:11 -0800, Joshua D. Drake wrote:
 On Sun, 16 Dec 2007 17:55:55 -0600
 Scott Marlowe [EMAIL PROTECTED] wrote:
 
  On Dec 14, 2007 1:33 AM, Ow Mun Heng [EMAIL PROTECTED] wrote:
   I kept looking at the io columns and didn't even think of the swap
   partition. It's true that it's moving quite erratically but I won't
   say that it's really thrashing.
  
                 total       used       free     shared    buffers     cached
    Mem:           503        498          4          0          3        287
    -/+ buffers/cache:        207        295
    Swap:         2527        328       2199
  
   (YEP, I know I'm RAM starved on this machine)
  
  Good lord, my laptop has more memory than that. :)
 
 My phone has more memory than that :P

What can I say :-p
budgets are tight


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Killing a session in windows

2007-12-17 Thread Alvaro Herrera
Bruce Momjian wrote:
 Howard Cole wrote:
  
   Which you can do, no?  I thought pg_ctl's kill option was invented
   specifically to make this less painful on Windows.
   I shall look into the pg_ctl options to see if the kill option does 
   what taskkill cannot (thanks for the heads up on that)
  
  Using
  $ pg_ctl kill TERM [pid]
  worked great. Since very few people seem to know about this, could I 
  suggest making it more prominent in the server administration pages.
 
 Agreed. I have added the second sentence to our 8.3 beta docs:
 
Alternatively, you can send the signal directly using <command>kill</command>
(or <command>pg_ctl kill TERM [process id]</command> on <productname>Windows</productname>).
 
 You can actually use pg_ctl kill on Unix too but it seems awkward to
 suggest it in the existing sentence.

Huh, why is it awkward?

Alternatively, you can send the signal directly using <command>kill</command>
(or <command>pg_ctl kill TERM [process id]</command>).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Killing a session in windows

2007-12-17 Thread Bruce Momjian
Alvaro Herrera wrote:
 Bruce Momjian wrote:
  Howard Cole wrote:
   
Which you can do, no?  I thought pg_ctl's kill option was invented
specifically to make this less painful on Windows.
I shall look into the pg_ctl options to see if the kill option does 
what taskkill cannot (thanks for the heads up on that)
   
   Using
   $ pg_ctl kill TERM [pid]
   worked great. Since very few people seem to know about this, could I 
   suggest making it more prominent in the server administration pages.
  
  Agreed. I have added the second sentence to our 8.3 beta docs:
  
 Alternatively, you can send the signal directly using <command>kill</command>
 (or <command>pg_ctl kill TERM [process id]</command> on
  <productname>Windows</productname>).
  
  You can actually use pg_ctl kill on Unix too but it seems awkward to
  suggest it in the existing sentence.
 
 Huh, why is it awkward?
 
 Alternatively, you can send the signal directly using <command>kill</command>
 (or <command>pg_ctl kill TERM [process id]</command>).

I think we should mention Windows in there somewhere, because on Windows
it isn't an alternative.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Killing a session in windows

2007-12-17 Thread Alvaro Herrera
Bruce Momjian wrote:
 Alvaro Herrera wrote:

  Huh, why is it awkward?
  
  Alternatively, you can send the signal directly using <command>kill</command>
  (or <command>pg_ctl kill TERM [process id]</command>).
 
 I think we should mention Windows in there somewhere, because it isn't
 alternatively on Windows.

Actually, this whole change is a bit silly, because the text now says
something like:

You can send these signals using pg_ctl kill.  Alternatively you can
use kill (or pg_ctl kill)

In my opinion this change should be reverted.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Killing a session in windows

2007-12-17 Thread Bruce Momjian
Alvaro Herrera wrote:
 Bruce Momjian wrote:
  Alvaro Herrera wrote:
 
   Huh, why is it awkward?
   
   Alternatively, you can send the signal directly using <command>kill</command>
   (or <command>pg_ctl kill TERM [process id]</command>).
  
  I think we should mention Windows in there somewhere, because it isn't
  alternatively on Windows.
 
 Actually, this whole change is a bit silly, because the text now says
 something like:
 
 You can send these signals using pg_ctl kill.  Alternatively you can
 use kill (or pg_ctl kill)

Wow, yeah, I see that now, but it stands alone in a paragraph above.  I updated
the text to:

   The <xref linkend="app-pg-ctl"> program provides a convenient
   interface for sending these signals to shut down the server.
   Alternatively, you can send the signal directly using
   <command>kill</command> on non-Windows systems.

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] slony error --need help

2007-12-17 Thread SHARMILA JOTHIRAJAH
Hi
I had posted this in the Slony mailing list but had no luck in getting any 
answers... Please help me, as I've been stuck with this error for the last 4 days.



I'm trying to replicate between PostgreSQL version 7.4.18 and version 8.1.10. 

I configured postgres-7.4 with the --enable-thread-safety option

I configured slony1 with this command

./configure --prefix=/export/home/josh/slony7.4 --enable-thread-safety 
--with-pgconfigdir=/export/home/josh/postgres7.4/bin 
--with-pgsourcetree=/export/home/josh/postgresql-7.4.18

 

When I try to execute this script

#!/bin/sh

slonik <<_EOF_
cluster name = slony_example;

node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST 
user=$REPLICATIONUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST 
user=$REPLICATIONUSER';

init cluster ( id=1, comment = 'Master Node');

create set (id=1, origin=1, comment='All pgbench tables');
   set add table (set id=1, origin=1, id=1, fully qualified name = 
'public.sample1', comment='accounts table');
store node (id=2, comment = 'Slave node');

store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME 
host=$MASTERHOST user=$REPLICATIONUSER');
store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME 
host=$SLAVEHOST user=$REPLICATIONUSER');

_EOF_

 

I get this error

stdin:21: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could not load 
library /export/home/josh/postgres7.4/lib/xxid.so: ld.so.1: postgres: fatal: 
relocation error: file /export/home/josh/postgres7.4/lib/xxid.so: symbol 
GetTopTransactionId: referenced symbol not found

stdin:21: Error: the extension for the xxid data type cannot be loaded in 
database 'dbname=testdb1 host=172.31.0.67 user=josh'
stdin:21: ERROR: no admin conninfo for node 134701624


 

The same works fine between postgresql versions 8.1.10 and 8.2.5 .

Why do I get this error when replicating between versions 7.4
and 8.1?
Does Slony-I replicate between these two versions? If so, are there
any other settings that need to be done?

 



Thanks in advance

josh


Re: [GENERAL] Planner ignoring to use INDEX SCAN

2007-12-17 Thread Will

Hi - I've been having a few problems like this myself. It's probably a
locale thing.

Here's a look at one of your problems:

 -  Seq Scan on sms_new  (cost=0.00..5240444.80
rows=138939341 width=8)
   Filter: ((otid)::text !~~ 'ERROR%'::text)

I assume you have an index on sms_new?

Check the locale your database cluster has been set up with:

 SHOW lc_ctype

if it doesn't come back with C as your locale, then you will have to
make sure your indexes are created as follows:

CREATE INDEX idx_sms_new_otid_pattern
  ON sms_new
  USING btree
  (otid varchar_pattern_ops);

Look at chapter 11.8 - Operator Classes for an explanation.

Unfortunately you'll need a second index if you want to use a non-
pattern matching operator, e.g. =.
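
For the plain operators (such as =) a second, ordinary btree index on the
same column is enough; a minimal sketch, using the table and column names
from the plan above (the index name is arbitrary):

CREATE INDEX idx_sms_new_otid
  ON sms_new
  USING btree
  (otid);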

If your locale is C, then I don't know what the problem is.

Hope that helps.

Will Temperley

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] postgres8.3beta encodding problem?

2007-12-17 Thread marcelo Cortez

Folks


select chr(165);
ERROR:  requested character too large for encoding:
165
 It's an old script that, if I remember correctly, worked in
postgres 8.2.4.
Any clue?
best regards
mdc 

info:
select version();
PostgreSQL 8.3beta3 on i686-pc-linux-gnu, compiled by
GCC gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.1)

show all 

add_missing_from;off
allow_system_table_mods;off
archive_command;(disabled)
archive_mode;off
archive_timeout;0
array_nulls;on
authentication_timeout;1min
autovacuum;on
autovacuum_analyze_scale_factor;0.1
autovacuum_analyze_threshold;50
autovacuum_freeze_max_age;2
autovacuum_max_workers;3
autovacuum_naptime;1min
autovacuum_vacuum_cost_delay;20ms
autovacuum_vacuum_cost_limit;-1
autovacuum_vacuum_scale_factor;0.2
autovacuum_vacuum_threshold;50
backslash_quote;safe_encoding
bgwriter_delay;200ms
bgwriter_lru_maxpages;100
bgwriter_lru_multiplier;2
block_size;8192
bonjour_name;
check_function_bodies;on
checkpoint_completion_target;0.5
checkpoint_segments;3
checkpoint_timeout;5min
checkpoint_warning;30s
client_encoding;latin1
client_min_messages;notice
commit_delay;0
commit_siblings;5
config_file;/usr/local/pgsql/data/postgresql.conf
constraint_exclusion;off
cpu_index_tuple_cost;0.005
cpu_operator_cost;0.0025
cpu_tuple_cost;0.01
custom_variable_classes;
data_directory;/usr/local/pgsql/data
DateStyle;ISO, DMY
db_user_namespace;off
deadlock_timeout;1s
debug_assertions;off
debug_pretty_print;off
debug_print_parse;off
debug_print_plan;off
debug_print_rewritten;off
default_statistics_target;10
default_tablespace;
default_text_search_config;pg_catalog.spanish
default_transaction_isolation;read committed
default_transaction_read_only;off
default_with_oids;off
dynamic_library_path;$libdir
effective_cache_size;128MB
enable_bitmapscan;on
enable_hashagg;on
enable_hashjoin;on
enable_indexscan;on
enable_mergejoin;on
enable_nestloop;on
enable_seqscan;on
enable_sort;on
enable_tidscan;on
escape_string_warning;on
explain_pretty_print;on
external_pid_file;
extra_float_digits;0
from_collapse_limit;8
fsync;on
full_page_writes;on
geqo;on
geqo_effort;5
geqo_generations;0
geqo_pool_size;0
geqo_selection_bias;2
geqo_threshold;12
gin_fuzzy_search_limit;0
hba_file;/usr/local/pgsql/data/pg_hba.conf
ident_file;/usr/local/pgsql/data/pg_ident.conf
ignore_system_indexes;off
integer_datetimes;off
join_collapse_limit;8
krb_caseins_users;off
krb_realm;
krb_server_hostname;
krb_server_keyfile;
krb_srvname;postgres
lc_collate;es_AR
lc_ctype;es_AR
lc_messages;es_AR
lc_monetary;es_AR
lc_numeric;es_AR
lc_time;es_AR
listen_addresses;*
local_preload_libraries;
log_autovacuum_min_duration;-1
log_checkpoints;off
log_connections;off
log_destination;stderr
log_directory;pg_log
log_disconnections;off
log_duration;off
log_error_verbosity;default
log_executor_stats;off
log_filename;postgresql-%Y-%m-%d_%H%M%S.log
log_hostname;off
log_line_prefix;
log_lock_waits;off
log_min_duration_statement;-1
log_min_error_statement;error
log_min_messages;notice
log_parser_stats;off
log_planner_stats;off
log_rotation_age;1d
log_rotation_size;10MB
log_statement;all
log_statement_stats;off
log_temp_files;-1
log_timezone;America/Buenos_Aires
log_truncate_on_rotation;off
logging_collector;off
maintenance_work_mem;16MB
max_connections;100
max_files_per_process;1000
max_fsm_pages;153600
max_fsm_relations;1000
max_function_args;100
max_identifier_length;63
max_index_keys;32
max_locks_per_transaction;64
max_prepared_transactions;5
max_stack_depth;2MB
password_encryption;on
port;5432
post_auth_delay;0
pre_auth_delay;0
random_page_cost;4
regex_flavor;advanced
search_path;$user,public
seq_page_cost;1
server_encoding;LATIN1
server_version;8.3beta3
server_version_num;80300
session_replication_role;origin
shared_buffers;24MB
shared_preload_libraries;
silent_mode;off
sql_inheritance;on
ssl;off
standard_conforming_strings;off
statement_timeout;0
superuser_reserved_connections;3
synchronous_commit;on
syslog_facility;LOCAL0
syslog_ident;postgres
tcp_keepalives_count;9
tcp_keepalives_idle;7200
tcp_keepalives_interval;75
temp_buffers;1024
temp_tablespaces;
TimeZone;America/Buenos_Aires
timezone_abbreviations;Default
trace_notify;off
trace_sort;off
track_activities;on
track_counts;on
transaction_isolation;read committed
transaction_read_only;off
transform_null_equals;off
unix_socket_directory;
unix_socket_group;
unix_socket_permissions;511
update_process_title;on
vacuum_cost_delay;0
vacuum_cost_limit;200
vacuum_cost_page_dirty;20
vacuum_cost_page_hit;1
vacuum_cost_page_miss;10
vacuum_freeze_min_age;1
wal_buffers;64kB
wal_sync_method;fdatasync
wal_writer_delay;200ms
work_mem;1MB
xmlbinary;base64
xmloption;content
zero_damaged_pages;off





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] top posting

2007-12-17 Thread Lew

Bruce Momjian wrote:

I do top-post if I am asking _about_ the email, rather than addressing
its content, like Is this a TODO item?  You don't want to trim the
email because it has context that might be needed for the reply, and
bottom-posting just makes it harder to find my question, and the
question isn't really related to the content of the email.


Strictly speaking, then, that isn't top-posting but inline posting, where the
inline position is 0 and the amount trimmed is none.


--
Lew

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] mssql migration and boolean to integer problems

2007-12-17 Thread robert lazarski
On Dec 14, 2007 12:49 PM, Gregory Stark [EMAIL PROTECTED] wrote:

 robert [EMAIL PROTECTED] writes:

  So it has two 'bool' - includeScenario  and deleted . I have an
  insert like...
 
  INSERT INTO ASSETSCENARIO
  (assetScenarioID,OBJ_VERSION,includeScenario,scenarioName,probability,occurenceDate,notes,priceTarget,assetID,created,modified,createdUserID,modifiedUserID,deleted)
  VALUES
  (197,0,1,'2007-12-13 11:31:00.000','2007-12-13 11:31:00.000',2,2,NULL);

 There's an SQL standard syntax too, but the Postgres-specific syntax is:

 postgres=# select 1::bool;
  bool
 --
  t
 (1 row)

 postgres=# select 0::bool;
  bool
 --
  f
 (1 row)


 Alternatively you could just quote the inputs. If you insert '0' and '1'
 they'll be parsed as boolean values. It's just because you used 0 and 1
 without quotes that they're parsed as integers first then don't match the
 boolean type.


Is there any way to get the above insert to work as is, via a function
or some other way? I tried the function of another poster but it seems
there is already a cast built in for 8.1.9 for integer to boolean, and
it didn't work for me on the above insert. I'm using Java and
Hibernate so I don't control the select so I wouldn't be able to cast
on it AFAIK. I've gotten this far using the mssql inserts by tweaking
them via regular expressions. The problem I have with putting quotes
around the values such as '0' and '1' is that it seems to me it would
be a hard search-and-replace expression to write, as 0 and 1 are so
common, the booleans are scattered around a lot, and there's no
indication in the inserts file to indicate what fields are booleans.

What I'm trying to avoid is just using integer for these values
instead of boolean - that would work but I'd have to rewrite a fair
amount of Java code to do that.

Thanks for any further ideas,
Robert

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Anomalia file FILBD.TXT

2007-12-17 Thread Vincenzo Romano
On Friday 14 December 2007 19:32:14, Andrej Ricnik-Bay wrote:
 On 12/15/07, Vincenzo Romano [EMAIL PROTECTED] wrote:
  Buon giorno.
  Nel file FILBD.TXT sono presenti dei prodotti per i quali il campo
  Unita' di Misura (offset 182) presenta il valore PM.
  Allego la lista dei prodotti interessati.
  [Good morning. The file FILBD.TXT contains products for which the
  Unit of Measure field (offset 182) shows the value PM. I am attaching
  the list of the affected products.]

 And in English?

 Or maybe find an Italian list? :}


 Cheers,
 Andrej

Sorry guys.
It's been a mistake. I've tried to ask the owner to discard the email.
I'm not even subscribed to this list: the usual copy+paste mistake!

It'd be too boring in English, though! :-)

-- 
Vincenzo Romano
NotOrAnd.IT
Information Technologies
cel. +39 339 8083886
tel. +39 0823 454163
fax. +39 02 700506964
Non quietis maribus nauta peritus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] mssql migration and boolean to integer problems

2007-12-17 Thread Tom Lane
robert lazarski [EMAIL PROTECTED] writes:
 Is there any way to get the above insert to work as is, via a function
 or some other way? I tried the function of another poster but it seems
 there is already a cast built in for 8.1.9 for integer to boolean, and
 it didn't work for me on the above insert.

You could mark the built-in cast as assignment-only (I wouldn't
recommend setting it to implicit, as it's not clear what cases
that might break).

d1=# create table foo (f1 bool);
CREATE TABLE
d1=# insert into foo values(1);
ERROR:  column f1 is of type boolean but expression is of type integer
HINT:  You will need to rewrite or cast the expression.
d1=# update pg_cast set castcontext = 'a' where castsource = 'int'::regtype
d1-# and casttarget = 'bool'::regtype;
UPDATE 1
d1=# insert into foo values(1);
INSERT 0 1

Unfortunately this is something you'd have to do over after any database
reload, because pg_dump won't preserve changes to the definitions of
built-in objects.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgres writer process growing up too much

2007-12-17 Thread Heiner Vega
Thanks to you guys for your help... I appreciate it a lot.

Now, my SHR and RES values are still growing. How can I know the number at
which those values should stop?

On Dec 14, 2007 5:12 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Richard Huxton [EMAIL PROTECTED] writes:
  Heiner Vega wrote:
  I've been monitoring my postgres processes and I noticed that the
 resident
  memory
  size of the writer process is growing up too much.

  Notice the SHR=shared value. That's 155MB virtual memory, 140MB of it
  resident of which 139MB is shared with other processes. So - nothing to
  worry about.

 The reason the SHR number grows over time is that the system only counts
 a page of shared memory against the process after the process has first
 touched it.  Once the bgwriter has touched every page of shared buffers,
 the number will stop changing.

 If there were actually a memory leak, the VIRT number would be growing
 ... but it's not.
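
A rough way to see the ceiling those numbers should level off at is to check
the configured shared buffer pool, which is what the bgwriter is gradually
touching; on 8.1 the value is reported as a count of 8 kB buffers:

SHOW shared_buffers;

Add a little on top for the other shared structures (WAL buffers, the lock
tables and so on).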

  My postgres version is 8.1.3

 If I were you, I'd be considerably more worried about the fact that
 you are running an old release with a pile of known bugs.

regards, tom lane




-- 
Heiner Vega Thames


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Colin Wetherbee

Sam Mason wrote:

On Sun, Dec 16, 2007 at 06:31:56PM -0500, Colin Wetherbee wrote:
If I write one Perl sub for each operation on the table (e.g. one that 
gets the username and password hash, another that gets the last name and 
first name, etc.), there will be a whole lot of subs, each of which 
performs one very specific task.


If I write one larger Perl sub that grabs the whole row, and then I deal 
with the contents of the row in Perl, ignoring columns as I please, it 
will require fewer subs and, in turn, imply cleaner code.


It sounds as though you're just treating the database as a relatively
dumb datastore.  They can be used as this, and Toms comments are as
always good, but relational databases come into their own when you're
writing more complex queries.  When I'm writing my code I tend to put
the SQL statements directly in with the rest of the code, abstracting
away from the database tends to make things more complicated than they
need to be.


Because I know Perl a whole lot better than SQL, PostgreSQL, and even 
the Perl DBI, I'm always inclined to wrap the database stuff in a nice 
little package and forget about it.  This method has worked well for me 
in the past, but the project I'm starting is much bigger in terms of 
database use than anything else I've written.


You say you write SQL directly in your application code, rather than 
writing task-oriented wrappers.  I like that idea, but I can see that 
getting really messy.  What is your opinion on how it affects code 
maintenance and things like that?


Thanks.

Colin

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] postgres8.3beta encodding problem?

2007-12-17 Thread Jeff Davis
On Mon, 2007-12-17 at 13:53 -0300, marcelo Cortez wrote:
 select chr(165);
 ERROR:  requested character too large for encoding:
 165
  It's an old script that, if I remember correctly, worked in
 postgres 8.2.4
 any clue?


http://www.postgresql.org/docs/8.3/static/release-8-3.html

Ensure that chr() cannot create invalidly-encoded values (Andrew) 

In UTF8-encoded databases the argument of chr() is now treated as a
Unicode code point. In other multi-byte encodings chr()'s argument must
designate a 7-bit ASCII character. Zero is no longer accepted. ascii()
has been adjusted to match.
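
A minimal illustration of the new rule, assuming a UTF8-encoded database,
where the argument is now treated as a Unicode code point:

SELECT chr(165);   -- code point U+00A5, returns '¥' in a UTF8 database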

Regards,
Jeff Davis



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Sam Mason
On Mon, Dec 17, 2007 at 12:49:46PM -0500, Colin Wetherbee wrote:
 Because I know Perl a whole lot better than SQL, PostgreSQL, and even 
 the Perl DBI, I'm always inclined to wrap the database stuff in a nice 
 little package and forget about it.  This method has worked well for me 
 in the past, but the project I'm starting is much bigger in terms of 
 database use than anything else I've written.

SQL allows you to express some sorts of problems very succinctly and
efficiently, the problem is that you need some experience of SQL before
knowing when is best to use imperative vs relational tools.

Without knowing what sort of thing you're trying to do I'm unable to
give any appropriate examples.  I'd definitely say that spending time
getting to know databases has made me much more productive in total.
Mainly because with relational databases you just don't have to worry
about lots of the, mostly distracting, details that you do in imperative
languages.

 You say you write SQL directly in your application code, rather than 
 writing task-oriented wrappers.  I like that idea, but I can see that 
 getting really messy.  What is your opinion on how it affects code 
 maintenance and things like that?

It's normally very manageable, but it'll depend on where you like to
put your machinery.  I seem to be relying more and more on (database)
server side code and most of the UI code just tends to do some initial
simple input validation and then hand all the data over to the database.
My UI code then takes any results back from the database and feeds it
into the appropriate places.  I'd guess that maybe 5 to 10% of the lines
in my programs contain some SQL---i.e. two or three lines of SQL for
every twenty or thirty lines.  I don't think I'd save much, if any,
lines in total if I tried to put some database abstraction layer in the
middle.  I can say with reasonable certainty that it would be much more
complicated to keep everything working as efficiently.

Luckily I've been able to design most of the programs I work on as
relatively simple layers over a database, I'm not sure if you're able to
work like this.


  Sam

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Ted Byers

--- Colin Wetherbee [EMAIL PROTECTED] wrote:

 Sam Mason wrote:
  On Sun, Dec 16, 2007 at 06:31:56PM -0500, Colin
 Wetherbee wrote:
  If I write one Perl sub for each operation on the
 table (e.g. one that 
  gets the username and password hash, another that
 gets the last name and 
  first name, etc.), there will be a whole lot of
 subs, each of which 
  performs one very specific task.
 
Right. First rule of software engineering is keep
functions as small as possible, focussed on one thing
wherever practicable.  It doesn't matter if the
language is Perl or C++ or Java, or a stored procedure
in an RDBMS.  One can always create additional driver
functions that use the elemental simple functions to
do more complex tasks (bearing in mind the
complexities that will inevitably arise in multiple
user situations).

  If I write one larger Perl sub that grabs the
 whole row, and then I deal 
  with the contents of the row in Perl, ignoring
 columns as I please, it 
  will require fewer subs and, in turn, imply
 cleaner code.
  
Define cleaner code.  The more data, and the more
complex that data, the more code you have to write,
regardless of whether that is in one function or
several.  Either way, done badly, can be a problem for
both maintenance and performance.

  It sounds as though you're just treating the
 database as a relatively
  dumb datastore.  They can be used as this, and
 Toms comments are as
  always good, but relational databases come into
 their own when you're
  writing more complex queries.  When I'm writing my
 code I tend to put
  the SQL statements directly in with the rest of
 the code, abstracting
  away from the database tends to make things more
 complicated than they
  need to be.
 
 Because I know Perl a whole lot better than SQL,
 PostgreSQL, and even 
 the Perl DBI, I'm always inclined to wrap the
 database stuff in a nice 
 little package and forget about it.  This method has
 worked well for me 
 in the past, but the project I'm starting is much
 bigger in terms of 
 database use than anything else I've written.
 
I routinely keep my SQL code distinct from my Perl,
java or C++ code.  When a client program needs to do
something with the database, then either a child
process executes a script I have written, if the
client program doesn't need to do anything with data
drawn from the database, or I have all the SQL code in
one or more stored procedures, and use the appropriate
client interface to invoke the stored procedure(s). 
Whether the SQL is in a specific script or in a stored
procedure, my SQL code is kept distinct from the
client code, regardless of the language I have used for
that.  I find this even MORE useful as my projects get
bigger.
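
A minimal sketch of that split, using a hypothetical accounts table: all of
the SQL lives in a stored procedure on the server, and the only statement
the Perl/Java/C++ client ever issues is the one-line call at the end:

CREATE OR REPLACE FUNCTION account_balance(integer)
RETURNS numeric AS $$
    -- the real SQL stays on the database side
    SELECT balance FROM accounts WHERE id = $1;
$$ LANGUAGE sql STABLE;

-- the client only needs:  SELECT account_balance(42);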

 You say you write SQL directly in your application
 code, rather than 
 writing task-oriented wrappers.  I like that idea,
 but I can see that 
 getting really messy.  What is your opinion on how
 it affects code 
 maintenance and things like that?
 
From what I have seen, even in small, almost trivial,
client programs, I find this gets messy real quick. 
I, therefore, hate the idea of mixing SQL in with
client code (it is all, in my view, application code).
 I like the use of middleware objects since, if well
designed, they can make developing the code required
all the simpler.  It is only when badly done that an
abstraction leads to complicated code that is a
nightmare to maintain; worse if it is inadequately
documented.  The whole purpose of abstraction, whether
one is using objected oriented development of
middleware, or generic programming, or structured
programming, or functional programming, is to analyze
a complex problem into more manageable parts.  The
parts themselves become easier to code (witness java
beans of various kinds, such as backing beans - my
beans get their data either from the database or from
the user interface - in either case, they make
interaction between a web based interface and the
database back end MUCH simpler to code), and the
relationships among the parts are easier to
understand.  Each kind of abstraction has its place. 
It is up to the analyst or architect to figure out how
many layers and what abstractions are appropriate for
a given project.

HTH

Ted

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Colin Wetherbee

Sam Mason wrote:

Luckily I've been able to design most of the programs I work on as
relatively simple layers over a database, I'm not sure if you're able to
work like this.


I'm not at liberty to divulge much of the application concept, but 
consider, if you will, an application like Gmail or any other web-based 
mail service.


I'm venturing a guess that the database code in Gmail scripts isn't 
overly-complicated, and there are probably a handful of task 
categories that get executed by the application.  Folder operations 
might be lumped into one category, and SMTP operations into another, for 
example.  Each category probably has a few variations, like retrieving 
an email with or without full headers.  Overall, though, I would wager 
that the front-end, UI-type stuff in Gmail is much more complicated than 
the database code, especially with all the Javascript it uses (though, I 
suspect most of that is relatively static code).


This is roughly the distribution of code I'm implementing: lots of web 
stuff with only a few database hits per page, most of which are SELECT 
queries.


So, really, I don't think my application would be considered to be 
relatively simple layers over a database, since the UI part will be so 
full-featured.  I doubt I would ever see 5-10% of the lines accessing 
the database in this application.  A better estimate would probably be 
around 1% or 1.5%.


My guess, having written this, is that your approach might be more 
useful for applications that rely heavily on interaction with a 
database.  I'd appreciate any more comments you have on this, though.


Colin

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Colin Wetherbee

Ted Byers wrote:

--- Colin Wetherbee [EMAIL PROTECTED] wrote:


Sam Mason wrote:

On Sun, Dec 16, 2007 at 06:31:56PM -0500, Colin

Wetherbee wrote:

If I write one Perl sub for each operation on the
table (e.g. one that 

gets the username and password hash, another that
gets the last name and 

first name, etc.), there will be a whole lot of
subs, each of which 

performs one very specific task.


Right. First rule of software engineering is keep
functions as small as possible, focussed on one thing
wherever practicable.  It doesn't matter if the
language is Perl or C++ or Java, or a stored procedure
in an RDBMS.  One can always create additional driver
functions that use the elemental simple functions to
do more complex tasks (bearing in mind the
complexities that will inevitably arise in multiple
user situations).


I've programmed in this way for years, basically ever since I learned 
object-oriented programming.  I find it cleaner to keep functional 
elements separate and access them sequentially from larger, more 
broadly-focused functions.



If I write one larger Perl sub that grabs the
whole row, and then I deal 

with the contents of the row in Perl, ignoring
columns as I please, it 

will require fewer subs and, in turn, imply

cleaner code.

Define cleaner code.  The more data, and the more
complex that data, the more code you have to write,
regardless of whether that is in one function or
several.  Either way, done badly, can be a problem for
both maintenance and performance.


See above.


I routinely keep my SQL code distinct from my Perl,
java or C++ code.  When a client program needs to do
something with the database, then either a child
process executes a script I have written, if the
client program doesn't need to do anything with data
drawn from the database, or I have all the SQL code in
one or more stored procedures, and use the appropriate
client interface to invoke the stored procedure(s). 
Whether the SQL is in a specific script or in a stored

procedure, my SQL code is kept distinct from the
client code, regardless of the language I have used for
that.  I find this even MORE useful as my projects get
bigger.


This seems like quite a departure from Sam's recommendation.  Now, I'm torn!

Colin

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Sam Mason
On Mon, Dec 17, 2007 at 01:29:10PM -0500, Ted Byers wrote:
 I routinely keep my SQL code distinct from my Perl,
 java or C++ code.  When a client program needs to do
 something with the database, then either a child
 process executes a script I have written, if the
 client program doesn't need to do anything with data
 drawn from the database, or I have all the SQL code in
 one or more stored procedures, and use the appropriate
 client interface to invoke the stored procedure(s). 
 Whether the SQL is in a specific script or in a stored
 procedure, my SQL code is kept distinct from the
 client code, regardless of the language I have used for
 that.  I find this even MORE useful as my projects get
 bigger.

Humm, this discussion is horribly domain specific.  Abstractions are the
bread and butter of programming and designing them appropriately makes
the difference between things turning into a mess later on or not.  The
only solid rule I stick to is that when I'm working to solve a problem
I've not solved before, I will get it (i.e. the way I go about solving
the problem, getting the right answer is generally pretty easy) wrong
the first time and probably only get it reasonable the third or fourth
time.

Luckily, big bits of software take a while to accumulate so it's
possible to do the naive thing first and then, when you know how things
work, better abstractions can be found and incorporated.  For example,
I'd be tempted to replace your "get password hash" with a "verify
password" procedure that actually hashes the password client side and
then sends it over to the database for actual validation.  However, you
may actually care what the password hash is, at which point the hashing
should be broken out and revealed to the appropriate balance of the
software.
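
A minimal sketch of that kind of verify procedure, assuming a hypothetical
users table with username and password_hash columns (the client computes
the hash; the database only answers yes or no):

CREATE OR REPLACE FUNCTION verify_password(p_user text, p_hash text)
RETURNS boolean AS $$
BEGIN
    -- true only when the supplied hash matches the stored one
    RETURN EXISTS (SELECT 1
                     FROM users
                    WHERE username = p_user
                      AND password_hash = p_hash);
END;
$$ LANGUAGE plpgsql STABLE;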

 Each kind of abstraction has its place. 
 It is up to the analyst or architect to figure out how
 many layers and what abstractions are appropriate for
 a given project.

Indeed.  But to be able to choose a suitable set of abstractions, it
helps for the designer to know the ins and outs of the tools being
used.  I was trying to encourage Colin to look at using databases
in a different way, different styles of programming suit different
applications and hiding the database can have detrimental effects as
well as positive effects.


  Sam

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] running Apple Mail on Postgres?

2007-12-17 Thread Matthew Hixson
This may be a tad off topic, but thought a PG enthusiast might have  
some insight on this.  Apple Mail sits on top of Sqlite.  I was  
wondering if it would be possible to substitute Postgres as the  
underlying database.  I do know how to vacuum Sqlite to speed up  
Mail, but with the massive amount of email I have I was wondering if  
Postgres could more easily handle the workload.

  Thanks,
-M@

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] slony error --need help

2007-12-17 Thread Shane Ambler

Not sure I can help much, I only read out of interest, but thought
clarifying a few things may help.

SHARMILA JOTHIRAJAH wrote:

Hi I had posted this in the slony mailing list but no luck in getting
any answers...Pls help me as I'm stuck with this error for the last 4
days



Im trying to replicate between postgres version 7.4.18 and version
8.1.10.


7.4 is the master?


I configured postgres-7.4 with enable-thread-safety option

I configured slony1 with this command

./configure --prefix=/export/home/josh/slony7.4
--enable-thread-safety
--with-pgconfigdir=/export/home/josh/postgres7.4/bin
--with-pgsourcetree=/export/home/josh/postgresql-7.4.18


This is the 7.4 config? is the 8.1 config the same?
(your not building slony for 8.1 against a 7.4 source tree?)

Are both versions running on the same machine?

What version of Slony? same for both servers?



When i try to execute the this script


On the 7.4 machine?


#!/bin/sh

slonik <<_EOF_
cluster name = slony_example;

node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST
user=$REPLICATIONUSER'; node 2 admin conninfo = 'dbname=$SLAVEDBNAME
host=$SLAVEHOST user=$REPLICATIONUSER';

init cluster ( id=1, comment = 'Master Node');

create set (id=1, origin=1, comment='All pgbench tables'); set add
table (set id=1, origin=1, id=1, fully qualified name =
'public.sample1', comment='accounts table'); store node (id=2,
comment = 'Slave node');

store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME
host=$MASTERHOST user=$REPLICATIONUSER'); store path (server = 2,
client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST
user=$REPLICATIONUSER');

_EOF_



I get this error

stdin:21: PGRES_FATAL_ERROR load '$libdir/xxid';  - ERROR:  could
not load library /export/home/josh/postgres7.4/lib/xxid.so:
ld.so.1: postgres: fatal: relocation error: file
/export/home/josh/postgres7.4/lib/xxid.so: symbol
GetTopTransactionId: referenced symbol not found

stdin:21: Error: the extension for the xxid data type cannot be
loaded in database 'dbname=testdb1 host=172.31.0.67 user=josh' 
stdin:21: ERROR: no admin conninfo for node 134701624





The same works fine between postgresql versions 8.1.10 and 8.2.5 .

Why do I get this error when replicating between versions 7.4 and 8.1?
 Does slony1 replicate between these 2 versions? If so is there any
other settings that needs to be done?




--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Efficiency vs. code bloat for SELECT wrappers

2007-12-17 Thread Ted Byers

--- Sam Mason [EMAIL PROTECTED] wrote:

 On Mon, Dec 17, 2007 at 01:29:10PM -0500, Ted Byers
 wrote:
  I routinely keep my SQL code distinct from my
 Perl,
  java or C++ code.  When a client program needs to
 do
  something with the database, then either a child
  process executes a script I have written, if the
  client program doesn't need to do anything with
 data
  drawn from the database, or I have all the SQL
 code in
  one or more stored procedures, and use the
 appropriate
  client interface to invoke the stored
 procedure(s). 
  Whether the SQL is in a specific script or in a
 stored
  procedure, my SQL code is kept distinct from the
  client code, regardles of the language I have used
 for
  that.  I find this even MORE useful as my projects
 get
  bigger.
 
 Humm, this discussion is horribly domain specific. 
 Abstractions are the
 bread and butter of programming and designing them
 appropriately makes
 the difference between things turning into a mess
 later on or not.  The
 only solid rules I stick to is that when I'm working
 to solve a problem
 I've not solved before, I will get it (i.e. the way
 I go about solving
 the problem, getting the right answer is generally
 pretty easy) wrong
 the first time and probably only get it reasonable
 the third or fourth
 time.
 

Right.  But there is a quicker way.  I like to work
with people who have experience that is different from
mine, expecting they will have seen a different suite
of problems and found solutions I have come to rely
on.  In that way, in discussing my current problems, I
can learn something new, and that much faster than
trial and error.  Of course, there are always test or
proof of concept programs, where I test ideas.  For
example, I have routinely been advised that left joins
are faster than correlated subqueries.  In the past
month, I have found two problems requiring either a
correlated subquery or a left join, and in one case
the correlated subquery was more than ten times faster
while in the other the left join was about twice as
fast.  In both cases, the results returned by the two
approaches were identical, but there were significant
differences in performance; most astonishing in the
one case that proved to be quite different than
expected.
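
As an illustration of the kind of equivalent pair being compared here (the
table names are made up for the example), a find-the-rows-without-a-match
query can be written either way:

-- correlated subquery
SELECT o.id FROM orders o
 WHERE NOT EXISTS (SELECT 1 FROM payments p WHERE p.order_id = o.id);

-- left join
SELECT o.id FROM orders o
  LEFT JOIN payments p ON p.order_id = o.id
 WHERE p.order_id IS NULL;

Both return the same rows; which one wins depends on the data and the
planner, which is exactly why timing both, as described above, pays off.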

  Each kind of abstraction has its place. 
  It is up to the analyst or architect to figure out
 how
  many layers and what abstractions are appropriate
 for
  a given project.
 
 Indeed.  But to be able to choose a suitable set of
 abstractions, it
 helps for the designer to know the ins and outs of
 the tools being
 used.  I was trying to encourage Colin to look at
 using databases
 in a different way, different styles of programming
 suit different
 applications and hiding the database can have
 detrimental effects as
 well as positive effects.
 
I find the opportunity to look at problems in a
different light priceless.  

But some practices generally turn out to be counter
productive.  Producing overly long functions, or
mixing code involving different languages into the
same file, often leads to an unmaintainable mess. 
More often than not, such practices are a consequence
of poor design.  But even here, there is no hard and
fast rule, since some algorithms, such as numeric
quadrature or QR factorization of general real
matrices can not be written in a short, simple
function although they logically ought to be one
function in the simplest cases; but even these admit
enhancements that warrant more interesting data
structures and ancillary functions.  But for these
algorithms, which admittedly have nothing to do with
databases, the top consideration is the speed of
provably correct code.  If that means it is too
complex for junior or even intermediate programmers,
so be it.  I have seen and used library code for
number crunching that only a specialist in numeric
methods have a hope of easily understanding, and been
glad I didn't have to write those functions myself. 
;-)

For Colin's situation, what I would have suggested
would be a simple extension of what he normally does,
and that is look at an additional layer that gets the
data from the database as efficiently as possible, and
cache it in that layer, providing it to the UI layer
as required.  Similarly, perhaps in a different layer,
get user data from the interface and cache it, sending
it to the database back end at the most appropriate
time in the most efficient way.  And perhaps with all
this, some form of database connection pooling;
although this latter option will depend on how many
concurrent users there may be in the worse case
scenario (or maybe that is the best case scenario
since it would mean that there is significant demand
for what the app does ;-).

I haven't attempted this in Perl, but it is dirt
simple in Java/J2EE.  In a web application, based,
e.g. on Tomcat, it is easy to set this up with as many
layers as one can justify, especially with an IDE like
NetBeans6.  Not having thought about how to do this

Re: [GENERAL] running Apple Mail on Postgres?

2007-12-17 Thread Steve Atkins


On Dec 17, 2007, at 12:10 PM, Matthew Hixson wrote:

This may be a tad off topic, but thought a PG enthusiast might have  
some insight on this.  Apple Mail sits on top of Sqlite.  I was  
wondering if it would be possible to substitute Postgres as the  
underlying database.  I do know how to vacuum Sqlite to speed up  
Mail, but with the massive amount of email I have I was wondering if  
Postgres could more easily handle the workload.

 Thanks,


sqlite is (usually) an embedded database. That means a couple of  
things - one is that it's not something that you can simply swap out  
easily, it's linked into the Mail.app binary. The other is that, for  
this particular application (single reader/writer, simple workload)  
it's probably quite a lot faster than postgresql would be in theory,  
and both would be dominated by disk i/o in practice.


(And if you haven't upgraded to Leopard yet, you should. Mail.app  
sucks less on large IMAP boxes than with previous versions.)


If you want to do complex data-mining on email, there are several ways  
to pull email into a postgresql database, and then make it available  
via IMAP to a standard client. dbmail.org is one that springs to mind,  
archiveopteryx another.


Cheers,
  Steve


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] PostgreSQL Conference East Update

2007-12-17 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello,

The dates have been confirmed. The new dates for the conference are
March 29th and March 30th (instead of the 28th and 29th). The
conference is being held at the University of Maryland, College Park.
The location is 10 minutes from Washington D.C. and is strategically
located near many of our contributors. It is a perfect time, a perfect
place. 

With the date change, the conference is now a Saturday and Sunday
conference. This will ensure that more community members will be able
to attend.

We have also increased our facilities at the conference to handle
the demand we have already received. This is going to allow the
conference to hold more tutorials, mini-tutorials and talks.

Lastly, I would like to thank our first confirmed sponsor for
this event, Continuent. Thanks Continuent!

For more information on the event:

Main site: http://www.postgresqlconference.org/
Sponsoring: http://www.postgresqlconference.org/sponsor/

Remember all sponsorships are donations to the PostgreSQL affiliated
501c3 non-profit, Software in the Public Interest and 100% of net
proceeds go to the community.

The call for talks is going out this week and registration opens up in
January 08.

Sincerely,

Joshua D. Drake


- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHZuYVATb/zqfZUUQRApylAJ97MVxgd7FvelUfaugCHBGyT9HZnQCfT488
XZGJgH9Lqyc/DlnvNUsDzqU=
=l2Zk
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] spreadsheet interface

2007-12-17 Thread hjenkins
Some people in my workplace are asking if there exists a GUI that will
allow cut-and-paste of multiple cells directly from (and, preferably,
directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
Any suggestions?





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Adrian Klaver
-- Original message --
From: hjenkins [EMAIL PROTECTED]
 Some people in my workplace are asking if there exists a GUI that will
 allow cut-and-paste of multiple cells directly from (and, preferably,
 directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
 Any suggestions?
 
 
OpenOffice. Set up a datasource using the Base component. You can drag and drop 
to the spreadsheet component.


--
Adrian Klaver
[EMAIL PROTECTED]

 


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Thomas Hart

hjenkins wrote:

Some people in my workplace are asking if there exists a GUI that will
allow cut-and-paste of multiple cells directly from (and, preferably,
directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
Any suggestions?
  
By spreadsheet I'm going to assume you mean M$ excel. The best way I 
can think of to pull that off is to set up an Access db with the 
psqlODBC driver to connect to it. I believe you ought to at least be 
able to copy from the db that way (btw pgAdmin does work with a little 
effort, if you're too leet for access, though at that point you'd be 
able to work with pgadmin (or even better yet, and deserving of two 
levels of parentheses, use csv/txt files and COPY from psql at a dummy 
terminal in amber text)).
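
For what it's worth, the COPY route mentioned above is roughly this (table
and file names are placeholders; \copy in psql reads and writes the file on
the client side):

\copy your_table FROM 'cells.csv' WITH CSV
\copy your_table TO 'cells.csv' WITH CSV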


Lol sorry to get off the point, but this is the first suggestion that 
pops in mind.


--
Tom Hart
IT Specialist
Cooperative Federal
723 Westcott St.
Syracuse, NY 13210
(315) 471-1116 ext. 202
(315) 476-0567 (fax)


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Thomas Hart

Adrian Klaver wrote:

-- Original message --
From: hjenkins [EMAIL PROTECTED]
  

Some people in my workplace are asking if there exists a GUI that will
allow cut-and-paste of multiple cells directly from (and, preferably,
directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
Any suggestions?




OpenOffice. Set up a datasource using the Base component. You can drag and drop 
to the spreadsheet component.


--
Adrian Klaver
[EMAIL PROTECTED]
  
Another good suggestion, if your organization is using or can use OpenOffice.
If you're stuck with M$ then you might be able to do this directly in Excel.


--
Tom Hart
IT Specialist
Cooperative Federal
723 Westcott St.
Syracuse, NY 13210
(315) 471-1116 ext. 202
(315) 476-0567 (fax)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Bill Moran
In response to hjenkins [EMAIL PROTECTED]:

 Some people in my workplace are asking if there exists a GUI that will
 allow cut-and-paste of multiple cells directly from (and, preferably,
 directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
 Any suggestions?

To add on to Thomas' comment.  You can also install OpenOffice.org with
the pgsql ODBC driver and use the OOo spreadsheet to access data directly.

I haven't done this, personally, so I can't vouch for how well it works.

-- 
Bill Moran
http://www.potentialtech.com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] install problem

2007-12-17 Thread Bill Taylor

Hey,

I am working on testing our Windows version of software,
which uses Postgres, on a machine with XP Pro and 512
meg of memory.

The very first time I installed, it was fine except for the
fact it was only listening on the localhost 127.0.0.1
and I need it listening on its ip address.

Since then, I have not been able to re-install. I have tried thoroughly
cleaning the machine and registry. The issue always comes back to
a 1920 error ("do you have permissions") in the event viewer,
and the service always fails to start.

I even wiped away my test box for our product and
installed a fresh copy of windows on top of
my old version.

zip :-)

Can someone tell me what I am doing wrong? We have an end of the year
rush and we have a number of people world wide who would be using
this once we can actually test it.

Bill

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, 17 Dec 2007 16:47:41 -0500
Bill Moran [EMAIL PROTECTED] wrote:

 In response to hjenkins [EMAIL PROTECTED]:
 
  Some people in my workplace are asking if there exists a GUI that
  will allow cut-and-paste of multiple cells directly from (and,
  preferably, directly to) spreadsheets. pgAdmin III, PGAccess, and
  TOra don't seem to. Any suggestions?
 
 To add on to Thomas' comment.  You can also install OpenOffice.org
 with the pgsql ODBC driver and use the OOo spreadsheet to access data
 directly.
 
 I haven't done this, personally, so I can't vouch for how well it
 works.


Well OpenOffice now has a native sdbc driver so no ODBC is required.
It seems to work very well.

Joshua D. Drake 


- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHZvAAATb/zqfZUUQRAjCkAJoDVALSb4UT4u0BcjLKhKxVOGR7dQCZAcN9
s7AarPlnSCq/AdlWrxi6MMg=
=AAjl
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] install problem

2007-12-17 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Mon, 17 Dec 2007 15:52:27 -0600
Bill Taylor [EMAIL PROTECTED] wrote:

 Hey,
 
 I am working on testing our windows version of software
 which uses Postgres on a machine with XP Pro and 512
 meg of memory.
 
 The very first time I installed, it was fine except for the
 fact it was only listening on the localhost 127.0.0.1
 and I need it listening on its ip address.

It's in postgresql.conf under your data directory.

 Can someone tell me what I am doing wrong? We have an end of the year
 rush and we have a number of people world wide who would be using
 this once we can actually test it.


http://www.postgresql.org/docs/8.2/static/runtime-config-connection.html
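
A minimal sketch of the relevant postgresql.conf change (the value is only
illustrative; use '*' for all interfaces or a comma-separated list of
addresses), followed by a restart of the service:

listen_addresses = '*'        # default is 'localhost'

Remote clients also need a matching entry in pg_hba.conf before they can
connect.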

Sincerely,

Joshua D. Drake




- -- 
The PostgreSQL Company: Since 1997, http://www.commandprompt.com/ 
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
SELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFHZvF8ATb/zqfZUUQRAjZpAJ9Fb3ytLIUe3/7V/ajnwhpRJHlTLQCeMdx6
EIZwXpv9Z4dFSDarnC3UqXg=
=Kmxh
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Limit record count

2007-12-17 Thread Sean Z.
Hi,

I need to write a maintenance function to delete the oldest records of a table, 
leaving only a certain number of records, say 50.  The task should run once per 
day.  How can I write it?  
The oldest record carries the smallest ID.

Thanks a lot!
Sean

Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Dave Page


 --- Original Message ---
 From: hjenkins [EMAIL PROTECTED]
 To: pgsql-general@postgresql.org
 Sent: 17/12/07, 21:10:46
 Subject: [GENERAL] spreadsheet interface
 
 Some people in my workplace are asking if there exists a GUI that will
 allow cut-and-paste of multiple cells directly from (and, preferably,
 directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
 Any suggestions?

pgAdmin will copy arbitrary blocks of cells to spreadsheets, and will accept 
rows pasted into the edit grid. It should work out of the box with Excel and 
OpenOffice - if not, check the copy delimiter/quoting options under 
file->options.

Regards, Dave



Re: [GENERAL] Limit record count

2007-12-17 Thread Rodrigo Gonzalez
DELETE FROM table WHERE id NOT IN
    (SELECT id FROM table ORDER BY id DESC LIMIT 50);
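
Since the original question asked for a daily maintenance function, here is a
sketch of wrapping this up (the table name my_table and column id are only
illustrative, and plpgsql must be installed in the database; adjust the LIMIT
to taste):

    CREATE OR REPLACE FUNCTION trim_old_records() RETURNS void AS $$
    BEGIN
        -- keep only the 50 newest rows; the oldest rows have the smallest id
        DELETE FROM my_table
         WHERE id NOT IN (SELECT id FROM my_table ORDER BY id DESC LIMIT 50);
    END;
    $$ LANGUAGE plpgsql;

Run it once per day, e.g. from cron:

    psql -d my_database -c 'SELECT trim_old_records();'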


Sean Z. wrote:

Hi,

I need to write a maintenance function to delete oldest records of a 
table, leaving only a certain number of records, say 50.  The task 
should run once per day.  How can I write it? 
The oldest record carries the smallest ID.


Thanks a lot!
Sean









Re: [GENERAL] install problem

2007-12-17 Thread Greg Smith

On Mon, 17 Dec 2007, Bill Taylor wrote:


Since then, I have not been able to re-install. I have tried thoroughly
cleaning the machine and registry. The issue always comes back to
a 1920 error (do you have permissions?) in the event viewer,
and the service always fails to start.


Have you wiped out whatever user was responsible for starting the service? 
That's both something that could cause your error and something that would 
survive the re-install regime you've tried so far.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Adrian Klaver
On Monday 17 December 2007 1:47 pm, Bill Moran wrote:
 In response to hjenkins [EMAIL PROTECTED]:
  Some people in my workplace are asking if there exists a GUI that will
  allow cut-and-paste of multiple cells directly from (and, preferably,
  directly to) spreadsheets. pgAdmin III, PGAccess, and TOra don't seem to.
  Any suggestions?

 To add to Thomas' comment: you can also install OpenOffice.org with
 the pgsql ODBC driver and use the OOo spreadsheet to access the data directly.

 I haven't done this, personally, so I can't vouch for how well it works.

For the record, I have had luck using OO with either the Postgres JDBC driver or
the native SDBC driver.  I tried the ODBC driver without much success.

-- 
Adrian Klaver
[EMAIL PROTECTED]



Re: [GENERAL] spreadsheet interface

2007-12-17 Thread Adrian Klaver
On Monday 17 December 2007 3:10 pm, hjenkins wrote:
 Hello, all,

 Dave Page writes:
 pgAdmin will copy arbitrary blocks of cells to spreadsheets, and will
 accept rows pasted into the edit grid. It should work out of the box

 with Excel and OpenOffice - if not, check the copy delimiter/quoting
 options under file->options.
 That sounds perfect, but I can't get it to work, even with OO. I can't
 find any copy delimiter/quoting options under file->options.

 When you say pgAdmin does work with a little effort, Mr. Hart, do you
 mean that I can set it up so that one can cut-and-paste multiple cells, as
 Mr. Page is describing, or that there is a work-around which requires more
 effort per-use than cut-and-paste?

 Using OpenOffice:
 OO driver: http://dba.openoffice.org/drivers/postgresql/index.html
 JDBC driver: http://jdbc.postgresql.org/
 ODBC driver: http://pgfoundry.org/projects/psqlodbc/ and others

 Since the JDBC driver is in Gentoo-portage, I tried that, but
 OpenOffice can't load org.postgresql.Driver as the JDBC Driver class... If
 I use this, I'll let you know how it goes.
You need to set up OpenOffice so it can find the driver. Do the following (a
connection sketch follows the list):
1) From the Menu bar: Tools --> Options
2) Click on OpenOffice.org --> Java
3) Make sure a Java Runtime Environment is set up
4) Click on the Class Path button
5) Click on Add Archive and browse for location of the Postgres JDBC jar file.
6) Back out of menus
7) Restart OpenOffice
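
Once the jar is on the class path, here is a sketch of typical values for a
JDBC data source (host, port, and database name are illustrative, and the
exact dialog labels vary by OOo version; some dialogs pre-fill the jdbc:
prefix for you):

    Driver class:  org.postgresql.Driver
    URL:           jdbc:postgresql://localhost:5432/mydb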

 I'm too... experienced with Access (several days worth)... to want to use
 it, personally, but it might work for this application. I'll have a look.
 I've been using Python scripts to generate the COPY functions and piping
 them to psql -f. I doubt, though, that my co-workers will be keen on doing
 this each time they want to move three cells into a report, especially
 since they have neither Python nor a UNIX shell.

 Belatedly, I've found a similar if outdated post at
 http://lists.suse.com/archive/suse-slox-e/2003-Aug/0149.html
 but it's been taken down; only a cached version remains.

 Thanks for all the replies. Honestly, you have to hover over your inbox on
 this mailing list, it pounces so quickly.

 Regards,
 H.Jenkins

-- 
Adrian Klaver
[EMAIL PROTECTED]



[GENERAL] insert into t1 (delete from t0 returning *)

2007-12-17 Thread Matthew Dennis
in 8.3beta3

create table t0(c1 int);
create table t1(c1 int);

insert into t0 values (1);
insert into t0 values (2);
insert into t0 values (3);

If I execute delete from t0 returning * it deletes the rows and returns
the deleted rows.  I could insert all those rows into t1 by doing insert
into t1 (select * from t0).  I would expect to be able to do the same with
returning *.  In particular, I would expect insert into t1 (delete from
t0 returning *) to delete the rows from t0 and insert them into t1 but I
get a syntax error - is that expected?


Re: [GENERAL] insert into t1 (delete from t0 returning *)

2007-12-17 Thread Merlin Moncure
On Dec 17, 2007 6:56 PM, Matthew Dennis [EMAIL PROTECTED] wrote:
 in 8.3beta3

 create table t0(c1 int);
 create table t1(c1 int);

 insert into t0 values (1);
  insert into t0 values (2);
  insert into t0 values (3);

 If I execute delete from t0 returning * it deletes the rows and returns
 the deleted rows.  I could insert all those rows into t1 by doing insert
 into t1 (select * from t0).  I would expect to be able to do the same with
 returning *.  In particular, I would expect insert into t1 (delete from
 t0 returning *) to delete the rows from t0 and insert them into t1 but I
 get a syntax error - is that expected?

Yes.

This is a 'faq' (although I don't think it's in the FAQ; maybe it should be).
It is harder than it looks.
Everyone wants it.

merlin
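
Until something like that is supported natively, one workaround sketch (using
the table names from the example above; the function name is illustrative) is
to wrap the copy-and-delete in a function, so that both statements run in a
single transaction and the lock keeps concurrently inserted rows from being
deleted before they have been copied:

    CREATE OR REPLACE FUNCTION move_t0_to_t1() RETURNS void AS $$
    BEGIN
        LOCK TABLE t0 IN SHARE ROW EXCLUSIVE MODE;
        INSERT INTO t1 SELECT * FROM t0;
        DELETE FROM t0;
    END;
    $$ LANGUAGE plpgsql;

    SELECT move_t0_to_t1();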



[GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Matt Magoffin
Hello, I'm using 8.3b4 and keep experiencing server crashes when I execute
various queries using XML functions. The crash backtraces look like this:

---
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x3f847ae1
0x004b140c in xmlCleanupCharEncodingHandlers ()
(gdb) bt
#0  0x004b140c in xmlCleanupCharEncodingHandlers ()
#1  0x004d01fc in xmlCleanupParser ()
#2  0x0026d514 in xpath (fcinfo=value temporarily unavailable, due to
optimizations) at xml.c:3441
#3  0x0010b908 in ExecMakeFunctionResult (fcache=0xa08474,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:1351
#4  0x0010a120 in ExecEvalArrayCoerceExpr (astate=0xa08428,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:3633
#5  0x0010b4c0 in ExecEvalFuncArgs (fcinfo=0xbfffd968, argList=value
temporarily unavailable, due to optimizations, econtext=0x7fe9ec) at
execQual.c:
#6  0x0010b5fc in ExecMakeFunctionResult (fcache=0xa0801c,
econtext=0x7fe9ec, isNull=0xbfffdbf8 , isDone=0x0) at execQual.c:1169
#7  0x0010c120 in ExecQual (qual=value temporarily unavailable, due to
optimizations, econtext=0x7fe9ec, resultForNull=value temporarily
unavailable, due to optimizations) at execQual.c:4501
#8  0x0010fcb8 in ExecScan (node=0x7fe960, accessMtd=0x119a20 IndexNext)
at execScan.c:131
#9  0x00107e80 in ExecProcNode (node=0x7fe960) at execProcnode.c:338
#10 0x0011c818 in ExecNestLoop (node=0x107bb8c) at nodeNestloop.c:170
#11 0x00107ee0 in ExecProcNode (node=0x107bb8c) at execProcnode.c:367
#12 0x0011e158 in ExecSort (node=0x107bb00) at nodeSort.c:102
#13 0x00107f20 in ExecProcNode (node=0x107bb00) at execProcnode.c:386
#14 0x00105f28 in ExecutorRun (queryDesc=value temporarily unavailable,
due to optimizations, direction=ForwardScanDirection, count=0) at
execMain.c:1233
#15 0x001c6834 in PortalRunSelect (portal=0x104be1c, forward=value
temporarily unavailable, due to optimizations, count=0, dest=0x103a228)
at pquery.c:943
#16 0x001c7c4c in PortalRun (portal=0x104be1c, count=2147483647,
isTopLevel=0 '\0', dest=0x103a228, altdest=0x103a228,
completionTag=0xbfffe3c4 ) at pquery.c:769
#17 0x001c59b4 in PostgresMain (argc=value temporarily unavailable, due
to optimizations, argv=value temporarily unavailable, due to
optimizations, username=value temporarily unavailable, due to
optimizations) at postgres.c:1844
#18 0x0018a468 in ServerLoop () at postmaster.c:3180
#19 0x0018b974 in PostmasterMain (argc=4, argv=0x800550) at postmaster.c:1028
#20 0x0012e2fc in main (argc=4, argv=0x800550) at main.c:188

---
#0  0x95c59ca8 in __kill ()
#1  0x95cfe7b8 in abort ()
#2  0x00281338 in ExceptionalCondition (conditionName=value temporarily
unavailable, due to optimizations, errorType=value temporarily
unavailable, due to optimizations, fileName=value temporarily
unavailable, due to optimizations, lineNumber=value temporarily
unavailable, due to optimizations) at assert.c:57
#3  0x002a1460 in pfree (pointer=0x207) at mcxt.c:581
#4  0x004b1428 in xmlCleanupCharEncodingHandlers ()
#5  0x004d01fc in xmlCleanupParser ()
#6  0x0026d514 in xpath (fcinfo=value temporarily unavailable, due to
optimizations) at xml.c:3441
#7  0x0010b908 in ExecMakeFunctionResult (fcache=0xa08474,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:1351
#8  0x0010a120 in ExecEvalArrayCoerceExpr (astate=0xa08428,
econtext=0x7fe9ec, isNull=0xbfffdb08 , isDone=0xbfffd8c8) at
execQual.c:3633
#9  0x0010b4c0 in ExecEvalFuncArgs (fcinfo=0xbfffd968, argList=value
temporarily unavailable, due to optimizations, econtext=0x7fe9ec) at
execQual.c:
#10 0x0010b5fc in ExecMakeFunctionResult (fcache=0xa0801c,
econtext=0x7fe9ec, isNull=0xbfffdbf8 , isDone=0x0) at execQual.c:1169
#11 0x0010c120 in ExecQual (qual=value temporarily unavailable, due to
optimizations, econtext=0x7fe9ec, resultForNull=value temporarily
unavailable, due to optimizations) at execQual.c:4501
#12 0x0010fcb8 in ExecScan (node=0x7fe960, accessMtd=0x119a20 IndexNext)
at execScan.c:131
#13 0x00107e80 in ExecProcNode (node=0x7fe960) at execProcnode.c:338
#14 0x0011c818 in ExecNestLoop (node=0x107c58c) at nodeNestloop.c:170
#15 0x00107ee0 in ExecProcNode (node=0x107c58c) at execProcnode.c:367
#16 0x0011e158 in ExecSort (node=0x107c500) at nodeSort.c:102
#17 0x00107f20 in ExecProcNode (node=0x107c500) at execProcnode.c:386
#18 0x00105f28 in ExecutorRun (queryDesc=value temporarily unavailable,
due to optimizations, direction=ForwardScanDirection, count=0) at
execMain.c:1233
#19 0x001c6834 in PortalRunSelect (portal=0x104781c, forward=value
temporarily unavailable, due to optimizations, count=0, dest=0x102d828)
at pquery.c:943
#20 0x001c7c4c in PortalRun (portal=0x104781c, count=2147483647,
isTopLevel=0 '\0', dest=0x102d828, altdest=0x102d828,
completionTag=0xbfffe3c4 ) at pquery.c:769
#21 0x001c59b4 in PostgresMain (argc=value temporarily unavailable, due
to optimizations, argv=value temporarily unavailable, due to
optimizations, 

Re: [GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Tom Lane
Matt Magoffin [EMAIL PROTECTED] writes:
 Hello, I'm using 8.3b4 and keep experiencing server crash when I execute
 various queries using XML functions. The crash backtraces look like this:

This was reported before,
http://archives.postgresql.org/pgsql-general/2007-12/msg00716.php
but neither he nor you have provided anything approximating a
reproducible test case.  The interactions with libxml are messy
enough that I'm not even going to think about fixing this without
a test case to trace through.

regards, tom lane



Re: [GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Matt Magoffin
 This was reported before,
 http://archives.postgresql.org/pgsql-general/2007-12/msg00716.php
 but neither he nor you have provided anything approximating a
 reproducible test case.  The interactions with libxml are messy
 enough that I'm not even going to think about fixing this without
 a test case to trace through.

I'll try to get a test case to you. I've had trouble getting it to
consistently crash, except via JDBC. Do you still happen to have that
database dump I provided to you previously? I can try to get the crash to
occur on that table.

I also noticed these in my log file, don't know if this is helpful:

TRAP: FailedAssertion("!(pointer == (void *) (((long) ((pointer)) + ((4) - 1)) & ~((long) ((4) - 1))))", File: "mcxt.c", Line: 581)
LOG:  server process (PID 714) was terminated by signal 6: Abort trap

TRAP: BadArgument("!(((header->context) != ((void *)0) && ((((Node*)((header->context)))->type) == T_AllocSetContext)))", File: "mcxt.c", Line: 589)
LOG:  server process (PID 633) was terminated by signal 6: Abort trap

-- m@





Re: [GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Tom Lane
Matt Magoffin [EMAIL PROTECTED] writes:
 Do you still happen to have that
 database dump I provided to you previously?

Not sure --- when are you thinking of, and what was the context?
I don't usually keep sample data unless the issue still seems open.

 I also noticed these in my log file, don't know if this is helpful:
 TRAP: FailedAssertion("!(pointer == (void *) (((long) ((pointer)) + ((4) - 1)) & ~((long) ((4) - 1))))", File: "mcxt.c", Line: 581)
 LOG:  server process (PID 714) was terminated by signal 6: Abort trap
 TRAP: BadArgument("!(((header->context) != ((void *)0) && ((((Node*)((header->context)))->type) == T_AllocSetContext)))", File: "mcxt.c", Line: 589)
 LOG:  server process (PID 633) was terminated by signal 6: Abort trap

These are consistent with the idea that we've got a memory-allocation
problem, ie, libxml is trying to access data that was already freed.
But exactly where and how is not any more clear than before.

FWIW, I think it's unlikely that a single query will reproduce this,
because the problem looks to be an expectation that leftover data is
still valid when it ain't.  What you need to be looking for is a series
of two or more queries that crash PG.  Possibly it'll be easier to
reproduce with that in mind ...

regards, tom lane



Re: [GENERAL] Many 8.3 server crashes in xmlCleanupCharEncodingHandlers()

2007-12-17 Thread Matt Magoffin
 Not sure --- when are you thinking of, and what was the context?
 I don't usually keep sample data unless the issue still seems open.

I was referring to the dump I provided a link to, called
pg83-leads-sanitized.db, around 20 Dec, with the email subject Re:
[GENERAL] 8.3b2 XPath-based function index server crash.

 I also noticed these in my log file, don't know if this is helpful:
 TRAP: FailedAssertion("!(pointer == (void *) (((long) ((pointer)) + ((4) - 1)) & ~((long) ((4) - 1))))", File: "mcxt.c", Line: 581)
 LOG:  server process (PID 714) was terminated by signal 6: Abort trap
 TRAP: BadArgument("!(((header->context) != ((void *)0) && ((((Node*)((header->context)))->type) == T_AllocSetContext)))", File: "mcxt.c", Line: 589)
 LOG:  server process (PID 633) was terminated by signal 6: Abort trap

 These are consistent with the idea that we've got a memory-allocation
 problem, ie, libxml is trying to access data that was already freed.
 But exactly where and how is not any more clear than before.

 FWIW, I think it's unlikely that a single query will reproduce this,
 because the problem looks to be an expectation that leftover data is
 still valid when it ain't.  What you need to be looking for is a series
 of two or more queries that crash PG.  Possibly it'll be easier to
 reproduce with that in mind ...

Thanks for the tips. I am trying to get some sort of reproducible series
of queries, but so far no luck. I'll let you know if I find anything.

-- m@


