Re: [GENERAL] 2 questions

2015-12-02 Thread Jim Nasby

On 12/1/15 10:26 AM, Scott Mead wrote:

The data directory will cause you many problems. You will need one data
directory that is accessed by one AND ONLY one host for each node
connected. You can't run an instance on multiple machines pointing to
the same 'data' directory simultaneously. Data directories cannot be
shared by multiple instances simultaneously; that's an active/active
shared-disk cluster, and most databases either don't support it or
require massive overhead (network/licensing, i.e. Oracle RAC) to do it.


Not only that, but trying to reliably operate a database on an NFS mount 
is *extremely* difficult. NFS is notorious for not correctly honoring 
things like fsync and atomic rename, which pretty much guarantees you'll 
end up with a corrupted (and unusable) database at some point.
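If NFS genuinely cannot be avoided, the usual mitigation is a hard, synchronous mount, so writes are pushed through rather than cached and failures block instead of returning soft errors. A hedged /etc/fstab sketch (server name and paths are placeholders, not from this thread):

```
# 'hard' makes the client retry indefinitely instead of surfacing I/O
# errors; 'sync' forces writes to the server rather than buffering them.
nfsserver:/export/pgdata  /opt/nfsDir  nfs  rw,hard,sync,vers=3  0 0
```

Even so, test fsync behavior on the actual mount before trusting it with a data directory.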


VMs can also be dangerous in this regard, though as far as I know all 
the major ones should be safe with a default configuration. Be extremely 
careful about any "performance tweaks" you decide to do though.


Personally, for small environments, I tend to prefer just running the 
database on bare hardware and making use of features like connection 
poolers and replication to provide for high availability and operational 
flexibility, rather than trying to finagle it through a bunch of 
virtualization.


One last thing to consider... you didn't mention backups. Understand
that multiple copies of a database kept on the same storage system are 
NOT a replacement for backups, no matter how much redundancy that 
storage system has. All it takes is one bug in the storage firmware, or 
one person hacking into the control interface for the storage system (or 
one malicious employee) and ALL of your data goes *poof*. You may well 
decide that it's not worth the extra cost/overhead to protect against 
those events. Just don't fall into the trap of "we have multiple 
snapshots/copies, and those are just as good as backups". :)
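As a hedged sketch of what a real backup can look like in practice (database name, paths, schedule, and the remote host are all placeholders, not from this thread), a nightly logical dump copied off the shared storage:

```
# crontab fragment: 02:00 dump, 02:30 copy off-box; note that '%' must
# be escaped as '\%' inside a crontab entry.
0  2 * * * pg_dump -Fc -U postgres mydb > /backups/mydb-$(date +\%F).dump
30 2 * * * scp /backups/mydb-$(date +\%F).dump backuphost:/offsite/
```

The essential property is that the copy ends up on hardware the primary storage system cannot touch.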

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 2 questions

2015-12-02 Thread anj patnaik
Thanks for the info Scott.

Can I set up a primary server called A that uses the Postgres installation
on an NFS-mounted filesystem, and then have A do a daily backup of database A
and restore it to database B on the same filesystem?

Then server B acts as a cold standby, and if server A goes down,
B would make the backup DB active?

In this case, both server A and B have the same NFS mount, but B is not
always writing to the data.

The target environment I am moving to is a RAID5 system with mirrored
discs. I will be getting a VM on a physical server and the mirrored disks
will store the database. I would like to setup a primary and secondary as
cold standby.

I am fairly new to PG, so I am asking these questions.

Please advise. Thank you.

On Tue, Dec 1, 2015 at 11:26 AM, Scott Mead  wrote:

>
>
> On Nov 30, 2015, at 12:54, anj patnaik  wrote:
>
> 1) directory listing:
>
> /opt/nfsDir/postgres/9.4/
> /bin
> /data
> /etc
>/pgAdmin3
>   
>
>
> The data directory will cause you many problems. You will need one data
> directory that is accessed by one AND ONLY one host for each node
> connected. You can't run an instance on multiple machines pointing to the
> same 'data' directory simultaneously. Data directories cannot be shared by
> multiple instances simultaneously; that's an active/active shared-disk
> cluster, and most databases either don't support it or require massive
> overhead (network/licensing, i.e. Oracle RAC) to do that.
>
>   You *can* re-use the other directories, but it can be fraught with issues,
> and you need to carefully think through upgrades, etc.
>
> 2) The way I am using PG now is that I have specified the directory above
> as the location to install it (from graphical installer).
>
> Now, it appears that postgres places files in other directories besides
> the one specified in the installer. For instance, there are scripts in
> /etc/init.d to start the service.
>
> So in answering my own question: it appears PG places files in other dirs
> so when I am given a new VM/different physical server with the same NFS
> mount I would either need to copy these files over or better yet un-install
> the current PG and re-install from scratch.
>
> Thanks,
> ap
>
> On Fri, Nov 27, 2015 at 8:30 PM, Adrian Klaver 
> wrote:
>
>> On 11/27/2015 01:17 PM, anj patnaik wrote:
>>
>>> Hello,
>>> Yes, postgres is currently installed on a nfs mounted file system. So
>>> when graphical installer runs, there is a form which asks location for
>>> installation. I specified this path /opt/nfsDir/Postgres where nfsDir is
>>> a nfs mount. So currently this is where PG 9.4 lives.
>>>
>>
>> What is under /opt/nfsDir/Postgres?
>>
>>
>>> My question is when I am given a brand new VM on a different physical
>>> server, can I mount that same NFS FS and use the Postgres or do I need
>>> to re-install PG on new VM?
>>>
>>
>> How are you using Postgres now? Please be specific, more detail is better
>> than less at this point.
>>
>>
>>> I am not sure if PG writes to any other directories besides the one
>>> where it is installed.
>>>
>>
>>
>>
>>> On the issue of logging, I see a lot of log statements because client
>>> apps do upserts and since I use Tcl I don't have a SQL proc, but rather
>>> I let it exception and then do an update on the row.
>>>
>>
>> You can use Tcl in the database:
>>
>> http://www.postgresql.org/docs/9.4/interactive/pltcl.html
>>
>> That will not change things if you let the database throw an exception
>> there also.
>>
>>
>>
>>> So, you can limit the size of an individual log, but there is no way to
>>> tell PG to keep the log file short?
>>>
>>> if i choose FATAL, I'd lose some log, right?
>>>
>>> Thank you!
>>> ap
>>>
>>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


Re: [GENERAL] 2 questions

2015-12-01 Thread Adrian Klaver

On 12/01/2015 09:16 AM, anj patnaik wrote:

Thanks for the info Scott.

Can I set up a primary server called A that uses the Postgres
installation on an NFS-mounted filesystem, and then have A do a daily backup
of database A and restore it to database B on the same filesystem?

Then I have server B acting as cold standby and if server A goes down,
then B would make the backup DB active?

In this case, both server A and B have the same NFS mount, but B is not
always writing to the data.

The target environment I am moving to is a RAID5 system with mirrored
discs. I will be getting a VM on a physical server and the mirrored
disks will store the database. I would like to setup a primary and
secondary as cold standby.

I am fairly new to PG, so I am asking these questions.

Please advise. Thank you.


Advice, read these pages:

http://www.postgresql.org/docs/9.4/interactive/tutorial-arch.html

http://www.postgresql.org/docs/9.4/interactive/creating-cluster.html

http://www.postgresql.org/docs/9.4/interactive/warm-standby.html
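As a minimal sketch of what those pages describe for 9.4 (archive paths are placeholders), a file-based standby reduces to an archive_command on the primary and a matching restore_command on the standby:

```
## primary: postgresql.conf
wal_level = archive                        # 'hot_standby' to also allow read-only queries
archive_mode = on
archive_command = 'cp %p /opt/archive/%f'  # /opt/archive must be readable by the standby

## standby: recovery.conf
standby_mode = 'on'
restore_command = 'cp /opt/archive/%f %p'
```

Each server still needs its own private data directory; only the archived WAL is shared.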







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 2 questions

2015-12-01 Thread Scott Mead


> On Nov 30, 2015, at 12:54, anj patnaik  wrote:
> 
> 1) directory listing:
> 
> /opt/nfsDir/postgres/9.4/
> /bin
> /data
> /etc
>/pgAdmin3
>   
> 

The data directory will cause you many problems. You will need one data
directory that is accessed by one AND ONLY one host for each node connected.
You can't run an instance on multiple machines pointing to the same 'data'
directory simultaneously. Data directories cannot be shared by multiple
instances simultaneously; that's an active/active shared-disk cluster, and most
databases either don't support it or require massive overhead
(network/licensing, i.e. Oracle RAC) to do that.

  You *can* re-use the other directories, but it can be fraught with issues, and
you need to carefully think through upgrades, etc.

> 2) The way I am using PG now is that I have specified the directory above as 
> the location to install it (from graphical installer). 
> 
> Now, it appears that postgres places files in other directories besides the 
> one specified in the installer. For instance, there are scripts in 
> /etc/init.d to start the service. 
> 
> So in answering my own question: it appears PG places files in other dirs so 
> when I am given a new VM/different physical server with the same NFS mount I 
> would either need to copy these files over or better yet un-install the 
> current PG and re-install from scratch.
> 
> Thanks,
> ap
> 
>> On Fri, Nov 27, 2015 at 8:30 PM, Adrian Klaver  
>> wrote:
>>> On 11/27/2015 01:17 PM, anj patnaik wrote:
>>> Hello,
>>> Yes, postgres is currently installed on a nfs mounted file system. So
>>> when graphical installer runs, there is a form which asks location for
>>> installation. I specified this path /opt/nfsDir/Postgres where nfsDir is
>>> a nfs mount. So currently this is where PG 9.4 lives.
>> 
>> What is under /opt/nfsDir/Postgres?
>> 
>>> 
>>> My question is when I am given a brand new VM on a different physical
>>> server, can I mount that same NFS FS and use the Postgres or do I need
>>> to re-install PG on new VM?
>> 
>> How are you using Postgres now? Please be specific, more detail is better
>> than less at this point.
>> 
>>> 
>>> I am not sure if PG writes to any other directories besides the one
>>> where it is installed.
>> 
>> 
>>> 
>>> On the issue of logging, I see a lot of log statements because client
>>> apps do upserts and since I use Tcl I don't have a SQL proc, but rather
>>> I let it exception and then do an update on the row.
>> 
>> You can use Tcl in the database:
>> 
>> http://www.postgresql.org/docs/9.4/interactive/pltcl.html
>> 
>> That will not change things if you let the database throw an exception there 
>> also.
>> 
>> 
>>> 
>>> So, you can limit the size of an individual log, but there is no way to
>>> tell PG to keep the log file short?
>>> 
>>> if i choose FATAL, I'd lose some log, right?
>>> 
>>> Thank you!
>>> ap
>> 
>> -- 
>> Adrian Klaver
>> adrian.kla...@aklaver.com
> 


Re: [GENERAL] 2 questions

2015-12-01 Thread anj patnaik
1) directory listing:

/opt/nfsDir/postgres/9.4/
    /bin
    /data
    /etc
    /pgAdmin3

2) The way I am using PG now is that I have specified the directory above
as the location to install it (from graphical installer).

Now, it appears that postgres places files in other directories besides the
one specified in the installer. For instance, there are scripts in
/etc/init.d to start the service.

So in answering my own question: it appears PG places files in other dirs
so when I am given a new VM/different physical server with the same NFS
mount I would either need to copy these files over or better yet un-install
the current PG and re-install from scratch.

Thanks,
ap

On Fri, Nov 27, 2015 at 8:30 PM, Adrian Klaver 
wrote:

> On 11/27/2015 01:17 PM, anj patnaik wrote:
>
>> Hello,
>> Yes, postgres is currently installed on a nfs mounted file system. So
>> when graphical installer runs, there is a form which asks location for
>> installation. I specified this path /opt/nfsDir/Postgres where nfsDir is
>> a nfs mount. So currently this is where PG 9.4 lives.
>>
>
> What is under /opt/nfsDir/Postgres?
>
>
>> My question is when I am given a brand new VM on a different physical
>> server, can I mount that same NFS FS and use the Postgres or do I need
>> to re-install PG on new VM?
>>
>
> How are you using Postgres now? Please be specific, more detail is better
> than less at this point.
>
>
>> I am not sure if PG writes to any other directories besides the one
>> where it is installed.
>>
>
>
>
>> On the issue of logging, I see a lot of log statements because client
>> apps do upserts and since I use Tcl I don't have a SQL proc, but rather
>> I let it exception and then do an update on the row.
>>
>
> You can use Tcl in the database:
>
> http://www.postgresql.org/docs/9.4/interactive/pltcl.html
>
> That will not change things if you let the database throw an exception
> there also.
>
>
>
>> So, you can limit the size of an individual log, but there is no way to
>> tell PG to keep the log file short?
>>
>> if i choose FATAL, I'd lose some log, right?
>>
>> Thank you!
>> ap
>>
>>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] 2 questions

2015-12-01 Thread anj patnaik
Hello,
Yes, Postgres is currently installed on an NFS-mounted file system. When
the graphical installer runs, there is a form which asks for the installation
location. I specified the path /opt/nfsDir/Postgres, where nfsDir is an
NFS mount. So currently this is where PG 9.4 lives.

My question is when I am given a brand new VM on a different physical
server, can I mount that same NFS FS and use the Postgres or do I need to
re-install PG on new VM?

I am not sure if PG writes to any other directories besides the one where
it is installed.

On the issue of logging, I see a lot of log statements because client apps
do upserts, and since I use Tcl I don't have a SQL proc; rather, I let it
raise an exception and then do an update on the row.

So, you can limit the size of an individual log, but there is no way to
tell PG to keep the log file short?

If I choose FATAL, I'd lose some log, right?

Thank you!
ap

On Fri, Nov 27, 2015 at 10:19 AM, Adrian Klaver 
wrote:

> On 11/25/2015 10:28 AM, anj patnaik wrote:
>
>> Hello all,
>> I've got 2 more questions. The cron job is now setup and email gets
>> generated with proper body.
>>
>> I've one setup with NFS which appears to work smoothly. Now soon, I will
>> be given a Linux VM on a different physical server, but will retain my
>> NFS mount. I've installed Postgres 9.4 using the graphical installer and
>> specified the directory for the nfs mount.
>>
>
> Define 'setup with NFS'.
>
> So are you saying this how you set up the old instance or how you are
> setting up the new VM?
>
>
>> 1) When I move to the new VM, can I keep using that NFS mount without
>> having to do a re-install of PG? This would be a different physical
>> machine.
>>
>
> Where is Postgres installed now?
>
> Not entirely following, but I hope you are not asking if two Postgres
> installs can share the same NFS mount? That will end badly.
>
>
> It might help if you give a schematic description of what you are trying
> to achieve.
>
>
>> 2) I have a cron job that deletes log files older than 10 days, but I am
>> noticing rather large log files. Is there a way to limit the size of log
>> files?
>>
>
> What I do is keep the previous 2 days of files as written and then keep
> compressed files older than that for some period of time. The files compress
> a good bit, so it works well for me. This is done via a cron script that
> runs each night.
>
>
>> users do upserts and they are valid, but those are getting dumped as
>> error statements. I set the verbosity to "terse", but still seeing lots
>> of log output.
>>
>
> So what are the errors?
>
> As to 'terse':
>
>
> http://www.postgresql.org/docs/9.4/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
> log_error_verbosity (enum)
>
> Controls the amount of detail written in the server log for each
> message that is logged. Valid values are TERSE, DEFAULT, and VERBOSE, each
> adding more fields to displayed messages. TERSE excludes the logging of
> DETAIL, HINT, QUERY, and CONTEXT error information. VERBOSE output includes
> the SQLSTATE error code (see also Appendix A) and the source code file
> name, function name, and line number that generated the error. Only
> superusers can change this setting.
>
>
> You lose a lot of valuable information this way. I would go with Albe's
> suggestion and change log_min_error_statement.
>
>
>
>
>> My settings are as follows:
>> postgres=#  select name,setting,unit from pg_settings where name like
>> '%log%';
>>  name |setting | unit
>> -++--
>>   log_autovacuum_min_duration | -1 | ms
>>   log_checkpoints | off|
>>   log_connections | off|
>>   log_destination | stderr |
>>   log_directory   | pg_log |
>>   log_disconnections  | off|
>>   log_duration| off|
>>   log_error_verbosity | terse  |
>>   log_executor_stats  | off|
>>   log_file_mode   | 0600   |
>>   log_filename| postgresql-%Y-%m-%d_%H%M%S.log |
>>   log_hostname| off|
>>   log_line_prefix | %t |
>>   log_lock_waits  | off|
>>   log_min_duration_statement  | -1 | ms
>>   log_min_error_statement | error  |
>>   log_min_messages| error  |
>>   log_parser_stats| off|
>>   log_planner_stats   | off|
>>   log_rotation_age| 1440   | min
>>   log_rotation

Re: [GENERAL] 2 questions

2015-11-30 Thread Adrian Klaver

On 11/30/2015 09:54 AM, anj patnaik wrote:

1) directory listing:

/opt/nfsDir/postgres/9.4/
 /bin
 /data
 /etc
/pgAdmin3
   

2) The way I am using PG now is that I have specified the directory
above as the location to install it (from graphical installer).

Now, it appears that postgres places files in other directories besides
the one specified in the installer. For instance, there are scripts in
/etc/init.d to start the service.


Answers without context are not answers.

Where is /etc/init.d located: on the machine that has the NFS mount, or on
another machine?




So in answering my own question: it appears PG places files in other
dirs so when I am given a new VM/different physical server with the same
NFS mount I would either need to copy these files over or better yet
un-install the current PG and re-install from scratch.


Why not save yourself a lot of trouble, and if it is not the case 
already, set up Postgres on a central server and connect to it from 
whatever client you wish.




Thanks,
ap

On Fri, Nov 27, 2015 at 8:30 PM, Adrian Klaver
mailto:adrian.kla...@aklaver.com>> wrote:

On 11/27/2015 01:17 PM, anj patnaik wrote:

Hello,
Yes, postgres is currently installed on a nfs mounted file
system. So
when graphical installer runs, there is a form which asks
location for
installation. I specified this path /opt/nfsDir/Postgres where
nfsDir is
a nfs mount. So currently this is where PG 9.4 lives.


What is under /opt/nfsDir/Postgres?


My question is when I am given a brand new VM on a different
physical
server, can I mount that same NFS FS and use the Postgres or do
I need
to re-install PG on new VM?


How are you using Postgres now? Please be specific, more detail is
better than less at this point.


I am not sure if PG writes to any other directories besides the one
where it is installed.




On the issue of logging, I see a lot of log statements because
client
apps do upserts and since I use Tcl I don't have a SQL proc, but
rather
I let it exception and then do an update on the row.


You can use Tcl in the database:

http://www.postgresql.org/docs/9.4/interactive/pltcl.html

That will not change things if you let the database throw an
exception there also.



So, you can limit the size of an individual log, but there is no
way to
tell PG to keep the log file short?

if i choose FATAL, I'd lose some log, right?

Thank you!
ap


--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 2 questions

2015-11-27 Thread Adrian Klaver

On 11/27/2015 01:17 PM, anj patnaik wrote:

Hello,
Yes, Postgres is currently installed on an NFS-mounted file system. When
the graphical installer runs, there is a form which asks for the installation
location. I specified the path /opt/nfsDir/Postgres, where nfsDir is an
NFS mount. So currently this is where PG 9.4 lives.

My question is when I am given a brand new VM on a different physical
server, can I mount that same NFS FS and use the Postgres or do I need
to re-install PG on new VM?

I am not sure if PG writes to any other directories besides the one
where it is installed.

On the issue of logging, I see a lot of log statements because client
apps do upserts, and since I use Tcl I don't have a SQL proc; rather,
I let it raise an exception and then do an update on the row.

So, you can limit the size of an individual log, but there is no way to
tell PG to keep the log file short?

If I choose FATAL, I'd lose some log, right?


Hit Enter too quickly. You answered your own question: it is possible to
have Postgres keep the log file short, just do not log anything. :) You
have to decide what is more important: the size of the file or the
information in it.




Thank you!
ap




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 2 questions

2015-11-27 Thread Adrian Klaver

On 11/27/2015 01:17 PM, anj patnaik wrote:

Hello,
Yes, Postgres is currently installed on an NFS-mounted file system. When
the graphical installer runs, there is a form which asks for the installation
location. I specified the path /opt/nfsDir/Postgres, where nfsDir is an
NFS mount. So currently this is where PG 9.4 lives.


What is under /opt/nfsDir/Postgres?



My question is when I am given a brand new VM on a different physical
server, can I mount that same NFS FS and use the Postgres or do I need
to re-install PG on new VM?


How are you using Postgres now? Please be specific, more detail is
better than less at this point.




I am not sure if PG writes to any other directories besides the one
where it is installed.





On the issue of logging, I see a lot of log statements because client
apps do upserts and since I use Tcl I don't have a SQL proc, but rather
I let it exception and then do an update on the row.


You can use Tcl in the database:

http://www.postgresql.org/docs/9.4/interactive/pltcl.html

That will not change things if you let the database throw an exception 
there also.




So, you can limit the size of an individual log, but there is no way to
tell PG to keep the log file short?

If I choose FATAL, I'd lose some log, right?

Thank you!
ap



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 2 questions

2015-11-27 Thread Adrian Klaver

On 11/25/2015 10:28 AM, anj patnaik wrote:

Hello all,
I've got 2 more questions. The cron job is now setup and email gets
generated with proper body.

I've one setup with NFS which appears to work smoothly. Now soon, I will
be given a Linux VM on a different physical server, but will retain my
NFS mount. I've installed Postgres 9.4 using the graphical installer and
specified the directory for the nfs mount.


Define 'setup with NFS'.

So are you saying this how you set up the old instance or how you are 
setting up the new VM?




1) When I move to the new VM, can I keep using that NFS mount without
having to do a re-install of PG? This would be a different physical
machine.


Where is Postgres installed now?

Not entirely following, but I hope you are not asking if two Postgres 
installs can share the same NFS mount? That will end badly.



It might help if you give a schematic description of what you are trying 
to achieve.




2) I have a cron job that deletes log files older than 10 days, but I am
noticing rather large log files. Is there a way to limit the size of log
files?


What I do is keep the previous 2 days of files as written and then keep
compressed files older than that for some period of time. The files
compress a good bit, so it works well for me. This is done via a cron
script that runs each night.
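A minimal sketch of such a script (the pg_log path, the 2-day compression threshold, and the 10-day retention are assumptions to adjust for your setup):

```shell
#!/bin/sh
# Compress server logs not modified for 2+ days, and delete compressed
# logs older than a retention window, as a nightly cron job.

prune_pg_logs() {
    logdir="$1"     # e.g. /opt/nfsDir/postgres/9.4/data/pg_log
    keep_days="$2"  # delete *.log.gz older than this many days

    # gzip plain log files whose mtime is more than 2 days old
    find "$logdir" -name '*.log' -mtime +2 -exec gzip {} \;

    # drop compressed logs past the retention window
    find "$logdir" -name '*.log.gz' -mtime +"$keep_days" -delete
}

# Example nightly invocation:
# prune_pg_logs /opt/nfsDir/postgres/9.4/data/pg_log 10
```

Wired into cron once a night, this keeps the recent logs readable while bounding total disk use.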




users do upserts and they are valid, but those are getting dumped as
error statements. I set the verbosity to "terse", but still seeing lots
of log output.


So what are the errors?

As to 'terse':

http://www.postgresql.org/docs/9.4/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
log_error_verbosity (enum)

Controls the amount of detail written in the server log for each 
message that is logged. Valid values are TERSE, DEFAULT, and VERBOSE, 
each adding more fields to displayed messages. TERSE excludes the 
logging of DETAIL, HINT, QUERY, and CONTEXT error information. VERBOSE 
output includes the SQLSTATE error code (see also Appendix A) and the 
source code file name, function name, and line number that generated the 
error. Only superusers can change this setting.



You lose a lot of valuable information this way. I would go with Albe's 
suggestion and change log_min_error_statement.





My settings are as follows:
postgres=#  select name,setting,unit from pg_settings where name like
'%log%';
 name |setting | unit
-++--
  log_autovacuum_min_duration | -1 | ms
  log_checkpoints | off|
  log_connections | off|
  log_destination | stderr |
  log_directory   | pg_log |
  log_disconnections  | off|
  log_duration| off|
  log_error_verbosity | terse  |
  log_executor_stats  | off|
  log_file_mode   | 0600   |
  log_filename| postgresql-%Y-%m-%d_%H%M%S.log |
  log_hostname| off|
  log_line_prefix | %t |
  log_lock_waits  | off|
  log_min_duration_statement  | -1 | ms
  log_min_error_statement | error  |
  log_min_messages| error  |
  log_parser_stats| off|
  log_planner_stats   | off|
  log_rotation_age| 1440   | min
  log_rotation_size   | 10240  | kB
  log_statement   | none   |
  log_statement_stats | off|
  log_temp_files  | -1 | kB
  log_timezone| EST5EDT|
  log_truncate_on_rotation| off|
  logging_collector   | on |
  syslog_facility | local0 |
  syslog_ident| postgres   |
  wal_log_hints   | off|
(30 rows)

postgres=#


Thank you.




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] 2 questions

2015-11-27 Thread Albe Laurenz
anj patnaik wrote:
> I've got 2 more questions. The cron job is now setup and email gets generated 
> with proper body.
> 
> I've one setup with NFS which appears to work smoothly. Now soon, I will be 
> given a Linux VM on a
> different physical server, but will retain my NFS mount. I've installed 
> Postgres 9.4 using the
> graphical installer and specified the directory for the nfs mount.
> 
> 1) When I move to the new VM, can I keep using that NFS mount without having 
> to do a re-install of PG?
> This would be a different physical machine.

I don't know what the graphical installer does.
If all you want to share is the PostgreSQL software, you can certainly use an 
NFS mount
for that as long as the architecture is the same on all machines.
But there may be other things necessary for running PostgreSQL, for example
startup scripts, that you may want to have everywhere.

> 2) I have a cron job that deletes log files older than 10 days, but I am 
> noticing rather large log
> files. Is there a way to limit the size of log files?

There is the log_rotation_size parameter that limits the size of an individual
log file, but it won't reduce the overall amount of log.

> users do upserts and they are valid, but those are getting dumped as error 
> statements. I set the
> verbosity to "terse", but still seeing lots of log output.
> 
> My settings are as follows:
[...]
>  log_min_error_statement | error  |

Change that to "fatal", and the error messages will no longer be in the log.
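Pulling the two suggestions together, a hedged postgresql.conf sketch (values are illustrative):

```
log_rotation_size       = 10MB   # cap the size of each individual log file
log_min_error_statement = fatal  # stop logging the offending statement for ordinary errors
```

This bounds file size and drops the upsert noise, at the cost of losing the statement text for genuine errors.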

Yours,
Laurenz Albe



[GENERAL] 2 questions

2015-11-26 Thread anj patnaik
Hello all,
I've got 2 more questions. The cron job is now setup and email gets
generated with proper body.

I've one setup with NFS which appears to work smoothly. Now soon, I will be
given a Linux VM on a different physical server, but will retain my NFS
mount. I've installed Postgres 9.4 using the graphical installer and
specified the directory for the nfs mount.

1) When I move to the new VM, can I keep using that NFS mount without
having to do a re-install of PG? This would be a different physical
machine.

2) I have a cron job that deletes log files older than 10 days, but I am
noticing rather large log files. Is there a way to limit the size of log
files?

users do upserts and they are valid, but those are getting dumped as error
statements. I set the verbosity to "terse", but still seeing lots of log
output.

My settings are as follows:
postgres=#  select name,setting,unit from pg_settings where name like
'%log%';
name |setting | unit
-++--
 log_autovacuum_min_duration | -1 | ms
 log_checkpoints | off|
 log_connections | off|
 log_destination | stderr |
 log_directory   | pg_log |
 log_disconnections  | off|
 log_duration| off|
 log_error_verbosity | terse  |
 log_executor_stats  | off|
 log_file_mode   | 0600   |
 log_filename| postgresql-%Y-%m-%d_%H%M%S.log |
 log_hostname| off|
 log_line_prefix | %t |
 log_lock_waits  | off|
 log_min_duration_statement  | -1 | ms
 log_min_error_statement | error  |
 log_min_messages| error  |
 log_parser_stats| off|
 log_planner_stats   | off|
 log_rotation_age| 1440   | min
 log_rotation_size   | 10240  | kB
 log_statement   | none   |
 log_statement_stats | off|
 log_temp_files  | -1 | kB
 log_timezone| EST5EDT|
 log_truncate_on_rotation| off|
 logging_collector   | on |
 syslog_facility | local0 |
 syslog_ident| postgres   |
 wal_log_hints   | off|
(30 rows)

postgres=#


Thank you.


Re: [GENERAL] 2 questions re RAID

2011-06-21 Thread Scott Ribe
On Jun 21, 2011, at 7:49 AM, Vick Khera wrote:

> Ok...there is *one* advantage:
> you can lose any two drives at the same time and still survive, with
> RAID-10 if you lose the wrong two drives you're hosed.

Exactly. The performance advantage of RAID-10 over RAID-6 in this server is, I
think, not useful.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] 2 questions re RAID

2011-06-21 Thread Vick Khera
On Fri, Jun 17, 2011 at 1:35 PM, Scott Ribe  wrote:
> RAID-1 & RAID-10 are not ruled out, I'm just exploring options. And I'm not 
> actually wanting to use RAID 5; it's RAID 6 that I'm considering...

You have 4 disk bays and you want RAID-6?  How will that improve
anything over RAID-10?  You will have the same amount of available
space, and the writes will be slower.  Ok...there is *one* advantage:
you can lose any two drives at the same time and still survive, with
RAID-10 if you lose the wrong two drives you're hosed.

That said, on one of my production DB's, I have a 16-bay raid
enclosure and I run RAID-6 + hot spare.  It even has an Areca
controller made visible to the server as a single drive via fibre
channel.  Very sweet setup.



Re: [GENERAL] 2 questions re RAID

2011-06-21 Thread Vick Khera
On Fri, Jun 17, 2011 at 1:20 PM, Scott Marlowe  wrote:
> Pluses for the Arecas I've used:
> Out Of Band monitoring.  Heck, I've updated the firmware on them from
> 1000 miles away.
> fast in RAID-10.  Lots of HW controllers (I'm looking at you, LSI)
> perform poorly with layered RAID.
> They all use the same simple standard battery backed unit, unlike some
> manufacturers that glue them onto the DIMM so you have to buy a new
> memory module to replace your BBU (again, I'm looking at you LSI)
> Great UI via the web and / or the BIOS.  Again, some other RAID setup
> utils are not so nice (and again, I'm looking at you, LSI)
>

If I could repeat this 1000 times, I would.



Re: [GENERAL] 2 questions re RAID

2011-06-18 Thread Scott Ribe
On Jun 17, 2011, at 11:23 PM, Greg Smith wrote:

> I guess you could call Highpoint a RAID manufacturer, but I wouldn't do so.  
> They've released so many terrible problems over the years that it's hard to 
> take the fact that they may have something reasonable you can buy now (the 
> 43XX cards I think?)  seriously.

Ah, I see. So they're on par with Apple's RAID controller instead of being the 
first step up.

> Atto is so Mac focused that you're not going to find much experience here, 
> for the same reason you didn't get any response to your original question.  
> Their cards are using the same Intel IO Processor (IOP) hardware as some 
> known capable cards.  For example, the ExpressSAS R348 is named that because 
> it has an Intel 348 IOP.  That's the same basic processor as on the medium 
> sized Areca boards:  http://www.areca.us/products/pcietosas1680series.htm  So 
> speed should be reasonable, presuming they didn't make any major errors in 
> board design or firmware.

Good info. Didn't know about their focus, because the last time I dealt with 
them was so many years ago they still had a significant focus on Windows, or so 
it seemed to me at the time. Focus on Mac says nothing about the firmware on 
the card, but it should bode well for the driver.

> The real thing you need to investigate is whether the write cache setup is 
> done right, and whether monitoring is available in a way you can talk to.  
> What you want is for the card to run in write-back mode normally, degrading 
> to write-through when the battery stops working well.  If you don't see that 
> sort of thing clearly documented as available, you really don't want to 
> consider their cards.

Well, right up front in their marketing materials they make a major point about 
cache protection, how important it is, how good it is, using 
ultracapacitor+flash over batteries (on some of their controllers). So they 
have awareness & intent; competence and follow-through of course are not 
assured by marketing materials. (Also they talk about background scanning of 
drives for defects.) And it looks like they offer all of: GUI setup/monitoring 
that runs on OS X, command-line setup/monitoring that runs on OS X, SNMP...

> You're basically asking "if I don't write to the database, does the fact that 
> write performance on RAID5 is slow matter?"  When asked that way, sure, it's 
> fine.  If after applying the write cache to help, your write throughput 
> requirements don't ever exceed what a single disk can provide, then maybe 
> RAID5 will be fine for you.  Make sure you keep shared_buffers low though, 
> because you're not going to be able to absorb a heavy checkpoint sync on 
> RAID5.

Yes, basically I wanted to confirm that's what I was actually asking ;-) The 
only circumstance under which I could see overflowing the card's write cache is 
during migrations. So my choice then really is better performance during rare 
migrations vs being able to lose any 2 drives out of 4 (RAID6). Which is OK, 
since neither choice is really bad--having been burned by bad disk runs before, 
I'll probably go for safety. (FYI this is not my only margin for failure. Two 
geographically-distributed WAL-streaming replicas with low-end RAID1 are the 
next line of defense. Followed by, god forbid I should ever have to use them, 
daily dumps.)

Thanks for all the info. I guess about all I have remaining to do is 
sanity-check my beliefs about disk I/O.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] 2 questions re RAID

2011-06-17 Thread Greg Smith

On 06/17/2011 01:02 PM, Scott Ribe wrote:

> 1) Is my impression correct that given a choice between Areca & Highpoint, it's 
> a no-brainer to go with Areca?


I guess you could call Highpoint a RAID manufacturer, but I wouldn't do 
so.  They've released so many terrible problems over the years that it's 
hard to take the fact that they may have something reasonable you can 
buy now (the 43XX cards I think?)  seriously.



> And, in further digging, I discover that ATTO ExpressSAS is an option for me. 
> Anyone got comments on these? (I notice that they use ultracapacitor/flash to 
> protect cache...)



Atto is so Mac focused that you're not going to find much experience 
here, for the same reason you didn't get any response to your original 
question.  Their cards are using the same Intel IO Processor (IOP) 
hardware as some known capable cards.  For example, the ExpressSAS R348 
is named that because it has an Intel 348 IOP.  That's the same basic 
processor as on the medium sized Areca boards:  
http://www.areca.us/products/pcietosas1680series.htm  So speed should be 
reasonable, presuming they didn't make any major errors in board design 
or firmware.


The real thing you need to investigate is whether the write cache setup 
is done right, and whether monitoring is available in a way you can talk 
to.  What you want is for the card to run in write-back mode normally, 
degrading to write-through when the battery stops working well.  If you 
don't see that sort of thing clearly documented as available, you really 
don't want to consider their cards.



> 2) I understand why RAID 5 is not generally recommended for good db 
> performance. But if the database is not huge (10-20GB), and the server has 
> enough RAM to keep most all of the db cached, and the RAID uses 
> (battery-backed) write-back cache, is it still really an issue?


You're basically asking "if I don't write to the database, does the fact 
that write performance on RAID5 is slow matter?"  When asked that way, 
sure, it's fine.  If after applying the write cache to help, your write 
throughput requirements don't ever exceed what a single disk can 
provide, then maybe RAID5 will be fine for you.  Make sure you keep 
shared_buffers low though, because you're not going to be able to absorb 
a heavy checkpoint sync on RAID5.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books




Re: [GENERAL] 2 questions re RAID

2011-06-17 Thread Scott Marlowe
On Fri, Jun 17, 2011 at 11:35 AM, Scott Ribe
 wrote:
> It's small enough that there's some other things going on at the same small 
> server with 4 disk bays ;-) My thinking was that write-back cache might 
> mitigate the poor write performance enough to not be noticed. This db doesn't 
> generally get big batch updates anyway, it's mostly a constant stream of 
> small updates coming in and I have a hard time imagining 256MB of cache 
> filling up very often. (I have at least a fuzzy understanding of how WAL 
> segments affect the write load.)

We run our internal dev server on RAID-6 and it works well enough.
Again, like your usage case, it doesn't get beat up too hard, so
RAID-6 works fine.  I prefer RAID-6 because it doesn't degrade as badly
as RAID-5 when a single drive fails, and of course it's still fully 
redundant with a single drive failure.



Re: [GENERAL] 2 questions re RAID

2011-06-17 Thread Scott Ribe
On Jun 17, 2011, at 11:20 AM, Scott Marlowe wrote:

> Generally, yes, but the model of the card is more important than the
> maker.  I.e. an Areca 1880 or 1680 is a fantastic performer.  But the
> older 1120 series aren't gonna set the world on fire or anything.

And, in further digging, I discover that ATTO ExpressSAS is an option for me. 
Anyone got comments on these? (I notice that they use ultracapacitor/flash to 
protect cache...)

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] 2 questions re RAID

2011-06-17 Thread Scott Ribe
Thanks much for the specific info on Areca RAID cards. Very helpful.

On Jun 17, 2011, at 11:20 AM, Scott Marlowe wrote:

> The problem with RAID-5 is crappy write performance.  Being big or
> small won't change that.  Plus if the db is small why use RAID-5?

It's small enough that there's some other things going on at the same small 
server with 4 disk bays ;-) My thinking was that write-back cache might 
mitigate the poor write performance enough to not be noticed. This db doesn't 
generally get big batch updates anyway, it's mostly a constant stream of small 
updates coming in and I have a hard time imagining 256MB of cache filling up 
very often. (I have at least a fuzzy understanding of how WAL segments affect 
the write load.)

RAID-1 & RAID-10 are not ruled out, I'm just exploring options. And I'm not 
actually wanting to use RAID 5; it's RAID 6 that I'm considering...

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







Re: [GENERAL] 2 questions re RAID

2011-06-17 Thread Scott Marlowe
On Fri, Jun 17, 2011 at 11:02 AM, Scott Ribe
 wrote:
> No responses to my earlier post, I'm assuming because OS X experience is 
> rather thin in this group ;-) So a couple of more specific questions:
>
> 1) Is my impression correct that given a choice between Areca & Highpoint, 
> it's a no-brainer to go with Areca?

Generally, yes, but the model of the card is more important than the
maker.  I.e. an Areca 1880 or 1680 is a fantastic performer.  But the
older 1120 series aren't gonna set the world on fire or anything.

Pluses for the Arecas I've used:
Out Of Band monitoring.  Heck, I've updated the firmware on them from
1000 miles away.
fast in RAID-10.  Lots of HW controllers (I'm looking at you, LSI)
perform poorly with layered RAID.
They all use the same simple standard battery backed unit, unlike some
manufacturers that glue them onto the DIMM so you have to buy a new
memory module to replace your BBU (again, I'm looking at you LSI)
Great UI via the web and / or the BIOS.  Again, some other RAID setup
utils are not so nice (and again, I'm looking at you, LSI)

> 2) I understand why RAID 5 is not generally recommended for good db 
> performance. But if the database is not huge (10-20GB), and the server has 
> enough RAM to keep most all of the db cached, and the RAID uses 
> (battery-backed) write-back cache, is it still really an issue?

The problem with RAID-5 is crappy write performance.  Being big or
small won't change that.  Plus if the db is small why use RAID-5?



[GENERAL] 2 questions re RAID

2011-06-17 Thread Scott Ribe
No responses to my earlier post, I'm assuming because OS X experience is rather 
thin in this group ;-) So a couple of more specific questions:

1) Is my impression correct that given a choice between Areca & Highpoint, it's 
a no-brainer to go with Areca?

2) I understand why RAID 5 is not generally recommended for good db 
performance. But if the database is not huge (10-20GB), and the server has 
enough RAM to keep most all of the db cached, and the RAID uses 
(battery-backed) write-back cache, is it still really an issue?

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







[GENERAL] 2 questions to ask

2010-03-08 Thread chaoyong wang

Hi,

I'm using VS2005 to debug PostgreSQL, and I have 2 questions to ask:

1. I want to add a function to contrib/xml2/xpath.c. I changed xpath.c, 
pgxml.sql, and pgxml.sql.in at the same time, then built and installed as the 
documentation said. When I execute "psql test < pgxml.sql", all functions are 
created successfully except the one I added:

   ERROR:  could not find function "xml_value_index" in file "C:\Program 
Files\PostgreSQL\8.3\lib/pgxml.dll"

   I rebuilt the pgxml project and replaced C:\Program 
Files\PostgreSQL\8.3\lib\pgxml.dll with the newly produced 
.\Debug\pgxml\pgxml.dll, but the error remains the same.

2. When I execute a simple SELECT query to see how it works, the concept of a 
"Portal" really puzzles me; maybe it's too abstract. Could you please explain 
it more specifically?

Thanks for reading, and I look forward to your reply.

Best regards,
Catcher Wang
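[Editor's note: on Windows, "could not find function ... in file ...dll" at CREATE FUNCTION time usually means the symbol isn't exported from the DLL, or the SQL wrapper names a symbol that doesn't match the exported one. A hedged sketch of the SQL side, with argument and return types invented for illustration (only the function name comes from the error above):]

```sql
-- Hypothetical sketch: the wrapper in pgxml.sql.in must name the C symbol
-- exactly as exported from the DLL.  The C source also needs a
-- PG_FUNCTION_INFO_V1(xml_value_index) declaration, and with MSVC the symbol
-- must actually be exported (e.g. via a .DEF file or an export macro).
-- Argument/return types below are assumptions, not the poster's real signature.
CREATE OR REPLACE FUNCTION xml_value_index(text, text)
RETURNS integer
AS 'MODULE_PATHNAME', 'xml_value_index'
LANGUAGE C IMMUTABLE STRICT;
```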
_
Hotmail: Trusted email with powerful SPAM protection.
https://signup.live.com/signup.aspx?id=60969


Re: [GENERAL] 2 questions about types

2005-03-16 Thread Richard Huxton
Jason Tesser wrote:

There's an example in the manuals - chapter "7.2.1.4. Table Functions"
SELECT *
FROM dblink('dbname=mydb', 'select proname, prosrc from pg_proc')
  AS t1(proname name, prosrc text)
WHERE proname LIKE 'bytea%';
So basically, you need to supply the type definitions in your SELECT if 
you aren't going to supply it in the function definition.

OK, I tried to rewrite as follows but I get an error that says "a column
definition list is required for functions returning record".
Because you didn't supply the type definitions in your SELECT...
here is my function and call for it now
CREATE OR REPLACE FUNCTION "public"."loginbyindidgettest" (integer)
RETURNS SETOF "pg_catalog"."record" AS'
...
select * from loginbyindidgettest(43650);
This needs to be something like:
  SELECT * FROM loginbyindidgettest(43650) AS myres(a int, b text, c 
date, ...)

Obviously, the types need to match the results of your function.
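[Editor's note: concretely, a call matching the columns Jason's function selects might look like the sketch below. The column names are taken from his posted query; the types are guesses and would need to match the actual table definitions:]

```sql
-- Hypothetical sketch: the column definition list must match, in order and
-- type, what the SETOF record function actually returns.  Types here are
-- inferred from the column names in the posted query, not from real DDL.
SELECT *
FROM loginbyindidgettest(43650)
  AS t(indid integer, title text, firstname text, middlename text,
       lastname text, suffix text, loginname text, loginnameid integer,
       ad boolean, "current" boolean, email text, note text);
```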
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] 2 questions about types

2005-03-16 Thread Jason Tesser

> 
> There's an example in the manuals - chapter "7.2.1.4. Table Functions"
> 
> SELECT *
>  FROM dblink('dbname=mydb', 'select proname, prosrc from pg_proc')
>AS t1(proname name, prosrc text)
>  WHERE proname LIKE 'bytea%';
> 
> So basically, you need to supply the type definitions in your SELECT if 
> you aren't going to supply it in the function definition.

OK, I tried to rewrite as follows but I get an error that says "a column
definition list is required for functions returning record".

here is my function and call for it now
CREATE OR REPLACE FUNCTION "public"."loginbyindidgettest" (integer)
RETURNS SETOF "pg_catalog"."record" AS'
declare

iindid alias for $1;
returnRec RECORD;

begin

for returnRec in select t1.indid, t1.title, t1.firstname, t1.middlename,
t1.lastname, t1.suffix, t1.loginname, t1.loginnameid, t1.ad,t1.current,
t1.email, t1.note
from tblindividual inner join tblloginname on (tblindividual.indid =
tblloginname.indlink) as t1
where tblloginname.indlink = iindid
order by tblindividual.lastname, tblindividual.firstname,
tblindividual.middlename, tblloginname.loginname
loop
 return next returnRec;
 end loop;
 return;
end;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

select * from loginbyindidgettest(43650);





Re: [GENERAL] 2 questions about types

2005-03-16 Thread Richard Huxton
Jason Tesser wrote:
OK here is an example of a function where I had to create a type called
login.
How could I have written this function without having to create a type?
CREATE OR REPLACE FUNCTION "public"."loginbyindidget" (integer) RETURNS
SETOF "public"."login" AS'
[snip]
There's an example in the manuals - chapter "7.2.1.4. Table Functions"
SELECT *
FROM dblink('dbname=mydb', 'select proname, prosrc from pg_proc')
  AS t1(proname name, prosrc text)
WHERE proname LIKE 'bytea%';
So basically, you need to supply the type definitions in your SELECT if 
you aren't going to supply it in the function definition.
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] 2 questions about types

2005-03-16 Thread Jason Tesser
OK here is an example of a function where I had to create a type called
login.
How could I have written this function without having to create a type?

CREATE OR REPLACE FUNCTION "public"."loginbyindidget" (integer) RETURNS
SETOF "public"."login" AS'
declare

iindid alias for $1;
returnRec RECORD;

begin

for returnRec in select tblindividual.indid, tblindividual.title,
tblindividual.firstname, tblindividual.middlename,
tblindividual.lastname, tblindividual.suffix, tblloginname.loginname,
tblloginname.loginnameid, tblloginname.ad,tblloginname.current,
tblloginname.email, tblloginname.note
from tblindividual inner join tblloginname on (tblindividual.indid =
tblloginname.indlink)
where tblloginname.indlink = iindid
order by tblindividual.lastname, tblindividual.firstname,
tblindividual.middlename, tblloginname.loginname
loop
 return next returnRec;
 end loop;
 return;
end;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

On Wed, 2005-03-16 at 13:51 +, Richard Huxton wrote:
> Jason Tesser wrote:
> > 1. I have a few functions that depend on a type.  I don't want to have to 
> > drop every function just so I can drop the type and recreate everything.
> > Is there a better way to do this in Postgres?
> 
> Not really - if you're redefining the type then the functions really 
> have to be recreated. I try to keep related objects in the same file, so 
> I can re-run them all together.
> 
> > 2.  The reason I had to create my own type was because record didn't work 
> > for me when I was selecting data across multiple tables.
> > I thought it should be dynamic but it only seems to work if i select all 
> > data in one table.  I need 2-3 columns from multiple
> > tables.  
> > Is there a better way to do this in Postgres?
> 
> Could you give more details of what you're trying? RECORD variables in 
> functions should work fine.
> 
> --
>Richard Huxton
>Archonet Ltd



Re: [GENERAL] 2 questions about types

2005-03-16 Thread Richard Huxton
Jason Tesser wrote:
1. I have a few functions that depend on a type.  I don't want to have to drop 
every function just so I can drop the type and recreate everything.
Is there a better way to do this in Postgres?
Not really - if you're redefining the type then the functions really 
have to be recreated. I try to keep related objects in the same file, so 
I can re-run them all together.

2.  The reason I had to create my own type was because record didn't work for 
me when I was selecting data across multiple tables.
I thought it should be dynamic but it only seems to work if I select all data 
in one table.  I need 2-3 columns from multiple tables.
Is there a better way to do this in Postgres?
Could you give more details of what you're trying? RECORD variables in 
functions should work fine.
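[Editor's note: one way to avoid both the custom type and the per-call column definition list is OUT parameters, which define the row type inline. This requires PostgreSQL 8.1 or later, so it would not work on the 7.4.2 the poster mentions; table and column names below are illustrative, loosely modeled on the posted query:]

```sql
-- Hypothetical sketch (PostgreSQL 8.1+): OUT parameters replace both
-- CREATE TYPE and the column definition list at call time.
CREATE OR REPLACE FUNCTION login_by_indid(IN  p_indid   integer,
                                          OUT indid     integer,
                                          OUT loginname text,
                                          OUT email     text)
RETURNS SETOF record AS $$
BEGIN
  -- Assign each result row to the OUT parameters, then emit it.
  FOR indid, loginname, email IN
      SELECT i.indid, l.loginname, l.email
      FROM tblindividual i
      JOIN tblloginname l ON i.indid = l.indlink
      WHERE l.indlink = p_indid
  LOOP
    RETURN NEXT;
  END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Called with no column definition list:
-- SELECT * FROM login_by_indid(43650);
```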

--
  Richard Huxton
  Archonet Ltd


[GENERAL] 2 questions about types

2005-03-16 Thread Jason Tesser
1. I have a few functions that depend on a type.  I don't want to have to drop 
every function just so I can drop the type and recreate everything.
Is there a better way to do this in Postgres?

2.  The reason I had to create my own type was because record didn't work for 
me when I was selecting data across multiple tables.
I thought it should be dynamic but it only seems to work if I select all data 
in one table.  I need 2-3 columns from multiple tables.
Is there a better way to do this in Postgres?

I am using Suse with Postgres 7.4.2  but am considering an upgrade to 8.0

Thank you,
Jason Tesser
