Hi,
I have a number of PostgreSQL servers which I often access through an ssh
tunnel with Pgadmin3. I would like to double-check which one I have landed
on (i.e. whether the tunnel is really configured the way I want). Is there a
way to query the hostname from the catalogs?
Thanks
Peter
On Fri, Apr 30, 2010 at 03:33:13PM +0200, Péter Kovács wrote:
Hi,
I have a number of PostgreSQL servers which I often access through an ssh
tunnel with Pgadmin3. I would like to double-check which one I have landed
on (i.e. whether the tunnel is really configured the way I want). Is there a
way to query
Ross J. Reedstrom reeds...@rice.edu wrote:
If you're talking unix/linux machines, then /etc/hostname _should_
have the current hostname in it.
If not, check for /etc/HOSTNAME -- some distributions do that.
-Kevin
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make
Thank you, Ross!
inet_server_addr() returns the correct IP address in this case. I am not
sure why... The tunnel goes through at least one port-forwarding node, but I
am not sure whether this makes PostgreSQL see the connection as any less local.
Thanks
Peter
On Fri, Apr 30, 2010 at 4:14 PM, Ross J.
--- On Fri, 30/4/10, Ross J. Reedstrom reeds...@rice.edu wrote:
Hi,
I have a number of PostgreSQL servers which I often access through ssh
tunnel with Pgadmin3. I would like to double check which one I have landed
on (if the tunnel is really configured the way I want). Is there a
Yes, nice and simple.
I just did this in C and it works ok!
hostname.c
===
#include "postgres.h"        /* must come first in PostgreSQL extension code */

#include <unistd.h>

#include "fmgr.h"            /* added: declares PG_MODULE_MAGIC */
#include "utils/elog.h"
#include "utils/palloc.h"
#include "storage/bufpage.h"
#define MAX_HOST_SIZE 200
PG_MODULE_MAGIC;
Gavin Kistner phr...@mac.com writes:
On Apr 29, 2010, at 11:42 PM, Tom Lane wrote:
For comparison, I have these settings on my laptop:
pro:~ tgl$ cat /etc/sysctl.conf
kern.sysv.shmmax=33554432
kern.sysv.shmmin=1
kern.sysv.shmmni=32
kern.sysv.shmseg=8
kern.sysv.shmall=8192
and PG starts
Hi folks,
I'm doing regular (hourly) dumps and putting them into git
(and automatically pushing them to several git servers).
The problem is: the dumps always differ (even if nothing
happened in that db).
diff --git a/DB-net79.sql b/DB-net79.sql
index 89b682d..c8be6d6 100644
Hi,
I have a vacuum process that has been running on a table for the past 20 days.
The server is postgres 8.3 on ubuntu linux.
Can I safely kill the vacuum with:
select pg_cancel_backend(26433)
Some stats:
The table sees ~5 million and not more than ~10 million updates a month
$ ps aux |
Jason Culverhouse ja...@mischievous.org writes:
I have a vacuum process that has been running on a table for the past 20 days.
The server is postgres 8.3 on ubuntu linux.
Can I safely kill the vacuum with:
select pg_cancel_backend(26433)
Sure.
20 days is a long time. Is it actually
Jason Culverhouse ja...@mischievous.org wrote:
The server is postgres 8.3 on ubuntu linux.
There have been a number of bugs fixed in vacuuming, including
autovacuum, since the original 8.3 release. I hope you mean
8.3.somethingrecent -- if not, you should get the latest fixes.
Can I
Enrico Weigelt weig...@metux.de writes:
I'm doing regular (hourly) dumps and putting them into git
(and automatically pushing it to several git servers).
The problem is: the dumps always differ (even if nothing
happened in that db).
CREATE TEMPORARY TABLE pgdump_oid (dummy integer)
I wrote:
I have no idea why EDB recommend changing the maxproc settings, but
I doubt that's related to shared memory. I see that they have shmmax
equal to exactly 4096 times shmall, so that's good, but there must be
some other OSX peculiarity that this is tripping over. Maybe it's too
large
I wrote:
... OTOH I'd have expected you to have to reboot multiple
times while experimenting with the shm parameters, so I'm not entirely
convinced I've hit on the right explanation.
Oh ... were you using sysctl -w rather than rebooting to adjust the
shmem parameters? I realized after
Scott Whitney wrote:
On the 10k vs 15k rpm disks, there's a _lot_ to be said about that. I don't want to start a flame war here,
but 15k versus 10k rpm hard drives does NOT equate to a 50% increase in read/write times, to say
the VERY least.
Your characterization is correct were there
A _very_ valid point which I omitted simply because it once again points to
24 spindles equating to a faster array than 12. The problem, as you so easily
summarize, is the fact that once you put any decent RAID controller into this,
you've essentially added a magic black box that does better
I had a very strange occurrence 2 days ago where autovacuum appears to
have stopped running.
I did not see any error messages in pg_log. The problem caused a file
which is usually around 50 MB, but which gets updated extensively, to grow
to 105 GB, which in turn brought the server's performance down
On Apr 30, 2010, at 3:35 PM, Tom Lane wrote:
Does the theory of a pre-existing smaller shmem segment make sense from
your end? In particular, had you previously had another Postgres server
running on that machine, and perhaps killed it ungracefully? If this
theory is correct, the issue was
18 matches