Jay Levitt wrote:
We need to do a few bulk updates as Rails migrations. We're a typical
read-mostly web site, so at the moment, our checkpoint settings and WAL
are all default (3 segments, 5 min, 16MB), and updating a million rows
takes 10 minutes due to all the checkpointing.
We have no
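For a one-off bulk update like the one above, the usual advice is to temporarily widen the checkpoint spacing; a hedged postgresql.conf sketch (values are illustrative only, not a recommendation for any specific machine):

```
# Spread checkpoints out so a million-row UPDATE isn't checkpoint-bound.
checkpoint_segments = 32            # default 3
checkpoint_timeout = 15min          # default 5min
checkpoint_completion_target = 0.9  # smooth checkpoint I/O
```

Remember to revert the settings after the migration if the workload is otherwise read-mostly.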
Hi all,
To do this backup remotely we need to open the 5434 port in the Firewall?
Best Regards,
On Mon, Feb 6, 2012 at 5:28 PM, Andreas Kretschmer
akretsch...@spamfence.net wrote:
Fanbin Meng fanbin.m...@kiltechcontrols.com wrote:
I installed PostgreSQL 9.0 in Windows 7 with one click
On Wed, Feb 15, 2012 at 12:21 PM, Scott Marlowe scott.marl...@gmail.comwrote:
On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji venkat.bal...@verse.in
wrote:
On Wed, Feb 15, 2012 at 1:35 AM, Jay Levitt jay.lev...@gmail.com
wrote:
We need to do a few bulk updates as Rails migrations.
On Wed, Feb 15, 2012 at 06:40, dennis jenkins
dennis.jenkins...@gmail.com wrote:
djenkins@ostara ~/code/capybara $ psql -U $someuser -d postgres -c 'select version();'
version
Andre Lopes lopes80an...@gmail.com wrote:
Hi all,
To do this backup remotely we need to open the 5434 port in the Firewall?
If the database is running on that port and there is a firewall, then
yes.
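For a Linux server using iptables, opening the port might look like the following (5434 comes from the question; 5432 is the stock PostgreSQL port, and the exact firewall tooling is an assumption):

```shell
# Illustrative only: allow inbound TCP connections to PostgreSQL on 5434.
sudo iptables -A INPUT -p tcp --dport 5434 -j ACCEPT
```

The server also needs listen_addresses in postgresql.conf to accept non-local connections, and a matching pg_hba.conf entry for the remote client.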
Andreas
This must be a function or trigger to break one statement into two. You
could of course simply use two separate statements in PHP as long as
they are in the same transaction. If you're going to perform this
action in two steps then putting both in a function or trigger is often
preferable.
In short, I would like to understand if I am achieving the same
asynchronous streaming replication by putting synchronous_commit='local' -
I understand that streaming replication is record-based log-shipping.
Below is what shows up on our primary test server where we are testing
synchronous
On 15 Únor 2012, 10:38, Venkat Balaji wrote:
Data loss would be an issue when there is a server crash or pg_xlog crash
etc. That many pg_xlog files (1000) would contribute to huge data loss
(data changes not synced to the base are not guaranteed). Of course,
this is not related to
On Wednesday, February 15, 2012 10:38:23 AM Venkat Balaji wrote:
On Wed, Feb 15, 2012 at 12:21 PM, Scott Marlowe
scott.marl...@gmail.comwrote:
On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji venkat.bal...@verse.in
all of these 1000 files get filled up in less than 5 mins, there are
Data loss would be an issue when there is a server crash or pg_xlog crash
etc. That many pg_xlog files (1000) would contribute to huge data loss
(data changes not synced to the base are not guaranteed). Of course,
this is not related to the current situation. Normally we
On Wed, Feb 15, 2012 at 4:12 PM, Andres Freund and...@anarazel.de wrote:
On Wednesday, February 15, 2012 10:38:23 AM Venkat Balaji wrote:
On Wed, Feb 15, 2012 at 12:21 PM, Scott Marlowe
scott.marl...@gmail.comwrote:
On Tue, Feb 14, 2012 at 10:57 PM, Venkat Balaji
venkat.bal...@verse.in
Is there anyone interested in this subject?
Il giorno dom, 05/02/2012 alle 23.30 +0100, Giuseppe Sacco ha scritto:
Hi all,
I wrote an application that store a large quantity of files in the
database as large binary objects. There are around 50 tables (all in one
schema) and only one table
What rules of thumb exist for:
* How often a table needs to be vacuumed?
* How often a table needs to be analyzed?
* How to tune Autovacuum?
I have a large DB server, and I'm concerned that it's not being
autovacuumed and autoanalyzed frequently enough. But I have no idea
what proper values
A table has a column obj_type which has very low selectivity (let's
say 5 choices, with the top choice making up 50% of records). Is
there any sense in indexing that column? B-trees won't be that useful,
and the docs discourage other index types/
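One option worth considering (an assumption about the workload, not something stated in the thread) is a partial B-tree index that skips the dominant value, so the 50% case never bloats the index:

```sql
-- Hypothetical table/column names: index only the selective values.
-- Queries for the dominant value would get a seqscan anyway.
CREATE INDEX objects_rare_type_idx ON objects (obj_type)
    WHERE obj_type <> 'common_type';
```

The planner can use such an index for queries like `SELECT ... WHERE obj_type = 'rare_type'`, provided the WHERE clause implies the index predicate.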
--
Sent via pgsql-general mailing list
I have a 4 core, 4 GB server dedicated to running Postgres (only other
thing on it are monitoring, backup, and maintenance programs). It
runs about 5 databases, backing up an app, mainly ORM queries, but
some reporting and more complicated SQL JOINs as well.
I'm currently using the out-of-the
On Wed, Feb 15, 2012 at 09:14:34AM -0500, Robert James wrote:
What rules of thumb exist for:
* How often a table needs to be vacuumed?
* How often a table needs to be analyzed?
* How to tune Autovacuum?
I have a large DB server, and I'm concerned that it's not being
autovacuumed and
On 15 Únor 2012, 15:20, Robert James wrote:
I have a 4 core, 4 GB server dedicated to running Postgres (only other
thing on it are monitoring, backup, and maintenance programs). It
runs about 5 databases, backing up an app, mainly ORM queries, but
some reporting and more complicated SQL JOINs
Magnus Hagander mag...@hagander.net writes:
On Wed, Feb 15, 2012 at 06:40, dennis jenkins
dennis.jenkins...@gmail.com wrote:
I recently updated my Gentoo Linux development system from postgresql
9.0.4 to 9.0.6-r1 (9.0.6 plus some Gentoo specific patches). One of
my 'C' language functions
On Wednesday, February 15, 2012 2:15:34 am Venkat Balaji wrote:
In short, I would like to understand if I am achieving the same
asynchronous streaming replication by putting synchronous_commit='local' -
I understand that streaming replication is record-based log-shipping.
Below is what
On 2/15/2012 8:16 AM, Robert James wrote:
A table has a column obj_type which has very low selectivity (let's
say 5 choices, with the top choice making up 50% of records). Is
there any sense in indexing that column? B-trees won't be that useful,
and the docs discourage other index types/
Thanks. What about auto-analyze? When will they be analyzed by default?
And what actions generally require new analyze?
On 2/15/12, Bruce Momjian br...@momjian.us wrote:
On Wed, Feb 15, 2012 at 09:14:34AM -0500, Robert James wrote:
What rules of thumb exist for:
* How often a table needs to
Hi,
In PostgreSQL 9.0.4 I connected to a database and am trying to run
queries, but I am facing a memory issue; I get the error *glibc* detected
*realloc*: invalid next size.
Kindly provide your valuable feedback.
Regards
Mehdi
Hello all - Is there a way to dump just the functions in a schema into a
txt file / sql file? Thank you.
Chris Angelico wrote:
On Wed, Feb 15, 2012 at 5:26 PM, Bartosz Dmytrakbdmyt...@eranet.pl wrote:
e.g. You can use BEGIN... EXCEPTION END, good example of
such approach is
there:
http://www.postgresql.org/docs/9.1/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
I wonder
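For context, the pattern at that link is the documented loop-with-exception UPSERT; a condensed version (table `db` with key column `a` and data column `b`, as in the docs):

```sql
-- Classic pre-9.5 UPSERT: try UPDATE, fall back to INSERT, retry on races.
CREATE OR REPLACE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update the key.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- Not there, so try to insert the key. If someone else inserts
        -- the same key concurrently, we get a unique-key failure and loop
        -- back to try the UPDATE again.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing; loop and retry the UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```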
On 2/15/12, Tomas Vondra t...@fuzzy.cz wrote:
On 15 Únor 2012, 15:20, Robert James wrote:
What parameters should I change to use the server best? What are good
starting points for them? What type of performance increase should I
see?
...
But you haven't
mentioned which version of PostgreSQL
You have two options.
- Use contrib module pg_extractor
https://github.com/omniti-labs/pg_extractor
- Use pg_proc catalog to get function definition
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 6:59 PM, Rajan, Pavithra
Hi,
While executing the following statement to reassign ownership of objects
in the database, I got an error:
unexpected classid 2328
Statement:
REASSIGN OWNED BY olduser TO postgres;
Regards,
Pawel
One more thing you can also get it from pg_get_functiondef() system
function.
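A hedged sketch of that approach, joining pg_proc to pg_namespace to cover one schema (the schema name is a placeholder):

```sql
-- Emit a CREATE FUNCTION statement for every function in a schema.
SELECT pg_get_functiondef(p.oid)
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public';   -- substitute your schema name
```

Run it from psql with `\o functions.sql` beforehand to capture the output to a file.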
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Wed, Feb 15, 2012 at 9:32 PM, Raghavendra
raghavendra@enterprisedb.com wrote:
You have two options.
- Use contrib
Hi,
I need to drop some B-tree indexes because they are not used anymore.
The indexes vary in size between 700 MB and 7 GB. I tried a plain DROP
INDEX... but the query ran for so long and blocked the table that I had
to interrupt it. Is there any way to drop large indexes in a non-blocking
or
Could someone point me to documentation regarding DDL triggers in Postgresql?
Bob
Thank you. The PGExtractor is interesting! I was trying to get all the
function declarations and definitions (about 400+) by this method:
pg_dump -Fc -v -s -n schemaname -f temp.dump yourdatabase
pg_restore -l temp.dump | grep FUNCTION > functionlist
pg_restore -L functionlist temp.dump
On 02/15/2012 08:25 AM, Bob Pawley wrote:
Could someone point me to documentation regarding DDL triggers in
Postgresql?
Bob
No, because PostgreSQL does not have them (basically triggers on system
tables). There is a sparse wiki page to discuss the issue at
On 02/15/2012 08:25 AM, Bob Pawley wrote:
Could someone point me to documentation regarding DDL triggers in
Postgresql?
You are going to need to be more specific. Are you talking about triggers
that are activated by a DDL statement, or a trigger that creates a DDL
statement? Also, what pl
Hello.
I've got a database with a very large table (currently holding 23.5
billion rows, the output of various data loggers over the course of my
PhD so far). The table itself has a trivial structure (see below) and is
partitioned by data time/date and has quite acceptable INSERT/SELECT
Hi, all
Is there someone using DBT-3 workload? I can make each of the 22
queries work by input independently. When I run run_workload.sh, the
workload is consuming all the CPU, but eventually it doesn't give any
output files. As I see from the online tutorial, I need the output
files to generate
On Wed, Feb 15, 2012 at 18:46, Asher Hoskins as...@piceur.com wrote:
My problem is that the autovacuum system isn't keeping up with INSERTs and I
keep running out of transaction IDs.
This is usually not a problem with vacuum, but a problem with
consuming too many transaction IDs. I suspect
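A quick way to check how close each database actually is to transaction-ID wraparound (a generic diagnostic, not something quoted from the thread):

```sql
-- Databases closest to the 2^31 transaction-ID wraparound come first.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```

If xid_age keeps climbing toward autovacuum_freeze_max_age despite regular inserts, the fix is usually batching many inserts per transaction rather than vacuuming harder.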
On Wed, Feb 15, 2012 at 19:25, Marti Raudsepp ma...@juffo.org wrote:
VACUUM FULL is extremely inefficient in PostgreSQL 8.4 and older.
Oh, a word of warning, PostgreSQL 9.0+ has a faster VACUUM FULL
implementation, but it now requires twice the disk space of your table
size, during the vacuum
I have a table that is generated through ogr2ogr.
To get ogr2ogr working the way I want, I need to use the -overwrite
function. If I use the append function information is lost. Something to do
with the way the switches work.
Overwrite drops the existing table and also the attached trigger.
On 02/15/2012 09:42 AM, Bob Pawley wrote:
I have a table that is generated through ogr2ogr.
To get ogr2ogr working the way I want, I need to use the -overwrite
function. If I use the append function information is lost. Something to
do with the way the switches work.
Overwrite drops the
pcasper pcas...@wp.pl writes:
While executing the following statement to reassign ownership of objects
in the database, I got an error:
unexpected classid 2328
Hm, that would be a foreign data wrapper ... looks like REASSIGN OWNED
is missing support for that type of object. For the moment, you'll need
Hi!
I'm trying to query the database of a fictional bookstore to find out
which publisher has sold the most to the bookstore.
This is the database structure
books((book_id), title, author_id, subject_id)
publishers((publisher_id), name, address)
authors((author_id), last_name, first_name)
On 02/14/12 10:29 PM, khizer wrote:
In postgresql 9.0.4 i connected to a database and trying to
make queries but
i am facing memory issue, getting err as *glibc* detected
*realloc* invalid next size
so kindly requesting u to provide your valuable feed backs
insufficient
On 02/15/12 8:46 AM, Asher Hoskins wrote:
I've got a database with a very large table (currently holding 23.5
billion rows,
a table that large should probably be partitioned, likely by time.
maybe a partition for each month. as each partition is filled, it can
be VACUUM FREEZE'd since it
On Wed, Feb 15, 2012 at 12:38 PM, John R Pierce pie...@hogranch.com wrote:
so, your ~ monthly batch run could be something like...
create new partition table
copy/insert your 1-2 billion rows
vacuum analyze (NOT full) new table
vacuum freeze new table
update master partition
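That monthly batch might be sketched as follows; the table names, the timestamp column, and the pre-9.5 inheritance-style partitioning are all assumptions for illustration:

```sql
-- 1. Create the new monthly partition (inheritance partitioning).
CREATE TABLE readings_2012_03 (
    CHECK (ts >= '2012-03-01' AND ts < '2012-04-01')
) INHERITS (readings);

-- 2. Bulk-load the month's data.
COPY readings_2012_03 FROM '/data/2012_03.csv' WITH (FORMAT csv);

-- 3. Plain vacuum/analyze (NOT FULL), then freeze, so autovacuum never
--    has to revisit this static partition for wraparound protection.
VACUUM ANALYZE readings_2012_03;
VACUUM FREEZE readings_2012_03;
```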
On Tue, Feb 14, 2012 at 11:29 PM, khizer khi...@srishtisoft.com wrote:
Hi,
In postgresql 9.0.4 i connected to a database and trying to make
queries but
i am facing memory issue, getting err as glibc detected realloc
invalid next size
so kindly requesting u to provide
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Andreas Berglund
Sent: Wednesday, February 15, 2012 2:31 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] query problems
Hi!
I'm trying to query the database of a
Thanks for answering:
I am a member of the PostgreSQL development community here in Cuba. My proposal
is to develop an extension to PostgreSQL that allows query execution using
multiple threads. The central idea is to get the execution plan ready to run,
but send it to run with the same
On Wed, Feb 15, 2012 at 12:42 PM, Bob Pawley rjpaw...@shaw.ca wrote:
I have a table that is generated through ogr2ogr.
To get ogr2ogr working the way I want, I need to use the -overwrite
function. If I use the append function information is lost. Something to do
with the way the switches
Hi Regina
2012/2/14 Paragon Corporation l...@pcorp.us wrote:
Here it is in the docs now:
http://postgis.refractions.net/documentation/manual-svn/using_raster.xml.html#RasterOutput_PSQL
Citation from there: "Sadly PSQL doesn't have easy to use built-in
functionality for outputting binaries..."
Hi,
On 16 February 2012 01:14, Robert James srobertja...@gmail.com wrote:
What rules of thumb exist for:
* How often a table needs to be vacuumed?
* How often a table needs to be analyzed?
* How to tune Autovacuum?
I prefer to use the autovacuum daemon and set thresholds on a per-table
basis, i.e.
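Per-table thresholds are set via storage parameters; an illustrative example with made-up table name and values:

```sql
-- Vacuum/analyze this busy table after ~1% churn instead of the
-- global default of 20% (vacuum) / 10% (analyze).
ALTER TABLE orders SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.005
);
```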
As I understand it, the order of evaluation of search arguments is up to the
optimizer. I've tested the following query, which is supposed to take advantage
of advisory locks to skip over rows that are locked by other consumers running
the exact same query, and it seems to work fine. It
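A sketch of that kind of queue-consumer query (table and column names are invented); the worry stated above is real, because the planner may evaluate the lock function for rows it later discards, acquiring locks you never see:

```sql
-- Hypothetical work queue: each consumer grabs rows nobody else
-- currently holds an advisory lock on.
SELECT id, payload
FROM jobs
WHERE status = 'pending'
  AND pg_try_advisory_lock(id)  -- evaluation order is up to the planner
LIMIT 10;
```

Remember that session-level advisory locks persist until released with pg_advisory_unlock() or the session ends.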
Any help in getting function argument names is appreciated. Thank you
To dump the functions and their definitions , I first created a
pga_functions view as mentioned in one of the archives.
First Step: Create a pga_functions view
create or replace view pga_functions as
select
I installed PostgreSQL 8.3 on Windows 7 as a server. From a workstation
running XP I am trying to connect, and the following message comes up:
FATAL: no pg_hba.conf entry for host 192.168.1.51, user Vilson,
database postgres, SSL off.
On the Windows 7 server this is configured:
postgresql.conf:
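That FATAL error is normally fixed by a pg_hba.conf entry on the server that matches the client's address; an illustrative line built from the names in the error message:

```
# TYPE  DATABASE  USER    ADDRESS          METHOD
host    postgres  Vilson  192.168.1.51/32  md5
```

postgresql.conf must also have listen_addresses set to accept remote connections (e.g. '*'), followed by a server reload.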
Hi
I'm looking for the postgresql90-server-9.0.6 RPM package for RHEL5.
I can't find it on the sites below:
http://yum.postgresql.org/9.0/redhat/rhel-5-x86_64/repoview/
http://yum.postgresql.org/9.0/redhat/rhel-5-i386/repoview/
I can find it for RHEL6.
Hi,
On Thu, 2012-02-16 at 11:21 +0900, Tomonari Katsumata wrote:
I'm looking for the postgresql90-server-9.0.6 RPM package for RHEL5.
I can't find it on the sites below:
http://yum.postgresql.org/9.0/redhat/rhel-5-x86_64/repoview/
http://yum.postgresql.org/9.0/redhat/rhel-5-i386/repoview/
It is
On 02/15/12 6:21 PM, Tomonari Katsumata wrote:
Why is the package for RHEL5 not there?
the repoview on that site seems somewhat broken.
just install the repository yum.conf.d files via the proper RPM, and you
can install postgresql via yum install...
rpm -Uvh
From the manual:
| Because MD5-encrypted passwords use the role name as cryptographic
| salt, renaming a role clears its password if the password is
| MD5-encrypted.
In backend/commands/user.c:
if (!pg_md5_encrypt(password, stmt->role, strlen(stmt->role),
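The salting behavior the manual describes can be illustrated outside the server; here is a sketch in Python of how the stored MD5 entry is built (this mirrors, rather than reuses, pg_md5_encrypt):

```python
import hashlib

def pg_md5_password(password: str, role: str) -> str:
    """Build a PostgreSQL-style MD5 password entry:
    'md5' + md5(password || role_name).
    The role name is the cryptographic salt, which is why renaming a
    role invalidates any previously stored MD5 password hash."""
    return "md5" + hashlib.md5((password + role).encode()).hexdigest()

# Same password, different role names -> different stored hashes,
# so the old hash is useless after a rename.
print(pg_md5_password("s3cret", "alice"))
print(pg_md5_password("s3cret", "alice") == pg_md5_password("s3cret", "bob"))  # False
```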
On Wed, 2012-02-15 at 18:28 -0800, John R Pierce wrote:
the repoview on that site seems somewhat broken.
It is not *broken*. It just lists the final 20 packages. As written
there, "Please go to the navigation menu on the top right for the full
set of packages."
Still, this is not the first complaint
Hi, Devrim
thank you for response.
(2012/02/16 11:28), Devrim GUNDUZ wrote:
Hi,
On Thu, 2012-02-16 at 11:21 +0900, Tomonari Katsumata wrote:
I'm looking for the postgresql90-server-9.0.6 RPM package for RHEL5.
I can't find it on the sites below.
On Thu, 2012-02-16 at 11:44 +0900, Tomonari Katsumata wrote:
but I need the server package to be listed in
http://yum.postgresql.org/9.0/redhat/rhel-5-x86_64/repoview/.
Why? We cannot guarantee any specific package to be listed on the
repoview page, since as I just wrote, that list contains
Hi,
Is there a ready postgres 9.1 Package for the i.MX51X processor (ARM Cortex
architecture) available or do I need to compile the Postgres source myself ?
I need it for a board having the i.MX51 processor and Linux (one of the latest
versions of kernel - yet to decide the exact version).
If I
Hi,
OK, I understand it.
I've thought all PostgreSQL packages are included
in repoview page...
Sorry.
regards,
(2012/02/16 11:48), Devrim GÜNDÜZ wrote:
On Thu, 2012-02-16 at 11:44 +0900, Tomonari Katsumata wrote:
but, I need to list the server-package in
On Wed, Feb 15, 2012 at 9:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Wed, Feb 15, 2012 at 06:40, dennis jenkins
dennis.jenkins...@gmail.com wrote:
I recently updated my Gentoo Linux development system from postgresql
9.0.4 to 9.0.6-r1 (9.0.6 plus
On Wednesday, February 15, 2012 6:34:21 pm Stefan Weiss wrote:
From the manual:
| Because MD5-encrypted passwords use the role name as cryptographic
| salt, renaming a role clears its password if the password is
| MD5-encrypted.
In backend/commands/user.c
if
On 02/15/12 7:00 PM, Jayashankar K B wrote:
Is there a ready postgres 9.1 Package for the i.MX51X processor (ARM
Cortex architecture) available or do I need to compile the Postgres
source myself ?
I need it for a board having the i.MX51 processor and Linux (one of
the latest versions of
Kiriakos Georgiou kg.postgre...@olympiakos.com writes:
As I understand it, the order of evaluation of search arguments is up to
the optimizer. I've tested the following query, which is supposed to take
advantage of advisory locks to skip over rows that are locked by other
consumers
Andrian,
Thanks a lot !
So in this case you are not waiting for confirmation of the commit being
flushed to disk on the standby. In that case you are bypassing the primary
reason for sync replication. The plus is transactions on the master will
complete faster and do so in the absence of
Maybe to show how FOUND works and how to ignore errors - that is my
assumption only.
Regards,
Bartek
2012/2/15 Berend Tober bto...@broadstripe.net
Chris Angelico wrote:
On Wed, Feb 15, 2012 at 5:26 PM, Bartosz Dmytrakbdmyt...@eranet.pl
wrote:
e.g. You can use BEGIN... EXCEPTION
On Wed, Feb 15, 2012 at 10:37:08PM +0100, Stefan Keller wrote:
Hi Regina
2012/2/14 Paragon Corporation l...@pcorp.us wrote:
Here it is in the docs now:
http://postgis.refractions.net/documentation/manual-svn/using_raster.xml.html#RasterOutput_PSQL
Citation from there: Sadly PSQL
I tested it by visual inspection of advisory locks in pg_locks; once with a
small test table, and once on a larger 'operations' table in our test
environment. It seemed to work, but I hear you, I don't like to depend on the
mood of the optimizer. The drawback of the subquery version is that