On Wed, Apr 14, 2010 at 3:35 PM, raghavendra t raagavendra@gmail.com wrote:
Hi,
Log file
=
LOG: database system was interrupted; last known up at 2010-04-12 10:53:12
IST
LOG: database system was not properly shut down; automatic recovery in
progress
LOG: record with zero
Hi Shoaib,
Tried with pg_resetxlog
[postg...@dbarhel564 bin]$ pg_resetxlog /usr/local/pgsql/mypg/
The database server was not shut down cleanly.
Resetting the transaction log might cause data to be lost.
If you want to proceed anyway, use -f to force reset.
[postg...@dbarhel564 bin]$
On Wed, Apr 14, 2010 at 5:00 PM, raghavendra t raagavendra@gmail.com wrote:
Hi Shoaib,
Tried with pg_resetxlog
[postg...@dbarhel564 bin]$ pg_resetxlog /usr/local/pgsql/mypg/
The database server was not shut down cleanly.
Resetting the transaction log might cause data to be lost.
If
Hi Shoaib,
I have the file with postgres permissions on it, but, surprisingly, it's 0 KB.
[postg...@dbarhel564 ~]# cd /usr/local/pgsql/mypg/pg_clog/
[postg...@dbarhel564 pg_clog]# ll -lh
total 0
-rw-rw-r-- 1 postgres postgres 0 Apr 12 12:54
[postg...@dbarhel564 pg_clog]#
any step to change
I have a brief question - I can provide more information if it is not clear.
I would like to perform pairwise intersect operations between several
pairs of sets (where a set is a list or vector of labels), I have many
such pairs of sets and the counts of their elements may vary greatly.
Is there
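A minimal Python sketch of the pairwise-intersection idea above (the label values and pair layout here are hypothetical, since the original data isn't shown):

```python
# Hypothetical pairs of label sets; in practice these would be loaded
# from the poster's data, and their sizes may vary greatly.
pairs = [
    (["a", "b", "c", "d"], ["b", "d", "e"]),
    (["x"], ["x", "y", "z"]),
]

# Converting each list to a set makes every pairwise intersection run in
# roughly O(min(len(u), len(v))) time on average, which matters when the
# element counts of the two sides differ greatly.
overlaps = [sorted(set(u) & set(v)) for u, v in pairs]
print(overlaps)  # [['b', 'd'], ['x']]
```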
Thank you all for the nice suggestions!
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi all.
We had a crisis this week that was resolved by tuning pg_autovacuum for a
particular table. The table is supposed to contain a small number of items at
any given point in time (typically around 10,000-30,000). The items are
inserted when we send out a message, and are selected, then
If you think that smallints are more bother than they are worth, perhaps you
should remove support for smallints completely. Then people would know where
they stood. (Or you could make smallint a synonym for int.)
The other half of my problem was having to cast the literal 'R' to char(1)
On Wed, Apr 14, 2010 at 6:30 PM, raghavendra t raagavendra@gmail.com wrote:
Hi Shoaib,
I have the file with postgres permissions on it, but, surprisingly, it's 0 KB.
[postg...@dbarhel564 ~]# cd /usr/local/pgsql/mypg/pg_clog/
[postg...@dbarhel564 pg_clog]# ll -lh
total 0
-rw-rw-r-- 1
Dear All,
How do I insert encoded data, that is, (/\...@#$%^*)(_+) or something like
that? I have a CSV file which contains encoded values. When I try to insert
those, I am getting an error. I am not able to insert the encoded data.
Please, anyone, guide me.
I am waiting for your great response.
Thanks and
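As a sanity check on the CSV side of the question, a small Python sketch (the two-column layout is an assumption; the original file isn't shown) demonstrates that properly quoted CSV preserves punctuation-heavy values exactly, so the error more likely comes from quoting or encoding problems in the file itself:

```python
import csv
import io

# Hypothetical two-column row whose second field is full of special
# characters, including the delimiter itself.
field = r'/\@#$%^*,)(_+'
buf = io.StringIO()
csv.writer(buf).writerow(["1", field])

# The writer quotes the field because it contains a comma, and the
# reader recovers the original value exactly.
buf.seek(0)
rows = list(csv.reader(buf))
print(rows[0][1] == field)  # True
```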
controlsmartdb=# \d repcopy;
        Table "public.repcopy"
  Column   |         Type          | Modifiers
-----------+-----------------------+-----------
 report_id | integer               | not null
 dm_ip     | character varying(64) |
In response to Satish Burnwal (sburnwal) sburn...@cisco.com:
controlsmartdb=# \d repcopy;
        Table "public.repcopy"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 report_id | integer
In response to Herouth Maoz hero...@unicell.co.il:
Hi all.
We had a crisis this week that was resolved by tuning pg_autovacuum for a
particular table. The table is supposed to contain a small number of items at
any given point in time (typically around 10,000-30,000). The items are
Hi,
I have a trigger that runs on my development machine but not on my
production machine. The code is the following:
[code]
CREATE OR REPLACE FUNCTION aprtr_geraemail_agcompagamento ()
RETURNS trigger AS
$BODY$
DECLARE
vSUBJECT varchar(500);
vEMAIL_MSG_BRUTO text;
In response to Andre Lopes :
Hi,
I have a trigger that runs in my Development machine but not in my Production
machine. the code is the following:
SQL Error:
ERROR: function replace(text, unknown, integer) does not exist
LINE 1: select replace(replace(replace(replace(replace(replace( $1
OK, I have now added an index:
Create index repcopy_index on repcopy (dm_user, dm_ip)
And even then the query is taking a long time. See below. As I mentioned
before, for dm_user=u9 I have about 10,000 records and for dm_user=u9 I
have about 25,000 records. As you see in the output below, for u9, I get
results
Thanks a lot, it works!
I'm using Postgres Plus Advanced Server 8.3R2 in development. In production
I use PostgreSQL 8.3.9.
Best Regards,
On Wed, Apr 14, 2010 at 2:19 PM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to Andre Lopes :
Hi,
I have a trigger that
In response to Satish Burnwal (sburnwal) sburn...@cisco.com:
snip
Man, it's hard to read your emails. I've reformatted, I suggest you
improve the formatting on future emails, as I was about to say to
hell with this question because it was just too difficult to read,
and I expect there are
On 4/14/2010 9:20 AM, Satish Burnwal (sburnwal) wrote:
Index Scan using repcopy_index on repcopy a (cost=0.00..87824607.17
*rows=28* width=142) (actual time=11773.105..689111.440 *rows=1* loops=1)
Index Cond: ((dm_user)::text = 'u3'::text)
Filter: ((report_status = 0) AND
Hello
2010/4/14 Gaurav K Srivastav gaurav...@gmail.com:
Hi Pavel ,
First of all, I am sorry for posting this on bugs; can you please move it
to the pgsql-general mailing list?
To get the list of views, where do I have to change the query? Can you
please let me know in which table/view the object
Hi,
I've got a database (about 300 GB) and it's still growing.
I am inserting new data (about 2 GB/day) into the database (there is
only one table there) and I'm also deleting about 2 GB/day (data older
than a month).
The documentation says one should run VACUUM if there are many
changes in the
On Wed, Apr 14, 2010 at 6:51 AM, venkat ven.tammin...@gmail.com wrote:
Dear All,
How do I insert encoded data, that is, (/\...@#$%^*)(_+) or something like
that? I have a CSV file which contains encoded values. When I try to insert
those, I am getting an error. I am not able to insert the encoded
On 4/14/2010 9:42 AM, Bill Moran wrote:
Man, it's hard to read your emails. I've reformatted, I suggest you
improve the formatting on future emails, as I was about to say to
hell with this question because it was just too difficult to read,
and I expect there are others on the list who did
Dear all -
Can you please help me with this? Is there a way to restore
multiple tables (more than one table) using a single command from a whole
database dump that was created using pg_dump?
regards
Hi Shoaib,
Thank you very much, now it's working after creating the file with 256 KB.
I also thank everyone who has supported me in this thread.
Regards
Raghavendra
On Wed, Apr 14, 2010 at 3:44 PM, Shoaib Mir shoaib...@gmail.com wrote:
On Wed, Apr 14, 2010 at 6:30 PM, raghavendra t
On Wed, Apr 14, 2010 at 10:56 AM, akp geek akpg...@gmail.com wrote:
Dear all -
Can you please help me with this? Is there a way to restore
multiples ( more than one table ) using a single command from a whole
database dump that was created using pg_dump
Depends on exactly how you
Herouth Maoz wrote:
We found out that the table's response depends on the rate of ANALYZE being
performed. We have tuned the values in pg_autovacuum so that we have around
one analyze per minute.
What is bothering me is that sometimes the auto vacuum daemon decides to
perform a vacuum
Hi
Hope this will help you out. It has both pg_dump and pg_restore, with the -t
option:
http://www.postgresql.org/docs/8.4/static/app-pgrestore.html
Regards
Raghavendra
On Wed, Apr 14, 2010 at 8:33 PM, Scott Mead scott.li...@enterprisedb.com wrote:
On Wed, Apr 14, 2010 at 10:56 AM, akp geek
On Wed, Apr 14, 2010 at 11:03 AM, Scott Mead
scott.li...@enterprisedb.com wrote:
On Wed, Apr 14, 2010 at 10:56 AM, akp geek akpg...@gmail.com wrote:
Dear all -
Can you please help me with this? Is there a way to restore
multiples ( more than one table ) using a single command from a
Thanks a lot.
On Wed, Apr 14, 2010 at 11:15 AM, Scott Mead
scott.li...@enterprisedb.com wrote:
On Wed, Apr 14, 2010 at 11:03 AM, Scott Mead scott.li...@enterprisedb.com
wrote:
On Wed, Apr 14, 2010 at 10:56 AM, akp geek akpg...@gmail.com wrote:
Dear all -
Can you please help me
I often have to create test copies of a production database. The
database has several large tables that contain historical data that is
not needed for our system to run, and just wastes time doing backups and
restores. I created two batch files, one for backing up and the other
for restoring.
Alvaro Herrera alvhe...@commandprompt.com writes:
Herouth Maoz wrote:
We found out that the table's response depends on the rate of ANALYZE being
performed. We have tuned the values in pg_autovacuum so that we have around
one analyze per minute.
What is bothering me is that sometimes the
Thanks for the guidance. If you could share the file, I'd appreciate it.
Regards
On Wed, Apr 14, 2010 at 11:27 AM, Rob Richardson rob.richard...@rad-con.com
wrote:
I often have to create test copies of a production database. The
database has several large tables that contain historical data that
hi everyone,
I'm looking for info about autoscaling a cluster. I mean...with Amazon you can
automatically generate virtual machines as far as you need...if you configure
it so that when a machine gets 90% busy a new one will be created. The thing
is that I'd like to do something like that for my
On Wed, 14 Apr 2010 10:56:36 -0400
akp geek akpg...@gmail.com wrote:
Dear all -
Can you please help me with this? Is there a way to restore
multiples ( more than one table ) using a single command from a
whole database dump that was created using pg_dump
Something along the line
On Wed, Apr 14, 2010 at 06:24:00PM +0200, Jesus arteche wrote:
hi everyone,
I'm looking for info about autoscale a cluster.
Reassess this goal in the cold light of reason.
First, find out what trade-offs people make in order to get this
effect. In the unlikely event that, after finding out
First, I'd like to thank Bill and Alvaro as well as you for your replies.
Quoting Tom Lane:
Hmm. Given the churn rate on the table, I'm having a very hard time
believing that you don't need to vacuum it pretty dang often. Maybe the
direction you need to be moving is to persuade autovac to
On Wed, 2010-04-14 at 13:41 +0800, Craig Ringer wrote:
John R Pierce wrote:
is pl/java kind of dead? I don't see much activity since years ago.
I've been a bit worried about that myself. With OpenJDK and a GPL java,
it makes a lot of sense to make Java a first-class PL in PostgreSQL.
In response to Herouth Maoz hero...@unicell.co.il:
First, I'd like to thank Bill and Alvaro as well as you for your replies.
Quoting Tom Lane:
Hmm. Given the churn rate on the table, I'm having a very hard time
believing that you don't need to vacuum it pretty dang often. Maybe the
Jan Krcmar wrote:
hi
i've got the database (about 300G) and it's still growing.
i am inserting new data (about 2G/day) into the database (there is
only one table there) and i'm also deleting about 2G/day (data older
than month).
the documentation says, one should run VACUUM if there are many
In response to Andre Lopes :
Thanks a lot, it works!
I'm using Postgres Plus Advanced Server 8.3R2 in development. In production
I use PostgreSQL 8.3.9.
Yeah, AFAIK Postgres Plus Advanced Server is the regular PG version
plus 1. So you have 8.2 as development and 8.3 as
2010/4/14 John R Pierce pie...@hogranch.com:
Jan Krcmar wrote:
hi
i've got the database (about 300G) and it's still growing.
i am inserting new data (about 2G/day) into the database (there is
only one table there) and i'm also deleting about 2G/day (data older
than month).
the
On Wed, Apr 14, 2010 at 1:43 PM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to Andre Lopes :
Thanks a lot, it works!
I'm using Postgres Plus Advanced Server 8.3R2 in development. In
production I
use PostgreSQL 8.3.9.
Yeah, AFAIK Postgres Plus Advanced
Herouth Maoz wrote:
First, I'd like to thank Bill and Alvaro as well as you for your replies.
Quoting Tom Lane:
Hmm. Given the churn rate on the table, I'm having a very hard time
believing that you don't need to vacuum it pretty dang often. Maybe the
direction you need to be moving is to
On Wed, 2010-04-14 at 14:20 -0400, Scott Mead wrote:
On Wed, Apr 14, 2010 at 1:43 PM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to Andre Lopes :
Thanks a lot, it works!
I'm using Postgres Plus Advanced Server 8.3R2 in
Quoting Bill Moran:
In response to Herouth Maoz hero...@unicell.co.il:
Did I understand the original problem correctly? I thought you were saying
that _lack_ of analyzing was causing performance issues, and that running
vacuum analyze was taking too long and causing the interval between
In response to Herouth Maoz hero...@unicell.co.il:
If the problem is that overall performance slows too much when vacuum is
running, then you'll probably have to get more/faster hardware. Vacuum
has to run occasionally or your table will bloat. Bloated tables perform
lousy and waste a
Jan Krcmar wrote:
You might consider partitioning this table by date, either by day or by
week, and instead of deleting old rows, drop entire old partitions
this is not really a good workaround...
It is in fact the only good workaround for your problem, which you'll
eventually come to
Joshua D. Drake wrote:
Mostly, I think you will find that the back end developers aren't fond
of Java and thus, it doesn't get much love.
There is a reason that plPerl is king in this community (and I don't
even like Perl).
yeah, understood. I'm getting the request 2nd hand, from someone
On Thu, Apr 15, 2010 at 6:18 AM, John R Pierce pie...@hogranch.com wrote:
Joshua D. Drake wrote:
Mostly, I think you will find that the back end developers aren't fond
of Java and thus, it doesn't get much love.
There is a reason that plPerl is king in this community (and I don't
even like
Basically, has anyone done any work with storing gridded spatial data? I see
lots of info on geospatial data, but it's usually cities, stations, etc.,
not a regular grid that doesn't change...
Well, you could play around with storing the information in arrays,
storing a record for each point
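The array suggestion can be sketched as follows (the grid size and resolution here are hypothetical, since the poster's grid isn't described): when the grid is regular and never changes, the cell position itself encodes the coordinates, so only the values need to be stored.

```python
# Hypothetical 1-degree global grid stored row-major in a flat list.
N_ROWS, N_COLS = 180, 360

def cell_index(lat_deg, lon_deg):
    """Map integer latitude [-90, 89] and longitude [-180, 179] to a flat index."""
    row = lat_deg + 90
    col = lon_deg + 180
    return row * N_COLS + col

values = [0.0] * (N_ROWS * N_COLS)
values[cell_index(0, 0)] = 17.5   # store a value for the cell at (0, 0)
print(values[cell_index(0, 0)])   # 17.5
```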
Jaime Casanova wrote:
On Wed, Apr 7, 2010 at 10:30 PM, Warren Bell warrenbe...@gmail.com wrote:
Is there a way to create a unique constraint based on the content of a
field? For instance, say you have an integer field where you only want one
record with the number 1 in that field but there
Alvaro Herrera wrote:
Steven Harms wrote:
I don't have stats on how big they were getting, but they are running
this every night, which I suspect causes issues (and I suspect the
reason their logs were getting big is because they programmed a bunch
of locked transactions):
find
On Wednesday 14 April 2010, Jan Krcmar honza...@gmail.com wrote:
You might consider partitioning this table by date, either by day or by
week, and instead of deleting old rows, drop entire old partitions
this is not really a good workaround...
Actually it's a very good workaround that a lot
Hi
You might consider partitioning this table by date, either by day or by
week, and instead of deleting old rows, drop entire old partitions
this is not really a good workaround...
As a first choice, this is a very good workaround for your present
situation.
As a second choice,
Damian Carey wrote:
On Thu, Apr 15, 2010 at 6:18 AM, John R Pierce pie...@hogranch.com wrote:
Joshua D. Drake wrote:
Mostly, I think you will find that the back end developers aren't fond
of Java and thus, it doesn't get much love.
There is a reason that plPerl is king in this
I'm running PostgreSQL 8.4.3 on OS X Snow Leopard via MacPorts and I'm getting
strange inconsistent errors such as:
dbuser-# select * from log_form;
ERROR:  syntax error at or near "select"
LINE 2: select * from log_form;
        ^
Then later the same query will run fine, as it should.
gvim
--
Merlin Moncure wrote:
On Wed, Apr 14, 2010 at 6:51 AM, venkat ven.tammin...@gmail.com wrote:
Dear All,
How do I insert encoded data, that is, (/\...@#$%^*)(_+) or something like
that? I have a CSV file which contains encoded values. When I try to insert
those, I am getting an error. I am not
On Wed, Apr 14, 2010 at 11:04 PM, gvim gvi...@googlemail.com wrote:
I'm running PostgreSQL 8.4.3 on OS X Snow Leopard via MacPorts and I'm
getting strange inconsistent errors such as:
dbuser-# select * from log_form;
ERROR:  syntax error at or near "select"
LINE 2: select * from log_form;
Joshua D. Drake wrote:
On Wed, 2010-04-14 at 14:20 -0400, Scott Mead wrote:
On Wed, Apr 14, 2010 at 1:43 PM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to Andre Lopes :
Thanks a lot, it works!
I'm using Postgres
Bruce Momjian br...@momjian.us writes:
Merlin Moncure wrote:
aside: anyone know if postgres properly handles csv according to rfc4180?
Wow, I had no idea there was an RFC for CSV.
Me either. I'd bet the percentage of CSV-using programs that actually
conform to the RFC is very small anyway;
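For the curious, the RFC 4180 convention is easy to see in miniature: fields containing the delimiter or a double quote get quoted, embedded quotes are doubled, and records end in CRLF. Python's csv module follows essentially the same convention by default, as this small sketch shows:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\r\n")  # RFC 4180 specifies CRLF
writer.writerow(["plain", "has,comma", 'has "quote"'])

# Only the fields that need it are quoted, and the embedded quote is doubled.
print(repr(buf.getvalue()))  # 'plain,"has,comma","has ""quote"""\r\n'
```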
Jesus arteche wrote:
hi everyone,
I'm looking for info about autoscale a cluster. I mean...with amazon
you can generate automatically virtual machine as far as you need...if
you configure that when the machine get 90% busy a new one will be
created. The thing is that i'd like to do something
Hello All,
I wrote a C function for PostgreSQL and compiled it as a shared file. Next
I added my function via pgAdmin and everything was OK, but now when I want
to call it, pgAdmin closes with no report.
Regards
On Wednesday 14 April 2010 16:01:39 Jan Krcmar wrote:
the documentation says, one should run VACUUM if there are many
changes in the database, but the vacuumdb never finishes sooner than
the new data should be imported.
is there any technique that can solve this problem?
- vacuum can run