Jayakumar_Mukundaraju wrote:
I am new to the PostgreSQL database. My setup is: the backend is a PostgreSQL
database, the frontend is Java (JDBC). I installed Postgres on the Windows
platform. Now I want to set up the server and client configuration. Kindly
guide me how to set the configuration parameters, in
c k wrote:
I got following error while testing some newly created functions.
ERROR: cache lookup failed for function 111462
CONTEXT: PL/pgSQL function uf_postdoc line 25 at FOR over
SELECT rows
** Error **
ERROR: cache lookup failed for function 111462
SQL state:
Shashwat_Nigam wrote:
Thanks for the help. But still the user is able to see all
the databases. I defined something like this
# IPv4 local connections:
host    all            all     127.0.0.1/32    md5
host    HMRI_database  hmri    127.0.0.1/32    md5
in the above case
paulo matadr wrote:
Do you know of a tool for automatically converting PL/SQL to PL/pgSQL?
Does such a tool exist?
EnterpriseDB claim that they can do something like this,
but I don't believe that there is any tool which can do
more than assist you.
Yours,
Laurenz Albe
--
Sent via pgsql-admin mailing list
paulo matadr wrote:
Hi everyone, I need help!!!
My application returns this error:
2009-01-20 10:22:03,264 INFO [STDOUT] Caused by: org.postgresql.util.
PSQLException: ERROR: invalid memory alloc request size 1705447581
TABLE batch.relatorio_gerado
rege_id integer NOT NULL,
fuin_id
Please don't top post.
paulo matadr wrote:
My application returns this error:
2009-01-20 10:22:03,264 INFO [STDOUT] Caused by: org.postgresql.util.
PSQLException: ERROR: invalid memory alloc request size 1705447581
What are you doing in terms of SQL?
INSERT, UPDATE, DELETE?
How big are the
Please don't top post.
paulo matadr wrote:
I think I identified the problem.
Let's check the log below:
Query:
SELECT STATEMENT: select relatorios0_.fuin_id as fuin5_1_,
relatorios0_.rege_id as rege1_1_, relatorios0_.rege_id as rege1_624_0_,
relatorios0_.rege_tmultimaalteracao as rege2_624_0_,
How does one access large objects from the database when using
pgAdmin-like client applications? When a normal SELECT is
executed on lo-type columns it gives only the number which
refers to the large object stored in the 'pg_largeobject' table.
When records from pg_largeobject are fetched using
fanlijing wrote:
When I want to save a bytea value into a file, what should I do?
Is there any function dealing with that in PostgreSQL? (like lo_export() to
deal with the large-object) (# I didn't find any)
If you want to save it in a file on the server, you can use
the COPY statement.
fanlijing wrote:
In Fact, I'm doing a porting project from Oracle 10g to PostgreSQL 9.0.4
There is a procedure in Oracle 10g to write a blob value into a file using:
[...]
I know PostgreSQL doesn't support procedures, so I want to port it into a
function using LANGUAGE plpgsql.
So I must
I wrote:
[fanlijing wants to write bytea to file]
A simple
COPY (SELECT byteacol FROM mytab WHERE ...) TO 'filename' (FORMAT
binary)
should do the trick.
Corrections:
a) binary must be surrounded by single quotes.
b) that won't dump just the binary data - you would have
to remove the
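Putting those corrections together, a sketch of the server-side export (table, column, and file names are hypothetical, not from the thread):

```sql
-- Hypothetical sketch: export a bytea column on the server side.
-- 'binary' must be quoted, and the output is COPY's binary format,
-- so the file header and per-field length words would still have
-- to be stripped to obtain the raw bytes alone.
COPY (SELECT byteacol FROM mytab WHERE id = 1)
    TO '/tmp/byteacol.dump' (FORMAT 'binary');
```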
fanlijing wrote:
Actually, now I'm confused with another problem, that is:
Oracle: for update wait 10
PostgreSQL: no support for the parameter wait
What should I do about this point when porting?
If I get rid of the parameter wait, there would be a dead lock in my
program...
Is
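One possible workaround (an assumption on my part, not from the thread): later PostgreSQL versions offer NOWAIT on FOR UPDATE, and 9.3 added the lock_timeout parameter, which approximates Oracle's WAIT n:

```sql
-- Sketch assuming PostgreSQL 9.3 or later (the thread targets 9.0.4,
-- which lacks lock_timeout); table and column names are hypothetical.
BEGIN;
SET LOCAL lock_timeout = '10s';  -- roughly Oracle's FOR UPDATE WAIT 10
SELECT * FROM accounts WHERE id = 42 FOR UPDATE;
-- fails with "canceling statement due to lock timeout"
-- if the row stays locked for more than 10 seconds
COMMIT;
```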
Matt Williams wrote:
I ran it through xxd and the hex-bytes are different than those of the proper
utf8 character:
03300a0: 7472 c383 c2b6 6d65 7209 3009 5c4e 0931 trmer.0.\N.1 (from the
dump file)
ö : c3b6
ö : c383 c2b6
That looks like your database does not contain what you
Rural Hunter wrote:
My db is in utf-8, I have a row in my table say tmp_article and I
wanted
to generate ts_vector from the article content:
select to_tsvector(content) from tmp_article;
But I got this error:
ERROR: invalid byte sequence for encoding UTF8: 0xf481
I am wondering how this
Please don't top post.
Rural Hunter wrote:
My db is in utf-8, I have a row in my table say tmp_article and I wanted
to generate ts_vector from the article content:
select to_tsvector(content) from tmp_article;
But I got this error:
ERROR: invalid byte sequence for encoding UTF8: 0xf481
I
Rural Hunter wrote:
My db is in utf-8, I have a row in my table say tmp_article and I wanted
to generate ts_vector from the article content:
select to_tsvector(content) from tmp_article;
But I got this error:
ERROR: invalid byte sequence for encoding UTF8: 0xf481
Do you use any nonstandard
service prodat wrote:
First: I hope I'm right here in this mailing list.
Yes.
My problem:
We have several PostgreSQL 9.0.4 servers, one on Windows Server 2008 R2 and
others on different UNIX/Linux systems.
Windows: LC_CTYPE = 'German, Germany'
UNIX: LC_CTYPE='de_DE.UTF-8'
(so LC_COLLATE is
Scott Whitney wrote:
I'll be moving to PG9 (hopefully soon...probably 6 weeks).
At that time, I'll be setting up hot-standby with streaming replication to 2
sites. Off-siting my
pgdumps nightly is no longer going to be possible in the very near future,
due to the size of the
dumps.
Rural Hunter wrote:
Do you get the error if you try Chinese settings without nlpbamboo?
How can I do this? The Chinese processing is provided by nlpbamboo.
Er, sorry, forget what I said. I have no experience with Chinese.
I guess you should try to ask the nlpbamboo people.
Is there anything
Kshirsagar, Swanand wrote:
I am using Postgres version 8.3.17 on one of my database servers. I
have received a .sql dump file from
my client, which is supposed to have been taken from a Postgres 8.3.7 db.
Now, I have created an empty
database and I am trying to restore the .sql file with command psql
Wells Oliver wrote:
Is there a way to get the date that starts week 14, or 15 (etc)?
Which week numbering do you use?
http://en.wikipedia.org/wiki/Week_number#Week_numbering
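For ISO week numbering, one common idiom (my own illustration; the year is just an example) is to_date with the IYYY-IW pattern, which yields the Monday that starts the given ISO week:

```sql
-- Monday of ISO week 14 of 2012 (example year)
SELECT to_date('2012-14', 'IYYY-IW');
-- 2012-04-02, a Monday
```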
Yours,
Laurenz Albe
Gnanakumar wrote:
Recently, in our Production server, we found a single query being
held up
in IDLE in transaction for more than 19 hours using the following
query:
select date_trunc('second', current_timestamp - query_start) as
runtime,
datname as database_name, current_query from
Gnanakumar wrote:
SELECT application_name, client_addr, client_hostname, client_port
FROM pg_stat_activity
WHERE procpid = 14740;
(Replace 14740 with the process ID of the idle-in-transaction
backend.)
Look on the client machine and find the process that holds TCP port
client_port open (on
Radovan Jablonovsky wrote:
Could you please help with this peculiar problem?
In PostgreSQL log occurred this message:
2012-06-13 12:58:45.876 MDT [17536]: [1-1] FATAL: terminating
autovacuum process due to administrator
command
The server worked for 48 minutes after and then it
Radovan Jablonovsky wrote:
Could you please guide me on how to explain two different calculations
of the kernel parameter SEMMNI?
In PostgreSQL documentation:
http://www.postgresql.org/docs/9.1/static/kernel-resources.html#SYSVIPC
this calculation of SEMMNI is used:
SEMMNI: maximum number of
Madhu.Lanka wrote:
Can you please help me write a function for the following scenario?
select [...] from [...] where [...] and p.name=? and p.password=?
Don't use ? for the parameters.
Use $1 for the first parameter and $2 for the second.
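As an illustration only (the function, table, and column names are made up, not from the original question):

```sql
-- Hypothetical sketch: $1 and $2 are the positional parameters
CREATE FUNCTION check_login(text, text) RETURNS boolean
LANGUAGE sql STABLE AS $$
    SELECT EXISTS (SELECT 1 FROM person p
                   WHERE p.name = $1 AND p.password = $2)
$$;
```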
Yours,
Laurenz Albe
Mathias Breuninger wrote:
I am configuring PostgreSQL 9.1.5 on two servers: one with FreeBSD-9.0/powerpc64
and the other with
FreeBSD-9.0/powerpc.
The 64-bit server is the master and I tried to configure the 32-bit one as
a standby server.
I followed the documentation but I can't start
David Morton wrote:
We have many large tables which contain static historical data, they
are auto vacuumed on
a regular basis (sometimes to prevent wraparound), which I suspect
causes a few annoying side effects:
- Additional WAL file generation
- Increased 'changed' data as far as our online
johnkn63 wrote:
I am in the process of upgrading an existing database from PostgreSQL
8.1 to
9.1 . In trial upgrades I have discovered that in 9.1 only superusers
can
login to phpPgAdmin. The database has a number of users who are
already
accustomed to using phpPgAdmin but for whom it would
johnkn63 wrote:
I have checked the pg_hba.conf the log says 'CST FATAL: password
authentication failed for user test8'.
That indicates a wrong password.
How does pg_hba.conf look?
Yours,
Laurenz Albe
johnkn63 wrote:
Thank you for your patience. Whilst this is the error one gets for a
wrong password, this is not the case here: simply changing the status
of the user to superuser without touching the password allows the user
to log in; removing superuser status, the reverse.
This login error
johnkn63 wrote:
That's if you started the server with -b.
Now how do I find out how the server is starting, and how do I change it?
I am running 9.1 on Debian.
Look at the output of pg_ctl status.
What do you see there?
Actually, the error message would be different in this case, so
I don't think
johnkn63 wrote:
[can connect as superuser but not as regular user]
I tried - the results were as predicted: a non-superuser does not
work, but once altered to superuser it does work.
So what next?
Hmm, I'm running out of ideas here.
You still get password authentication failed in the
Rick Dicaire wrote:
/staging/9.1/main/pg_xlog is really archive directory?
restore_command
should be set to the command which copies the archived WAL files from
the archive area.
Yes. This is the archive directory.
But according to what you wrote before you start the server
with -D
Volkmar Herbst wrote:
I have a problem with XML support on a fresh installation of pg 9.2. I
configured and built postgres
with XML.
I rebuilt with /usr/local/pgsql/postgresql-9.2.0/configure
--with-libxml --with-python. Configure,
make and make install went well.
configure: using
Tom Lane wrote:
Craig James cja...@emolecules.com writes:
Is it OK to mix 9.2.x and 8.4.x clients and servers? Can they talk
to each
other? In other words, if I install 9.2 on a new server alongside
older
servers running 8.4.x, can clients on the 8.4 systems talk to 9.2
servers,
and can
Parkirat Bagga wrote:
How to recover, if master postgres WAL got corrupted before it is
applied to
the standby server?
Will I lose all the data present in the WAL segment?
How can I partially apply the WAL?
During normal recovery:
---
WAL will be replayed until the
Lonni J Friedman wrote:
Does rsync sync the partial logs as well? As I would be doing the rsync
from the old master to the new master (when the old master recovers), there
might be some partial logs present on the old master.
I don't think that's how WAL works. The log is either complete, or it
Shams Khan wrote:
Can somebody tell me how we can automate the switchover process? I
know there is a file, recovery.conf, which contains the parameter
trigger_file. But my question is: what does that file contain? Hope
my thinking is correct.
The contents of the trigger file are
Maybe you will get more enthusiastic support in an Oracle forum...
It is curious that you want to migrate from a fairly
current PostgreSQL version (you didn't say which one) to
an Oracle version that is already out of (Premier) support
and will be terminally abandoned next year.
Shams Khan
Parkirat Bagga wrote:
What I have observed is that there is always only one sender process sending
the data. Is there any configuration with which I can optimize this
system for more concurrency and less overhead on the master? We are not
thinking in terms of long-running queries.
Replication
teyking2003 wrote:
We might not consider upgrading our existing psql 8.2.5 to the latest
one, because this psql version is bundled with our web application and
the whole system was designed by the vendor. We can only perform
maintenance steps on that database, fearing any side effect on our web
Rural Hunter wrote:
I'm on 9.1.3. I set autovacuum off for some tables. I noticed one
thing: when I run a manual ANALYZE on the parent table, it seems the
child tables are also analyzed. Here is the analyze log:
INFO: analyzing public.table_parent
INFO: table_parent: scanned 0 of 0 pages,
Ronit Allen wrote:
I have the following query on pg_stat_all_tables to look at updates,
inserts, and deletes:
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del FROM
pg_stat_all_tables;
How can I add a date range to the WHERE clause to show data from a
specified date range?
I don't see
Shams Khan wrote:
There are a few queries I found at the time of monitoring, below:
00:54:43.574338 | 26952 | select a.actno,MAX(b.actid) as
actid,MAX(b.actname) as name,MAX(b.phone)
as phone,MAX(b.email) as mail,MAX(a.subsno) as subs,MAX(t.pkgid
) as svcid,MAX(a.expirydt) as
Shams Khan wrote:
Whenever the load of the server increases, I find this query running,
which seems to me to be the reason. Is there any other way it can be
written so that it doesn't hang up the entire process?
Oh, I see. Try to follow
http://wiki.postgresql.org/wiki/Slow_Query_Questions
if
Ronit Allen wrote:
On 9.1 with postgresql.conf left at defaults, what period of time does
a query on pg_stat_all_tables
cover?
Let's say my database has been running for exactly one year.
I then issue SELECT relname, n_tup_ins FROM pg_stat_all_tables;
Do the results show me inserts for
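The counters in pg_stat_all_tables are cumulative since the statistics were last reset; if you are unsure when that was, a sketch (my addition, not part of the original answer):

```sql
-- When were this database's statistics last reset? (9.1+)
SELECT stats_reset
FROM pg_stat_database
WHERE datname = current_database();
```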
Anibal David Acosta wrote:
This query returns false
select ('2012-11-20 17:00:00-02:00'::timestamp with time zone) =
('2012-11-20 18:00:00-
03:00'::timestamp with time zone)
But they are exactly the same time.
Why?
They are not the same. Compare:
select ('2012-11-20 17:00:00-02:00'::timestamp
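To see why (my own illustration of the point being made): converted to a common zone, the two values are two hours apart.

```sql
-- At UTC offset -02:00, 17:00 is 19:00 UTC;
-- at -03:00, 18:00 is 21:00 UTC -- two different instants.
SELECT '2012-11-20 17:00:00-02:00'::timestamptz AT TIME ZONE 'UTC';
-- 2012-11-20 19:00:00
SELECT '2012-11-20 18:00:00-03:00'::timestamptz AT TIME ZONE 'UTC';
-- 2012-11-20 21:00:00
```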
Hariraman Jayaraj wrote:
We are using Postgres 8.3 version, in Suse Linux Enterprise Edition
SP1. Our DB total size is around
530 GB. we have enabled the setting related to
default_statistics_target. Kindly suggest what will be
ideal value for default_statistics_target.
We have more than
Gabriele Bartolini wrote:
On 29/11/12 02:52, Andrew W. Gibbs wrote:
That said, I'm wondering whether I created the base backup with
reliably correct procedures or merely was lucky.
In order to simplify the management of disaster recovery solutions for
PostgreSQL, we have recently
Gabriele Bartolini wrote:
Gabriele, I understand that you want to spread the word,
but the OP's problem was a different one:
You are right. I totally missed that. I skimmed the email (very quickly)
and thought that in that particular context Andrew was referring to PITR
in general.
I admit
Phil Monroe wrote:
So we had to failover and do a full base backup to get our slave database
back online and ran into an
interesting scenario. After copying the data directory, setting up the
recovery.conf, and starting the
slave database, the database crashes while replaying xlogs. However,
Jayashree Rajagopalan wrote:
I have to alter a column which is of datatype bytea to varchar(255).
I used this:
ALTER TABLE tablename ALTER COLUMN columname TYPE varchar(255);
But I'm really not sure if the value is cast properly.
Is there a way to explicitly cast the value while
Tom Lane wrote:
You cannot specify a conversion function while altering
a column's type, you'd have to use a new column like this:
Sure you can; that's the whole point of the USING option.
It'd look something like
ALTER TABLE test ALTER COLUMN val TYPE varchar(255) USING convert(val);
I
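For the bytea-to-varchar case specifically, one possibility (an assumption on my part: that the bytea actually holds UTF-8 encoded text; the table and column names echo the question) would be convert_from:

```sql
-- Hypothetical: decode the bytea as UTF-8 text during the type change
ALTER TABLE tablename ALTER COLUMN columname TYPE varchar(255)
    USING convert_from(columname, 'UTF8');
```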
Baptiste LHOSTE wrote:
We are still trying to fix our issue and we found the following logs:
2013-01-17 09:55:01 CET LOG: automatic vacuum of table
flows.public.agg_t1213_incoming_a6_dst_port_and_proto_f5: index scans: 1
pages: 0 removed, 136547 remain
tuples: 0 removed, 4044679
Vinod V wrote:
We need to implement incremental backup (PITR) in our environment, since our
production database is
at high risk.
We are running with PostgreSQL 9.1
We need to implement this on both Windows7 and Linux platforms.
You are not trying to restore WAL files from Windows on
Bert wrote:
I wrote a script to make sure all tables are vacuumed and analyzed every
evening. This works very
well.
Autovacuum doesn't do the job for you?
That would save you from worries like the ones you have.
Does anyone have an idea why in this case the vacuum/analyze takes almost as
Sathish Reddy Yelala wrote:
Thanks for the response. Here I have a few more queries regarding
connections.
1. When does the connection go to the IDLE state?
When the server has completed a request and waits for
the next one.
2. When does the IDLE connection become active?
When the server receives
Marko Rihtar wrote:
maybe you can help me with this issue.
Here is an example
SELECT NULL::integer
UNION ALL
SELECT NULL
UNION ALL
SELECT 123
For this SQL to work I have to cast NULL to integer.
I was wondering if there exists some configuration parameter at database
level that can
CS DBA wrote:
We have a client that is asking for a service as follows:
- a virtual machine in our data center
- they want to upload files to the VM via sftp
- they want to connect to a local db server on the VM via SSL
Here's my question;
Do I need to pay for an official SSL cert? If
Geoff Winkless wrote:
I'm trying to migrate a database from MySQL to postgres and I'm struggling
with the amount of
diskspace the resulting db takes.
I may be missing a setting somewhere but I can't see one anywhere obvious
(apologies if I'm being
stupid...)
Even postgres' idea of the
Geoff Winkless wrote:
[trying to account for the disk space used]
Of course I got that slightly wrong: ItemIdData is for each row, not for each
column; an extra 4 bytes
for each row makes the per-row space 290MB, leaving 167MB unexplained.
I'm assuming the remaining 167MB is related to the
Dev Kumkar wrote:
I am using postgres 9.2, and when executing the dblink function I am facing
a fatal error while trying to
execute dblink_connect, as follows:
SELECT * FROM dblink_connect('host=127.0.0.1 port=5432 dbname=postgres
password=test')
ERROR: could not establish connection
Ned Wolpert wrote:
I'm doing a postmortem on a corruption event we had. I have an idea on what
happened, but not sure.
I figure I'd share what happened and see if I'm close to right here.
Event: Running 9.1.6 with hot-standby, archiving 4 months of wal files, and
even a nightly
Cheryl Grant wrote:
Hi, I'm trying to test restoration of a database using point-in-time
recovery. I'm taking a backup of the database using pg_basebackup:
pg_basebackup -D /postgres/data -Fp -l RestorePostgres -U reco -w -h
radmast01 -p 5432
Then attempting to recover the backup on a second
Cheryl Grant wrote:
2844 LOG: starting point-in-time recovery to 2013-02-26 12:53:00+11
2844 LOG: restored log file 0001017D0056 from archive
2844 LOG: unexpected pageaddr 17D/2E00 in log file 381, segment 86,
offset 0
2844 LOG: invalid checkpoint record
2844 FATAL:
Jesse Johnson wrote:
I have an issue that is driving me crazy. I am using libpq on Linux in C. The
insert code below
inserts one key/data pair record, with both as BYTEA, into database test2:
Your code as you posted it is not complete (no #includes, undefined
exit_nicely).
This makes it
Rural Hunter wrote:
I encounter the same issue often: I granted update/insert to a user but
forgot to grant it on the related sequence. It's hard to understand that
a user has write access on a table but not on the necessary sequences. I
think the grant on tables should cascade to related sequences.
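A workaround worth noting here (my addition, not from the thread; the role name is an example) is to grant on all sequences in a schema at once, available since 9.0:

```sql
-- Grant the typical sequence privileges in one statement
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_user;
-- And make it stick for sequences created in the future:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT USAGE, SELECT ON SEQUENCES TO app_user;
```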
archana shinde wrote:
Please look at below mail and help me on this.
Please could you help me to resolve this error which I am getting, I am using
postgres 7.4.5 on
solaris 9.
[...]
2013-03-28 20:00:12 FATAL: out of memory
DETAIL: Failed on request of size 112.
[...]
2013-03-28
Nik Tek wrote:
Could someone tell me how to measure postgres memory usage.
Is there a pg_* view to measure?
No, you have to use operating system tools, like ps on UNIX.
Yours,
Laurenz Albe
Jim Mercer wrote:
i'm running pgsql 9.1.3 (soon to be upgraded to 9.2.x) on FreeBSD 8-STABLE.
i've got a db of some 550GB that i want to backup on a daily basis, as we
have had some nasty hardware/power/other issues of late.
with dump on FreeBSD, you can specify -L, which does a filesystem
Armin Resch wrote:
Not sure this is the right list to vent about this but here you go:
I) select regexp_replace('BEFORE.AFTER','(.*)\..*','\1','g') Substring
II) select regexp_replace('BEFORE.AFTER','(.*)\\..*','\\1','g') Substring
Executing (II) against pg 8.4.4 or 9.0.4 yields 'BEFORE',
Samuel Stearns wrote:
Environment:
Postgres 8.4.15
Ubuntu 10.04.4
We have multiple monthly tables inherited from a master. Sample definition:
CREATE TABLE syslog_master (
[...]
);
CREATE TABLE syslog_201008 (CONSTRAINT syslog_201008_datetime_check CHECK
(((datetime = '2010-08-
German Becker wrote:
From my experience, postgres will delete WAL (after a checkpoint) regardless
of whether they have been archived.
Are you saying this is abnormal?
That would be quite abnormal.
Could it be that your archive_command has exit status 0
even if something goes wrong?
What are the
German Becker wrote:
Here is the archive part of the config:
archive_mode = on # allows archiving to be done
# (change requires restart)
archive_command = '/var/lib/postgresql/scripts/archive_copy.sh %p %f'
# command to use to
German Becker wrote:
I am testing version 9.1.9 before putting it in production. One of my tests
involved deleting the
contents of a big table (~13 GB) and then VACUUMing it. During VACUUM it
PANICs. Here is the
message:
PANIC: corrupted item pointer: offset = 8128, size = 80
German Becker wrote:
Just in case there are some errors in my first email: where it says 'after
deleting the context of the
same big table', it should say 'after deleting the contents of the same big
table'. In essence, what I did
is:
DELETE from table;
VACUUM table;
And I got the error
I am
Bhanu Murthy wrote:
handsfree wrote:
We're looking to use streaming replication to a target via a secondary host
using stunnel.
I could think of 2 possible solutions:
[...]
2. Use streaming replication config features to secure traffic (encrypted
data over TCP)
Master configuration
handsfree wrote:
Laurenz, thanks for that information; I will bear in mind the problems that
you encountered with SSL renegotiation. I'm not sure that this will
necessarily be an issue using stunnel, but I will talk with my colleagues
who have more experience of stunnel and confirm whether
Rodrigo Barboza wrote:
I have a doubt.
I have a 32-bit PostgreSQL running with 2.5GB of shared_buffers.
And I have maintenance_work_mem = 1GB and autovacuum_max_workers = 3.
How is maintenance_work_mem related to shared_buffers?
If the 3 workers each use 1GB, will the database crash?
Or their
Kouber Saparev wrote:
We are using the 9.1 built-in streaming replication.
Recently our slave nodes fell behind because of an UPDATE statement. It
took about 3 minutes to execute, but it affected half a million records,
hence the replication broke with the requested WAL segment ... has
Alejandro Brust wrote:
We recently moved from pg 9.0 to pg 9.2 with this method:
from the new PG 9.2 server we did:
pg_dump -h server -p 5432 -U user -Fc -i -b base-name bk.backup
after that in the new server
createdb -T template0 -E LATIN1 basename
pg_restore -h server -p 5432 -U user -d
prakhar jauhari wrote:
I am facing a similar kind of problem, but in a two node setup. Streaming
replication is being used
with a high wal_keep_segments, using log shipping to bump up timelines on the
standby server to setup
SR(streaming replication).
DB1 - master
DB2 - standby
girish R G peetle wrote:
I was wondering if I can use the modified version of your solution.
1. Convert output of pg_stop_backup to transaction log file name. (This
refers to the latest active
transaction log file)
2. If we are able to see this file under archive(WAL) directory, backup
bricklen wrote:
On Wed, Jun 19, 2013 at 4:22 PM, Sean Dillon s...@dillonsoftware.com wrote:
Just turned on WAL archiving to an S3 bucket for a small database - total
size of perhaps 2-4G.
After turning on archiving, we're seeing WAL logs written to S3 at the rate
of about 1G every 3
Rafal Radecki wrote:
I have to set up a new infrastructure with Postgres. I would like to
use the user access
patterns from MySQL if possible. For every application:
1) one user for reads: grant select on db.* to 'reader'@'ip space' ...;
2) one user for reads/writes: grant
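A rough PostgreSQL equivalent of those MySQL grants (a sketch under assumptions: the role names are examples, and PostgreSQL restricts by client address in pg_hba.conf rather than in GRANT):

```sql
-- Read-only role (9.0+ syntax); the address restriction
-- from 'reader'@'ip space' goes in pg_hba.conf instead
CREATE ROLE reader LOGIN PASSWORD 'secret';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO reader;

-- Read/write role
CREATE ROLE writer LOGIN PASSWORD 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE
    ON ALL TABLES IN SCHEMA public TO writer;
```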
girish R G peetle wrote:
On PostgreSQL 8.0, I need to convert output of pg_start_backup /
pg_stop_backup to transaction log
file name. In the the latest versions we have pg_xlogfile_name function to do
this, but 8.0 there is
not function to achieve this.
Is it possible to export
girish R G peetle wrote:
backup_label (after executing pg_start_backup) will have the starting
transaction log file information; I
can use this file to extract the current timeline ID.
Also I guess the time line ID will be changed only when server is recovered.
So, I can safely use the
Armand du Plessis wrote:
We're running into a scenario where despite doing a manual vacuum as a
superuser the relfrozenxid for
one relation now dangerously close to wraparound is not getting reset.
It's a Postgres 9.2.3 cluster. We shutdown other access to the machine while
running the
Brian Wong wrote:
After upgrading postgresql from 9.1.1 to 9.1.9, launching pgadmin3's server
status window gives an
error dialog box. This is a pgadmin3 client on Windows. The error says:
--
An error has occurred:
ERROR: could
Brian Wong wrote:
--
An error has occurred:
ERROR: could not access file $libdir/adminpack: No such file or directory
--
I'm on Oracle Enterprise Linux.
The server was
Sergey Arlashin wrote:
I have a 2GB backup of the WAL archive folder, but the backup of the postgres
database cluster directory is lost.
However, I need to restore any data possible from these WAL logs.
Is that possible? What do I do to get the data?
Unfortunately that is impossible.
Yours,
Laurenz
Dzmitry wrote:
On 8/19/13 11:36 AM, Stéphane Schildknecht
stephane.schildkne...@postgresql.fr wrote:
On 19/08/2013 10:07, Dzmitry wrote:
I have postgres server running on ubuntu 12,Intel Xeon 8 CPUs 29 GB RAM.
With following settings:
max_connections = 550
shared_buffers = 12GB
Patrick Dung wrote:
I have some questions about backup on PostgreSQL:
1) pg_dumpall
I am sure this has been asked somewhere: why does pg_dumpall not
support the custom or tar format
like pg_dump?
I have heard that the custom format could be faster and may generate a
smaller dump file.
Ed Tarento wrote:
I'm a newbie, I hope this is the right group and admit I have little postgres
experience. I'm working
on a VM I can easily re-install so killing it doesn't matter. The dir is
full of 508 files, the last
few of which look like:
-rw------- 1 postgres postgres 16777216
prakhar jauhari wrote:
I am using postgres 9.2 on a Linux 6.3 box.
To start, I created a user postgres on Linux with login permission and no
password.
Is it necessary for the postgres user to have login permissions?
Can I change it to nologin without breaking basebackup and
Roberto Grandi wrote:
I ask for your help because I can't work out the solution to my problem on PG
8.3.
I would like to catch an exception outside any function/procedure, directly
within a script:
BEGIN;
-- raise an exception code
EXCEPTION
WHEN 'exception_type'
THEN ROLLBACK;