Durumdara wrote:
> The pg_catalog schema is a system schema, but it is IN the DB.
>
> Is this true? So OIDs are not global (outside the DB)?
The OID generator is global to the instance, but the uniqueness
checks are local to the tables that use OIDs, including
large objects.
The case when you
Durumdara wrote:
> > Because of upload/download progress reporting, we used large objects to store some
> > files in one of our databases (and not bytea).
> > Only this database uses the OIDs of these files.
> >
> > In the near future we must move to another server.
> > This new server is also working now,
Hi!
Somebody wrote me that:
The pg_catalog schema is a system schema, but it is IN the DB.
Is this true? So OIDs are not global (outside the DB)?
So can we dump and restore the DB with its OIDs, without collisions on the new server?
Thank you!
dd
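For what it's worth, a minimal command sketch of such a move (the database
name is hypothetical). As far as I know, pg_dump's custom format carries
large objects along by default, and pg_restore re-creates them with their
original OIDs, so the OID references stored in your tables stay valid:

```shell
# On the old server: custom-format dump, large objects included.
pg_dump -Fc -f appdb.dump appdb

# On the new server: large objects come back under their old OIDs.
createdb appdb
pg_restore -d appdb appdb.dump
```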
2017-10-12 11:35 GMT+02:00 Durumdara :
Dear Members!
Because of upload/download progress reporting, we used large objects to store some
files in one of our databases (and not bytea).
Only this database uses the OIDs of these files.
In the near future we must move to another server.
This new server is also working now, the moving of databases is
Hi,
please don't top post.
Il 18/01/2017 15:01, PAWAN SHARMA ha scritto:
Thanks for the reply, but I have 120 databases running on a single
instance, so it's not possible to take a backup of the instance instead of
taking a pg_dump of each database separately.
Example: suppose we have two
Thanks for the reply, but I have 120 databases running on a single
instance, so it's not possible to take a backup of the instance instead of taking
a pg_dump of each database separately.
Example: suppose we have two instances running on single server, instance A
having 120 databases and instance B
On Wed, Jan 18, 2017 at 7:32 AM, PAWAN SHARMA
wrote:
> Hello All,
>
> I am using postgres 9.5 enterprise edition and postgres 9.5 open source
> where I want to know the solution to two problems.
>
> 1. How can we restore a single database from base backup files only? I don't
Hello All,
I am using postgres 9.5 enterprise edition and postgres 9.5 open source
where I want to know the solution to two problems.
1. How can we restore a single database from base backup files only? I don't
have a pg_dump backup.
2. How can we restore a single instance on a different server where
Hi
We have a system with several users. Sometimes one of the users makes a
mistake with his data and wants to restore it, like an undo. Only
one user should be restored, not all users.
I work as a sysadmin, so I cannot change the system but have to solve the task
at this level.
My approach
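A hedged sketch of one way to do that at the sysadmin level, assuming each
user's data lives in its own schema (the schema and database names here are
hypothetical):

```shell
# Nightly, per user: dump only that user's schema.
pg_dump --schema=user42 -f user42.sql appdb

# To undo one user's mistake: drop and reload just that schema.
psql -d appdb -c 'DROP SCHEMA user42 CASCADE;'   # destructive, check twice!
psql -d appdb -f user42.sql
```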
* David Steele (da...@pgmasters.net) wrote:
> On 7/29/16 5:31 PM, Rakesh Kumar wrote:
> > Sure.
> >
> > 1 - You ran pg_basebackup on node-1 against a live cluster and store
> > it on NFS or tape.
> > 2 - Do a restore on node-2 from the backup taken on (1), but only for
> > a subset of the
On 7/29/16 5:31 PM, Rakesh Kumar wrote:
>> Are you saying that?:
>>
>> 1) You ran pg_basebackup against a live cluster and sent the output to
>> another location.
>>
>> 2) At the other location the cluster is not in use.
>>
>> 3) You want to grab the contents of the inactive cluster directly off
On 07/29/2016 02:31 PM, Rakesh Kumar wrote:
Are you saying that?:
1) You ran pg_basebackup against a live cluster and sent the output to
another location.
2) At the other location the cluster is not in use.
3) You want to grab the contents of the inactive cluster directly off the
disk.
If
> Are you saying that?:
>
> 1) You ran pg_basebackup against a live cluster and sent the output to
> another location.
>
> 2) At the other location the cluster is not in use.
>
> 3) You want to grab the contents of the inactive cluster directly off the
> disk.
>
> If that is the case, then no it
On 07/29/2016 02:16 PM, Rakesh Kumar wrote:
If a cluster is backed up physically using pg_basebackup, how can we
restore only a particular schema from it? Is it even possible?
Are you saying that?:
1) You ran pg_basebackup against a live cluster and sent the output to
another location.
2)
If a cluster is backed up physically using pg_basebackup, how can we
restore only a particular schema from it? Is it even possible?
Thanks
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
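Since a base backup is a physical copy, a single schema can't be pulled out
of it directly; a hedged sketch of the usual workaround (paths, port, and
names are hypothetical): restore the backup into a scratch instance, then
take a logical dump of just the schema you need:

```shell
# Unpack the base backup into a throwaway data directory.
mkdir -p /tmp/scratch_pgdata
tar -xf basebackup.tar -C /tmp/scratch_pgdata

# Start a scratch instance on a spare port (WAL replay happens on startup).
pg_ctl -D /tmp/scratch_pgdata -o '-p 5433' start

# Now a logical dump of just one schema is possible.
pg_dump -p 5433 -n myschema -f myschema.sql mydb
```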
Hi Jason:
On Sat, Feb 15, 2014 at 10:55 PM, Antman, Jason (CMG-Atlanta)
jason.ant...@coxinc.com wrote:
The short answer is... due to too much technical debt, and some perhaps
bad decisions made in the past... yeah. We've dug ourselves into this
hole, and there's no feasible way out.
The more
Hi Jason:
On Sat, Feb 15, 2014 at 11:17 PM, Antman, Jason (CMG-Atlanta)
jason.ant...@coxinc.com wrote:
the best we'll be able to do is try to run multiple postgres instances
on each host, and manage the whole service postgresql-9.0-24 stop
craziness that comes with that...
Just a suggestion, I
Hi:
On Sun, Feb 16, 2014 at 1:02 AM, Antman, Jason (CMG-Atlanta)
jason.ant...@coxinc.com wrote:
ones to expire out of cache. I.e. we have hardware with 192G of RAM. If
each database is only queried, say, for 10 seconds out of each 5 minute
interval, how do we maximize resource utilization /
Hi Jason:
On Sun, Feb 16, 2014 at 1:30 AM, Antman, Jason (CMG-Atlanta)
jason.ant...@coxinc.com wrote:
I think I jumped on this without really understanding what you were
saying, or the implications of it. If I run N postgres server instances
on the same physical host, I can do away with the
I also asked this question on dba.stackexchange.com, where it received a very
detailed enumeration of the associated problems from Craig Ringer:
http://dba.stackexchange.com/questions/58896/restore-postgres-data-tablespace-to-new-tablespace-at-new-mount-point/58967?noredirect=1#58967
Perhaps
Antman, Jason (CMG-Atlanta) jason.ant...@coxinc.com writes:
Perhaps there's a postgres internals expert around, someone intimately
familiar with pg_xlog/pg_clog/pg_control, who can comment on whether it's
possible to take the on-disk files from a single database in a single
tablespace, and
On 2/15/2014 10:15 AM, Antman, Jason (CMG-Atlanta) wrote:
I also asked this question on dba.stackexchange.com, where it received
a very detailed enumeration of the associated problems from Craig Ringer:
Well thanks for someone at least sending a reply, though I suppose I
should have asked how do I do this, or what are the major hurdles to
doing this, as it obviously has to be *possible* given unlimited
knowledge, resources and time.
Perhaps I should frame the question differently:
If you had
On Sat, Feb 15, 2014 at 06:15:04PM +0000, Antman, Jason (CMG-Atlanta) wrote:
I also asked this question on dba.stackexchange.com, where it received a very
detailed enumeration of the associated problems from Craig Ringer:
...
Perhaps there's a postgres internals expert around, someone
On 2/15/2014 10:31 AM, Antman, Jason (CMG-Atlanta) wrote:
If you had a single ~1TB database, and needed to be able to give fresh
data copies to dev/test environments (which are usually largely idle)
either on demand or daily, how would you do it? The only other thing
that comes to mind is
On 02/15/2014 01:22 PM, John R Pierce wrote:
On 2/15/2014 10:15 AM, Antman, Jason (CMG-Atlanta) wrote:
I also asked this question on dba.stackexchange.com, where it
received a very detailed enumeration of the associated problems from
Craig Ringer:
On 02/15/2014 10:31 AM, Antman, Jason (CMG-Atlanta) wrote:
Well thanks for someone at least sending a reply, though I suppose I
should have asked how do I do this, or what are the major hurdles to
doing this, as it obviously has to be *possible* given unlimited
knowledge, resources and time.
On 02/15/2014 02:00 PM, Francisco Olarte wrote:
Hi:
On Sat, Feb 15, 2014 at 7:31 PM, Antman, Jason (CMG-Atlanta)
jason.ant...@coxinc.com wrote:
Well thanks for someone at least sending a reply, though I suppose I
should have asked how do I do this, or what are the major hurdles to
doing
Replies inline below.
Thanks to everyone who's responded so far. The more I explain this, and
answer questions, the more I see how my original brilliant idea
(multiple DBs per postgres instance on one host, instead of 1:1:1
DB:postgres:host) is insane, without some specific support for it in
Forwarding back to list.
Original Message
Subject: Re: [GENERAL] Restore postgresql data directory to tablespace
on new host? Or swap tablespaces?
Date: Sat, 15 Feb 2014 22:08:51 +
From: Antman, Jason (CMG-Atlanta) jason.ant...@coxinc.com
To: Adrian Klaver adrian.kla
On Sat, Feb 15, 2014 at 10:17:05PM +0000, Antman, Jason (CMG-Atlanta) wrote:
[...] I see how my original brilliant idea
(multiple DBs per postgres instance on one host, [...]) is insane,
without some specific support for it in postgres.
multiple DBs per PostgreSQL instance on one host is
On 02/15/2014 05:27 PM, Karsten Hilbert wrote:
On Sat, Feb 15, 2014 at 10:17:05PM +0000, Antman, Jason (CMG-Atlanta) wrote:
[...] I see how my original brilliant idea
(multiple DBs per postgres instance on one host, [...]) is insane,
without some specific support for it in postgres.
On 02/15/2014 04:55 PM, Antman, Jason (CMG-Atlanta) wrote:
On 02/15/2014 02:00 PM, Francisco Olarte wrote:
If I NEEDED to be able to provide 100-150 snapshots to test/dev
environments 20% of which maybe active, I'll setup a cluster, buy
somewhere above a quarter terabyte RAM and some big
On 2/15/2014 4:30 PM, Antman, Jason (CMG-Atlanta) wrote:
My current postgres instances for testing have 16GB shared_buffers (and
5MB work_mem, 24GB effective_cache_size). So if, hypothetically (to give
a mathematically simple example), I have a host machine with 100GB RAM,
I can't run 10
I have a bunch of test/development databases which we currently refresh with
production data as-needed using a NetApp filer's snapshot capabilities - we
have a production slave with its datadir on a filer mount (NFS), and once a
night (via cron) we shutdown the slave, snapshot the filer volume,
I'm working with a vendor who is in the process of converting their system
from something else to Postgres. Yay!
My vendor took a dump of our something else database (which runs on
Windows), did their conversion to Postgres, and then sent me back a
postgres dump (custom format) of the database
On Tue, Nov 26, 2013 at 09:25:17AM -0500, Chris Curvey wrote:
CREATE DATABASE TestDatabase WITH TEMPLATE = template0 ENCODING = 'UTF8'
LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United
States.1252';
Just guessing, but I bet the collation is what hurts, just because
that
Chris Curvey wrote:
My vendor took a dump of our something else database (which runs on
Windows), did their conversion
to Postgres, and then sent me back a postgres dump (custom format) of the
database for me to load onto
my servers for testing.
I was interested to find that while I
Andrew Sullivan wrote:
Just guessing, but I bet the collation is what hurts, [...]
(The background for my guess: on your Linux box UTF-8 is likely the
normal local encoding, but on Windows that isn't true, and 1252 is
_almost_ but not quite Unicode. This bites people generally in
On Tue, Nov 26, 2013 at 02:48:34PM +, Albe Laurenz wrote:
I beg your pardon, but Windows-1252 has nothing to do with Unicode
Sorry, you're quite right, I'm having a brain fade (I meant ISO
8859-1, of course).
The point I wanted to make, however, is that the collation often
causes trouble
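A sketch of the practical workaround, hedged because locale names vary by
platform: on a Linux target the Windows locale 'English_United States.1252'
won't exist, so create the database with a locale the target actually
provides before loading the dump:

```shell
createdb --template=template0 --encoding=UTF8 \
         --lc-collate=en_US.UTF-8 --lc-ctype=en_US.UTF-8 testdatabase
```

Sort order will then differ slightly from the Windows box, but the load
itself should go through.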
Hello,
Up to now, we're working with openERP v.5. The server is installed on our
Ubuntu server machine. I succeeded in backing up the database (v8.3 of pgsql)
via pg_dump.
I would now like to restore this database onto my new openERP server machine
(Windows 7). I succeeded in doing it thanks to psql, but
On 05/28/2013 07:34 AM, image wrote:
Hello,
Up to now, we're working with openERP v.5. The server is installed on our
Ubuntu server machine. I succeeded in backing up the database (v8.3 of pgsql)
via pg_dump.
I would now like to restore this database onto my new openERP server machine
(Windows 7). I
On 2013-01-22, Rich Shepard rshep...@appl-ecosys.com wrote:
I neglected to dump a single table before adding additional rows to it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days ago.
The file
On Sun, 27 Jan 2013, Jasen Betts wrote:
yeah, emacs is slow on large files.
Jasen,
I've noticed this over the years.
for a one-off I'd use less(1) to extract the desired table data.
If I had to repeat it I'd use sed or awk.
I used 'joe'. It handled the job with aplomb.
Thanks,
Rich
Hi,
On 23 January 2013 04:57, Rich Shepard rshep...@appl-ecosys.com wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
You should grep for:
- CREATE TABLE
- COPY
statements and
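The grep idea can be sketched with awk; the dump below is a tiny synthetic
stand-in for the real file, and the table name is hypothetical:

```shell
# A tiny stand-in for the real pg_dumpall output.
cat > alldbs.sql <<'EOF'
CREATE TABLE extra (
    x integer
);
COPY extra (x) FROM stdin;
1
\.
CREATE TABLE mytable (
    id integer,
    name text
);
COPY mytable (id, name) FROM stdin;
1	alice
\.
EOF

# Extract one table's CREATE TABLE block and its COPY data block.
awk -v tbl="mytable" '
  $0 ~ ("^CREATE TABLE " tbl " ") { insch = 1 }
  insch { print; if ($0 ~ /^\);/) insch = 0 }
  $0 ~ ("^COPY " tbl " ")         { indat = 1 }
  indat { print; if ($0 == "\\.") indat = 0 }
' alldbs.sql > mytable.sql
```

The result is a file that can be fed back through psql after dropping the
damaged table.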
I neglected to dump a single table before adding additional rows to it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days ago.
The file is 386M in size and emacs is taking a very long time to move
On Tue, 22 Jan 2013, Rich Shepard wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
My solution: view the file in the pager I use (less), then copy relevant
lines to another
On 01/22/2013 10:07 AM, Rich Shepard wrote:
On Tue, 22 Jan 2013, Rich Shepard wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
My solution: view the file in the pager I use
On Tue, 22 Jan 2013, Joshua D. Drake wrote:
Rich, the main problem is using pg_dumpall. Unfortunately pg_dumpall has
not kept up with all the other advances Postgres has had in the last
decade. To set up dump-based backups properly I suggest reviewing:
Rich Shepard wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
If you have a server with enough free space, you could restore
the whole cluster and then selectively dump what you
On 01/22/2013 09:57 AM, Rich Shepard wrote:
I neglected to dump a single table before adding additional rows to
it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days
ago.
The file is 386M in size
wd wrote:
the time is between backup start and stop.
That is the problem -- until the point where pg_stop_backup() was
run, PostgreSQL can't be sure of having a consistent database. It is
waiting for enough WAL to get it there. My practice is always to
keep the last two base backups and all WAL
Kevin Grittner kgri...@mail.com writes:
That is the problem -- until the point where pg_stop_backup() was
run, PostgreSQL can't be sure of having a consistent database. It is
waiting for enough WAL to get it there. My practice is always to
keep the last two base backups and all WAL from the
On Tue, Nov 27, 2012 at 6:59 AM, Kevin Grittner kgri...@mail.com wrote:
wd wrote:
the time is between backup start and stop.
That is the problem -- until the point where pg_stop_backup() was
run, PostgreSQL can't be sure of having a consistent database. It is
waiting for enough WAL to get
On Tue, Nov 27, 2012 at 6:59 AM, Kevin Grittner kgri...@mail.com wrote:
wd wrote:
the time is between backup start and stop.
That is the problem -- until the point where pg_stop_backup() was
run, PostgreSQL can't be sure of having a consistent database.
In 9.2, it seems to be willing to give
wd wrote:
Logs are something like this:
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 9 0]LOG: recovery
has paused
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 10 0]HINT: Execute
pg_xlog_replay_resume() to continue.
Well, try
SELECT pg_xlog_replay_resume();
Yours,
Laurenz Albe
I can't connect to postgres at that time.
On Mon, Nov 26, 2012 at 4:33 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
wd wrote:
Logs are something like this:
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 9 0]LOG: recovery
has paused
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247
wd wrote:
Logs are something like this:
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 9 0]LOG: recovery
has paused
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 10 0]HINT:
Execute pg_xlog_replay_resume() to continue.
Well, try
SELECT pg_xlog_replay_resume();
I can't connect
On Sat, Nov 24, 2012 at 3:44 PM, wd w...@wdicc.com wrote:
What entries are you getting in the log file?
Logs are something like this:
[2012-11-24 21:51:33.374 CST 583 50b0d0e5.247 1 0]LOG: database system
was shut down in recovery at 2012-11-24 21:51:32 CST
[2012-11-24
Jeff Janes wrote:
FATAL: requested recovery stop point is before consistent recovery point
I don't understand why you are not getting this message.
Is it before the point where pg_stop_backup() was run?
-Kevin
On Mon, Nov 26, 2012 at 12:23 PM, Kevin Grittner kgri...@mail.com wrote:
Jeff Janes wrote:
FATAL: requested recovery stop point is before consistent recovery point
I don't understand why you are not getting this message.
Is it before the point where pg_stop_backup() was run?
It turns out
On Mon, Nov 26, 2012 at 11:32 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
wd wrote:
Logs are something like this:
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 9 0]LOG: recovery
has paused
[2012-11-24 21:51:33.591 CST 583 50b0d0e5.247 10 0]HINT:
Execute
On Tue, Nov 27, 2012 at 8:27 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Mon, Nov 26, 2012 at 12:23 PM, Kevin Grittner kgri...@mail.com wrote:
Jeff Janes wrote:
FATAL: requested recovery stop point is before consistent recovery point
I don't understand why you are not getting this
Yes, you are right: after setting those two commands, the recovery will stop at
that time.
But there is another question: how can this recovered Postgres be made
readable and writable? According to the manual, Postgres should rename
recovery.conf to recovery.done, but it didn't.
I've tried pg_ctl
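For what it's worth, a hedged sketch of how a paused recovery is normally
finished on this version: once the server has reached consistency and
accepts read-only connections, resuming replay past the target ends
recovery, and the server then renames recovery.conf to recovery.done and
opens read/write:

```shell
psql -d postgres -c 'SELECT pg_xlog_replay_resume();'
```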
On Sat, Nov 24, 2012 at 6:00 AM, wd w...@wdicc.com wrote:
Yes, you are right: after setting those two commands, the recovery will stop at
that time.
But there is another question: how can this recovered Postgres be made
readable and writable? According to the manual, Postgres should rename
On Sun, Nov 25, 2012 at 4:25 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Nov 24, 2012 at 6:00 AM, wd w...@wdicc.com wrote:
Yes, you are right: after setting those two commands, the recovery will stop at
that time.
But there is another question: how can this recovered Postgres
On Fri, Nov 23, 2012 at 8:59 AM, wd w...@wdicc.com wrote:
Thanks for your reply; the logs are something like below. Postgres will
restore every WAL log I put in the xlog directory, and then continues
waiting for the next WAL log. The postgres version is 9.1.6.
[2012-11-22 18:49:24.175 CST
wd wrote:
I've tried to restore Postgres to a specific time but failed.
The recovery.conf is as below:
restore_command='cp /t/xlog/%f %p'
recovery_target_time='2012-11-22 5:01:09 CST'
pause_at_recovery_target=true
recovery_target_inclusive=false
The basebackup was made at 2012-11-22 3:10 CST,
On Thu, Nov 22, 2012 at 7:29 PM, wd w...@wdicc.com wrote:
Thanks for your reply; the logs are something like below. Postgres will
restore every WAL log I put in the xlog directory, and then continues waiting
for the next WAL log. The postgres version is 9.1.6.
[2012-11-22 18:49:24.175 CST
hi,
I've tried to restore Postgres to a specific time but failed.
The recovery.conf is as below:
restore_command='cp /t/xlog/%f %p'
recovery_target_time='2012-11-22 5:01:09 CST'
pause_at_recovery_target=true
recovery_target_inclusive=false
The basebackup was made at 2012-11-22 3:10 CST; I've copied
wd wrote:
I've tried to restore Postgres to a specific time but failed.
The recovery.conf is as below:
restore_command='cp /t/xlog/%f %p'
recovery_target_time='2012-11-22 5:01:09 CST'
pause_at_recovery_target=true
recovery_target_inclusive=false
The basebackup was made at 2012-11-22 3:10
Thanks for your reply; the logs are something like below. Postgres will
restore every WAL log I put in the xlog directory, and then continues
waiting for the next WAL log. The postgres version is 9.1.6.
[2012-11-22 18:49:24.175 CST 25744 50ae0334.6490 1 0]LOG: database
system was shut down in
Hi folks,
My server has a daily routine to import a dump file; however, it's taking a
long time to finish.
The original DB is around 200 MB and takes 3-4 minutes to export (there
are many blob fields), yet it takes 4 hours to import using pg_restore.
What can I do to tune this database to
Hi,
On 14 November 2011 11:09, Alexander Burbello burbe...@yahoo.com.br wrote:
What can I do to tune this database to speed up this restore??
My current db parameters are:
shared_buffers = 256MB
maintenance_work_mem = 32MB
You should increase maintenance_work_mem as much as you can.
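A hedged sketch of the usual knobs (values illustrative, not tuned for any
particular machine). Since the dump is in custom format, pg_restore can
also build objects in parallel:

```shell
# In postgresql.conf for the duration of the restore (illustrative values):
#   maintenance_work_mem = 512MB   # faster index builds
#   checkpoint_segments  = 32      # fewer checkpoints during the load
#   fsync = off                    # only if you can afford to redo the restore

# Parallel restore of a custom-format dump (available since 8.4).
pg_restore --jobs=4 --dbname=mydb mydb.dump
```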
On 11/13/2011 06:09 PM, Alexander Burbello wrote:
Hi folks,
My server has a daily routine to import a dump file; however, it's taking a long
time to finish.
The original DB is around 200 MB and takes 3-4 minutes to export (there are
many blob fields), yet it takes 4 hours to import using
On 01/03/2011 06:37, Malm Paul wrote:
Hi, I've used PgAdmin III to store a server backup, but I'm not able to
restore it.
Please, could anyone tell me how to do it? I'm using version 1.10.
Hi there,
Did you create a text or binary backup?
If binary, you either (i) use pg_restore on the
Hi, I've used PgAdmin III to store a server backup, but I'm not able to restore
it.
Please, could anyone tell me how to do it? I'm using version 1.10.
/Paul
On Mar 1, 2011, at 12:07 PM, Malm Paul wrote:
Hi, I've used PgAdmin III to store a server backup, but I'm not able to
restore it.
Please, could anyone tell me how to do it? I'm using version 1.10.
/Paul
The following link would help for restoring the backup:
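A short sketch of the two usual cases (file names hypothetical); which one
applies depends on the format pgAdmin wrote:

```shell
# Custom-format (binary) backup, as pgAdmin produces by default:
pg_restore -U postgres -d targetdb backup_file.backup

# Plain-SQL backup:
psql -U postgres -d targetdb -f backup_file.sql
```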
On Thursday 30 December 2010 at 12:05 -0500, Andrew Sullivan wrote:
[about Abiword]
It's intended as a word processor rather than a text
editor, isn't it?
It works with text files too. It's not a problem.
--
Vincent Veyron
http://marica.fr/
Software package for managing litigation files and
On Wednesday 29 December 2010 at 11:09 -0800, Tim Bruce - Postgres
wrote:
On Wed, December 29, 2010 10:59, John R Pierce wrote:
I'd also like to throw in Context for Windows as an Editor. It's also
free and has syntax highlighting for almost everything imaginable (on
Windows and *ix).
I'm
On Thu, Dec 30, 2010 at 06:02:54PM +0100, Vincent Veyron wrote:
I'm partial to Emacs, but I'm surprised nobody mentioned Abiword:
http://www.abisource.com/
I think Abiword would be a very bad editor for any kind of database
work, no? It's intended as a word processor rather than a text
On 2010-12-29, Bob Pawley rjpaw...@shaw.ca wrote:
Yes I was just looking at it.
It seems that it was dumped in that form.
Any thoughts on how that could happen? Not that it will help in this
instance.
Could be an EOL problem: LF vs CRLF.
But I expect that would be merely cosmetic.
On 29 Dec 2010, at 4:29, Adrian Klaver wrote:
What program are you using to look at the plain text file?
Notepad
Bob
Open the file in Wordpad and see if it looks better.
It looks the same.
Bob
Well there goes that theory. Notepad is almost useless as a text editor and
is
On 29 Dec 2010, at 4:40, Bob Pawley wrote:
It seems that this has affected just the triggers - although that is quite
massive I will just plug away at it until it's done
(Gosh, those lines were hard to find!)
How did you create those functions? With notepad, or from within pgadmin? If
you
On 29 Dec 2010, at 7:54, Alan Hodgson wrote:
I'll look at that - I'm also looking at something called Vim
http://www.vim.org/download.php
vim is an excellent open source text editor. Which may fix your problem if
it's related to line endings.
Learning Vim is probably time well-spent, but
On Wednesday 29. December 2010 13.18.40 Alban Hertroys wrote:
Learning Vim is probably time well-spent, but until you do it's
probably not that good a tool for fixing your problem.
Although Vim is indeed a very powerful editor, it's not particularly
easy to use. Unlike your usual editors
On Wednesday 29 December 2010 3:58:35 am Alban Hertroys wrote:
On 29 Dec 2010, at 4:29, Adrian Klaver wrote:
What program are you using to look at the plain text file?
Notepad
Bob
Open the file in Wordpad and see if it looks better.
It looks the same.
Bob
Well there
On Tuesday 28 December 2010 8:45:14 pm Bob Pawley wrote:
-Original Message-
From: Alan Hodgson
Sent: Tuesday, December 28, 2010 8:12 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Restore problem
On December 28, 2010, Adrian Klaver adrian.kla...@gmail.com wrote:
On 12
On Wednesday 29 December 2010 4:34:39 am Leif Biberg Kristensen wrote:
On Wednesday 29. December 2010 13.18.40 Alban Hertroys wrote:
Learning Vim is probably time well-spent, but until you do it's
probably not that good a tool for fixing your problem.
Although Vim is indeed a very powerful
-Original Message-
From: Alban Hertroys
Sent: Wednesday, December 29, 2010 4:03 AM
To: Bob Pawley
Cc: Adrian Klaver ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Restore problem
On 29 Dec 2010, at 4:40, Bob Pawley wrote:
It seems that this has affected just the triggers
-Original Message-
From: Adrian Klaver
Sent: Wednesday, December 29, 2010 8:08 AM
To: pgsql-general@postgresql.org
Cc: Leif Biberg Kristensen
Subject: Re: [GENERAL] Restore problem
On Wednesday 29 December 2010 4:34:39 am Leif Biberg Kristensen wrote:
On Wednesday 29. December 2010
On 12/29/10 4:34 AM, Leif Biberg Kristensen wrote:
Back when I used Windows, my favorite editor was EditPlus
(http://www.editplus.com/). It isn't free, but well worth the 35 bucks.
other good choices are Notepad++ (free) and my personal favorite,
UltraEdit ($$).
UEdit has some nice stuff
On Wednesday 29 December 2010 10:52:50 am Bob Pawley wrote:
-Original Message-
From: Adrian Klaver
Sent: Wednesday, December 29, 2010 8:08 AM
To: pgsql-general@postgresql.org
Cc: Leif Biberg Kristensen
Subject: Re: [GENERAL] Restore problem
On Wednesday 29 December 2010 4:34:39 am
On Wed, December 29, 2010 10:59, John R Pierce wrote:
On 12/29/10 4:34 AM, Leif Biberg Kristensen wrote:
Back when I used Windows, my favorite editor was EditPlus
(http://www.editplus.com/). It isn't free, but well worth the 35 bucks.
other good choices are Notepad++ (free) and my personal
Hi
I have restored a database using psql on Windows, version 8.4.
During the restore the trigger code became jumbled.
I now have a great number of lines that have moved so that they are now
included in lines that have been commented out – not to mention that the code
is hard to read.
Is
On Tue, Dec 28, 2010 at 6:06 PM, Bob Pawley rjpaw...@shaw.ca wrote:
Hi
I have restored a database using psql on Windows, version 8.4.
During the restore the trigger code became jumbled.
I now have a great number of lines that have moved so that they are now
included in lines that have
On Tuesday 28 December 2010 3:06:40 pm Bob Pawley wrote:
Hi
I have restored a database using psql on Windows, version 8.4.
During the restore the trigger code became jumbled.
I now have a great number of lines that have moved so that they are now
included in lines that have been commented
-Original Message-
From: Adrian Klaver
Sent: Tuesday, December 28, 2010 4:21 PM
To: pgsql-general@postgresql.org
Cc: Bob Pawley
Subject: Re: [GENERAL] Restore problem
On Tuesday 28 December 2010 3:06:40 pm Bob Pawley wrote:
Hi
I have restored a database using psql on Windows, version
On Tuesday 28 December 2010 5:58:51 pm Bob Pawley wrote:
-Original Message-
From: Adrian Klaver
Sent: Tuesday, December 28, 2010 4:21 PM
To: pgsql-general@postgresql.org
Cc: Bob Pawley
Subject: Re: [GENERAL] Restore problem
On Tuesday 28 December 2010 3:06:40 pm Bob Pawley wrote
@postgresql.org
Subject: Re: [GENERAL] Restore problem
On Tuesday 28 December 2010 5:58:51 pm Bob Pawley wrote:
-Original Message-
From: Adrian Klaver
Sent: Tuesday, December 28, 2010 4:21 PM
To: pgsql-general@postgresql.org
Cc: Bob Pawley
Subject: Re: [GENERAL] Restore problem
On Tuesday 28 December