Armin Diehl writes:
> Of course that works, but what if we need a case-insensitive primary key? The
Then it's not really a primary key, is it?
> example is very slow if you have a lot of records. Is that possible?
You probably want to convert or check your data on input, depending on
your needs.
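One common way to get case-insensitive uniqueness without a full-table scan is a unique index on lower() of the column. A minimal sketch, using hypothetical table and column names (not from the original post):

```sql
-- Hypothetical table; we want "code" to be unique case-insensitively.
CREATE TABLE items (
    code        text NOT NULL,
    description text
);

-- A unique index on lower(code) rejects 'ABC' and 'abc' as duplicates,
-- and lookups written as WHERE lower(code) = lower('abc') can use it.
CREATE UNIQUE INDEX items_code_lower_idx ON items (lower(code));
```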
I have an application that did many inserts also. Only I was running
on a much lower-powered machine and could get nowhere near your 100
per minute rate. What I did was re-write the application to batch the
inserts and then use COPY. COPY, it turns out, is _much_ faster than
INSERT, maybe by an order of magnitude.
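The batching described above might look like this, with a hypothetical table and data file (names and paths are illustrative only):

```sql
-- Hypothetical table receiving the batched rows.
CREATE TABLE log (id integer, msg text);

-- Instead of issuing one INSERT per row:
--   INSERT INTO log VALUES (1, 'first');
--   INSERT INTO log VALUES (2, 'second');
-- write the rows to a file and load them in a single COPY:
COPY log FROM '/tmp/log.dat';

-- /tmp/log.dat holds one tab-separated row per line, e.g.:
-- 1	first
-- 2	second
```

COPY avoids the per-statement parse/plan overhead of individual INSERTs, which is where the speedup comes from.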
Hi!
On Mon, 13 Mar 2000 17:30:10 +0800
Brian Baquiran <[EMAIL PROTECTED]> wrote:
[snipped]
> I'm running postmaster with the options -B600 -N300. I'm using the RedHat RPMs
> from a pretty-much-stock RH6.1 installation. Can I tune postgres to load more
> data into memory?
You should increase the -B (shared buffers) setting.
> I was using pg_dumpall nightly (with the help of cron) to backup and
> suddenly my backup fails. The following was what I get from my backup file:
>
> \connect template1
> select datdba into table tmp_pg_shadow from pg_database where datname
> = 'template1';
> delete from pg_shadow where
Bruce Momjian wrote:
> > What's the advantage of using pg_dumpall over tar/gzip for backup?
>
> pg_dumpall grabs a consistent snapshot of the data. tar/gzip is just
> backing up the files, so you can get some data in some table that is
> committed, but miss data in another table that is part of the same transaction.
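A nightly cron'd pg_dumpall of the kind mentioned earlier might be set up like this; the schedule and paths are illustrative, not taken from the original posts:

```shell
#!/bin/sh
# Nightly backup sketch: dump every database to a dated file, then compress.
# Assumes it runs as the postgres user with pg_dumpall on PATH; the backup
# directory is an example location.
BACKUP_DIR=/var/backups/pgsql
STAMP=$(date +%Y%m%d)
pg_dumpall > "$BACKUP_DIR/dumpall-$STAMP.sql"
gzip "$BACKUP_DIR/dumpall-$STAMP.sql"
```

Because pg_dumpall produces SQL in a single consistent snapshot, restoring the dump recreates the databases in a transactionally coherent state, which a file-level tar/gzip of a running server cannot guarantee.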