Restoring the DB isn't _completely_ out of the
question, but I'd like to avoid it if at all possible.
Thanks,
David.
David Brain
[EMAIL PROTECTED]
919.297.1078
…may be
missing; this table is created in a schema that is often dropped/re-
created.
David.
The select results in one line (…
On Sep 24, 2007, at 10:58 AM, Tom Lane wrote:
David Brain <[EMAIL PROTECTED]> writes:
I am getting the error mentioned in the subject ('pg_dump: schema with OID … does not exist')
…and pg_dump is now working just fine (which it wasn't previously).
Again, thanks for the help; it is this kind of access to assistance
that makes PG a much easier 'sell'.
David.
David Brain
[EMAIL PROTECTED]
919.297.1078
Hi,
I have a situation where trying to drop a table results in:
#drop table cdrimporterror_old;
NOTICE: default for table cdrimporterror column cdrimporterrorid
depends on sequence cdrimporterror_cdrimporterrorid_seq
ERROR: cannot drop table cdrimporterror_old because other objects depend on it
Hi,
On Thu, Sep 10, 2009 at 2:55 PM, Tom Lane wrote:
>
> The "ownership" link is still there, evidently, and should be switched
> to the new table. Read up on ALTER SEQUENCE OWNED BY.
>
> regards, tom lane
>
Thank you - that was the issue; once the ownership was switched the old table dropped cleanly.
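For reference, a minimal sketch of that fix, using the names from the
error message above (assuming the sequence was still owned by the old
table):
ALTER SEQUENCE cdrimporterror_cdrimporterrorid_seq
  OWNED BY cdrimporterror.cdrimporterrorid;
-- dropping the old table no longer takes the sequence
-- (and the new table's column default) down with it
DROP TABLE cdrimporterror_old;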
Hi,
Is there a way of using EXECUTE in trigger functions to do
something like:
CREATE OR REPLACE FUNCTION insert_trigger()
RETURNS trigger AS
$BODY$
BEGIN
-- from lpad() on this is an assumed completion (the archived message
-- is cut off): zero-pad the month, pass the row via USING (8.4+)
EXECUTE 'INSERT INTO public_partitions.table_'
  || date_part('year', NEW.eventdate)::varchar
  || lpad(date_part('month', NEW.eventdate)::varchar, 2, '0')
  || ' SELECT ($1).*' USING NEW;
RETURN NULL; -- suppress the insert into the parent table
END;
$BODY$
LANGUAGE plpgsql;
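And a hedged sketch of wiring such a function up (the parent table name
public.events is made up for illustration):
CREATE TRIGGER route_partition_insert
  BEFORE INSERT ON public.events
  FOR EACH ROW EXECUTE PROCEDURE insert_trigger();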
I had an interesting issue the other day while trying to generate
delimited files from a query in psql, using:
\f '|'
\t
\a
\o out.file
select * from really_big_table order by createddate;
The quantity of data involved here is fairly large (maybe 2-4GB).
Watching the memory usage, the postmaster…
…but I then have the
same issue, just with different constraint/index names) as the tables
involved are pretty huge and a dump/restore isn't really an option.
Thanks,
David.
--
David Brain - bandwidth.com
[EMAIL PROTECTED]
919.297.1078
…needing a restart (and even a restart could be scheduled if
necessary).
Let me know if I can provide any more info.
David.
Tom Lane wrote:
David Brain <[EMAIL PROTECTED]> writes:
This could well be a recurrence of this issue:
http://archives.postgresql.org/pgsql-general/2007-01/msg0180
There is also an add-on in contrib (pg_buffercache) that can be used
to get an indication of the number of buffers in use; this can help
in finding a 'good' shared memory size for your configuration.
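For example, a quick sketch (assuming pg_buffercache is installed from
contrib):
-- count shared buffers currently holding a page
SELECT count(*) AS buffers_in_use
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;
Comparing buffers_in_use against shared_buffers over time gives a rough
feel for whether the cache is sized sensibly.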
David.
Hi,
Recently tried an upgrade from 7.3.2 to 8.1.2. The actual upgrade went
pretty well. However, my application is now seeing INSERT times more
than 10x slower than under 7.3.2. As far as possible I mirrored the
settings in postgresql.conf (obviously I couldn't just drop in the old
conf file).
I'm going to investigate what effect upgrading npgsql has too, as it
appears there is a new version available. I will report back with what
I find.
I just enabled autovacuum - with no apparent speed increase.
Odd.
David.
--
David Brain - bandwidth.com
[EMAIL PROTECTED]
919.297.1078
Upgraded to npgsql 1.0beta2 - from http://pgfoundry.org/projects/npgsql
- and things have improved significantly.
Thanks for the help,
David.
--
David Brain - bandwidth.com
[EMAIL PROTECTED]
919.297.1078
I have a cron job that vacuums one of my dbs daily (Postgres 7.3.11),
using 'vacuumdb -a -f -z -q -U postgres'. Currently I get an email
containing the following error messages:
NOTICE: number of page slots needed (38320) exceeds max_fsm_pages (2…)
HINT: Consider increasing the configuration parameter "max_fsm_pages" to a value over 38320.
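The corresponding postgresql.conf change would look something like the
following (the value 50000 is just an assumption - it merely has to
exceed the 38320 page slots reported - and the server needs a restart,
since FSM shared memory is allocated at startup):
# postgresql.conf
max_fsm_pages = 50000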