re important to find out why this happens only in my case, or what kind
of other information is needed to find out why this happens?
Aldor
If you use PHP, you can use the function pg_escape_bytea() when
inserting binary data. When you want to retrieve the data from the db,
use pg_unescape_bytea() to unescape it again.
See the PHP documentation on these functions for more information.
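For illustration, a minimal SQL-side sketch (the table name here is made up): pg_escape_bytea() turns raw bytes into an octal-escaped string that can be embedded directly in a query:
CREATE TABLE files (id serial PRIMARY KEY, data bytea);
-- an escaped literal, as pg_escape_bytea() would produce it
-- for the three bytes 0x00 0x27 0xff:
INSERT INTO files (data) VALUES ('\\000\\047\\377');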
Michael Fuhr wrote:
On Sat, Aug 20, 2005 at 01:17:55PM -0400, Postgres
1st
---
If you use PHP, you can use the function "pg_escape_bytea()" when
inserting binary data. When you want to retrieve the data from the db,
use "pg_unescape_bytea()" to unescape it again.
See the PHP documentation on these functions for more information.
2nd
---
If you want to insert big data volumes, try either
select * from pg_class;
or
select relname, relpages from pg_class where relname = '[index-name]';
The relpages value tells you how much space the index uses; one page
is 8 kB.
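For example, to turn relpages into bytes (replace [index-name] with the name of your index):
select relname, relpages, relpages * 8192 as size_bytes
from pg_class
where relname = '[index-name]';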
[...]Every table and index is stored as an array of pages of a fixed
size (usually 8Kb, although a different p
before actually creating
the index, having the information of the table on which I will build
the index), is this possible?
Thanks in advance to everyone who can help me.
On 8/23/05, Aldor <[EMAIL PROTECTED]> wrote:
select * from pg_class;
or
select relname, relpages from pg_class where r
I don't understand the error message in detail, but in my opinion
something is really broken - so I would back up your databases,
re-initialize PGDATA with initdb, and then restore them.
Maybe somebody else understands/knows what the error means.
Erol Oz wrote:
Hi,
I get,
ERROR:
> DETAIL: 414193 dead row versions cannot be removed yet.
Increase max_fsm_pages in postgresql.conf.
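A minimal sketch of that change in postgresql.conf (the value is only an illustration; size it from the page counts that VACUUM VERBOSE reports):
max_fsm_pages = 500000   # must be large enough to cover all pages with free space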
Ivan wrote:
Hello,
OS: Windows 2003 Server SP1
DB: Postgresql-8.0.3
I have a database for caching html pages. The main table consists of
fields for the URL (varchar(8192)), the page body (text) and a f
Hi Chris,
> If you're running VACUUM often enough, then there's nothing wrong, and
> nothing to be done. You're simply observing how PostgreSQL handles
> large tables.
Wrong. I have a big table - running VACUUM on it the first time takes just
as long as running it again right after the first VACUUM has finished. There are o
As far as I know, EXPLAIN ANALYZE actually runs the query, so I think you
are just running the query twice. The first run takes a long time because
the data has to be read from disk; after that the data sits in the shared
buffers and the OS cache, so your second run is much faster.
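For example (the table name is only an illustration), running the same statement twice makes the cache effect visible in the reported runtime:
explain analyze select count(*) from session;  -- first run: reads from disk
explain analyze select count(*) from session;  -- second run: data is cached, much faster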
I can r
Can you post the code you created the INDEX with - I think there is
something which could explain the duration.
Marc G. Fournier wrote:
On Thu, 25 Aug 2005, Aldor wrote:
As I know EXPLAIN ANALYZE runs the query. I think you are just running
the query two times. The first time you run the
Aug 25 14:53:32 forgehouse-s1 postgres[23721]: [2-1] LOG: duration:
567.559 ms statement: UPDATE session SET hit_time=now() WHERE
md5='7537b74eab488de54d6e0167d1919207';
Aug 25 14:53:32 forgehouse-s1 postgres[23722]: [2-1] LOG: duration:
565.966 ms statement: UPDATE session SET hit_time=now(
nt: UPDATE session SET hit_time=now() WHERE
> md5='d84613009a95296fb511c2cb051ad618';
20 seconds - 13 seconds (execution time) = 7 seconds
So it also happens when they are not close to each other.
The hint about log_min_duration_statement is a good idea.
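A minimal sketch of that setting in postgresql.conf (the threshold is just an example):
log_min_duration_statement = 500   # log every statement that runs for 500 ms or longer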
Michael Fuhr wrote:
On Fri, Aug
stats_start_collector = true
stats_command_string = true
stats_block_level = true
stats_row_level = true
Problem
---
In my opinion the VACUUM of the third table takes too long. The
table is less complex than the other ones - so assuming the worst
case, that every row takes as long as in the most complex table (tbl1),
I end up with this calculation:
148 seconds for 400,000 rows
= 2,702 rows per second
100,000,000 rows = 37,009 seconds (= 616 minutes = ~10 hours)
But the VACUUM of tbl3 needs more than 24 hours even
without changes.
Hardware + Software configuration
-
SCSI RAID5 with usable space of 120GB
2 x Intel(R) Xeon(TM) CPU 3.20GHz
RAM: 4148840 kB (4GB)
Kernel: 2.6.12.2
PGDATA is running on a separate partition
Nothing else is running on this server.
Question
The duration of the VACUUM of tbl1 and tbl2 is okay - so how can I speed
up the VACUUM of tbl3?
--- Regards, Aldor
her information
--
Distribution: Debian
Encoding of Database: LATIN10
Locale: C
Regards, Aldor
Tom Lane wrote:
> Aldor <[EMAIL PROTECTED]> writes:
>
>>bgwriter_percent = 0
>>bgwriter_maxpages = 0
>
>
> Why did you do that? There's certainl
PostgreSQL 8.0.3: Any idea what this is?
---
Warning: pg_query(): Query failed: ERROR: deadlock detected
DETAIL: Process 8835 waits for ShareLock on transaction 193588236;
blocked by process 8834.
Process 8834 waits for ShareLock on transaction 193588228; blocked by
process 8835
---
I have neve
Thanks for the explanation... I found the mistake in a new PL/pgSQL
function I wrote before this error occurred. ;-)
Oliver Elphick wrote:
> On Mon, 2005-08-29 at 16:50 +0100, Aldor wrote:
>
>>PostgreSQL 8.0.3: Any idea what this is?
>>
>>---
>>Warning: pg_query():
>Are there any non-statement-level solutions? Something that appends to
the WAL
>> on the slave server?
I have also been looking for this kind of solution for a while - but in
the meantime you could take a look at Slony - it can do a great many
things. Don't give up at the beginning, after you've u
Why don't you do:
COPY carax_exec_bck TO '/home/postgres/FILE';
?
Then you could also use something like
COPY carax_exec_bck TO '/home/postgres/FILE' WITH BINARY;
etc.
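To load the file back in later, the same COPY command works in the other direction (a sketch, assuming the file was written WITH BINARY):
COPY carax_exec_bck FROM '/home/postgres/FILE' WITH BINARY;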
Cédric Buschini wrote:
> Hi
>
> Here I am again with my 'stupid' question
>
> here is a script :
>
>
ue
in your case.
Regards,
Aldor
Bui Phu Ngot wrote:
> Dear all,
>
> I am a newbie to postgresql. After installing Postgresql 8.0.3, I run
> postmaster and get an error (log). Please help me! Thanks in advance.
>
> Bui Phu Ngot
>
>
>
> /usr/local/pgsql/bi
gain by COPY, I want it to perform only the buffered transactions
that were made on the master database in the meantime.
What do I have to do to get this behavior?
Thanks,
Aldor
ss - even if Slony should not care about
this other table?!
Thanks,
Aldor
Hi,
I'm curious how other people do it:
What is faster?
1. CREATE TABLE
2. restore data
3. CREATE INDEX
or
1. CREATE TABLE
2. CREATE INDEX
3. restore data
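For what it's worth, a sketch of variant 1 (table, column and path are made up); loading the data first and building the index afterwards is usually the faster order, because the index is then built in a single pass:
CREATE TABLE items (id integer, name text);
COPY items FROM '/tmp/items.copy';          -- restore the data first
CREATE INDEX items_id_idx ON items (id);    -- then build the index in one pass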
Thanks,
Aldor
Hi,
I'm just curious about a VACUUM plan of this table:
1st) When to do VACUUM?
2nd) How often to do VACUUM?
3rd) Which postgresql.conf parameters should be set for vacuum?
4th) When to do a VACUUM FULL?
5th) Can autovacuum of 8.1 be used in this case?
I'm a little bit afraid about the size of the t
When I set up a database with tons of data I forgot to check in the
.conf file that autovacuum was on.
Now it seems that the stats collector has prepared some tasks to be
done by autovacuum - from my experience I can tell that this will
take weeks or even months.
I disabled autovacuum in the con
Get the oid of your old tablespace:
select oid
from pg_tablespace
where spcname = '[OLDTABLESPACE]';
Get your tables from the old tablespace:
select relname
from pg_class
where reltablespace = [TABLESPACEOID]
-- optionally restrict this to ordinary tables:
and relkind = 'r';
With "ALTER TABLE [TABLENAME] SET TABLESPACE [NEWTABLESPACE];" you can then move each of them.
ad of your databases.
Switching to the slave and back to the master once it is ready again
(including automatic replication back to the master) works great in Slony.
If you have any further questions or need any help on setting up the
slony scripts just contact me directly.
Regards,
Aldor
Walfred Tedes
!
Well, one more question:
Is it possible to link vacrelid to a database in pg_database
instead of to a table?
Aldor wrote:
> When I set up a database with tons of data I forgot to check in the
> .conf file that autovacuum was on.
>
> Now it seems that the stat collection has prepa
Hint: When the autovacuum configuration is turned down to minimal
settings, they are recognized by the autovacuum process started by the
postmaster. The setting "autovacuum = off" - which I also tried to
put into other areas of the conf file (in case it was maybe
not being parsed correctly) - is not recogniz
Well, the database also shows a hint when fewer than 9 million
transactions are left - and this hint tells us to vacuum
the whole database.
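For example, you can check how close each database is to transaction wraparound with age() on pg_database (the exact warning thresholds are version-specific):
select datname, age(datfrozenxid) from pg_database
order by age(datfrozenxid) desc;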
If vacuuming the whole database would take days (sometimes it is
quite unclear how many), is it possible to find out manually which
tables originall