Ron Mayer <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> There's no lock, unless you are using VACUUM FULL which you shouldn't.
> Or, I believe, if he has any GIST indexes (such as tsearch or
> postgis ones). At least it seems normal vacuum locks GIST indexes
> for quite some time here.
Good point.
Tom Lane wrote:
David B <[EMAIL PROTECTED]> writes:
A 15-minute lock is a long time.
There's no lock, unless you are using VACUUM FULL which you shouldn't.
Or, I believe, if he has any GIST indexes (such as tsearch or
postgis ones). At least it seems normal vacuum locks GIST indexes
for quite some time here.
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Steve Crawford)
belched out:
> On Thursday 17 March 2005 3:51 pm, Tom Lane wrote:
>> Steve Crawford <[EMAIL PROTECTED]> writes:
>> > My autovacuum config is running and I do see regular periodic
>> > vacuums of these pg_ tables but still they grow.
Steve Crawford <[EMAIL PROTECTED]> writes:
> On Thursday 17 March 2005 3:51 pm, Tom Lane wrote:
>> Do you have the FSM settings set large enough to account for all
>> the free space?
> max_fsm_pages = 2
> max_fsm_relations = 1000
That doesn't sound like nearly enough pages for a 2G database.
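For reference, a rough way to size the FSM on 7.4/8.0 is to run a
database-wide VACUUM VERBOSE as superuser and compare the free-space
summary it prints at the end against the configured limits; a minimal
sketch, with purely illustrative values:

  -- the INFO lines at the very end of the output summarize free space
  -- map usage against the currently allocated FSM size
  VACUUM VERBOSE;
  -- then raise the limits in postgresql.conf (a restart is required), e.g.:
  --   max_fsm_pages = 200000
  --   max_fsm_relations = 1000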
On Thursday 17 March 2005 3:51 pm, Tom Lane wrote:
> Steve Crawford <[EMAIL PROTECTED]> writes:
> > My autovacuum config is running and I do see regular periodic
> > vacuums of these pg_ tables but still they grow.
>
> Do you have the FSM settings set large enough to account for all
>> the free space?
Steve Crawford <[EMAIL PROTECTED]> writes:
> My autovacuum config is running and I do see regular periodic vacuums
> of these pg_ tables but still they grow.
Do you have the FSM settings set large enough to account for all the
free space?
Also you might want to check for newer versions of autovacuum.
> In my case (I have more than 500,000,000 rows) I had to
> 'select * into new_big_table from big_table'
> it was faster and didn't kill the server.
> As a bonus, you could 'CLUSTER' your big table if you add
> 'order by somekey';
>
> After that, don't forget to recreate the indices, and then you can
> drop the old table.
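A minimal sketch of that rebuild-and-swap approach (the index definition
is made up, and the table should be otherwise idle while this runs, since
the final rename swaps it out from under any other sessions):

  BEGIN;
  -- copy the rows in the desired physical order (a CLUSTER-like layout);
  -- note that SELECT INTO copies no indexes, constraints, or defaults
  SELECT * INTO new_big_table FROM big_table ORDER BY somekey;
  -- recreate whatever indexes the original had, for example:
  CREATE INDEX new_big_table_somekey_idx ON new_big_table (somekey);
  -- swap the rebuilt table into place and drop the bloated original
  ALTER TABLE big_table RENAME TO old_big_table;
  ALTER TABLE new_big_table RENAME TO big_table;
  COMMIT;
  DROP TABLE old_big_table;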
On Thursday 17 March 2005 3:15 pm, Steve Crawford wrote:
> I'm having trouble with physical growth of postgresql system
> tables
Additional info. The most recent autovacuum entries for the
pg_attribute table are:
[2005...] Performing: VACUUM ANALYZE "pg_catalog"."pg_attribute"
[2005...] t
I'm having trouble with physical growth of postgresql system tables.
Server is 7.4.6 and there are several databases in the cluster. The
autovacuum daemon has been running since the data was restored after
an upgrade a few months ago. Unfortunately my system tables are
taking an unreasonable amount of space.
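One way to see which system catalogs are actually growing is to watch
their page counts over time; a small sketch (relpages and reltuples are
only refreshed by VACUUM or ANALYZE, so the figures lag a little):

  -- largest pg_catalog relations by on-disk pages (8 kB each by default)
  SELECT relname, relpages, reltuples
    FROM pg_class
   WHERE relnamespace = (SELECT oid FROM pg_namespace
                          WHERE nspname = 'pg_catalog')
   ORDER BY relpages DESC
   LIMIT 10;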
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (David B) wrote:
> Environment. PG v8. (Opteron 2CPU. Raid 5 disks 1TB. 12GB Ram)
> Environment would be one master feeding 3 slaves. Similar configs.
> New transactions coming into master. Cust Service Reps using that box.
> Analysis being done on slave boxes.
"Mark Travis" <[EMAIL PROTECTED]> writes:
> Tom, I owe you bigtime. That was exactly the problem. I would remove selinux
> from my machine if I wasn't worried that it wasn't actually protecting me
> from the outside world. I had problems installing OpenGroupware as well with
> selinux, but I thought I had them resolved.
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> Scott Marlowe wrote:
>> Is there a reason you're doing a full vacuum?
> Because I've only been running pg_autovacuum for a month now, but I see
> that for the same table it is a disaster not to do a vacuum full once a day.
You need to find out why regular vacuum isn't keeping up for that table.
Tom, I owe you bigtime. That was exactly the problem. I would remove selinux
from my machine if I wasn't worried that it wasn't actually protecting me
from the outside world. I had problems installing OpenGroupware as well with
selinux, but I thought I had them resolved. I bet it got overwritten on
On Thu, Mar 17, 2005 at 05:16:34PM -0500, Mark Travis wrote:
> I've placed several "echo" statements into /etc/rc.d/init.d/postgresql to
> see what branches the scripts are executing and what the variables are.
Just for the record: it's much easier to debug shell scripts by doing
sh -x /etc/rc.d/
Also, just speculating: isn't it possible to create the new table
(select * into) in a different tablespace if there is no space on disk?
I didn't find this.
Oleg
On Fri, 18 Mar 2005, Oleg Bartunov wrote:
On Thu, 17 Mar 2005, Lee Wu wrote:
I wish.
I have a table of more than 60G with 2.04412e+08 rows.
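If there really is no room on the main disk, and the server is 8.0 or
later, the rebuilt copy can be pointed at another disk by creating the
target table first instead of using SELECT INTO directly; a sketch under
that assumption (the tablespace name and path are made up, and CREATE
TABLESPACE needs superuser rights):

  CREATE TABLESPACE spare_disk LOCATION '/mnt/spare/pgdata';
  -- LIKE copies the column definitions only, not indexes or constraints
  CREATE TABLE new_big_table (LIKE big_table) TABLESPACE spare_disk;
  INSERT INTO new_big_table SELECT * FROM big_table ORDER BY somekey;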
"Mark Travis" <[EMAIL PROTECTED]> writes:
> If I manually enter the command on the command line
> postgres -c /usr/bin/postmaster -p 5432 -D /var/lib/pgsql
> I get nothing, not even a warning that I shouldn't start postgres as
> root.
> If I just type "postgres" on the command line, nothing happens.
I've placed several "echo" statements into /etc/rc.d/init.d/postgresql to
see what branches the scripts are executing and what the variables are.
I've narrowed it down to the final call
$SU -l postgres -c "$PGENGINE/postmaster -p
(snipped the rest of the line from this post because it's the standard one)
On Thu, 17 Mar 2005, Lee Wu wrote:
I wish.
I have a table of more than 60G with 2.04412e+08 rows.
VACUUM FULL and REINDEX on it just kill me.
In my case (I have more than 500,000,000 rows) I had to
'select * into new_big_table from big_table'
it was faster and didn't kill the server.
As a bonus, you could 'CLUSTER' your big table if you add 'order by somekey';
Scott Marlowe wrote:
>
> Vacuum full doesn't use fsm, lazy vacuum does (i.e. plain vacuum).
Are you sure? Why, then, are the FSM settings displayed after a
vacuum full verbose?
> Is there a reason you're doing a full vacuum?
Because I've only been running pg_autovacuum for a month now, but I see
that for the same table it is a disaster not to do a vacuum full once a day.
Hi, is there some way to log performance information into the database
(into a table)? We want to tune the performance of our database, and
reading through the whole log file to find the top 10 SQL queries of the
day is not very convenient.
Thank you, Ales
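For what it's worth, a common starting point on 7.4/8.0 is to let the
server log only the statements that exceed a duration threshold, which
keeps the log small enough to skim or load into a table for a top-10
report; a sketch of the relevant postgresql.conf line (the threshold is
illustrative):

  log_min_duration_statement = 1000   # log any statement that runs longer
                                      # than 1000 ms, together with its duration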
I wish.
I have a table of more than 60G with 2.04412e+08 rows.
VACUUM FULL and REINDEX on it just kill me.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gaetano Mendola
Sent: Thursday, March 17, 2005 2:10 PM
To: pgsql-admin@postgresql.org
Subject:
On Thu, 2005-03-17 at 15:10, Gaetano Mendola wrote:
> Hi all,
> is there a way to vacuum full a table but work on only a part of the
> table at a time? I have a table with 6 million rows, and vacuuming it
> full will take my server out of commission for hours, so I'd like to
> vacuum that table multiple times in order to not block it for so long.
Hi all,
is there a way to vacuum full a table but work on only a part of the
table at a time? I have a table with 6 million rows, and vacuuming it
full will take my server out of commission for hours, so I'd like to
vacuum that table multiple times in order to not block it for so long.
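For reference, a plain (lazy) VACUUM runs alongside normal reads and
writes, so it avoids the long exclusive lock that makes VACUUM FULL so
painful here; a minimal sketch (the table name is made up, and the
cost-delay line is an 8.0-only, purely illustrative way of throttling the
vacuum's I/O):

  SET vacuum_cost_delay = 10;       -- 8.0+: sleep briefly whenever the vacuum
                                    -- cost limit is reached
  VACUUM VERBOSE ANALYZE mytable;   -- reclaims dead space into the FSM without
                                    -- taking an exclusive lock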
David B <[EMAIL PROTECTED]> writes:
> I'm looking at PG vs MySql for a high volume site.
> A few tens of millions of inserts per day.
> Also some deletes (1%, say 250,000 per day).
> Also updates (perhaps 10%, say 2.5 million per day).
> Lots of indexes on master table.
> When I test vacuum it seems very slow.
Environment. PG v8. (Opteron 2CPU. Raid 5 disks 1TB. 12GB Ram)
Environment would be one master feeding 3 slaves. Similar configs.
New transactions coming into master. Cust Service Reps using that box.
Analysis being done on slave boxes.
Hi Folks,
I'm looking at PG vs MySql for a high volume site.
On Mar 17, 2005, at 10:33 AM, Sabio - PSQL wrote:
How can I implement full-text search on a table with a varchar(300) field
and over 6 million records?
There is a full text search implementation called tsearch2:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
Here is an article to get you started:
On Thu, Mar 17, 2005 at 09:33:44AM -0600, Sabio - PSQL wrote:
> How can I implement full-text search on a table with a varchar(300) field
> and over 6 million records?
See contrib/tsearch2.
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/
How can I implement full-text search on a table with a varchar(300) field
and over 6 million records?
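A minimal sketch of the contrib/tsearch2 route the replies above point
to, once tsearch2.sql has been loaded into the database (the table and
column names are made up, and the module's default configuration has to
match the database locale):

  -- add a searchable tsvector column derived from the varchar(300) field
  ALTER TABLE docs ADD COLUMN fti tsvector;
  UPDATE docs SET fti = to_tsvector(description);
  -- a GiST index keeps @@ searches fast over ~6 million rows
  CREATE INDEX docs_fti_idx ON docs USING gist (fti);
  VACUUM ANALYZE docs;
  -- example search
  SELECT * FROM docs WHERE fti @@ to_tsquery('word1 & word2');
  -- (the module's tsearch2() trigger function can keep fti current on
  -- INSERT and UPDATE)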
On Tue, Mar 08, 2005 at 01:36:31PM +0200, Milen A. Radev wrote:
> I have installed a script that executes vacuumdb for all DBs in my
> cluster (run by cron every night):
Are you sure you're getting _every_ database? My bet is you're
missing
Martin Thoma <[EMAIL PROTECTED]> writes:
> I am trying to init the DB cluster, but without success. I am working on a RHEL 4
> box and I installed postgresql, postgresql-server and postgresql-jdbc.
> ...
> setting password...
> initdb: The password file wasn't generated. Please report this problem.
Do you
Hi list!!
I tested pgpool with pgpool.conf as follows:
listen_addresses = 'localhost'
port =
socket_dir = '/tmp'
backend_host_name = ''
backend_port = 5432
backend_socket_dir = '/tmp'
secondary_backend_host_name = ''
secondary_backend_port = 0
num_init_children = 15
Dear list
I am trying to init the DB cluster, but without success. I am working on a RHEL 4
box and I installed postgresql, postgresql-server and postgresql-jdbc.
For the installation I used up2date and everything went fine.
To init the DB cluster I typed:
[EMAIL PROTECTED] bin]# su - postgres
-bash-3.00$