Hi,
GiST indexes take a long time to create compared
to normal indexes. Is there any way to speed them up?
(For example, by temporarily modifying sort_mem or something.)
Regds
Mallah.
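For example (a sketch with hypothetical table/column names; on 7.x the knob is sort_mem, on 8.x maintenance_work_mem governs index builds):

SET sort_mem = 65536;              -- 7.x name; on 8.x: SET maintenance_work_mem = '512MB';
CREATE INDEX docs_body_gist ON docs USING gist (body_tsvector);
RESET sort_mem;                    -- back to the normal per-sort setting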
The types of the join columns were different, text vs varchar(100);
now it's working fine and using a Hash Join.
Thanks once again.
regds
mallah.
explain analyze select b.state,a.city from data_bank.updated_profiles a
join public.city_master b using(city) where source='BRANDING
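For reference, a sketch of two ways to line the join key types up (names are from the query above; which side was the varchar(100) is an assumption):

-- cast in the join so both keys compare as text
select b.state, a.city
from data_bank.updated_profiles a
join public.city_master b on a.city = b.city::text;

-- or align the column types permanently (ALTER ... TYPE needs 8.0 or later)
ALTER TABLE public.city_master ALTER COLUMN city TYPE text;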
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> I have a view which is a union of selects of certain fields from
>> identical tables. The problem is that when we query a column on
>> which an index exists for each of the tables, it does not use the
>> indexes.
d_idx on
catalog_rfi (cost=0.00..16.19 rows=8 width=55)
(actual time=0.01..0.01 rows=0 loops=1)
Index Cond: (sender_uid = 38466)
Total runtime: 0.41 msec
(18 rows)
regds
mallah.
> <[EMAIL PROTECTED]
ter: "domain")::text = 'SystemInternal'::text) OR
(("domain")::text = 'UserDefined'::text) OR
(("domain")::text =
'ACLEquivalence'::text) OR (("domain")::text
Why does PostgreSQL suffer so badly?
I think not all developers write very nice SQL.
It's really sad to see that a fine piece of work (RT) is performing sub-optimally
because of malformed SQL. [especially on the database of my choice ;-)]
Regds
Mallah.
>
> Dear PostgreSQL gurus,
>
> I rea
>
>
>
> On Thu, Oct 30, 2003 at 01:15:44AM +0530, [EMAIL PROTECTED] wrote:
>> Actually PostgreSQL is on par with MySQL when the query is properly
>> written (simplified).
>>
>> In mysql:
>> mysql> SELECT DISTINCT main.* FROM Groups main join Principals Principals_1
>> using(id) join
>> ACL
>> So it's not just PostgreSQL that is suffering from the bad SQL, but MySQL also. But
>> the
>> question is why does PostgreSQL suffer so badly? I think not all developers write
>> very nice
>> SQL.
>>
>> It's really sad to see that a fine piece of work (RT) is performing sub-optimally
>> because
e generic one is messing with postgresql.
>> >
>> > I've removed the CC to ivan, to my knowledge, he has nothing to do with SB these
>> > days
>> > anymore.
>> >
>> >
>> >> i think i have to work in :
>>
=1)
> Merge Cond: ("outer".id = "inner".id)
>
> This estimate is WAY off. Are both of those fields indexed and analyzed?
Yes, both are primary keys, and I did vacuum full verbose analyze;
> Have you tried
> upping the statistics target on those two fields?
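Upping a statistics target looks roughly like this (a sketch; table and column names are hypothetical, and the default target back then was 10):

ALTER TABLE some_table ALTER COLUMN id SET STATISTICS 1000;  -- sample more values
ANALYZE some_table;                                          -- refresh the stats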
> [EMAIL PROTECTED] (Rajesh Kumar Mallah) wrote:
>> Can you please have a look at the below and suggest why it
>> apparently puts 7.3.4 into an infinite loop. The CPU utilisation of the backend
>> running it
>> approaches 99%.
>
> What would be useful, for t
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> I am sure there is no transaction open on the table banner_stats2. Still, VACUUM
>> FULL does
>> not seem to be effective in removing the
>> dead rows.
>
> That is not the issue --- the limiting factor is
ssing the
value of count(*) as a query parameter through the link in page
numbers. This works for us.
This of course assumes that the number of rows matching the
WHERE clause does not change while the user is viewing the search
results.
Hope it helps.
Regds
Mallah.
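The pattern, sketched with hypothetical table/column names:

SELECT count(*) FROM search_results WHERE keyword = 'widgets';  -- run once, carry in page links
SELECT * FROM search_results WHERE keyword = 'widgets'
ORDER BY result_id
LIMIT 20 OFFSET 40;                                             -- per-page query, no recount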
I cannot use the
>
Greetings!
On a dedicated pgsql server, is putting pg_xlog
on the same drive as the OS almost equivalent to putting it on a separate
drive?
In both cases the actual data files are on a separate
drive.
regds
mallah
The table was vacuum analyzed during the tests.
Total number of records in table: 93
-----
Regds
Rajesh Kumar Mallah.
On 9/28/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > Hi
> >
> > While doing some stress testing for updates in a small sized table
> > we found the following results. We are not too happy about the speed
>
On 9/29/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > > > Number of Copies | Updates per Sec
> > > >
> > > > 1 --> 119
> > > > 2 --> 59
> > > > 3 --> 3
On 12/5/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Jean Arnaud <[EMAIL PROTECTED]> writes:
> Is there a relation between database size and PostgreSQL restart
duration?
No.
> Does anyone know the behavior of restart time?
It depends on how many updates were applied since the last checkpoint
befo
On 12/6/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Rajesh Kumar Mallah" <[EMAIL PROTECTED]> writes:
> Startup time of a clean-shutdown database is constant. But we still
> face problems when it comes to shutting down. PostgreSQL waits
> for clients to finish graceful
On 12/6/06, asif ali <[EMAIL PROTECTED]> wrote:
Hi,
I have a "product" table having 350 records. It takes approx 1.8 seconds to
get all records from this table. I copied this table to a "product_temp"
table and ran the same query to select all records; it took 10 ms (much
faster).
I did "VACU
We have a view in our database.
CREATE view public.hogs AS
SELECT pg_stat_activity.procpid, pg_stat_activity.usename,
pg_stat_activity.current_query
FROM ONLY pg_stat_activity;
Selecting current_query from public.hogs helps us to spot errant queries
at times.
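For example (the <IDLE> filter here is an assumption about what we skip):

SELECT procpid, usename, current_query
FROM public.hogs
WHERE current_query NOT LIKE '<IDLE>%';  -- show only queries actually running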
regds
mallah.
On 12/7/06
On 12/11/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTED]> wrote:
Hello,
How do we get the PostgreSQL threshold value? Are any commands available?
What is meant by threshold value?
ils are
not appreciated by many people. If possible, please avoid it.
Regds
mallah.
e JDBC expert would tell better how it's done with JDBC.
Will it bring a performance improvement compared to the SELECT UNION solution?
COPY is quite fast.
Regds
mallah.
many thanks in advance,
Jens Schipkowski
--
**
APUS Software GmbH
of slowdown though.
Regds
mallah.
[offtopic]
hmm, quite a long thread; below are the posting stats.
Total Messages: 87, Total Participants: 27
-
19 Daniel van Ham Colchete
12 Michael Stone
9 Ron
5 Steinar H. Gunderson
5 Alexander Staubo
4 Tom Lane
4 Greg
?
Also, do single channel or dual channel controllers make a lot
of difference in RAID10 performance?
regds
mallah.
I got 2 options:
1. create a new mirror
D5 raid1 D6 --> MD2
MD0 raid0 MD1 raid0 MD2 --> MDF (final)
OR
2. D1 raid1 D2 raid1 D5 --> MD0
D3 raid1 D4 raid1 D6 --> MD1
MD0 raid0 MD1 --> MDF (final)
Thanks; hope my question is clear now.
Regds
mallah.
In the stripe of mirrors
Sorry for posting and disappearing.
I am still not clear on the best way of throwing more
disks into the system.
Does more stripes mean more performance (mostly)?
Also, is there any rule of thumb about the best stripe size? (8k, 16k, 32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED
On 5/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> i am still not clear what is the best way of throwing in more
> disks into the system.
> does more stripes means more performance (mostly) ?
> also
PasteBin for the vmstat output
http://pastebin.com/mpHCW9gt
On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
wrote:
> Dear List ,
>
> I observe that my postgresql (ver 8.4.2) dedicated server has turned CPU
> bound and there is a high load average on the server, > 50 usuall
On 6/23/10, Kevin Grittner wrote:
> Rajesh Kumar Mallah wrote:
>> PasteBin for the vmstat output
>> http://pastebin.com/mpHCW9gt
>>
>> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
>> wrote:
>>> Dear List ,
>>>
>>> I observe th
riable class names
general.report_level = ''
general.disable_audittrail2 = ''
general.employee=''
Also, I would like to apologize that some of the discussions on this problem
inadvertently became private between me and Kevin.
On Thu, Jun 24, 2010 at 12:10 AM, Rajes
und and 90% of syscalls being
lseek(XXX, 0, SEEK_END) = YYY
>
> Rajesh Kumar Mallah wrote:
>
>> 3. we use xfs and our controller has BBU; we changed barriers=1
>> to barriers=0, as I learnt that with barriers=1 on xfs and fsync
>> as the sync method, the
010 at 10:55 PM, Rajesh Kumar Mallah
wrote:
> On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner
> wrote:
>> I'm not clear whether you still have a problem, or whether the
>> changes you mention solved your issues. I'll comment on potential
>> issues that leap out a
A scary phenomenon is being exhibited by the server: it is
suddenly slurping all the swap. Some of the relevant sar -r output:
10:30:01 AM  kbmemfree  kbmemused  %memused  kbbuffers  kbcached
kbswpfree  kbswpused  %swpused  kbswpcad
10:40:01 AM     979068   31892208     97.02
g business hours.
Warm Regds
Rajesh Kumar Mallah.
On Fri, Jun 25, 2010 at 4:58 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>>
>> A scary phenomenon is being exhibited by the server , which is the server
>> is slurping all the swap suddenly
>> 8 1 4192912 9
I changed shared_buffers from 10G to 4G;
swap usage has almost become nil.
# free
             total       used       free     shared    buffers     cached
Mem:      32871276   24575824    8295452          0      11064   22167324
-/+ buffers/cache:    2397436   30473840
Swap:      4192912
Dear List,
pgtune suggests the following
(current values are in braces with the reason); (*) indicates a significant
difference from the current value.
default_statistics_target = 50 # pgtune wizard 2010-06-25 (current 100
via default)
(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB v
Dear Craig,
Also check the possibility of installing sysstat on your system;
it goes a long way in collecting system stats. You may
consider increasing the frequency of data collection by
changing the interval of the cron job manually in /etc/cron.d/;
normally it's */10, you may make it */2 for
commit nor rollback.
On 6/25/10, Tom Molesworth wrote:
> On 25/06/10 16:59, Rajesh Kumar Mallah wrote:
>> when I reduce max_connections I start getting errors; I will look again at
>> concurrent connections
>> during business hours. A lot of our connections are in > transactio
Dear Greg/Kevin/List ,
Many thanks for the comments regarding the params. I am, however, able to
change and
experiment on production in a certain time window; when that arrives I
shall post
my observations.
Rajesh Kumar Mallah.
Tradeindia.com - India's Largest B2B eMarketPlace.
Dear List,
Today has been good since morning. Although it is a lean day
for us, the indications are nice. I thank everyone who shared
the concern. I think the most significant change has been reducing
shared_buffers from 10G to 4G; this has led to reduced memory
usage and some breathing spa
-
Looks like most of the graph space is filled with (.) or (?) and very
few active queries (long-running queries > 1s). On a busy day and busy hour
I shall check and post again. The script is presented; it depends only
on perl, DBI and DBD::Pg.
script pasted here:
http://pastebin.com/mrj
Dear List,
Just removing the ORDER BY co_name reduces the query time dramatically,
from ~9 sec to 63 ms. Can anyone please help?
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name
from
On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>
>> Dear List,
>>
>> just by removing the order by co_name reduces the query time dramatically
>> from ~ 9 sec to 63 ms. Can anyone please help.
>>
> The 63 ms query result
Dear Tom/Kevin/List
Thanks for the insight; I will check the suggestion more closely and post
the results.
regds
Rajesh Kumar Mallah.
> The way to make this go faster is to set up the actually recommended
> infrastructure for full text search, namely create an index on
> (co_name_vec)::tsvector (either directly or using an auxiliary tsvector
> column). If you don't want to maintain such an index, fine, but don't
> expect full text
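Sketched out, the suggested index looks like this (the table name is assumed; the direct-cast form mirrors the quote above, and a GIN index is an alternative on 8.2+):

CREATE INDEX co_name_vec_tsv_idx ON company_master
USING gist ((co_name_vec::tsvector));
-- queries must then use the same expression, e.g.
-- WHERE co_name_vec::tsvector @@ to_tsquery('plastic & tubes');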
analysis a trivial problem. We want subsequent runs
of the query to take similar times as the first run, so that we can work
on optimizing the calling patterns to the database.
regds
Rajesh Kumar Mallah.
the i/o bandwidth. I think you should check exactly when
the max CPU utilisation
is taking place.
regds
Rajesh Kumar Mallah.
On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes wrote:
> Hello,
>
> When I run an SQL to create new tables and indexes is when Postgres
> consumes
Dear Sri,
Please post at least the EXPLAIN ANALYZE output. There is also a nice
posting guideline
on how to post query optimization questions:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
On Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata wrote:
>
> Please tell me What is the best
On Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer
wrote:
> On 01/07/10 17:41, Rajesh Kumar Mallah wrote:
> > Hi,
> >
> > this is not really a performance question , sorry if its bit irrelevant
> > to be posted here. We have a development environment and we want
> > t
mar Mallah wrote:
>
> > I had set it to 128kb;
> > it does not really work, I even tried your next suggestion. I am in a
> > virtualized
> > environment, particularly OpenVZ, where echo 3 > /proc/sys/vm/drop_caches
> > does not work inside the virtual container, i di
about how much data you are loading? Row count or
GB of data, etc.
2. How many indexes are you creating?
regds
Rajesh Kumar Mallah.
rious
why
in spite of 0 clients waiting pgbouncer introduces a drop in tps.
Warm Regds
Rajesh Kumar Mallah.
CTO - tradeindia.com.
Keywords: pgbouncer performance
On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner wrote:
> Craig Ringer wrote:
>
> > So rather than asking "
Note: my postgresql server & pgbouncer were not in a virtualised environment
in the first setup. Only the application server has many OpenVZ containers.
Nice suggestion to try;
I will put pgbouncer on raw hardware and run pgbench from the same hardware.
regds
rajesh kumar mallah.
> Why in VM (openvz container) ?
>
> Did you also try it in the same OS as your appserver ?
>
> Perhaps even connecting from the appserver via unix sockets
i get less performance
(even if no clients are waiting);
without pooling the dbserver CPU usage increases, but the performance of the apps
also becomes good.
Regds
Rajesh Kumar Mallah.
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no of clients was
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the number of clients was 10 (-c 10), carrying out 1 transaction each
>> (-t 1).
>> The pgbench db was initialised with scaling factor -s 100.
>>
>> since client co
Looks like
pgbench cannot be used for testing with pgbouncer if the number of
pgbench clients exceeds pool_size + reserve_pool_size of pgbouncer;
pgbench keeps waiting, doing nothing. I am using the pgbench of postgresql 8.1.
Are there changes to pgbench in this aspect?
regds
Rajesh Kumar Mallah.
On
Thanks for the thought, but it (-C) does not work.
>
>
> BTW, I think you should use the -C option with pgbench for this kind of
> testing. -C establishes a connection for each transaction, which is
> pretty much similar to real-world applications which do not use
> connection pooling. You will be s
applicable to your case.
Regds
Rajesh Kumar Mallah
On 4/3/06, Kenji Morishige <[EMAIL PROTECTED]> wrote:
> I am using postgresql to be the central database for a variety of tools for
> our testing infrastructure. We have web tools and CLI tools that require
> access
> to machine
3. It's not a performance question; it should have been addressed more appropriately to pgsql-sql, I think.
4. It's not good etiquette to address email to someone and mark Cc to a list.
kind regds
mallah.
>> > BEGIN > > SELECT a1,a2,a3,a4,a5
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1. I'd run pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql into the 8.1 database seems to take
Sorry for the post; I didn't see the other replies until after posting. On 4/10/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]
> wrote:
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]
> wrote:
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to 8.1. I'd run pg_dump | gzip >
What is the query? Use LIMIT or a restricting WHERE clause; a sketch follows
the quoted question below.
regds
mallah.
On 4/10/06, soni de <[EMAIL PROTECTED]> wrote:
Hello,
I have difficulty in fetching the records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than 1 hour and that's w
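Something along these lines (a sketch with hypothetical table/column names) is what I mean by a restricting clause:

SELECT *
FROM big_table
WHERE entry_date >= '2006-01-01'  -- restricting WHERE clause
ORDER BY entry_id
LIMIT 100;                        -- fetch one page, not the whole table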
4. fsync can also be turned off while loading a huge dataset, but seek others' comments too (and study the docs), as I am not sure about the reliability. I think it can make a lot of difference.
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Rajesh Kumar Mallah wrote:
>> I'd r
functions are *NOT* slower than RAW SQL.
Regds
mallah.
table
as more and more applications will access the same table.
Any ideas if it's better to split the table application-wise, or is it OK?
Regds
mallah.
t ;
drop index ;
insert into forecastelement select * from temp_table ;
commit;
create indexes
Analyze forecastelement ;
Note that DISTINCT ON will keep only one row out of all rows having
distinct values
of the specified columns. Kindly go through the DISTINCT ON manual before
trying
the queries.
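A hedged illustration of DISTINCT ON (only the forecastelement table name is from this thread; the columns are made up): keep one row per (station, valid_time), preferring the latest issue_time.

SELECT DISTINCT ON (station, valid_time) *
FROM forecastelement
ORDER BY station, valid_time, issue_time DESC;  -- latest issue wins per key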
Richard Huxton wrote:
On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages. VACUUM FULL does not help. Can anyone please tell me
how I can enhance the performance of the setup.
SELECT count(*) from
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages.
VACUUM FULL does not help. Can anyone please tell me
how I can enhance the performance of the setup.
Regds
mallah.
postgresql.conf
--
max_fsm_pages = 55099264 # min
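One common workaround, sketched (not from this thread): read the planner's row estimate instead of doing an exact count, when an approximation will do. The table name here is assumed:

SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'rfis';  -- kept current by VACUUM / ANALYZE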
804 records per second. Is it an acceptable
performance on the hardware below:
RAM: 2 GB
DISKS: ultra160 , 10 K , 18 GB
Processor: 2* 2.0 Ghz Xeon
What kind of upgrades should be put on the server for it to become
reasonably fast?
Regds
mallah.
Richard Huxton wrote:
On Wednesday 14 April 2004 18
The relation size for this table is 1.7 GB
tradein_clients=# SELECT public.relation_size ('general.rfis');
+---------------+
| relation_size |
+---------------+
| 1,762,639,872 |
+---------------+
(1 row)
Regds
mallah.
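(public.relation_size above is a local helper function; on 8.1 and later the built-in equivalents would be, roughly:)

SELECT pg_relation_size('general.rfis');        -- table alone
SELECT pg_total_relation_size('general.rfis');  -- with indexes and toast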
Rajesh Kumar Mallah wrote:
The problem is that
, hardware or dead rows.
I already did vacuum full on the table but it still did not
have that effect on performance.
In fact the last figures were after doing a vacuum full.
Can there be a more elegant solution to this problem?
Regds
Mallah.
Richard Huxton wrote:
On Thursday 15 April 2004 08:10
Richard Huxton wrote:
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
The problem is that I want to know if I need a hardware upgrade
at the moment.
E.g. I have another table, rfis, which contains ~0.6 million records.
SELECT count(*) from rfis where sender_uid >
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the table.
The query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts
le sleep and does it relate
to the apparent poor performance? Is it a problem with the disk
hardware? I know at night this query will run reasonably fast.
I am running on decent hardware.
Regds
mallah.
1:41pm up 348 days, 21:10, 1 user, load average: 11.59, 13.69, 11.49
85 processes: 83 sl
Have you checked Tsearch2?
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
It is the most feature-rich full text search system available
for postgresql. We are also using the same system in
the revamped version of our website.
Regds
Mallah.
Mark Stosberg wrote:
Hello,
I work for
% improvement in performance
for certain queries. None, everything works just fine.
Regds
Mallah.
hosting the data; I am hiring the storage primarily for
storing base backups and log archives for a PITR implementation,
as rental of a separate machine was higher than SATA SAN.
Regds
mallah.
ray...
Where exactly is the limitation of 32 drives?
The datasheet of the 1680 states support for up to 128 drives
using enclosures.
regds
rajesh kumar mallah.
e_leads.profile_id = pm.profile_id)
Filter: ((status)::text = 'm'::text)
-> Bitmap Index Scan on trade_leads_profile_id
(cost=0.00..3.41 rows=47 width=0) (actual time=73.579..73.579 rows=0
loops=7)
Index Cond: (trade_leads.profile_id = pm.profile_id)
Total runtime: 1530.137 ms
regds
mallah.
=0 loops=7)
Recheck Cond: (trade_leads.profile_id = pm.profile_id)
Filter: ((status)::text = 'm'::text)
-> Bitmap Index Scan on trade_leads_profile_id
(cost=0.00..3.41 rows=47 width=0) (actual time=1.285..1.285 rows=0
loops=7)
> Can't use an undefined value as an ARRAY reference at
> /usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.
>
> Can someone please give inputs to resolve this issue? Any help on this will
> be appreciated.
519 sub transactions {
520 my $self = shift;
521 return @{$self->{data}->
On Tue, Feb 10, 2009 at 9:09 PM, Tom Lane wrote:
> Rajesh Kumar Mallah writes:
>> On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas wrote:
>>> I'm guessing that the problem is that the selectivity estimate for
>>> co_name_vec @@ to_tsquery('plastic&tubes
r_uid) CLUSTER
"rfis_part_2009_01_sender_uid" btree (sender_uid)
Check constraints:
"rfis_part_2009_01_generated_date_check" CHECK (generated_date >=
3289 AND generated_date <= 3319)
"rfis_part_2009_01_rfi_id_check" CHECK (rfi_id >= 12344252 AND
rfi_id <= 126
thanks for the hint,
now the peak hour is over and the same scan is taking 71 ms in place of 8 ms
and the total query time is also acceptable. But it is surprising that
the scan was
taking so long consistently at that point of time. I shall test again
under similar
circumstances tomorrow.
Is i
eiver_uid = 1320721)
Filter: (generated_date >= 2251)
Total runtime: 0.082 ms
(5 rows)
tradein_clients=>
On Wed, Feb 11, 2009 at 6:07 PM, Rajesh Kumar Mallah
wrote:
> thanks for the hint,
>
> now the peak hour is over and the same scan is taking 71 ms in place of 8
> ms
Hi,
Is it possible to configure autovacuum to run only
during certain hours? We are forced to keep
it off because it pops up during the peak
query hours.
Regds
rajesh kumar mallah.
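One workaround sketch (the cost-delay value matches settings quoted later in this thread): leave autovacuum off and run a throttled manual vacuum off-peak, e.g. from cron:

SET vacuum_cost_delay = 150;  -- throttle the I/O impact of the vacuum
VACUUM ANALYZE;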
On Wed, Feb 11, 2009 at 7:11 PM, Guillaume Cottenceau wrote:
> Rajesh Kumar Mallah writes:
>
>> Hi,
>>
>> Is it possible to configure autovacuum to run only
>> during certain hours ? We are forced to keep
>> it off because it pops up during the peak
>> q
On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz wrote:
> On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
> wrote:
>
>>> vacuum_cost_delay = 150
>>> vacuum_cost_page_hit = 1
>>> vacuum_cost_page_miss = 10
>>> vacuum_cost
On Wed, Feb 11, 2009 at 11:30 PM, Brad Nicholson
wrote:
> On Wed, 2009-02-11 at 22:57 +0530, Rajesh Kumar Mallah wrote:
>> On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz
>> wrote:
>> > On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
>> > wrote:
JITSU Model: MBC2073RC Rev: D506
Type: Direct-Access ANSI SCSI revision: 05
thanks
regds
-- mallah
It's nice to know the evolution of autovacuum, and I understand that
the suggestion/requirement of "autovacuum at lean hours only"
was defeating the whole idea.
regds
--rajesh kumar mallah.
On Fri, Feb 13, 2009 at 11:07 PM, Chris Browne wrote:
> mallah.raj...@gmail.com (Rajesh
th=512 /dev/sda7
sda8 --> ext3 (default)
it looks like the mkfs.xfs options sunit=128 and swidth=512 did not improve
io throughput as such in the bonnie++ tests.
It looks like ext3 with default options performed worst in my case.
regds
-- mallah
NOTE: observations made in this post are interpret
The URL of the result is
http://98.129.214.99/bonnie/report.html
(sorry if this was a repost)
On Tue, Feb 17, 2009 at 2:04 AM, Rajesh Kumar Mallah
wrote:
> BTW
>
> our machine got built with 8 15k drives in raid10;
> from the bonnie++ results it looks like the machine is
>
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling wrote:
> On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
>>
>> sda6 --> xfs with default formatting options.
>> sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
>> sda8 --> ext3 (default)
>>
&g
than the ending sections; considering this, is it worth
creating a special tablespace at the beginning of the drives?
If at all done, what kind of data objects should be placed
towards the beginning: WAL, indexes, frequently updated tables,
or sequences?
regds
mallah.
>On Tue, Feb 17, 2009 at 9:49
Detailed bonnie++ figures.
http://98.129.214.99/bonnie/report.html
On Wed, Feb 18, 2009 at 1:22 PM, Rajesh Kumar Mallah
wrote:
> the raid10 volume was benchmarked again
> taking into consideration the above points
>
> # fdisk -l /dev/sda
> Disk /dev/sda: 290.9 GB, 290984034304 bytes