a little more insight.
I will read more on the process status and try to keep a close
eye on it. I shall respond after a few hours.
regds
mallah.
|
| --
| Sent via pgsql-performance mailing list
| (pgsql-performance@postgresql.org)
| To make changes to your subscription:
| http
Dear Andy ,
Following the discussion on load average we are now investigating some
other parts of the stack (other than the db).
Essentially we are bumping up the limits (on the appserver) so that more requests
go to the DB server.
|
| Maybe you are hitting some locks? If its not IO and
| From: Steve Crawford scrawf...@pinpointresearch.com
| To: Rajesh Kumar. Mallah mal...@tradeindia.com
| Cc: Andy Colson a...@squeakycode.net, Claudio Freire
klaussfre...@gmail.com, pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:47 PM
| Subject: Re: [PERFORM] High load
- Stephen Frost sfr...@snowman.net wrote:
| From: Stephen Frost sfr...@snowman.net
| To: Rajesh Kumar. Mallah mal...@tradeindia.com
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:27:37 PM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait
hardware
to 4 equal virtual environments , ie 1 for master (r/w) and 3 slaves r/o
and distribute the r/o load on the 3 slaves ?
regds
mallah
- Claudio Freire klaussfre...@gmail.com wrote:
| From: Claudio Freire klaussfre...@gmail.com
| To: Rajesh Kumar. Mallah mal...@tradeindia.com
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:43 AM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O
of 0 clients waiting pgbouncer introduces a drop in tps.
Warm Regds
Rajesh Kumar Mallah.
CTO - tradeindia.com.
Keywords: pgbouncer performance
On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
Craig Ringer cr...@postnewspapers.com.au wrote:
So
note: my postgresql server and pgbouncer were not in a virtualised environment
in the first setup. Only the application server has many openvz containers.
Nice suggestion to try ,
I will put pgbouncer on raw hardware and run pgbench from same hardware.
regds
rajesh kumar mallah.
Why in VM (openvz container) ?
Did you also try it in the same OS as your appserver ?
Perhaps even connecting from the appserver via unix sockets ?
and all my
performance
(even if no clients waiting)
without pooling the dbserver CPU usage increases but performance of apps
also becomes good.
Regds
Rajesh Kumar Mallah.
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith g...@2ndquadrant.com wrote:
Rajesh Kumar Mallah wrote:
the no of clients was 10 ( -c 10
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith g...@2ndquadrant.com wrote:
Rajesh Kumar Mallah wrote:
the no of clients was 10 ( -c 10) carrying out 1 transaction each
(-t 1).
pgbench db was initialised with scaling factor -s 100.
since client count was less there was no queuing
Looks like
pgbench cannot be used for testing with pgbouncer if the number of
pgbench clients exceeds pool_size + reserve_pool_size of pgbouncer.
pgbench keeps waiting, doing nothing. I am using pgbench of postgresql 8.1.
Are there changes to pgbench in this aspect ?
regds
Rajesh Kumar Mallah
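For reference, the limits mentioned above live in pgbouncer.ini. A minimal sketch (the values and database entry shown are illustrative, not taken from this thread):

```ini
[databases]
; hypothetical database entry
tradein = host=127.0.0.1 port=5432 dbname=tradein

[pgbouncer]
listen_port = 6432
pool_mode = session
; clients beyond default_pool_size + reserve_pool_size queue up,
; which is why pgbench appears to hang when -c exceeds this sum
default_pool_size = 20
reserve_pool_size = 5
max_client_conn = 200
```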
Thanks for the thought but it (-C) does not work .
BTW, I think you should use the -C option with pgbench for this kind of
testing. -C establishes a connection for each transaction, which is
pretty much similar to real-world applications which do not use
connection pooling. You will be
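The pgbench invocations under discussion look roughly like this (a sketch; the host, port and database name are illustrative):

```
# initialise with scaling factor 100 (-s 100)
pgbench -i -s 100 pgbench

# 10 clients (-c 10), 1 transaction each (-t 1), via pgbouncer
pgbench -c 10 -t 1 -h 127.0.0.1 -p 6432 pgbench

# -C opens a new connection per transaction, mimicking
# applications that do not pool connections
pgbench -C -c 10 -t 1 -h 127.0.0.1 -p 6432 pgbench
```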
/10 01:59, Rajesh Kumar Mallah wrote:
I had set it to 128kb
it does not really work , i even tried your next suggestion. I am in a
virtualized
environment, particularly OpenVz, where echo 3 > /proc/sys/vm/drop_caches
does not work inside the virtual container; i did it in the hardware node
about how much data you are loading ? rows count or
GB data etc
2. how many indexes are you creating ?
regds
Rajesh Kumar Mallah.
analysis a trivial problem. We want the subsequent runs
of the query to take similar times as the first run so that we can work
on optimizing the calling patterns to the database.
regds
Rajesh Kumar Mallah.
the i/o bandwidth . I think you should check when
the max cpu utilisation
is taking place exactly.
regds
Rajesh Kumar Mallah.
On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes dfuen...@eldocomp.comwrote:
Hello,
When I run an SQL to create new tables and indexes is when Postgres
consumes
Dear Sri,
Please post at least the Explain Analyze output . There is also a nice posting
guideline
on how to post query optimization questions.
http://wiki.postgresql.org/wiki/SlowQueryQuestions
On Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata srika...@inventum.netwrote:
Please tell
On Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer
cr...@postnewspapers.com.auwrote:
On 01/07/10 17:41, Rajesh Kumar Mallah wrote:
Hi,
this is not really a performance question , sorry if it's a bit irrelevant
to be posted here. We have a development environment and we want
to optimize the non
Dear List,
Just removing the order by co_name reduces the query time dramatically
from ~ 9 sec to 63 ms. Can anyone please help?
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name
from
On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga yebhavi...@gmail.com wrote:
Rajesh Kumar Mallah wrote:
Dear List,
Just removing the order by co_name reduces the query time dramatically
from ~ 9 sec to 63 ms. Can anyone please help?
The 63 ms query result is probably useless since
Dear Tom/Kevin/List
thanks for the insight, i will check the suggestion more closely and post
the results.
regds
Rajesh Kumar Mallah.
The way to make this go faster is to set up the actually recommended
infrastructure for full text search, namely create an index on
(co_name_vec)::tsvector (either directly or using an auxiliary tsvector
column). If you don't want to maintain such an index, fine, but don't
expect full text
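The recommended infrastructure amounts to something like the following (a sketch; the table name is hypothetical, and on 8.x-era servers the index method would be GiST as shown, GIN arriving later):

```sql
-- auxiliary tsvector column, populated from co_name
ALTER TABLE profiles ADD COLUMN co_name_tsv tsvector;
UPDATE profiles SET co_name_tsv = to_tsvector(co_name);

-- index the tsvector so full text queries stop scanning
CREATE INDEX profiles_co_name_tsv_idx
    ON profiles USING gist (co_name_tsv);

-- queries then match against the indexed column:
-- SELECT ... WHERE co_name_tsv @@ to_tsquery('acme');
```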
Dear List,
Today has been good since morning. Although it is a lean day
for us, the indications are nice. I thank everyone who shared
the concern. I think the most significant change has been reducing
shared_buffers from 10G to 4G ; this has led to reduced memory
usage and some breathing
of the graph space is filled with (.) or (?) and very
few active queries (long-running queries > 1s). On a busy day and busy hour
i shall check and post again. The script is presented, which depends only
on perl , DBI and DBD::Pg.
script pasted here:
http://pastebin.com/mrjSZfLB
Regds
mallah.
On Sat
A scary phenomenon is being exhibited by the server: it
is suddenly slurping all the swap. Some of the relevant sar -r output:
10:30:01 AM kbmemfree kbmemused %memused kbbuffers kbcached
kbswpfree kbswpused %swpused kbswpcad
10:40:01 AM    979068  31892208     97.02
hours.
Warm Regds
Rajesh Kumar Mallah.
On Fri, Jun 25, 2010 at 4:58 PM, Yeb Havinga yebhavi...@gmail.com wrote:
Rajesh Kumar Mallah wrote:
A scary phenomenon is being exhibited by the server , which is the server
is slurping all the swap suddenly
8 1 4192912 906164 6100 2787364000
I changed shared_buffers from 10G to 4G ,
swap usage has almost become nil.
# free
             total       used       free     shared    buffers     cached
Mem:      32871276   24575824    8295452          0      11064   22167324
-/+ buffers/cache:    2397436   30473840
Swap:      4192912
Dear List,
pgtune suggests the following:
(current values are in braces with reason) ; (*) indicates a significant
difference from the current value.
default_statistics_target = 50 # pgtune wizard 2010-06-25 (current 100
via default)
(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB
Dear Craig,
also check the possibility of installing sysstat on your system.
It goes a long way in collecting the system stats. You may
consider increasing the frequency of data collection by
changing the interval of the cron job manually in /etc/cron.d/
normally its */10 , you may make it */2 for
a commit nor rollback.
On 6/25/10, Tom Molesworth t...@audioboundary.com wrote:
On 25/06/10 16:59, Rajesh Kumar Mallah wrote:
when i reduce max_connections i start getting errors, i will see again
concurrent connections
during business hours. lot of our connections are in IDLE in
transaction state
Dear Greg/Kevin/List ,
Many thanks for the comments regarding the params. I am however only able to
conduct an
experiment on production in a certain time window ; when that arrives i
shall post
my observations.
Rajesh Kumar Mallah.
Tradeindia.com - India's Largest B2B eMarketPlace.
that some of the discussions on this problem
inadvertently became private between me and kevin.
On Thu, Jun 24, 2010 at 12:10 AM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
It was nice to go through the interesting posting guidelines. i shall
be analyzing the slow queries more objectively
being
lseek(XXX, 0, SEEK_END) = YYY
Rajesh Kumar Mallah mallah.raj...@gmail.com wrote:
3. we use xfs and our controller has BBU , we changed barriers=1
to barriers=0 as i learnt that having barriers=1 on xfs and fsync
as the sync method, the advantage of BBU is lost unless barriers
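The change described corresponds to an fstab line like the following (a sketch; device and mountpoint are hypothetical, and XFS spells the option nobarrier):

```
# data volume on XFS behind a BBU-backed RAID controller
/dev/sda6  /var/lib/pgsql  xfs  noatime,nobarrier  0 0
```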
PM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
I'm not clear whether you still have a problem, or whether the
changes you mention solved your issues. I'll comment on potential
issues that leap out at me
On 6/23/10, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Rajesh Kumar Mallah mallah.raj...@gmail.com wrote:
PasteBin for the vmstat output
http://pastebin.com/mpHCW9gt
On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
Dear List ,
I observe that my
. ( excess
ram can be used in disk block caching)
if it's cpu bound, add more cores or higher-speed cpus;
if it's io bound, put in a better raid array / controller.
regds
mallah.
On Thu, Mar 12, 2009 at 4:22 PM, Nagalingam, Karthikeyan
karthikeyan.nagalin...@netapp.com wrote:
Hi,
Can you guide me, Where
does not seem to have much effect
unless it's totally disabled.
regds
mallah.
spindles will reduce perf.
I also have a SATA SAN though from which i can boot!
but the server needs to be rebuilt in that case too.
I (may) give it a shot.
regds
-- mallah.
I heard plenty of stories where this actually sped up performance. One
notable case is that of youtube servers.
There has been an error in the tests: the dataset size was not 2*MEM, it
was 0.5*MEM.
i shall redo the tests and post results.
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling matt...@flymine.org wrote:
On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
sda6 -- xfs with default formatting options.
sda7 -- mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
sda8 -- ext3 (default)
it looks like mkfs.xfs options sunit=128
than the ending sections. Considering this, is it worth
creating a special tablespace at the beginning of the drives?
If done at all, what kind of data objects should be placed
towards the beginning : WAL , indexes , frequently updated tables
or sequences ?
regds
mallah.
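If such a fast region were carved into its own partition and mounted, pointing a tablespace at it is simple (a sketch; the path and index are hypothetical):

```sql
-- /fastdisk assumed mounted on the partition at the start of the drive
CREATE TABLESPACE fast_start LOCATION '/fastdisk/pgdata';

-- place a hot index there (WAL cannot be moved via tablespaces;
-- relocating pg_xlog needs a symlink instead)
CREATE INDEX profiles_pincode_fast ON profiles (pincode)
    TABLESPACE fast_start;
```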
On Tue, Feb 17, 2009 at 9:49 PM, Scott
Detailed bonnie++ figures.
http://98.129.214.99/bonnie/report.html
On Wed, Feb 18, 2009 at 1:22 PM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
the raid10 volume was benchmarked again
taking into consideration the above points
# fdisk -l /dev/sda
Disk /dev/sda: 290.9 GB, 290984034304
/dev/sda7
sda8 -- ext3 (default)
it looks like mkfs.xfs options sunit=128 and swidth=512 did not improve
io throughput as such in bonnie++ tests .
it looks like ext3 with default options performed worst in my case.
regds
-- mallah
NOTE: observations made in this post are interpretations
It's nice to know the evolution of autovacuum, and i understand that
the suggestion/requirement of autovacuum at lean hours only
was defeating the whole idea.
regds
--rajesh kumar mallah.
On Fri, Feb 13, 2009 at 11:07 PM, Chris Browne cbbro...@acm.org wrote:
mallah.raj...@gmail.com (Rajesh
Model: MBC2073RC Rev: D506
Type: Direct-Access ANSI SCSI revision: 05
thanks
regds
-- mallah
:
rfis_part_2009_01_generated_date_check CHECK (generated_date >=
3289 AND generated_date <= 3319)
rfis_part_2009_01_rfi_id_check CHECK (rfi_id >= 12344252 AND
rfi_id <= 12681399)
Inherits: rfis
regds
rajesh kumar mallah.
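Constraints like those above typically come from partition DDL along these lines (a sketch following the names in the thread; constraint exclusion must be enabled for the planner to use them):

```sql
CREATE TABLE rfis_part_2009_01 (
    CHECK (generated_date >= 3289 AND generated_date <= 3319),
    CHECK (rfi_id >= 12344252 AND rfi_id <= 12681399)
) INHERITS (rfis);

-- let the planner skip partitions whose CHECKs exclude the query range
SET constraint_exclusion = on;
```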
thanks for the hint,
now the peak hour is over and the same scan is taking 71 ms in place of 8 ms
and the total query time is also acceptable. But it is surprising that
the scan was
taking so long consistently at that point of time. I shall test again
under similar
circumstances tomorrow.
Is
= 2251)
Total runtime: 0.082 ms
(5 rows)
tradein_clients=
On Wed, Feb 11, 2009 at 6:07 PM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
thanks for the hint,
now the peak hour is over and the same scan is taking 71 ms in place of 8
ms
and the total query time is also acceptable
Hi,
Is it possible to configure autovacuum to run only
during certain hours ? We are forced to keep
it off because it pops up during the peak
query hours.
Regds
rajesh kumar mallah.
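Autovacuum of that era has no time-of-day setting; a common workaround (a sketch, with autovacuum left off as described) is a cron-driven vacuum in lean hours:

```
# /etc/cron.d/pg-vacuum (hypothetical): database-wide VACUUM ANALYZE
# at 03:00, outside peak query hours
0 3 * * * postgres vacuumdb --all --analyze --quiet
```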
On Wed, Feb 11, 2009 at 7:11 PM, Guillaume Cottenceau g...@mnc.ch wrote:
Rajesh Kumar Mallah mallah.rajesh 'at' gmail.com writes:
Hi,
Is it possible to configure autovacuum to run only
during certain hours ? We are forced to keep
it off because it pops up during the peak
query hours
On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz gryz...@gmail.com wrote:
On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
vacuum_cost_delay = 150
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20
vacuum_cost_limit
On Wed, Feb 11, 2009 at 11:30 PM, Brad Nicholson
bnich...@ca.afilias.info wrote:
On Wed, 2009-02-11 at 22:57 +0530, Rajesh Kumar Mallah wrote:
On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz gryz...@gmail.com
wrote:
On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
mallah.raj
: 1530.137 ms
regds
mallah.
: (trade_leads.profile_id = pm.profile_id)
Total runtime: 55.333 ms
(11 rows)
SELECT SUM(1) FROM general.trade_leads WHERE status = 'm';
sum
127371
this constitutes 90% of the total rows.
regds
mallah.
On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Feb 10, 2009
Can't use an undefined value as an ARRAY reference at
/usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.
Can someone please give inputs to resolve this issue? Any help on this will
be appreciated.
519 sub transactions {
520 my $self = shift;
521 return
On Tue, Feb 10, 2009 at 9:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Rajesh Kumar Mallah mallah.raj...@gmail.com writes:
On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas robertmh...@gmail.com wrote:
I'm guessing that the problem is that the selectivity estimate for
co_name_vec @@ to_tsquery
option? I've
checked out the latest Areca controllers, but the manual available on
their website states there's a limitation of 32 disks in an array...
Where exactly is the limitation of 32 drives?
The datasheet of the 1680 states support for up to 128 drives
using enclosures.
regds
rajesh kumar mallah
of hosting the data , i am hiring the storage primarily for
storing base backups and log archives for PITR implementation,
as the rental of a separate machine was higher than the SATA SAN.
Regds
mallah.
Sorry for posting and disappearing.
i am still not clear what is the best way of throwing more
disks into the system.
Do more stripes mean more performance (mostly) ?
Also, is there any rule of thumb about the best stripe size ? (8k,16k,32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED] [EMAIL
On 5/31/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
i am still not clear what is the best way of throwing in more
disks into the system.
does more stripes means more performance (mostly) ?
also is there any thumb rule
?
also, do single channel or dual channel controllers make a lot
of difference in raid10 performance ?
regds
mallah.
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your
. create a new mirror
D5 raid1 D6 -- MD2
MD0 raid0 MD1 raid0 MD2 -- MDF final
OR
D1 raid1 D2 raid1 D5 -- MD0
D3 raid1 D4 raid1 D6 -- MD1
MD0 raid0 MD1 -- MDF (final)
thanks , hope my question is clear now.
Regds
mallah.
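With Linux md, the first layout sketched above could be assembled like this (device names are illustrative; this is a sketch, not a tested recipe):

```
# three RAID1 mirrors
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdf /dev/sdg

# stripe (RAID0) across the mirrors -> the final RAID10 volume
mdadm --create /dev/md3 --level=0 --raid-devices=3 \
      /dev/md0 /dev/md1 /dev/md2
```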
In the stripe of mirrors you can lose up to half of the disks
.
Regds
mallah.
[offtopic];
hmm quite a long thread ; below are the stats of posting
Total Messages: 87    Total Participants: 27
-
19 Daniel van Ham Colchete
12 Michael Stone
9 Ron
5 Steinar H. Gunderson
5 Alexander Staubo
4 Tom Lane
4
On 12/11/06, Ravindran G - TLS, Chennai. [EMAIL PROTECTED] wrote:
Hello,
How to get the Postgresql Threshold value ? Any commands available ?
What is meant by threshold value ?
are
not appreciated by many people. if possible pls avoid it.
Regds
mallah.
to SELECT UNION solution?
COPY is quite fast.
Regds
mallah.
many thanks in advance,
Jens Schipkowski
--
**
APUS Software GmbH
On 12/6/06, asif ali [EMAIL PROTECTED] wrote:
Hi,
I have a product table having 350 records. It takes approx 1.8 seconds to
get all records from this table. I copied this table to a product_temp
table and ran the same query to select all records; it took 10ms (much
faster).
I did VACUUM
On 12/5/06, Tom Lane [EMAIL PROTECTED] wrote:
Jean Arnaud [EMAIL PROTECTED] writes:
Is there a relation between database size and PostGreSQL restart
duration ?
No.
Does anyone know the behavior of restart time ?
It depends on how many updates were applied since the last checkpoint
before
On 12/6/06, Tom Lane [EMAIL PROTECTED] wrote:
Rajesh Kumar Mallah [EMAIL PROTECTED] writes:
Startup time of a clean shutdown database is constant. But we still
face problem when it comes to shutting down. PostgreSQL waits
for clients to finish gracefully. till date i have never been able
was wondering if it is usual for stored procedures to perform slower on
PostgreSQL than raw SQL?
No.
RETURN NEXT keeps accumulating the data before returning.
I am not sure if any optimisations have been done to that effect.
In general functions are *NOT* slower than RAW SQL.
Regds
mallah
On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote:
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an
8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old system.
That took about 30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql into the 8.1 database
sorry for the post , i saw the other replies only after posting. On 4/10/06, Rajesh Kumar Mallah [EMAIL PROTECTED]
wrote:
On 4/10/06, Jesper Krogh [EMAIL PROTECTED]
wrote:
Hi, I'm currently upgrading a PostgreSQL 7.3.2 database to an 8.1.something-good. I'd run pg_dump | gzip > sqldump.gz on the old
what is the query ? use LIMIT or a restricting where clause.
regds
mallah.
On 4/10/06, soni de [EMAIL PROTECTED] wrote:
Hello,
I have difficulty in fetching the records from the database.
Database table contains more than 1 GB data.
For fetching the records it is taking more than 1 hour and that's why
4. fsync can also be turned off while loading a huge dataset , but seek others' comments too (and study the docs) as i am not sure about the reliability. i think it can make a lot of difference.
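Collected as a temporary postgresql.conf fragment, the bulk-load tweaks discussed in this sub-thread look roughly like this (a sketch; revert after loading, since fsync = off risks data loss on a crash):

```
# temporary settings for a large restore -- not for normal operation
fsync = off                   # point 4 above: unsafe but fast
maintenance_work_mem = 512MB  # speeds up CREATE INDEX
checkpoint_segments = 64      # fewer checkpoints during the load
```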
On 4/10/06, Jesper Krogh [EMAIL PROTECTED] wrote:
Rajesh Kumar Mallah wrote: I'd run pg_dump | gzip
to someone and mark Cc to a list.
kind regds
mallah.
BEGIN SELECT a1,a2,a3,a4,a5,a6 FROM (SELECT * FROM T1, T2……WHERE etc… Flag = 0 $1 $2 $3 $4) ORDER
BY
……. RETURN NEXT ………; END LOOP; RETURN; END;
' LANGUAGE 'plpgsql';
NOTE: The values for $1 $2 $3
$4 will be passed when
applicable to your case.
Regds
Rajesh Kumar Mallah
On 4/3/06, Kenji Morishige [EMAIL PROTECTED] wrote:
I am using postgresql to be the central database for a variety of tools for
our testing infrastructure. We have web tools and CLI tools that require
access
to machine configuration and other
Hi ,
GiST indexes take a long time to create compared
to normal indexes. Is there any way to speed them up ?
(for example by modifying sort_mem or something temporarily)
Regds
Mallah.
% improvement in performance
for certain queries. None, everything works just fine.
Regds
Mallah.
Have you checked Tsearch2 ?
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
It is the most feature-rich full text search system available
for postgresql. We are also using the same system in
the revamped version of our website.
Regds
Mallah.
Mark Stosberg wrote:
Hello,
I work
it relate
to the apparent poor performance? Is it a problem with the disk
hardware? I know at night this query will run reasonably fast.
I am running on decent hardware .
Regds
mallah.
1:41pm up 348 days, 21:10, 1 user, load average: 11.59, 13.69, 11.49
85 processes: 83 sleeping, 1 running, 0
Richard Huxton wrote:
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid > 0;
Time
;
insert into forecastelement select * from temp_table ;
commit;
create indexes
Analyze forecastelement ;
note that distinct on will keep only one row out of all rows having
distinct values
of the specified columns. kindly go through the distinct on manual before
trying
the queries.
regds
mallah
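A minimal illustration of that caveat (table and column names hypothetical):

```sql
-- keeps exactly one row per (origin, timezone): the row with the
-- latest reception_time, because ORDER BY decides which survives
SELECT DISTINCT ON (origin, timezone) *
FROM temp_table
ORDER BY origin, timezone, reception_time DESC;
```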
in the table
as more and more applications will access the same table.
Any ideas if it's better to split the table application-wise or is it ok?
Regds
mallah.
greetings!
on a dedicated pgsql server, is putting pg_xlog
in the same drive as the OS almost equivalent to putting it on a separate
drive?
in both cases the actual data files are on a separate
drive.
regds
mallah
scott.marlowe wrote:
On Tue, 13 Jan 2004, David Shadovitz wrote:
We avert the subsequent execution of count(*) by passing the
value of count(*) as a query parameter through the link in page
numbers.
Mallah, and others who mentioned caching the record count
I am sure there is no transaction open with the table banner_stats2.
Still VACUUM FULL does not seem to be effective in removing the
dead rows.
Can any one please help?
Regds
mallah
tradein_clients=# VACUUM FULL verbose banner_stats2 ;
INFO: vacuuming public.banner_stats2
INFO: banner_stats2
of
inserts , deletes and updates.
Regds
Mallah.
VACUUM FULL VERBOSE ANALYZE data_bank.profiles;
INFO: vacuuming data_bank.profiles
INFO: profiles: found 430524 removable, 371784 nonremovable row versions in 43714
pages
INFO: index profiles_pincode now contains 371784 row versions in 3419
calculated it)
Regds
Mallah
tradein_clients=# SELECT count(*) from data_bank.profiles ;
+--------+
| count  |
+--------+
| 123065 |
+--------+
(1 row)
Time: 49756.969 ms
tradein_clients=#
tradein_clients=#
tradein_clients=# VACUUM full verbose analyze data_bank.profiles ;
INFO: vacuuming
Hi,
NOT EXISTS is taking almost double the time of NOT IN .
I know IN has been optimised in 7.4 but is anything
wrong with NOT EXISTS?
I have vacuumed , analyzed and run the query many times;
still NOT IN is faster than NOT EXISTS :
Regds
Mallah.
NOT IN PLAN
tradein_clients=# explain analyze
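For concreteness, the two shapes being compared are roughly (hypothetical tables):

```sql
-- NOT IN: the 7.4 planner can hash the subquery result once
SELECT p.profile_id
FROM profiles p
WHERE p.profile_id NOT IN (SELECT profile_id FROM trade_leads);

-- NOT EXISTS: correlated, so it is re-evaluated per outer row
-- on planners of that era (anti-join support came much later)
SELECT p.profile_id
FROM profiles p
WHERE NOT EXISTS (SELECT 1 FROM trade_leads t
                  WHERE t.profile_id = p.profile_id);
```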
in
7.3?
Will surely post the observation sometime.
Regards
Mallah.
Robert Treat
On Thu, 2003-11-13 at 02:53, Rajesh Kumar Mallah wrote:
Hi,
NOT EXISTS is taking almost double time than NOT IN .
I know IN has been optimised in 7.4 but is anything
wrong with the NOT EXISTS?
I
:294:
... empty database -- empty results
perror() reports: Resource temporarily unavailable
someone sighup'd the parent
Any clue?
--
Regards
Mallah.
the error mentioned in the first email has been overcome
by running osdb on the same machine hosting the DB server.
regds
mallah.
Rajesh Kumar Mallah wrote:
Hi,
I plan to put 7.4-RC2 in our production servers in next few hours.
Since the hardware config the performance related GUCs parameter
website is *unavailable* for the past 30 mins.
I ran the OSDB .15 version and pg_bench .
Regds
Mallah.
has made it easier to spot the faulty data,
e.g. in fkey violation.
Will post the OSDB .15 version's results on 7.3 & 7.4 soon.
Regds
Mallah.
Christopher Browne wrote:
After a long battle with technology,[EMAIL PROTECTED] (Rajesh Kumar Mallah), an earthling, wrote:
the error mentioned
is: will this behaviour not
allow such mistakes to go unnoticed?
Regards
Mallah.
On Friday 31 Oct 2003 4:08 am, Greg Stark wrote:
Well, you might want to try the EXISTS version. I'm not sure if it'll be
faster or slower though. In theory it should be the same.
Hum, I didn't realize the principals table
is with the Query Generator.
Apologies for the delayed response to your email.
Regards
Mallah.
regards, tom lane
/
regds
mallah.
=
== # this code is all hacky and evil. but people
desperately want _something_ and I'm # super tired. refactoring
gratefully appreciated
[EMAIL PROTECTED] (Rajesh Kumar Mallah) wrote:
Can you please have a look at the below and suggest why it
apparently puts 7.3.4 into an infinite loop . the CPU utilisation of the backend
running it
approaches 99%.
What would be useful, for this case, would be to provide the query plan