- "Stephen Frost" wrote:
| From: "Stephen Frost"
| To: "Rajesh Kumar. Mallah"
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:27:37 PM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and
CPU is idle
|
| From: "Steve Crawford"
| To: "Rajesh Kumar. Mallah"
| Cc: "Andy Colson" , "Claudio Freire"
, pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:47 PM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and
Dear Andy,
Following the discussion on load average, we are now investigating some
other parts of the stack (other than the db).
Essentially we are bumping up the limits (on the appserver) so that more
requests go to the DB server.
|
| Maybe you are hitting some locks? If it's not IO and no
|
| Load avg is the number of processes in the running queue, which can
| be either waiting to be run or actually running.
|
| So if you had 100% CPU usage, then you'd most definitely have a load
| avg of 64, which is neither good nor bad. It may simply mean that
| you're using your hardware's full capacity.
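For a quick check, the load average can be read off directly and compared
with the core count (a rough sketch; assumes a Linux host where nproc is
available):
# first field is the 1-minute load average
cat /proc/loadavg
# number of cores; load ~ cores means the CPUs are fully used,
# load >> cores means processes are queueing
nproc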
- "Claudio Freire" wrote:
| From: "Claudio Freire"
| To: "Rajesh Kumar. Mallah"
| Cc: pgsql-performance@postgresql.org
| Sent: Thursday, May 24, 2012 9:23:43 AM
| Subject: Re: [PERFORM] High load average in 64-core server , no I/O wait and
CPU is idle
|
| On
Dear List,
We are having scalability issues with high-end hardware.
The hardware is:
CPU = 4 * Opteron 6272 with 16 cores each, i.e. 64 cores total
RAM = 128 GB DDR3
Disk = high-performance RAID10 with lots of 15K spindles and a working BBU
cache
normally the 1 min load average of the system
Thanks for the thought, but it (-C) does not work.
>
>
> BTW, I think you should use the -C option with pgbench for this kind of
> testing. -C establishes a connection for each transaction, which is
> pretty similar to real-world applications that do not use
> connection pooling. You will be s
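For reference, a minimal invocation of that kind of run might look like
this (a sketch; the database name and counts are illustrative):
# -C opens a fresh connection per transaction, mimicking apps
# that do not use connection pooling
pgbench -C -c 10 -t 1000 testdb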
Looks like
pgbench cannot be used for testing with pgbouncer if the number of
pgbench clients exceeds pool_size + reserve_pool_size of pgbouncer.
pgbench keeps waiting, doing nothing. I am using the pgbench of postgresql 8.1.
Have there been changes to pgbench in this respect?
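One way to see the queued clients directly is pgbouncer's admin console
(a sketch; assumes pgbouncer on port 6432 with an admin user configured):
# the cl_waiting column shows clients stalled on an exhausted pool
psql -p 6432 -U pgbouncer pgbouncer -c "SHOW POOLS;"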
regds
Rajesh Kumar Mallah.
On
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no. of clients was 10 (-c 10), carrying out 1 transaction each
>> (-t 1).
>> the pgbench db was initialised with scaling factor -s 100.
>>
>> since client co
i get less performance
(even if no clients are waiting);
without pooling the dbserver's CPU usage increases, but the performance
of the apps also becomes good.
Regds
Rajesh Kumar Mallah.
On Sun, Jul 18, 2010 at 10:55 PM, Greg Smith wrote:
> Rajesh Kumar Mallah wrote:
>
>> the no of clients was
Nice suggestion to try;
I will put pgbouncer on raw hardware and run pgbench from the same hardware.
regds
rajesh kumar mallah.
> Why in VM (openvz container) ?
>
> Did you also try it in the same OS as your appserver ?
>
> Perhaps even connecting from the appserver via unix sockets
note: my postgresql server & pgbouncer were not in a virtualised environment
in the first setup. Only the application server has many openvz containers.
Curious why,
in spite of 0 clients waiting, pgbouncer introduces a drop in tps.
Warm Regds
Rajesh Kumar Mallah.
CTO - tradeindia.com.
Keywords: pgbouncer performance
On Mon, Jul 12, 2010 at 6:11 PM, Kevin Grittner wrote:
> Craig Ringer wrote:
>
> > So rather than asking "
about how much data you are loading? row count or
GB of data, etc.
2. how many indexes are you creating?
regds
Rajesh Kumar Mallah.
profiling requires multiple
iterations, so it is not feasible to reboot the machine. I think i will try to
profile
my code using new and unique input parameters in each iteration; this shall
roughly serve my purpose.
On Fri, Jul 2, 2010 at 8:30 AM, Craig Ringer wrote:
> On 02/07/10 01:59, Rajesh Ku
On Thu, Jul 1, 2010 at 10:07 PM, Craig Ringer
wrote:
> On 01/07/10 17:41, Rajesh Kumar Mallah wrote:
> > Hi,
> >
> > this is not really a performance question, sorry if it's a bit irrelevant
> > to be posted here. We have a development environment and we want
> > t
Dear Sri,
Please post at least the EXPLAIN ANALYZE output. There is also a nice
posting guideline
on how to post query optimization questions:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
On Thu, Jul 1, 2010 at 10:49 AM, Srikanth Kata wrote:
>
> Please tell me What is the best
the i/o bandwidth. I think you should check exactly when
the max cpu utilisation
is taking place.
regds
Rajesh Kumar Mallah.
On Sat, Jun 26, 2010 at 3:55 AM, Deborah Fuentes wrote:
> Hello,
>
> When I run an SQL to create new tables and indexes is when Postgres
> consumes
analysis a trivial problem. We want the subsequent runs
of the query to take similar times as the first run so that we can work
on optimizing the calling patterns to the database.
regds
Rajesh Kumar Mallah.
> The way to make this go faster is to set up the actually recommended
> infrastructure for full text search, namely create an index on
> (co_name_vec)::tsvector (either directly or using an auxiliary tsvector
> column). If you don't want to maintain such an index, fine, but don't
> expect full text
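For illustration, a minimal sketch of that suggestion (the table name
'profiles' and the choice of a GIN index are assumptions, not from the
thread):
psql -c "CREATE INDEX co_name_vec_fts_idx ON profiles
         USING gin (((co_name_vec)::tsvector));"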
Dear Tom/Kevin/List
thanks for the insight, i will check the suggestion more closely and post
the results.
regds
Rajesh Kumar Mallah.
On Mon, Jun 28, 2010 at 5:09 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>
>> Dear List,
>>
>> just by removing the order by co_name reduces the query time dramatically
>> from ~ 9 sec to 63 ms. Can anyone please help.
>>
> The 63 ms query result
Dear List,
just removing the order by co_name reduces the query time dramatically,
from ~9 sec to 63 ms. Can anyone please help?
Regds
Rajesh Kumar Mallah.
explain analyze SELECT * from ( SELECT
a.profile_id,a.userid,a.amount,a.category_id,a.catalog_id,a.keywords,b.co_name
from
Regds
mallah.
On Sat, Jun 26, 2010 at 3:23 PM, Rajesh Kumar Mallah <
mallah.raj...@gmail.com> wrote:
> Dear List,
>
> Today has been good since morning. Although it is a lean day
> for us but the indications are nice. I thank everyone who shared
> the concern. I think
Dear List,
Today has been good since morning. Although it is a lean day
for us, the indications are nice. I thank everyone who shared
the concern. I think the most significant change has been reducing
shared_buffers from 10G to 4G; this has led to reduced memory
usage and some breathing spa
Dear Greg/Kevin/List ,
Many thanks for the comments regarding the params. I am however able to
change and
experiment on production in a certain time window; when that arrives i
shall post
my observations.
Rajesh Kumar Mallah.
Tradeindia.com - India's Largest B2B eMarketPlace.
commit nor rollback.
On 6/25/10, Tom Molesworth wrote:
> On 25/06/10 16:59, Rajesh Kumar Mallah wrote:
>> when i reduce max_connections i start getting errors, i will see again
>> concurrent connections
>> during business hours. lot of our connections are in > transactio
Dear Craig,
also check for the possibility of installing sysstat on your system.
it goes a long way in collecting the system stats. you may
consider increasing the frequency of data collection by
changing the interval of the cron job manually in /etc/cron.d/;
normally it's */10, you may make it */2 for
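For example, the stock entry usually looks like the line below; changing
*/10 to */2 raises the sampling frequency (a sketch; the sa1 path varies
by distro):
# /etc/cron.d/sysstat -- collect system activity data every 2 minutes
*/2 * * * * root /usr/lib/sa/sa1 1 1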
Dear List,
pgtune suggests the following
(current values are in braces with reason; (*) indicates a significant
difference from the current value):
default_statistics_target = 50 # pgtune wizard 2010-06-25 (current 100
via default)
(*) maintenance_work_mem = 1GB # pgtune wizard 2010-06-25 (16MB v
I changed shared_buffers from 10G to 4G;
swap usage has almost become nil.
# free
             total       used       free     shared    buffers     cached
Mem:      32871276   24575824    8295452          0      11064   22167324
-/+ buffers/cache:    2397436   30473840
Swap:      4192912
g business hours.
Warm Regds
Rajesh Kumar Mallah.
On Fri, Jun 25, 2010 at 4:58 PM, Yeb Havinga wrote:
> Rajesh Kumar Mallah wrote:
>>
>> A scary phenomenon is being exhibited by the server , which is the server
>> is slurping all the swap suddenly
>> 8 1 4192912 9
A scary phenomenon is being exhibited by the server: it
is slurping all the swap suddenly. Some of the relevant sar -r output:
10:30:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached kbswpfree kbswpused  %swpused  kbswpcad
10:40:01 AM    979068  31892208     97.02
010 at 10:55 PM, Rajesh Kumar Mallah
wrote:
> On Thu, Jun 24, 2010 at 8:57 PM, Kevin Grittner
> wrote:
>> I'm not clear whether you still have a problem, or whether the
>> changes you mention solved your issues. I'll comment on potential
>> issues that leap out a
und and 90% of syscalls being
lseek(XXX, 0, SEEK_END) = YYY
>
> Rajesh Kumar Mallah wrote:
>
>> 3. we use xfs and our controller has BBU , we changed barriers=1
>> to barriers=0 as i learnt that having barriers=1 on xfs and fsync
>> as the sync method, the
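For reference, on xfs that toggle is the barrier/nobarrier mount option
(a sketch; the mount point is illustrative, and this is only sane with a
battery-backed write cache):
mount -o remount,nobarrier /pgdata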
riable class names
general.report_level = ''
general.disable_audittrail2 = ''
general.employee=''
Also i would like to apologize that some of the discussions on this problem
inadvertently became private between me & Kevin.
On Thu, Jun 24, 2010 at 12:10 AM, Rajes
On 6/23/10, Kevin Grittner wrote:
> Rajesh Kumar Mallah wrote:
>> PasteBin for the vmstat output
>> http://pastebin.com/mpHCW9gt
>>
>> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
>> wrote:
>>> Dear List ,
>>>
>>> I observe th
PasteBin for the vmstat output
http://pastebin.com/mpHCW9gt
On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah
wrote:
> Dear List ,
>
> I observe that my postgresql (ver 8.4.2) dedicated server has turned cpu
> bound and there is a high load average in the server > 50 usuall
Databases are usually IO bound; vmstat results can confirm individual
cases and setups.
In case the server is IO bound, the entry point should be setting up
properly performing
IO. RAID10 helps to a great extent in improving IO bandwidth by
parallelizing the IO operations;
the more spindles the better. Al
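As a minimal illustration of that check (interval and count are arbitrary):
# a high 'wa' column with idle CPU suggests an IO-bound box;
# a long 'r' run queue with low 'wa' suggests CPU-bound
vmstat 5 5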
There has been an error in the tests: the dataset size was not 2*MEM, it
was 0.5*MEM.
i shall redo the tests and post results.
--
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz wrote:
> have you tried hanging bunch of raid1 to linux's md, and let it do
> raid0 for you ?
Hmmm, i will have only 3 bunches in that case, as the system has to boot
from the first bunch
and has only 8 drives. i think reducing spindles will red
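For concreteness, the nested md layout being discussed can be built
roughly like this (a sketch; device names are illustrative):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde
# stripe (raid0) over the mirrors
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1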
>> Effect of ReadAhead Settings
>> disabled, 256 (default), 512, 1024
>>
>> SEQUENTIAL
>> xfs_ra0     414741,  66144
>> xfs_ra256   403647, 545026    all tests on sda6
>> xfs_ra512   411357, 564769
>> xfs_ra1024  404392, 431168
>>
>> looks like 512
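For reference, readahead is set per block device; e.g. to test the
512-sector setting (sda is the device used in the tests above):
blockdev --setra 512 /dev/sda   # set readahead to 512 sectors
blockdev --getra /dev/sda       # verify the current value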
Detailed bonnie++ figures.
http://98.129.214.99/bonnie/report.html
On Wed, Feb 18, 2009 at 1:22 PM, Rajesh Kumar Mallah
wrote:
> the raid10 volume was benchmarked again,
> taking into consideration the above points
>
> # fdisk -l /dev/sda
> Disk /dev/sda: 290.9 GB, 290984034304 bytes
ad-performance-single-command
>
> ____
> From: pgsql-performance-ow...@postgresql.org
> [pgsql-performance-ow...@postgresql.org] On Behalf Of Rajesh Kumar Mallah
> [mallah.raj...@gmail.com]
> Sent: Tuesday, February 17, 2009 5:25 AM
> To:
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling wrote:
> On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
>>
>> sda6 --> xfs with default formatting options.
>> sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
>> sda8 --> ext3 (default)
>>
>
The URL of the result is
http://98.129.214.99/bonnie/report.html
(sorry if this was a repost)
On Tue, Feb 17, 2009 at 2:04 AM, Rajesh Kumar Mallah
wrote:
> BTW
>
> our Machine got built with 8 15k drives in raid10,
> from bonnie++ results it looks like the machine is
>
BTW
our Machine got built with 8 15k drives in raid10;
from bonnie++ results it looks like the machine is
able to do 400 Mbytes/s seq write and 550 Mbytes/s
read. the BB cache is enabled with 256MB.
sda6 --> xfs with default formatting options.
sda7 --> mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
It's nice to know the evolution of autovacuum, and i understand that
the suggestion/requirement of "autovacuum at lean hours only"
was defeating the whole idea.
regds
--rajesh kumar mallah.
On Fri, Feb 13, 2009 at 11:07 PM, Chris Browne wrote:
> mallah.raj...@gmail.com (Rajesh
I have received a Dell Poweredge 2950 MIII with 2 kinds of
drives. I can't make out the reason behind it; does it
make any difference in the long run or in performance?
the drives are similar in overall characteristics, but will
the minor differences cause any problem?
scsi0 : LSI Logic SAS based
On Wed, Feb 11, 2009 at 11:30 PM, Brad Nicholson
wrote:
> On Wed, 2009-02-11 at 22:57 +0530, Rajesh Kumar Mallah wrote:
>> On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz
>> wrote:
>> > On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
>> > wrote:
&g
On Wed, Feb 11, 2009 at 10:03 PM, Grzegorz Jaśkiewicz wrote:
> On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
> wrote:
>
>>> vacuum_cost_delay = 150
>>> vacuum_cost_page_hit = 1
>>> vacuum_cost_page_miss = 10
>>> vacuum_cost
On Wed, Feb 11, 2009 at 7:11 PM, Guillaume Cottenceau wrote:
> Rajesh Kumar Mallah writes:
>
>> Hi,
>>
>> Is it possible to configure autovacuum to run only
>> during certain hours ? We are forced to keep
>> it off because it pops up during the peak
>> q
Hi,
Is it possible to configure autovacuum to run only
during certain hours? We are forced to keep
it off because it pops up during the peak
query hours.
Regds
rajesh kumar mallah.
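For what it's worth, the usual alternative to scheduling is throttling;
the relevant cost-based settings can be inspected like this (a sketch):
psql -c "SELECT name, setting FROM pg_settings
         WHERE name LIKE '%vacuum_cost%';"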
eiver_uid = 1320721)
Filter: (generated_date >= 2251)
Total runtime: 0.082 ms
(5 rows)
tradein_clients=>
On Wed, Feb 11, 2009 at 6:07 PM, Rajesh Kumar Mallah
wrote:
> thanks for the hint,
>
> now the peak hour is over and the same scan is taking 71 ms in place of 8
> ms
thanks for the hint,
now that the peak hour is over, the same scan is taking 71 ms in place of 8 ms
and the total query time is also acceptable. But it is surprising that
the scan was
taking so long consistently at that point of time. I shall test again
under similar
circumstances tomorrow.
Is i
r_uid) CLUSTER
"rfis_part_2009_01_sender_uid" btree (sender_uid)
Check constraints:
"rfis_part_2009_01_generated_date_check" CHECK (generated_date >=
3289 AND generated_date <= 3319)
"rfis_part_2009_01_rfi_id_check" CHECK (rfi_id >= 12344252 AND
rfi_id <= 126
On Tue, Feb 10, 2009 at 9:09 PM, Tom Lane wrote:
> Rajesh Kumar Mallah writes:
>> On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas wrote:
>>> I'm guessing that the problem is that the selectivity estimate for
>>> co_name_vec @@ to_tsquery('plastic&tubes
> Can't use an undefined value as an ARRAY reference at
> /usr/lib/perl5/site_perl/5.8.8/Test/Parser/Dbt2.pm line 521.
>
> Can someone please give inputs to resolve this issue? Any help on this will
> be appreciated.
519 sub transactions {
520     my $self = shift;
521     return @{$self->{data}->
Index Cond: (trade_leads.profile_id = pm.profile_id)
Total runtime: 55.333 ms
(11 rows)
SELECT SUM(1) FROM general.trade_leads WHERE status = 'm';
  sum
--------
 127371
this constitutes 90% of the total rows.
regds
mallah.
On Tue, Feb 10, 2009 at 6:36 PM, Robert Haas wrot
Hi,
I have a query in which two huge tables (A, B) are joined using an indexed
column, and a search is made on a tsvector column of B. Very few
rows of B are expected to match the query on the tsvector column.
With default planner settings the query takes too long (> 100 secs), but
with h
> Are there any reasonable choices for bigger (3+ shelf) direct-connected
> RAID10 arrays, or are hideously expensive SANs the only option? I've
> checked out the latest Areca controllers, but the manual available on
> their website states there's a limitation of 32 disks in an ar
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 GB 15K SAS drives +
300 GB EMC SATA SAN storage.
I seek suggestions from users sharing their experience with
similar hardware, if any. I have the following specific concerns.
1. On list i read that the RAID10 function in PERC5 is not really
strip
On 5/31/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Thu, May 31, 2007 at 01:28:58AM +0530, Rajesh Kumar Mallah wrote:
> i am still not clear what is the best way of throwing in more
> disks into the system.
> does more stripes means more performance (mostly) ?
> also
Sorry for posting and disappearing.
i am still not clear what is the best way of throwing more
disks into the system.
do more stripes mean more performance (mostly)?
also is there any rule of thumb about the best stripe size? (8k,16k,32k...)
regds
mallah
On 5/30/07, [EMAIL PROTECTED] <[EMAIL
you can lose up to half of the disks and still be
operational. In the mirror of stripes, the most you could lose is two
drives. The performance of the two should be similar - perhaps the seek
performance would be different for high concurrent use in PG.
- Luke
On 5/29/07 2:14 PM, "Raj
hi,
this is not really postgresql specific, but any help is appreciated.
i have read that the more spindles the better it is for IO performance.
suppose i have 8 drives: should a stripe (raid0) be created on
2 mirrors (raid1) of 4 drives each, OR should a stripe on 4 mirrors
of 2 drives each be created
[offtopic];
hmm, quite a long thread; below are the stats of posting
Total Messages: 87    Total Participants: 27
-
19 Daniel van Ham Colchete
12 Michael Stone
9 Ron
5 Steinar H. Gunderson
5 Alexander Staubo
4 Tom Lane
4 Greg
On 12/13/06, Steven Flatt <[EMAIL PROTECTED]> wrote:
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table
1. how frequently are you comm
So, my questions:
Is it possible to use COPY FROM STDIN with JDBC?
Should be. It's at least possible using DBI and DBD::Pg (perl):
my $copy_sth = $dbh->prepare("COPY general.datamining_mailing_lists (query_id,email_key) FROM STDIN;");
$copy_sth->execute();
# remainder reconstructed: $fetch_sth and $query_id are assumed from context
while (my ($email_key) = $fetch_sth->fetchrow_array) {
    $dbh->pg_putline("$query_id\t$email_key\n");
}
$dbh->pg_endcopy;
On 12/11/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTED]> wrote:
Thanks.
I am using Postgres 8.1.4 on Windows 2000 and i don't get the proper
response for threshold.
what is the response you get? please be specific about the issues.
also, the footer that comes with your emails is
not appr
On 12/11/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTED]> wrote:
Hello,
How to get the Postgresql Threshold value? Any commands available?
What is meant by threshold value?
We have a view in our database.
CREATE view public.hogs AS
SELECT pg_stat_activity.procpid, pg_stat_activity.usename,
pg_stat_activity.current_query
FROM ONLY pg_stat_activity;
Select current_query from public.hogs helps us to spot errant queries
at times.
regds
mallah.
On 12/7/06, asif
On 12/6/06, asif ali <[EMAIL PROTECTED]> wrote:
Hi,
I have a "product" table having 350 records. It takes approx 1.8 seconds to
get all records from this table. I copied this table to a "product_temp"
table and ran the same query to select all records; and it took 10ms (much
faster).
I did "VACU
On 12/6/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Rajesh Kumar Mallah" <[EMAIL PROTECTED]> writes:
> Startup time of a clean shutdown database is constant. But we still
> face problems when it comes to shutting down. PostgreSQL waits
> for clients to finish graceful
On 12/5/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Jean Arnaud <[EMAIL PROTECTED]> writes:
> Is there a relation between database size and PostGreSQL restart
> duration ?
No.
> Does anyone know the behavior of restart time?
It depends on how many updates were applied since the last checkpoint
befo
On 4/11/06, Simon Dale <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm trying to evaluate PostgreSQL as a database that will have to store a
> high volume of data and access that data frequently. One of the features on
> our wish list is to be able to use stored procedures to access the data and
4. fsync can also be turned off while loading a huge dataset, but seek
others' comments too (and study the docs) as i am not sure about the
reliability. i think it can make a lot of difference.
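For reference, that toggle lives in postgresql.conf and takes effect on
reload (a sketch; the data directory path is illustrative, and fsync = off
risks corruption if the machine crashes mid-load):
# in postgresql.conf:  fsync = off
pg_ctl reload -D /var/lib/pgsql/data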
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Rajesh Kumar Mallah wrote:
>> I'd r
what is the query? use LIMIT or a restricting where clause.
regds
mallah.
On 4/10/06, soni de <[EMAIL PROTECTED]> wrote:
Hello,
I have difficulty in fetching the records from the database.
The database table contains more than 1 GB of data.
Fetching the records takes more than 1 hour and that's w
sorry for the post, i didn't see the other replies until after posting.
On 4/10/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Hi
I'm currently upgrading a Postgresql 7.3.2 database to 8.1.
I'd run pg_dump | gzip >
On 4/10/06, Jesper Krogh <[EMAIL PROTECTED]> wrote:
Hi
I'm currently upgrading a Postgresql 7.3.2 database to 8.1. I'd run
pg_dump | gzip > sqldump.gz on the old system. That took about 30 hours
and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql into the 8.1 database seems to take
On 4/9/06, Chethana, Rao (IE10) <[EMAIL PROTECTED]> wrote:
> Hello!
> Kindly go through the following:
I wanted to know whether the command line arguments (function
arguments) -- $1 $2 $3 -- can be
used as in the following, like ---
applicable to your case.
Regds
Rajesh Kumar Mallah
On 4/3/06, Kenji Morishige <[EMAIL PROTECTED]> wrote:
> I am using postgresql to be the central database for a variety of tools for
> our testing infrastructure. We have web tools and CLI tools that require
> access
> to machine
On 9/29/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > > > Number of Copies | Updates per Sec
> > > >
> > > > 1 --> 119
> > > > 2 ---> 59
> > > > 3 ---> 3
On 9/28/05, Gavin Sherry <[EMAIL PROTECTED]> wrote:
> On Wed, 28 Sep 2005, Rajesh Kumar Mallah wrote:
>
> > Hi
> >
> > While doing some stress testing for updates in a small sized table
> > we found the following results. We are not too happy about the speed
>
The table was vacuum analyzed during the tests.
total number of records in table: 93
-------------
Regds
Rajesh Kumar Mallah.
V i s h a l Kashyap @ [Sai Hertz And Control Systems] wrote:
Dear all,
Has anyone compiled PostgreSQL with kernel 2.6.x? If YES:
1. Were there any performance gains?
Else:
1. Is it possible?
2. What problems would keep us away from compiling on kernel 2.6?
We run pgsql on 2.6.6; there was up to a 30% impr
Have you checked Tsearch2?
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/
It is the most feature-rich full text search system available
for postgresql. We are also using the same system in
the revamped version of our website.
Regds
Mallah.
Mark Stosberg wrote:
Hello,
I work for Summersault
.8 0:00 postmaster
Richard Huxton wrote:
On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 second
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts
Richard Huxton wrote:
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid >
, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid > 0;
Time: 117560.635 ms
Which is approximately 4804 records
The relation size for this table is 1.7 GB
tradein_clients=# SELECT public.relation_size ('general.rfis');
+--+
| relation_size|
+--+
|1,762,639,872 |
+--+
(1 row)
Regds
mallah.
Rajesh Kumar Mallah wrote:
The problem is that
:53, Rajesh Kumar Mallah wrote:
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages. VACUUM FULL does not help. can anyone please tell me
how do i enhance the performance of the setup.
SELECT count(*) from eyp_rfi;
If this is the actual query you're runn
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages.
VACUUM FULL does not help. can anyone please tell me
how do i enhance the performance of the setup.
Regds
mallah.
postgresql.conf
--
max_fsm_pages = 55099264 # min max_fsm_rela
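When an exact figure is not required, the planner's stored estimate avoids
the sequential scan entirely (a sketch; the estimate is only as fresh as
the last VACUUM or ANALYZE):
psql -c "SELECT reltuples::bigint AS estimated_rows
         FROM pg_class WHERE relname = 'eyp_rfi';"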
Richard Huxton wrote:
On Wednesday 14 April 2004 18:53, Rajesh Kumar Mallah wrote:
Hi
I have .5 million rows in a table. My problem is select count(*) takes
ages. VACUUM FULL does not help. can anyone please tell me
how do i enhance the performance of the setup.
SELECT count(*) from
Shea,Dan [CIS] wrote:
The index is
Indexes:
"forecastelement_rwv_idx" btree (region_id, wx_element, valid_time)
-Original Message-
From: Shea,Dan [CIS] [mailto:[EMAIL PROTECTED]
Sent: Monday, April 12, 2004 10:39 AM
To: Postgres Performance
Subject: [PERFORM] Deleting certain duplicates
Greetings,
Is there any performance penalty of having too many columns in
a table, in terms of read and write speeds?
In order to keep operational queries simple (avoid joins) we plan to
add columns to the main customer dimension table.
Adding more columns also means an increase in concurrency in the
Greetings!
Why does creation of gist indexes take significantly more time
than a normal btree index? Can any configuration changes lead to faster index
creation?
query:
CREATE INDEX co_name_index_idx ON profiles USING gist (co_name_index
public.gist_txtidx_ops);
regds
mallah.
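One setting often raised for large index builds is maintenance_work_mem,
though how much it helps a GiST build depends on the version (a sketch;
the 512MB figure is illustrative):
psql -c "SET maintenance_work_mem = '512MB';
         CREATE INDEX co_name_index_idx ON profiles
         USING gist (co_name_index public.gist_txtidx_ops);"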
scott.marlowe wrote:
On Tue, 13 Jan 2004, David Shadovitz wrote:
We avert the subsequent execution of count(*) by passing the
value of count(*) as a query parameter through the link in page
numbers.
Mallah, and others who mentioned caching the record count:
I am sure there is no transaction open with the table banner_stats2.
Still VACUUM FULL does not seem to be effective in removing the
dead rows.
Can any one please help?
Regds
mallah
tradein_clients=# VACUUM FULL verbose banner_stats2 ;
INFO: vacuuming "public.banner_stats2"
INFO: "banner_stats
Ever since i upgraded to 7.4RC2 i have been facing a problem
with select count(*). In 7.3 the problem was not there;
select count(*) from data_bank.profiles used to return almost
instantly, but in 7.4:
explain analyze SELECT count(*) from data_bank.profiles;