So your table is about 80 MB in size, or perhaps 120 MB if it fits in
shared_buffers. You can check it using "SELECT
pg_size_pretty(pg_relation_size('mytable'))"
- Luke
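For context, the sizing function above has a related variant worth knowing; a minimal sketch, where the table name 'mytable' is a placeholder:

```sql
-- Heap size only, as in the query above:
SELECT pg_size_pretty(pg_relation_size('mytable'));

-- Heap plus indexes and TOAST data, usually the more telling figure:
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
```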
On 3/26/08 4:48 PM, "Peter Koczan" <[EMAIL PROTECTED]> wrote:
> FWIW, I did a select count(*) on a table with just over 300
On Tue, Mar 25, 2008 at 3:35 AM, sathiya psql <[EMAIL PROTECTED]> wrote:
> Dear Friends,
> I have a table with 32 lakh records in it. The table size is nearly 700 MB,
> and my machine has 1 GB + 256 MB of RAM. I created the tablespace in
> RAM, and then created this table in that RAM.
>
> S
[EMAIL PROTECTED] ("sathiya psql") writes:
> On Tue, Mar 25, 2008 at 2:09 PM, jose
> javier parra sanchez <[EMAIL PROTECTED]> wrote:
>
> It's been said zillions of times on the mailing list. Using a select
>
> 1st: you should not use a ramdisk for this; it will slow things down
> compared to simply having the table on disk. Scanning it the first time
> when on disk will load it into the OS I/O cache, after which you will get
> memory speed.
>
Absolutely.
After getting some replies, I dropp
Hello Sathiya,
1st: you should not use a ramdisk for this; it will slow things down
compared to simply having the table on disk. Scanning it the first time
when on disk will load it into the OS I/O cache, after which you will get
memory speed.
2nd: you should expect the "SELECT COUNT(*)" to r
In response to "sathiya psql" <[EMAIL PROTECTED]>:
> >
> > Yes. It takes your hardware about 3 seconds to read through 700 MB of RAM.
> >
> >
> > Keep in mind that you're not just reading RAM. You're pushing system
> > requests through the VFS layer of your operating system, which is treating
> > t
>
> Yes. It takes your hardware about 3 seconds to read through 700 MB of RAM.
>
>
> Keep in mind that you're not just reading RAM. You're pushing system
> requests through the VFS layer of your operating system, which is treating
> the RAM like a disk (with cylinder groups and inodes and blocks
In response to "sathiya psql" <[EMAIL PROTECTED]>:
> Dear Friends,
> I have a table with 32 lakh records in it. The table size is nearly 700 MB,
> and my machine has 1 GB + 256 MB of RAM. I created the tablespace in
> RAM, and then created this table in that RAM.
>
> So now everything is
sathiya psql wrote:
> So now everything is in RAM. If I do a count(*) on this table it returns
> 327600 in 3 seconds. Why is it taking 3 seconds? Because I am sure that
> no disk I/O is happening.
It has to scan every page and examine visibility for every record. Even
if there's no I/O
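When an exact figure is not required, the planner's statistics sidestep that visibility scan entirely; a hedged sketch, with 'mytable' again a placeholder:

```sql
-- Approximate row count from the planner's stats, kept current by
-- ANALYZE/autovacuum; returns instantly because no heap pages are read:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'mytable';
```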
hubert depesz lubaczewski wrote:
> On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
>> Any Idea on this ???
>
> yes. don't use count(*).
>
> if you want a whole-table row count, use triggers to store the count.
>
> it will be slow regardless of whether it's in RAM or on HDD.
In othe
On Tue, Mar 25, 2008 at 02:05:20PM +0530, sathiya psql wrote:
> Any Idea on this ???
yes. don't use count(*).
if you want a whole-table row count, use triggers to store the count.
it will be slow regardless of whether it's in RAM or on HDD.
depesz
--
quicksil1er: "postgres is excellent, but li
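The trigger approach depesz describes can be sketched as below. All object names here are hypothetical, and note that a single-row counter table serializes concurrent writers:

```sql
-- Hypothetical sketch: maintain a running row count in a side table.
CREATE TABLE mytable_rowcount (n bigint NOT NULL);
INSERT INTO mytable_rowcount SELECT count(*) FROM mytable;  -- seed once

CREATE FUNCTION mytable_rowcount_adjust() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE mytable_rowcount SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE mytable_rowcount SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_rowcount_trig
AFTER INSERT OR DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE mytable_rowcount_adjust();

-- The fast whole-table count:
SELECT n FROM mytable_rowcount;
```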
On Tue, Mar 25, 2008 at 2:09 PM, jose javier parra sanchez <
[EMAIL PROTECTED]> wrote:
> It's been said zillions of times on the mailing list. Using a select
> count(*) in postgres is slow, and probably will be slow for a long
> time. So that function is not a good way to measure performance.
>
Yes, bu
Dear Friends,
I have a table with 32 lakh records in it. The table size is nearly 700 MB,
and my machine has 1 GB + 256 MB of RAM. I created the tablespace in
RAM, and then created this table in that RAM.
So now everything is in RAM. If I do a count(*) on this table it returns
327600 in 3