Hi
A quick question, when pg receives data to be written to a table, does
it cache that data in memory in case a subsequent request/query would
need it?
As I understand it, data is stored in pages and those pages have to be
retrieved in order to write or read data from them. So my assumption
On Thu, Jan 22, 2009 at 4:42 PM, Thomas Finneid wrote:
>
> As I understand it, data is stored in pages and those pages have to be
> retrieved in order to write or read data from them. So my assumption is that
> a page used to write data would not be replaced until memory is low and
> different pa
>
> A quick question, when pg receives data to be written to a
> table, does it cache that data in memory in case a
> subsequent request/query would need it?
>
Afaik all pages are modified in memory, so the modified data would still be
cached.
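A minimal sketch of how to see that, assuming the contrib/pg_buffercache
module is installed and using placeholder names "mydb" and "mytable": count
the shared buffers that currently hold pages of the table, and how many of
them are dirty, i.e. modified in memory but not yet written back to disk.

$ psql -d mydb <<'SQL'
-- buffers held for "mytable" and how many are dirty (modified in memory)
SELECT c.relname,
       count(*) AS buffers,
       sum(CASE WHEN b.isdirty THEN 1 ELSE 0 END) AS dirty_buffers
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
  AND c.relname = 'mytable'
GROUP BY c.relname;
SQL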
Pavan Deolasee wrote:
On Thu, Jan 22, 2009 at 4:42 PM, Thomas Finneid wrote:
As I understand it, data is stored in pages and those pages have to be
retrieved in order to write or read data from them. So my assumption is that
a page used to write data would not be replaced until memory is low a
(Sorry, did not include the list in the reply)
Pavan Deolasee wrote:
Yes. That's how it works.
Is that how it works for an index as well? I just found out that I have
an index that is 35GB, and the table is 85GB. ( I will look into the
index, it works fine, but an index that is almost one third of the size of
the table seems a little bit strange. )
> I tried work_mem and maintenance_work_mem but it does not seem to make much
> difference yet. Admittedly I had set them to 100M and 80M, so after reading a
> little bit more I have found that I could easily set it to several GBs. But
> I am not sure those are the correct config parameters to use fo
> Is that how it works for an index as well? I just found out that I have an
> index that is 35GB, and the table is 85GB. ( I will look into the index, it
> works fine, but an index that is almost one third of the size of the table,
> seems a little bit strange. )
> So if it works the same way an
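A minimal sketch, assuming placeholder names "mydb" and "myindex": for
building or rebuilding an index it is maintenance_work_mem that matters, and
it can be raised for just the one session instead of globally.

$ psql -d mydb <<'SQL'
-- give this session's index rebuild a larger sort area
SET maintenance_work_mem = '1GB';
-- rebuild the index; if the new one comes out much smaller, the old one was bloated
REINDEX INDEX myindex;
SQL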
Hello!
I'm having a problem with a query that takes more or less 3.2 seconds to
execute.
This query uses a view which encapsulates some calculations (in order to
avoid duplicating these calculations in several places in the project).
In order to keep this post readable, I've put the v
Hi,
I am running postgresql 8.3.5 on FreeBSD with a dual-core Intel(R)
Xeon(R) CPU 3065 @ 2.33GHz, 2GB RAM and a Seagate Technology
Barracuda 7200.10 SATA 3.0Gb/s disk (RAID 1).
I made several benchmark tests with pgbench; the TPS rate is almost 40 +/- 5.
$ pgbench -i pgbench -s 50 -U pgsql
[pg...@$ pgbench
> I'm having a problem with a query that takes more or less 3.2 seconds to
> execute.
>
> This query uses a view which encapsulates some calculations (in order to
> avoid duplicating these calculations in several places in the project).
>
> In order to keep this post readable, I've put the vie
> I am running postgresql 8.3.5 on FreeBSD with a dual-core Intel(R)
> Xeon(R) CPU 3065 @ 2.33GHz, 2GB RAM and a Seagate Technology
> Barracuda 7200.10 SATA 3.0Gb/s disk (RAID 1).
>
> I made several benchmark tests with pgbench; the TPS rate is almost 40 +/- 5.
> $ pgbench -i pgbench -s 50 -U pgsql
>
> [pg...@
On Thu, 2009-01-22 at 17:47 +0200, Ibrahim Harrani wrote:
> Hi,
>
> I am running postgresql 8.3.5 on FreeBSD with a dual-core Intel(R)
> Xeon(R) CPU 3065 @ 2.33GHz, 2GB RAM and a Seagate Technology
> Barracuda 7200.10 SATA 3.0Gb/s disk (RAID 1).
>
> I made several benchmark tests with pgbench; the TPS rate i
On Wed, 21 Jan 2009, M. Edward (Ed) Borasky wrote:
Re the OOM killer -- maybe a patch to the kernel could make things
"better"??
People have tried to raise awareness of it; sample:
http://lkml.org/lkml/2007/2/9/275
without much success. The Linux kernel hackers dislike the whole approach
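A minimal sketch of the usual workaround on Linux (assuming root access):
turn off memory overcommit so the OOM killer is much less likely to pick
the postmaster as its victim.

# use strict overcommit accounting instead of the heuristic default
$ sysctl -w vm.overcommit_memory=2
# make the setting survive a reboot
$ echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf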
On 1/22/09, Ibrahim Harrani wrote:
>
> Is this rate is normal or not? What can I do to improve tps and insert
> performance?
>
> postgresql.conf
>
> shared_buffers = 800MB # min 128kB or max_connections*16kB
> work_mem = 2MB # min 64kB
> maintenance_
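A sketch of the kind of settings often suggested for write-heavy loads on a
2GB machine; the values below are illustrative starting points (8.3 defaults
in the comments), not measured recommendations for this box.

checkpoint_segments = 16       # default 3; fewer, less frequent checkpoints
wal_buffers = 1MB              # default 64kB; more room for WAL between writes
synchronous_commit = off       # only if losing the last few transactions in a crash is acceptable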
On Thu, 2009-01-22 at 13:11 +0100, Thomas Finneid wrote:
> Is there any possibility of telling pg to save that memory-cached data
> and state to disk when the server is shut down, so that when the
> server starts up again, it reads it back into memory?
It's possible, but not by any directly
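A rough workaround sketch (placeholder names "mydb" and "mytable"; $PGDATA
stands for the data directory): read the hot tables once right after
startup, so the OS cache and part of shared_buffers are repopulated before
real traffic arrives.

# warm the caches by scanning the important tables once
$ psql -d mydb -c "SELECT count(*) FROM mytable;"
# or pull the raw data files into the OS cache
$ cat "$PGDATA"/base/*/* > /dev/null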
Greg Smith wrote:
> On Wed, 21 Jan 2009, M. Edward (Ed) Borasky wrote:
>
>> Re the OOM killer -- maybe a patch to the kernel could make things
>> "better"??
>
> People have tried to raise awareness of it; sample:
>
> http://lkml.org/lkml/2007/2/9/275
>
> without much success. The Linux kernel
Ibrahim Harrani wrote:
> I made several benchmark tests with pgbench; the TPS rate is almost 40 +/- 5.
> $ pgbench -i pgbench -s 50 -U pgsql
>
> [pg...@$ pgbench -c 200 -t 2 -U pgsql -d pgbench
Try with 1000 transactions per client or more, instead of 2.
Also, I think you should set the "scale" in the prepare step (-i) at least
as high as the number of clients you're going to use.  (I dimly recall some
recent development in this area that might mean I'm wrong.)
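For example, something along these lines (a sketch: scale factor at least as
large as the client count, and enough transactions to smooth out the noise):

$ pgbench -i -s 100 -U pgsql pgbench       # re-initialize with scale 100
$ pgbench -c 100 -t 1000 -U pgsql pgbench  # 100 clients x 1000 transactions each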
Hi list,
Clustering my indexes dramatically improves the query performance of many of
my queries. Also, the actual clustering takes a very long time for big
databases, roughly 20 hours. I have two questions about how to improve this:
1. I've tweaked maintenance_work_mem and effective_cache_size to
On Thu, Jan 22, 2009 at 02:52:12PM -0500, Harold A. Giménez Ch. wrote:
> Hi list,
>
> Clustering my indexes dramatically improves the query performance of many of
> my queries. Also, the actual clustering takes a very long time for big
> databases, roughly 20 hours. I have two questions about how
Many thanks for your answer. I did see a comment about this in the
documentation on the link I posted below.
My main question remains though: Is it necessary to cluster after a restore?
Thanks again!
On Thu, Jan 22, 2009 at 2:58 PM, Kenneth Marshall wrote:
> On Thu, Jan 22, 2009 at 02:52:12PM
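One way to check instead of guessing (a sketch with placeholder names
"mydb", "mytable" and "mycolumn"): after the restore, ANALYZE and look at
pg_stats.correlation for the clustering column; a value close to 1.0 (or
-1.0) means the heap is still in index order and re-clustering would gain
little.

$ psql -d mydb <<'SQL'
ANALYZE mytable;
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'mytable' AND attname = 'mycolumn';
SQL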
Hi Merlin,
Here are the bonnie++ and the new pgbench results with higher transaction counts.
$ pgbench -i -s 30 -U pgsql pgbench
$ pgbench -c 100 -t 1000 -U pgsql -d pgbench
transaction type: TPC-B (sort of)
scaling factor: 30
number of clients: 100
number of transactions per client: 1000
number of tra
This is another bonnie++ test result, with version 1.03.
Delete files in random order...done.
Version 1.03e --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/
On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
wrote:
> Version 1.93d --Sequential Output-- --Sequential Input-
> --Random-
> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
> --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
Thomas Markus wrote:
try to reorganize your data with CLUSTER and create appropriate indexes
(don't forget to check statistics).
One question. Assume I have clustered, and new data has been added after
that; according to the docs, that data is added "outside" of the
clustered data. What happens
On Thu, Jan 22, 2009 at 10:58:25PM +0100, Thomas Finneid wrote:
> Thomas Markus wrote:
>
>> try to reorganize your data with CLUSTER and create appropriate indexes
>> (don't forget to check statistics).
>
> One question. Assume I have clustered and new data has been added after
> that, according t
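A minimal sketch (placeholder names again), assuming the table has been
clustered once before: a bare CLUSTER on the table re-sorts it on the
remembered index, so rows appended since the last run can be folded back
into order during a maintenance window.

$ psql -d mydb -c "CLUSTER mytable;"   # re-clusters on the previously used index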
Hi David,
I ran the test again with the following options, and I have added the
html output of the result.
$ bonnie++ -u pgsql -n 128 -r 2048 -s 4096 -x 1
Using uid:70, gid:70.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently
On Thu, Jan 22, 2009 at 3:29 PM, Ibrahim Harrani
wrote:
> This is an Intel server with onboard RAID. I will check the RAID
> configuration again tomorrow. Especially Write Cache and Read Ahead
> values mentioned at
> http://www.intel.com/support/motherboards/server/sb/CS-021019.htm
It would be good
David Rees wrote:
On Thu, Jan 22, 2009 at 1:27 PM, Ibrahim Harrani
wrote:
Version 1.93d --Sequential Output-- --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K
On Mon, Dec 22, 2008 at 12:59 AM, Greg Smith wrote:
> On Sat, 20 Dec 2008, Mark Wong wrote:
>
>> Here are links to how the throughput changes when increasing
>> shared_buffers: http://pugs.postgresql.org/node/505 My first glance
>> tells me that the system performance is quite erratic when i
On Thu, 22 Jan 2009, Mark Wong wrote:
I'm also capturing the PostgreSQL parameters as suggested so we can
see what's set in the config file, default, command line etc. It's
the "Settings" link in the "System Summary" section on the report web
page.
Those look good, much easier to pick out the
On Thu, 22 Jan 2009, Alvaro Herrera wrote:
Also, I think you should set the "scale" in the prepare step (-i) at
least as high as the number of clients you're going to use. (I dimly
recall some recent development in this area that might mean I'm wrong.)
The idea behind that maxim (clients >= scale) is that every pgbench
transaction updates one row of the branches table, and there are only
"scale" branch rows, so once clients outnumber branches they spend much of
their time waiting on row locks for the same branches.
Thomas Finneid wrote:
> Thomas Markus wrote:
>
>> try to reorganize your data with CLUSTER and create appropriate
>> indixes (dont forget to check statistics).
>
> One question. Assume I have clustered and new data has been added after
> that, according to the docs that data is added "outside" of
Hi Craig,
Here is the result. It seems that the disk write speed is terrible!
r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)
real    15m46.206s
user    0m0
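For comparison, the matching sequential read over the same file would be
something like this (a sketch; use a file larger than RAM, or flush the
caches first, so it really measures the disk and not memory):

$ time dd if=bigfile of=/dev/null bs=8192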
On Thu, Jan 22, 2009 at 7:44 PM, Greg Smith wrote:
> On Thu, 22 Jan 2009, Mark Wong wrote:
>
>> I'm also capturing the PostgreSQL parameters as suggested so we can
>> see what's set in the config file, default, command line etc. It's
>> the "Settings" link in the "System Summary" section on the r
Ibrahim Harrani wrote:
Hi Craig,
Here is the result. It seems that the disk write speed is terrible!
r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)
real