On Sun, Mar 10, 2013 at 11:28 PM, Greg Smith wrote:
> On 3/10/13 9:18 PM, Jon Nelson wrote:
>
>> The following is with ext4, nobarrier, and noatime. As noted in the
>> original post, I have done a fair bit of system tuning. I have the
>> dirty_bytes and dirty_background_
On Sun, Mar 10, 2013 at 10:46 AM, Greg Smith wrote:
> On 3/5/13 10:00 PM, Jon Nelson wrote:
>>
>> On Tue, Mar 5, 2013 at 1:35 PM, Jon Nelson
>> wrote:
>>>
>>>
>>> pgbench -h BLAH -c 32 -M prepared -t 10 -S
>>> I get 95,000 to 100,000
On Tue, Mar 5, 2013 at 1:35 PM, Jon Nelson wrote:
>
> pgbench -h BLAH -c 32 -M prepared -t 10 -S
> I get 95,000 to 100,000 tps.
>
> pgbench -h BLAH -c 32 -M prepared -t 10
> seems to hover around 6,200 tps (size 100) to 13,700 (size 400)
Some followup:
The read test goe
On Tue, Mar 5, 2013 at 7:02 PM, Josh Berkus wrote:
>
>> Do these basically sniff right?
>
> Well, the read test seems reasonable. I'm impressed by the speed of the
> write test ... how large is the raid card cache?
>
> And why 8.4? Can you try 9.2?
8.4 because it's what I've got, basically. I m
I was hoping to just get a "gut reaction" on some pgbench numbers I have, to
see if I'm in the ballpark.
OS: ScientificLinux 6.3, x86_64
Hardware: 4x real disks (not SSDs) behind an LSI 9260 in RAID 10, Xeon E5-2680
with hyperthreading OFF, 128GB of RAM.
Setup: PostgreSQL 8.4.13, ext4, barriers ON,
On Mon, Feb 18, 2013 at 6:39 PM, Josh Berkus wrote:
> Scott,
>
>> So do you have generally slow IO, or is it fsync behavior etc?
>
> All tests except pgbench show this system as superfast. Bonnie++ and dd
> tests are good (200 to 300 MB/s), and test_fsync shows 14K/second.
> Basically it has no is
I was wondering if somebody could clear up how tablespaces are used.
Let's say I have three classes of storage:
- ramdisk (tmpfs)
- SSD
- spinning rust
Furthermore, let's say I'd like to be able to tell postgresql to
prefer them - in that order - until they each get full; i.e., use tmpfs
until it re
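A minimal sketch of the kind of setup I mean, with hypothetical tablespace
names and mount points (and note: as far as I know there is no built-in
"fill in this order" fallback, so this only shows manual placement):
CREATE TABLESPACE ts_ram LOCATION '/mnt/ramdisk/pgdata';
CREATE TABLESPACE ts_ssd LOCATION '/mnt/ssd/pgdata';
CREATE TABLESPACE ts_rust LOCATION '/mnt/rust/pgdata';
CREATE TABLE scratch (id int) TABLESPACE ts_ram;  -- explicit placement
SET default_tablespace = 'ts_ssd';                -- default for new objects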
On Thu, Nov 22, 2012 at 7:42 AM, Jeremy Harris wrote:
> On 22/11/2012 00:08, Craig Ringer wrote:
>>
>> WITH
>>FENCE foo AS (SELECT ...),
>>bar AS (SELECT ...)
>> SELECT * FROM bar;
>>
>> Are we fencing just foo? Or all expressions?
>>
>
> WITH foo AS (FENCED SELECT ...),
> bar AS (SEL
My perspective on this is that CTEs *should* be just like creating a
temporary table and then joining to it, but without the
materialization costs. In that respect, they seem like they should be
like nifty VIEWs. If I wanted the behavior of materialization and then
join, I'd do that explicitly with
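To illustrate what I mean, a hedged sketch with hypothetical tables
(events, users):
-- the CTE form: currently materialized, i.e. an optimization fence
WITH recent AS (
    SELECT * FROM events WHERE ts > now() - interval '1 day'
)
SELECT * FROM recent r JOIN users u ON u.id = r.user_id;
-- the explicit form, for when materialize-then-join is what I want
CREATE TEMPORARY TABLE recent AS
    SELECT * FROM events WHERE ts > now() - interval '1 day';
SELECT * FROM recent r JOIN users u ON u.id = r.user_id;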
On Wed, Nov 14, 2012 at 1:01 PM, Jeff Janes wrote:
> On Wed, Nov 14, 2012 at 6:41 AM, Jon Nelson wrote:
>>
>> UPDATE: I have been able to replicate the issue. The parent table (the
>> one referenced in the LIKE portion of the CREATE TABLE statement) had
>> three indice
On Tue, Nov 13, 2012 at 7:10 PM, Jon Nelson wrote:
> I had moved on to a different approach to importing the data which
> does not work concurrently. However, I went back and tried to
> re-create the situation and - at least a naive attempt failed. I'll
> give it a few mo
On Tue, Nov 13, 2012 at 2:43 PM, Jeff Janes wrote:
> On Tue, Nov 13, 2012 at 12:03 PM, Jon Nelson
> wrote:
>> On Tue, Nov 13, 2012 at 1:27 PM, Heikki Linnakangas
>> wrote:
>>>
>>> On 13.11.2012 21:13, Jon Nelson wrote:
>>>>
>>>
>>
On Tue, Nov 13, 2012 at 1:27 PM, Heikki Linnakangas wrote:
> On 13.11.2012 21:13, Jon Nelson wrote:
>
>> I was working on a data warehousing project where a fair number of files
>> could be COPY'd more or less directly into tables. I have a somewhat nice
>> machine
I was working on a data warehousing project where a fair number of files
could be COPY'd more or less directly into tables. I have a somewhat nice
machine to work with, and I ran on 75% of the cores I have (75% of 32 is
24).
Performance was pretty bad. With 24 processes going, each backend (in COP
On Thu, Jul 19, 2012 at 11:07 AM, Jon Nelson wrote:
> Recently I found myself wondering what was taking a particular query so long.
> I immediately assumed it was a lack of I/O, because lack of I/O is a
> thorn in my side.
> Nope, the I/O was boring. CPU? Well, the process was using
On Mon, Jul 16, 2012 at 12:35 PM, Samuel Gendler
wrote:
> On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton wrote:
>>>
>>>
>> Every insert updates four indexes, so at least 3 of those will be in
>> random order. The indexes don't fit in memory, so all those updates will
>> involve reading most of th
I have a single *table* that is some 560GB in size, 6 columns, average
row width 63.
There are approximately 6.1 billion rows.
It has 4 indices, 131, 313, 131 and 190 GB in size, respectively. All
are btree indices.
I tried inserting new data into the table, and it's taking a *very* long time.
I p
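(For reference, a sketch of how the sizes above can be measured; the
relation names are placeholders:)
SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relname IN ('big_table', 'big_table_idx1', 'big_table_idx2');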
On Tue, Feb 28, 2012 at 4:46 PM, Claudio Freire wrote:
> On Tue, Feb 28, 2012 at 6:54 PM, Tom Lane wrote:
>>
>>> ... which is the ORM library (SQLAlchemy) doing a reflection of the
>>> table(s) involved.
>>
>> Oh, there's an ORM involved? I'll bet a nickel it's doing something
>> surprising, lik
On Tue, Feb 28, 2012 at 2:51 PM, Tom Lane wrote:
> Jon Nelson writes:
>> On Tue, Feb 28, 2012 at 1:28 PM, Tom Lane wrote:
>>> Jon Nelson writes:
>>>> Why does PostgreSQL /sometimes/ use the globally-configured values and
>>>> sometimes use the value
On Tue, Feb 28, 2012 at 1:28 PM, Tom Lane wrote:
> Jon Nelson writes:
>> The config shows 128MB for work_mem and 2GB for maintenance_work_mem.
>> Why does PostgreSQL /sometimes/ use the globally-configured values and
>> sometimes use the values that come from the connectio
I happened to be looking in the PostgreSQL logs (8.4.10, x86_64,
ScientificLinux 6.1) and noticed that an app was doing some sorting
(group by, order by, index creation) that ended up on disk rather than
staying in memory.
So I enabled trace_sort and restarted the app.
What followed confused me.
I
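(The enabling itself is just the following; a per-session sketch of the
same switch:)
SET trace_sort = on;
SET client_min_messages = log;  -- so the sort traces show up in the session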
I created a table with two columns: an id SERIAL (primary key) and a
text column (not null), and then added a unique index on the text field.
Then I ran the following query (with a huge work_mem - 20GB):
insert into tableA (text_field) select distinct other_text_field from
some_huge_set_of_tables
After
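The setup, roughly (a sketch; the names follow the query above, but the
DDL itself is a reconstruction):
CREATE TABLE tableA (
    id SERIAL PRIMARY KEY,
    text_field text NOT NULL
);
CREATE UNIQUE INDEX tableA_text_field_idx ON tableA (text_field);
SET work_mem = '20GB';
INSERT INTO tableA (text_field)
SELECT DISTINCT other_text_field FROM some_huge_set_of_tables;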
On Fri, Jan 27, 2012 at 12:05 PM, Heikki Linnakangas
wrote:
> On 27.01.2012 19:43, Jon Nelson wrote:
>>
>> Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
>> Furthermore, let's say I have a machine with sufficient memory for
Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
Furthermore, let's say I have a machine with sufficient memory for me
to set the work_mem and maintenance_work_mem to 20GB (just for this
session).
When I issue a CLUSTER using one of the indices, I see PostgreSQL (by
way of str
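Concretely, the sort of session I mean (table and index names hypothetical):
SET work_mem = '20GB';
SET maintenance_work_mem = '20GB';  -- just for this session
CLUSTER big_table USING big_table_pkey;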
On Wed, Dec 14, 2011 at 9:51 AM, Jon Nelson wrote:
> On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson wrote:
>> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane wrote:
>>> Jon Nelson writes:
>>>> Regarding caching, I tried caching it across calls by making the
>>>&
On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson wrote:
> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane wrote:
>> Jon Nelson writes:
>>> Regarding caching, I tried caching it across calls by making the
>>> TupleDesc static and only initializing it once.
>>> Wh
On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane wrote:
> Jon Nelson writes:
>> Regarding caching, I tried caching it across calls by making the
>> TupleDesc static and only initializing it once.
>> When I tried that, I got:
>
>> ERROR: number of columns (6769856) exceed
On Wed, Dec 14, 2011 at 12:18 AM, Tom Lane wrote:
> Jon Nelson writes:
>> The only thing I have left are these statements:
>
>> get_call_result_type
>> TupleDescGetAttInMetadata
>> BuildTupleFromCStrings
>> HeapTupleGetDatum
>> and fina
On Tue, Dec 13, 2011 at 1:57 PM, Tom Lane wrote:
> Jon Nelson writes:
>> I've got a 5GB table with about 12 million rows.
>> Recently, I had to select the distinct values from just one column.
>> The planner chose an index scan. The query took almost an hour.
>> W
I've got a 5GB table with about 12 million rows.
Recently, I had to select the distinct values from just one column.
The planner chose an index scan. The query took almost an hour.
When I forced index scan off, the query took 90 seconds (full table scan).
The planner estimated 70,000 unique values
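(A sketch of the forcing, with placeholder names:)
SET enable_indexscan = off;
SELECT DISTINCT some_column FROM big_table;  -- now a 90-second full table scan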
On Mon, Dec 12, 2011 at 10:38 AM, Merlin Moncure wrote:
> On Sat, Dec 10, 2011 at 7:27 PM, Jon Nelson wrote:
>> I was experimenting with a few different methods of taking a line of
>> text, parsing it, into a set of fields, and then getting that info
>> into a table.
&
On Sat, Dec 10, 2011 at 8:32 PM, Craig Ringer wrote:
> On 12/11/2011 09:27 AM, Jon Nelson wrote:
>>
>> The first method involved writing a C program to parse a file, parse
>> the lines and output newly-formatted lines in a format that
>> postgresql's COPY funct
I was experimenting with a few different methods of taking a line of
text, parsing it, into a set of fields, and then getting that info
into a table.
The first method involved writing a C program to parse a file, parse
the lines and output newly-formatted lines in a format that
postgresql's COPY f
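(The COPY half of that pipeline is just this kind of statement; the path
is a placeholder, and the default text format expects tab-delimited lines:)
COPY target_table FROM '/tmp/reformatted.txt';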
On Sun, Nov 20, 2011 at 7:56 AM, Jeremy Harris wrote:
> On 2011-11-17 17:10, Jon Nelson wrote:
>>
>> external sort ended, 7708696 disk blocks used: CPU 359.84s/57504.66u
>> sec elapsed 58966.76 sec
>>
>> Am I to understand that the CPU portion of the sorting on
A follow-up question.
Even with both work_mem and maintenance_work_mem equal to 16GB, I see this:
LOG: 0: begin index sort: unique = f, workMem = 16777216, randomAccess = f
and shortly thereafter:
LOG: 0: switching to external sort with 59919 tapes: CPU
2.59s/13.20u sec elapsed 16.85 sec
I'll try to compile multiple questions/answers into a single response.
On Thu, Nov 17, 2011 at 11:16 AM, Claudio Freire wrote:
> On Thu, Nov 17, 2011 at 2:10 PM, Jon Nelson wrote:
>> What sorts of things should I be looking at to improve the performance
>> of this query? Is
I have one query which does not run very often. Sometimes it may be
months between runs.
However, when it does get executed, it scans approximately 100
identically-structured tables (a form of partitioning), extracts and
groups on a subset of the columns, and creates a new table. The
individual tab
On Mon, Sep 19, 2011 at 7:53 PM, Stephen Frost wrote:
> Igor,
>
> * Igor Chudov (ichu...@gmail.com) wrote:
>> Would the time that it takes, differ a great deal, depending on whether the
>> table has only 100,000 or 5,000,000 records?
>
> Yes, because PostgreSQL is going to copy the data. If you d
On Mon, Jun 20, 2011 at 11:08 AM, Tom Lane wrote:
> Jon Nelson writes:
>> I ran a query recently where the result was very large. The outer-most
>> part of the query looked like this:
>
>> HashAggregate (cost=56886512.96..56886514.96 rows=200 width=30)
>> ->
PostgreSQL 8.4.8 on CentOS 5.6, x86_64. Default settings except work_mem = 1MB.
NOTE: I am using partitioned tables here, and was querying the
'master' table. Perhaps this is a known issue.
I ran a query recently where the result was very large. The outer-most
part of the query looked like this:
On Mon, Mar 7, 2011 at 2:00 PM, Tom Lane wrote:
> Jon Nelson writes:
>> I was hoping that somebody could help me understand the differences
>> between three plans.
>> All of the plans are updating a table using a second table, and should
>> be logically equivalent.
>
Originally, I posted to -general but I found some time to write some
samples, and realized it's probably more of a performance question.
The original post is here:
http://archives.postgresql.org/pgsql-general/2011-03/msg00198.php
I was hoping that somebody could help me understand the differences
On Thu, Feb 3, 2011 at 7:41 AM, Kenneth Marshall wrote:
> On Thu, Feb 03, 2011 at 02:11:58AM -0800, da...@lang.hm wrote:
>> On Thu, 3 Feb 2011, Vitalii Tymchyshyn wrote:
>>
>>> On 02.02.11 20:32, Robert Haas wrote:
Yeah. Any kind of bulk load into an empty table can be a problem,
eve
On Wed, Feb 2, 2011 at 12:11 PM, Mladen Gogala
wrote:
> Robert Haas wrote:
>>
>> It would be pretty hard to make autoanalyze work on
>> such tables
>> without removing some of the performance benefits of having such
>> tables in the first place - namely, the local buffer manage
On Tue, Feb 1, 2011 at 1:24 PM, Greg Smith wrote:
> Mladen Gogala wrote:
>>
>> Did anyone try using "shake" while the cluster is active? Any problems
>> with corruption or data loss? I ran the thing on my home directory and
>> nothing was broken. I didn't develop any performance test, so cannot vo
On Fri, Jan 14, 2011 at 2:11 PM, Jon Nelson wrote:
> On Thu, Jan 13, 2011 at 6:10 PM, Tom Lane wrote:
>> Jon Nelson writes:
>>> On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane wrote:
>>>> If you have enough memory to de-dup them individually, you surely have
>
On Thu, Jan 13, 2011 at 6:10 PM, Tom Lane wrote:
> Jon Nelson writes:
>> On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane wrote:
>>> If you have enough memory to de-dup them individually, you surely have
>>> enough to de-dup all at once.
>
>> If everything were
On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane wrote:
>>> I don't believe there is any case where hashing each individual relation
>>> is a win compared to hashing them all together. If the optimizer were
>>> smart enough to b
On Thu, Jan 13, 2011 at 4:49 PM, Robert Haas wrote:
> On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson wrote:
> I don't believe there is any case where hashing each individual relation
> is a win compared to hashing them all together. If the optimizer were
> smart enough to be considerin
I was recently asked to look into why one particular set of queries
was taking a long time.
The queries are all of the same form. They select the UNION of a few
columns from around 100 tables.
The query in particular was taking some 7-8 minutes to run.
On a whim, I changed the query from this form
On Thu, Jan 13, 2011 at 2:05 PM, Tom Lane wrote:
> Jon Nelson writes:
>> Your comment regarding "each individual de-duplication looked like it
>> would fit in work_mem" doesn't really make sense, exactly. Maybe I'm
>> misunderstanding you.
>
>
On Thu, Jan 13, 2011 at 11:13 AM, Tom Lane wrote:
> Jon Nelson writes:
>> In the former case, the query plan was a bitmap heap scan for each
>> table. Then those results were Appended, Sorted, Uniqued, Sorted
>> again, and then returned.
>
>> In the latter, before A
I was recently asked to look into why one particular set of queries
was taking a long time. The queries are all of the same form. They
select the UNION of a few
columns from around 100 tables.
The query in particular was taking some 7-8 minutes to run.
On a whim, I changed the query from this fo
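A hedged sketch of the shape of the change, with hypothetical table names
(the real query touches ~100 tables; this is my reading of the rewrite
discussed elsewhere in the thread, not the exact production SQL):
-- original form: one de-duplication over everything
SELECT a, b FROM t_001
UNION
SELECT a, b FROM t_002;
-- rewritten form: de-duplicate each relation first
SELECT DISTINCT a, b FROM t_001
UNION
SELECT DISTINCT a, b FROM t_002;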
On Mon, Dec 6, 2010 at 1:46 PM, bricklen wrote:
> On Sat, Dec 4, 2010 at 11:45 AM, felix wrote:
>> Ok, I caught one : an update that is stuck in waiting.
>> the first one blocks the second one.
>> ns | 5902 | nssql | UPDATE "fastadder_fastadderstatus" SET "built"
>> = false WHERE "fasta
On Wed, Nov 17, 2010 at 3:24 PM, Greg Smith wrote:
> Scott Carey wrote:
>>
>> Did you recompile your test on the RHEL6 system?
>
> On both systems I showed, I checked out a fresh copy of the PostgreSQL 9.1
> HEAD from the git repo, and compiled that on the server, to make sure I was
> pulling in t
On Sat, Nov 13, 2010 at 9:41 AM, Tom Lane wrote:
> Jon Nelson writes:
>> OK. This is a highly distilled example that shows the behavior.
>
>> BEGIN;
>> CREATE TEMPORARY TABLE foo AS SELECT x AS A, chr(x % 75 + 32) AS b,
>> ''::text AS c from generate
I was doing some testing with temporary tables using this sql:
begin;
select pg_sleep(30);
create temporary TABLE foo (a int, b int, c int, d text);
insert into foo SELECT (x%1000) AS a,(x%1001) AS b, (x % 650) as c, ''
as d FROM generate_series( 1, 100 ) AS x;
-- create temporary TABLE foo A
On Thu, Oct 28, 2010 at 9:23 AM, Tom Lane wrote:
> Jon Nelson writes:
>> On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane wrote:
>>> It thinks it's faster, or there is some reason why it *can't* use the
>>> index, like a datatype mismatch. You could tell which by
On Thu, Nov 11, 2010 at 10:38 PM, Pavel Stehule wrote:
> 2010/11/12 Jon Nelson :
>> On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule
>> wrote:
>>> Hello
>>>
>>> Look at the EXPLAIN ANALYZE command. Probably your statistics are out of
>>> date, and then plan
On Thu, Nov 11, 2010 at 10:26 PM, Pavel Stehule wrote:
> Hello
>
> Look at the EXPLAIN ANALYZE command. Probably your statistics are out of
> date, and then the planner can be confused. The EXPLAIN ANALYZE statement shows it.
As I noted earlier, I did set statistics to 1000 and re-ran vacuum
analyze, and the plan did no
On Fri, Nov 5, 2010 at 7:26 PM, Jon Nelson wrote:
> I have a postgres 8.4.5 instance on CentOS 5 (x86_64) which appears to
> go crazy with the amount of memory it consumes.
> When I run the query below, in a matter of a few seconds memory
> balloons to 5.3G (virtual), 4.6G (reside
On Thu, Nov 11, 2010 at 1:41 PM, Tom Lane wrote:
> "Kevin Grittner" writes:
>> Besides the "fully-scanned object size relative to relation size
>> costing adjustment" idea, the only one which seemed to be likely to
>> be useful for this sort of issue was the "costing factors by user
>> ID" idea -
I also found this. Perhaps it is related?
http://postgresql.1045698.n5.nabble.com/Hash-Aggregate-plan-picked-for-very-large-table-out-of-memory-td1883299.html
I have a postgres 8.4.5 instance on CentOS 5 (x86_64) which appears to
go crazy with the amount of memory it consumes.
When I run the query below, in a matter of a few seconds memory
balloons to 5.3G (virtual), 4.6G (resident) and 1840 (shared), and
eventually the oom killer is invoked, killing the
On Fri, Nov 5, 2010 at 7:08 AM, Guillaume Cottenceau wrote:
> Marti Raudsepp writes:
>
>> On Fri, Nov 5, 2010 at 13:32, A B wrote:
>>> I was just thinking about the case where I will have almost 100%
>>> selects, but still need something better than a plain key-value
>>> storage so I can do som
I've been having trouble with a query.
The query is a cross join between two tables.
Initially, I mis-typed the query, and one of the columns specified in
the query doesn't exist, however the query ran nonetheless.
The actual query:
select gid from t2, t3 where t2.name = t3.name and t3.scope = 'ci
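For anyone else bitten by this: the classic way a misspelled or missing
column can silently "work" is an unqualified name in a subquery resolving
as an outer reference. A sketch (hypothetical schema, not my exact tables):
CREATE TABLE t2 (gid int, name text);
CREATE TABLE t3 (scope text);  -- note: no "name" column here
SELECT gid FROM t2 WHERE name IN (SELECT name FROM t3);
-- no error: the inner "name" binds to t2.name, so the subquery is
-- effectively correlated and the query degenerates into a cross join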
On Wed, Oct 27, 2010 at 5:36 PM, Tom Lane wrote:
> Jon Nelson writes:
>> I'd like to zoom out a little bit and, instead of focusing on the
>> specifics, ask more general questions:
..
>> - is there some way for me to determine /why/ the planner chooses a
>> se
On Wed, Oct 27, 2010 at 4:45 PM, Tom Lane wrote:
> Jon Nelson writes:
>> The sequence goes exactly like this:
>
>> BEGIN;
>> CREATE TEMPORARY TABLE (20 columns, mostly text, a few int).
>> COPY (approx 8 million rows, ~900 MB)[1]
>> UPDATE (2.8 million of th
On Wed, Oct 27, 2010 at 2:43 PM, Tom Lane wrote:
> Jon Nelson writes:
>> The most recent experiment shows me that, unless I create whatever
>> indexes I would like to see used *before* the large (first) update,
>> then they just don't get used. At all.
>
> You
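The ordering that ended up mattering, as a sketch (schema and names are
stand-ins for my real 20-column table):
BEGIN;
CREATE TEMPORARY TABLE t (id int, payload text);
COPY t FROM '/tmp/rows.txt';      -- ~8 million rows in the real case
CREATE INDEX t_id_idx ON t (id);  -- created *before* the big UPDATE
ANALYZE t;                        -- fresh stats so the planner can pick it
UPDATE t SET payload = 'x' WHERE id = 42;
COMMIT;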
On Wed, Oct 27, 2010 at 1:52 PM, Jon Nelson wrote:
> On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson wrote:
>> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
>>> set it to 500 and restarted postgres.
>>
>> did you re-analyze?
>
> Not recently. I tri
On Wed, Oct 27, 2010 at 1:32 PM, Reid Thompson wrote:
> On Wed, 2010-10-27 at 13:23 -0500, Jon Nelson wrote:
>> set it to 500 and restarted postgres.
>
> did you re-analyze?
Not recently. I tried that, initially, and there was no improvement.
I'll try it again now that I
On Wed, Oct 27, 2010 at 12:59 PM, Jon Nelson wrote:
> On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala
> wrote:
>> On 10/27/2010 1:29 PM, Jon Nelson wrote:
>> How big is your default statistics target? The default is rather small, it
>> doesn't produce v
On Wed, Oct 27, 2010 at 12:44 PM, Mladen Gogala
wrote:
> On 10/27/2010 1:29 PM, Jon Nelson wrote:
> How big is your default statistics target? The default is rather small, it
> doesn't produce very good or usable histograms.
Currently, default_statistics_target is 50.
I note that
On Tue, Oct 26, 2010 at 4:02 PM, Mladen Gogala
wrote:
> On 10/26/2010 11:41 AM, Merlin Moncure wrote:
>>
>> yup, that's exactly what I mean -- this will give you more uniform
>> insert performance (your temp table doesn't even need indexes). Every
>> N records (say 1) you send to permanent an
On Tue, Oct 19, 2010 at 9:36 AM, Tom Lane wrote:
> Jon Nelson writes:
>> This is another situation where using pread would have saved a lot of
>> time and sped things up a bit, but failing that, keeping track of the
>> file position ourselves and only lseek'ing when ne
On Tue, Oct 19, 2010 at 8:25 AM, Merlin Moncure wrote:
> On Tue, Oct 19, 2010 at 9:10 AM, Jon Nelson wrote:
>> No replies?
>>
>> This is another situation where using pread would have saved a lot of
>> time and sped things up a bit, but failing that, keeping tr
No replies?
This is another situation where using pread would have saved a lot of
time and sped things up a bit, but failing that, keeping track of the
file position ourselves and only lseek'ing when necessary would also
help. PostgreSQL was spending 37% of its time in redundant lseeks!
On Mon, Oct 18, 2010 at 6:01 PM, Tom Lane wrote:
> Jon Nelson writes:
>> CREATE INDEX foo_idx ON t USING GIN (alternatecodes) WHERE
>> alternatecodes IS NOT NULL;
>> SELECT * FROM t WHERE alternatecodes IS NOT NULL;
>> ERROR: GIN indexes do not support whole-index sca
I have a table with an array column.
I added a GIN index to the array:
CREATE INDEX foo_idx ON t USING GIN (alternatecodes) WHERE
alternatecodes IS NOT NULL;
That's all well and good.
However, some queries started failing and I was able to reproduce the
behavior in psql!
SELECT * FROM t WHERE al
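A condensed reproduction, assuming a schema like the one I'm using:
CREATE TABLE t (id int, alternatecodes text[]);
CREATE INDEX foo_idx ON t USING GIN (alternatecodes)
    WHERE alternatecodes IS NOT NULL;
SELECT * FROM t WHERE alternatecodes IS NOT NULL;
-- once the planner picks the partial GIN index for that predicate:
-- ERROR:  GIN indexes do not support whole-index scans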
postgres 8.4.4 on openSUSE 11.3 (2.6.36rc7, x86_64).
I was watching a fairly large query run and observed that the disk
light went out. I checked 'top' and postgres was using 100% CPU so I
strace'd the running process.
This is what I saw:
lseek(87, 0, SEEK_END) = 585531392
lseek(
Just my take on this.
The first thing I'd do is think real hard about whether you really
really want 'numeric' instead of boolean, smallint, or integer. The
second thing is that none of your indices (which specify a whole bunch
of fields, by the way) have only just emailok, emailbounced, or only
Are there any performance implications (benefits) to executing queries
in a transaction where
SET TRANSACTION READ ONLY;
has been executed?
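For concreteness, the pattern I have in mind:
BEGIN;
SET TRANSACTION READ ONLY;
SELECT count(*) FROM some_table;  -- reads are fine; any write now errors out
COMMIT;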
On Tue, Oct 12, 2010 at 8:18 AM, Greg Smith wrote:
> No. Usually the sequence used to remove all cached data from RAM before a
> benchmark is:
All cached data (as cached in PostgreSQL - *not* the Linux system
caches)..., right?
On Tue, Oct 12, 2010 at 7:27 AM, Mladen Gogala
wrote:
>
> So, the results weren't cached the first time around. The explanation is the
> fact that Oracle, as of the version 10.2.0, reads the table in the private
> process memory, not in the shared buffers. This table alone is 35GB in
> size, Or
On Sun, Oct 10, 2010 at 12:14 PM, Mladen Gogala
wrote:
>
>
>
> In other words, when I batched the sequential scan to do 128-block I/Os, it
> was 4 times faster than when I did single-block I/O.
> Does that provide enough of an evidence and, if not, why not?
These numbers tell us nothing beca
On Wed, Oct 6, 2010 at 5:31 PM, Ivan Voras wrote:
> On 10/04/10 20:49, Josh Berkus wrote:
>
>>> The other major bottleneck they ran into was a kernel one: reading from
>>> the heap file requires a couple lseek operations, and Linux acquires a
>>> mutex on the inode to do that. The proper place to
On Thu, Jul 15, 2010 at 9:41 AM, Patrick Donlin wrote:
> I have two servers with equal specs, one of them running 8.3.7 and the new
> server running 8.4.4. The only tweak I have made from the default install
> (from Ubuntu repositories) is increasing shared_buffers to 768MB. Both
> servers are run
On Mon, May 17, 2010 at 7:28 AM, A. Kretschmer
wrote:
> In response to Jon Nelson :
>> On Mon, May 17, 2010 at 5:10 AM, Pierre C wrote:
>> > - or use a JOIN delete with a virtual VALUES table
>> > - or fill a temp table with ids and use a JOIN DELETE
>>
>>
2010/5/17 Віталій Тимчишин :
>
>
> 2010/5/17 Jon Nelson
>>
>> On Mon, May 17, 2010 at 5:10 AM, Pierre C wrote:
>> > - or use a JOIN delete with a virtual VALUES table
>> > - or fill a temp table with ids and use a JOIN DELETE
>>
>> What is a
On Mon, May 17, 2010 at 5:10 AM, Pierre C wrote:
> - or use a JOIN delete with a virtual VALUES table
> - or fill a temp table with ids and use a JOIN DELETE
What is a virtual VALUES table? Can you give me an example of using a
virtual table with selects, joins, and also deletes?
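(For the archives, a sketch of what I take Pierre to mean; the table and
ids are placeholders:)
-- a VALUES list used as an inline ("virtual") table
SELECT v.id FROM (VALUES (1), (2), (3)) AS v(id);
-- joined against a real table
SELECT t.* FROM some_table t
JOIN (VALUES (1), (2), (3)) AS v(id) ON t.id = v.id;
-- and the JOIN DELETE, via DELETE ... USING
DELETE FROM some_table t
USING (VALUES (1), (2), (3)) AS v(id)
WHERE t.id = v.id;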
On Thu, Oct 1, 2009 at 4:46 AM, S Arvind wrote:
>
> Is it FreeBSD, CentOS, Fedora, Redhat xxx??
FreeBSD isn't Linux.
Don't run Fedora; it undergoes way too much churn.
No real difference between CentOS and Red Hat.
I personally prefer openSUSE (or SLES/SLED if you want their
commercial offering).
On Thu, Oct 1, 2009 at 4:46 AM, S Arvind wrote:
> Hi everyone,
> What is the best Linux flavor for a server that runs postgres alone?
> It must handle a large number of databases, around 200+. Speed is
> the vital performance factor.
> Is it FreeBSD, CentOS, Fedora, Redhat xxx??
F
On Tue, Jul 28, 2009 at 4:11 PM, Scott Marlowe wrote:
> On Tue, Jul 28, 2009 at 2:58 PM, Merlin Moncure wrote:
>> On Mon, Jul 27, 2009 at 2:05 PM, Dave Youatt wrote:
>>> Greg Smith wrote:
On Tue, 21 Jul 2009, Doug Hunley wrote:
> Just wondering is the issue