On Tue, Nov 7, 2017 at 2:25 PM, Ulf Lohbrügge wrote:
> 2017-11-07 20:45 GMT+01:00 Andres Freund :
>>
>> On 2017-11-07 18:48:14 +0100, Ulf Lohbrügge wrote:
>> > Hi,
>> >
>> > 2017-11-07 16:11 GMT+01:00 Andres Freund :
>> >
>> > > Hi,
>> > >
>> > > On 2017-11-07 11:11:36 +0100, Ulf Lohbrügge wrote:
Try the queries here to check locks:
https://wiki.postgresql.org/wiki/Lock_Monitoring
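For convenience, a minimal sketch in the spirit of the queries on that wiki page (a self-join on pg_locks plus pg_stat_activity; assumes 9.2+ where pg_stat_activity has pid/query columns):

-- Show which backend is blocked and which one holds the conflicting lock.
SELECT blocked.pid         AS blocked_pid,
       blocked_act.query   AS blocked_query,
       blocking.pid        AS blocking_pid,
       blocking_act.query  AS blocking_query
FROM pg_locks blocked
JOIN pg_locks blocking
  ON  blocking.locktype = blocked.locktype
  AND blocking.database IS NOT DISTINCT FROM blocked.database
  AND blocking.relation IS NOT DISTINCT FROM blocked.relation
  AND blocking.transactionid IS NOT DISTINCT FROM blocked.transactionid
  AND blocking.pid <> blocked.pid
JOIN pg_stat_activity blocked_act  ON blocked_act.pid  = blocked.pid
JOIN pg_stat_activity blocking_act ON blocking_act.pid = blocking.pid
WHERE NOT blocked.granted
  AND blocking.granted;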
On Wed, Oct 11, 2017 at 7:35 PM, Neto pr wrote:
> Dear,
> With alternative, I tested the creation using concurrency
> (CREATE INDEX CONCURRENTLY NAME_IDX ON TABLE USING HASH (COLUMN);
>
> from what I saw the ind
On Mon, Sep 4, 2017 at 2:14 AM, 우성민 wrote:
> Hi team,
>
> I'm trying to configure postgres and pgbouncer to handle many inserts from
> many connections.
>
> Here are some details about what I want to achieve:
>
> We have more than 3000 client connections, and my server program forks
> backend pro
On Tue, Aug 15, 2017 at 4:06 AM, Mariel Cherkassky
wrote:
> Hi,
> So I ran the checks that Jeff mentioned:
> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
> and 35 minutes
So 26G takes 95 minutes, or roughly 280 MB/minute (about 4.7 MB/second)? Sounds
about right (it's early, I have
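For reference, the arithmetic behind that estimate, assuming the 26G figure is the right one:

-- 26 GB copied in 95 minutes:
SELECT round(26 * 1024 / 95.0)          AS mb_per_minute,  -- ~280 MB/minute
       round(26 * 1024 / 95.0 / 60, 1)  AS mb_per_second;  -- ~4.7 MB/second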
On Mon, Aug 14, 2017 at 5:10 PM, Jerry Sievers wrote:
> Peter Geoghegan writes:
>
>> On Mon, Aug 14, 2017 at 12:53 PM, Jeremy Finzel wrote:
>>
>>> This particular db is on 9.3.15. Recently we had a serious performance
>>> degradation related to a batch job that creates 4-5 temp tables and 5
>>>
Oh yeah, sorry. Was looking at a different system where we were using
a tablespace for temp tables.
On Tue, Aug 15, 2017 at 10:00 AM, Jeremy Finzel wrote:
>> > Not so.
>> >
>> > This system has no defined temp_tablespace however spillage due to
>> > sorting/hashing that exceeds work_mem goes to b
On Mon, Aug 14, 2017 at 4:16 PM, Jerry Sievers wrote:
> Scott Marlowe writes:
>
>> You do know that temp tables go into the default temp table space,
>> just like sorts, right?
>
> Not so.
>
> This system has no defined temp_tablespace however spillage due to
Also, if you're querying newly loaded data, the db could be setting hint
bits on the first select, etc.
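One hedged way to pay that cost up front instead of on the first read is to vacuum right after the load (the table name below is a placeholder):

-- Sets hint bits (and gathers statistics) immediately after a bulk load,
-- so the first SELECT doesn't have to dirty every page it touches.
VACUUM ANALYZE big_loaded_table;
-- Optionally also freeze tuples to avoid later anti-wraparound vacuum work:
VACUUM FREEZE ANALYZE big_loaded_table;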
On Mon, Aug 14, 2017 at 2:46 PM, Jeremy Finzel wrote:
> On Mon, Aug 14, 2017 at 3:01 PM, Scott Marlowe
> wrote:
>>
>> On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel wrote:
>> >
>> > Any insights would be greatly appreciated, as we are concerned not
>>
On Mon, Aug 14, 2017 at 1:53 PM, Jeremy Finzel wrote:
> This particular db is on 9.3.15. Recently we had a serious performance
> degradation related to a batch job that creates 4-5 temp tables and 5
> indexes. It is a really badly written job but what really confuses us is
> that this job has be
On Tue, Aug 1, 2017 at 8:45 AM, Sumeet Shukla wrote:
> It seems that it is happening because of the way the database is created. On
> an old database it runs perfectly fine or if I use the old DB as template to
> create the new one, it runs fine. But if I create a new DB with same
> settings and
On Tue, Aug 1, 2017 at 6:41 AM, Sumeet Shukla wrote:
> Hi,
>
> I have 2 PG servers with same h/w and configuration and they are not in
> replication.
>
> On server A it takes 20 minutes to execute the script.
> On server B it takes more than 20 hours. (Seems to be stuck with create
> index and and
On Tue, Dec 27, 2016 at 3:50 PM, Flávio Henrique wrote:
> Hi there, fellow experts!
>
> I need advice on a query that became slower after the 9.3 to 9.6 migration.
>
> First of all, I'm from the dev team.
>
> Before migration, we (programmers) made some modifications to the query, bringing
> its average t
On Tue, Jul 18, 2017 at 3:20 AM, Charles Nadeau
wrote:
> Claudio,
>
> Find attached the iostat measured while redoing the query above
> (iostat1.txt). sda holds my temp directory (noop i/o scheduler), sdb the
> swap partition (cfq i/o scheduler) only and sdc (5 disks RAID0, noop i/o
> scheduler) h
On Sat, Jul 15, 2017 at 11:53 AM, Charles Nadeau
wrote:
> Mark,
>
> I increased the read-ahead to 16384 and it doesn't improve performance. My
> RAID 0 uses a stripe size of 256k, the maximum size supported by the
> controller.
Are your queries still spilling to disk for sorts? If this is the
case
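If so, two quick, hedged ways to check for on-disk sorts (the query and table names below are placeholders):

-- EXPLAIN ANALYZE reports "Sort Method: external merge  Disk: ..." when a
-- sort exceeds work_mem:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table ORDER BY some_column;

-- Cumulative temp-file usage per database (9.2+):
SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes
FROM pg_stat_database
ORDER BY temp_bytes DESC;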
On Wed, Jul 5, 2017 at 3:26 AM, Daulat Ram wrote:
> Hi experts,
>
> We have configured a replication environment on Windows 10, but I am getting
> the error messages below while starting the slave instance.
>
>
>
> Error:
>
>
>
> 2017-07-05 00:00:02 IST LOG: restored log file "0001002
On Thu, Jun 29, 2017 at 1:11 PM, Yevhenii Kurtov
wrote:
> Hi Jeff,
>
> That is just sample data; we are going live in June and I don't have
> anything real so far. Right now it's 9.6 and it will be the latest stable
> release available on the date that we go live.
Trust me on this one, you want to
On Fri, Jun 9, 2017 at 9:12 AM, Frits Jalvingh wrote:
> Hi John,
>
> Yes, I was aware and amazed by that ;) It is actually the fetch size in
> combination with autocommit being on; that dies the sweet OOM death as soon
> as the table gets big.
>
> But Postgres read performance, with autocommit off
On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh wrote:
> Hi Kenneth, Andreas,
>
> Thanks for your tips!
>
> I increased shared_buffers to 8GB but it has no measurable effect at all. I
> think that is logical: shared buffers are important for querying but not for
> inserting; for that the speed to w
On Wed, Jun 7, 2017 at 5:33 AM, Dinesh Chandra 12108
wrote:
> Dear Expert,
>
>
>
> Is there any way to rollback table data in PostgreSQL?
You really need to give us more details. PostgreSQL has the ability,
through continuous archiving, to roll back to a previous point in
time. This is for the wh
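As a small illustration (assuming continuous archiving is already set up with archive_command, and the restore point name is made up), you can mark a spot to recover back to later:

-- Creates a named point in the WAL stream; a later point-in-time recovery
-- can target it with recovery_target_name (available since 9.1).
SELECT pg_create_restore_point('before_risky_batch');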
On Tue, May 23, 2017 at 2:14 PM, Jarek wrote:
> Dnia 2017-05-23, wto o godzinie 11:39 -0700, Steve Crawford pisze:
>> The answer, as always, is "it depends."
>>
>>
>> Can you give us an overview of your setup? The appropriate setup for
>> small numbers of long-running analytical queries (typically
On Tue, May 9, 2017 at 7:08 AM, Vincent Veyron wrote:
>
> Well, the response to the ticket was quite fast :
>
> https://status.online.net/index.php?do=details&task_id=720
>
> Here's the stated cause :
>
>>Our tests have confirmed an issue caused by the fans of the power supplies
>>installed in se
On Mon, May 8, 2017 at 5:06 PM, Scott Marlowe wrote:
> On Mon, May 8, 2017 at 4:24 PM, Vincent Veyron wrote:
>> On Mon, 8 May 2017 12:48:29 -0600
>> Scott Marlowe wrote:
>>> > -Why are regular queries much faster on this same server?
>>>
>>> T
On Mon, May 8, 2017 at 4:24 PM, Vincent Veyron wrote:
> On Mon, 8 May 2017 12:48:29 -0600
> Scott Marlowe wrote:
>
> Hi Scott,
>
> Thank you for your input.
>
>>
>> The most likely cause of the difference would be that one server IS
>> honoring fsync reque
On Thu, May 4, 2017 at 8:10 AM, Junaid Malik wrote:
> Hello Guys,
>
> We are facing a problem related to Postgres performance. Indexes are not
> being utilized and Postgres is giving priority to seq scans. I read many
> articles on Postgres performance and found that we need to set the
> randome_p
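The setting being referred to is presumably random_page_cost. A hedged sketch of trying it out (1.1 is a common choice for SSD or well-cached storage; the table and query are placeholders):

-- Try it for one session first and compare plans:
SET random_page_cost = 1.1;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- If it helps, persist it cluster-wide (ALTER SYSTEM is 9.4+):
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();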
On Mon, May 8, 2017 at 11:49 AM, Vincent Veyron wrote:
> Hello,
>
> I use two dedicated bare metal servers (Online and Kimsufi). The first one
> takes much longer to execute a procedure that recreates a database by
> truncating its tables, then copying the data from a set of text files; it is
>
On Thu, May 4, 2017 at 8:36 AM, Scott Marlowe wrote:
> On Thu, May 4, 2017 at 8:10 AM, Junaid Malik wrote:
>> Hello Guys,
>>
>> We are facing a problem related to Postgres performance. Indexes are not
>> being utilized and Postgres is giving priority to seq scans.
On Thu, May 4, 2017 at 8:10 AM, Junaid Malik wrote:
> Hello Guys,
>
> We are facing a problem related to Postgres performance. Indexes are not
> being utilized and Postgres is giving priority to seq scans. I read many
> articles on Postgres performance and found that we need to set the
> randome_p
On Mon, Apr 17, 2017 at 8:55 PM, ROBERT PRICE wrote:
> I come from an Oracle background and am porting an application to postgres.
> App has a table that will contain 100 million rows and has to be loaded by a
> process that reads messages off a SQS queue and makes web service calls to
> insert re
On Fri, Mar 24, 2017 at 3:58 AM, Pietro Pugni wrote:
> Hi there,
> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 LTS (kernel
> 4.4.0-66-generic). Hardware is:
> - 2 x Intel Xeon E5-2690
> - 96GB RAM
> - Software mdadm RAID10 (6 x SSDs)
>
> Postgres is used in a sort of DWH application, so all
Suggestion #1 is to turn off any write caching on the RAID controller.
Using LSI MegaRAID we went from 3k-5k tps to 18k just by turning off write
caching. Basically it just got in the way.
On Fri, Jan 6, 2017 at 12:24 PM, Ivan Voras wrote:
> Hello,
>
> I'm investigating options for an environment which has about a dozen servers
> and several dozen databases on each, and they occasionally need to run huge
> reports which slow down other services. This is of course "legacy code".
> Af
On Wed, Nov 2, 2016 at 5:46 PM, Jim Nasby wrote:
> On 10/28/16 2:33 PM, Joshua D. Drake wrote:
>>
>> * A very high shared_buffers (in newer releases, it is not uncommon to
>> have many, many GB of)
>
>
> Keep in mind that you might get very poor results if shared_buffers is
> large, but not large
On Mon, Oct 24, 2016 at 2:07 PM, Lars Aksel Opsahl wrote:
> Hi
>
> Yes, this makes both the update and both selects much faster. We are now down
> to 3000 ms for the select, but then I get a problem with another SQL query where I
> only use epoch in the query.
>
> SELECT count(o.*) FROM met_vaer_wisline.
On Thu, Jul 7, 2016 at 10:27 AM, Merlin Moncure wrote:
> On Wed, Jul 6, 2016 at 4:48 PM, Scott Marlowe wrote:
>> On Wed, Jul 6, 2016 at 12:13 PM, Merlin Moncure wrote:
>>> Disabling write-back cache for write-heavy database loads will
>>> destroy it in short ord
On Wed, Jul 6, 2016 at 12:13 PM, Merlin Moncure wrote:
> Disabling write-back cache for write-heavy database loads will
> destroy it in short order due to write amplification and will generally
> cause it to underperform hard drives in my experience.
Interesting. We found our best performance
On Sat, Mar 19, 2016 at 4:29 AM, Scott Marlowe wrote:
> Given the size of your bonnie test set and the fact that you're using
> RAID-10, the cache should make little or no difference. The RAID
> controller may or may not interleave reads between all four drives.
> Some do, som
On Thu, Mar 17, 2016 at 2:45 PM, Dave Stibrany wrote:
> I'm pretty new to benchmarking hard disks and I'm looking for some advice on
> interpreting the results of some basic tests.
>
> The server is:
> - Dell PowerEdge R430
> - 1 x Intel Xeon E5-2620 2.4GHz
> - 32 GB RAM
> - 4 x 600GB 10k SAS Seag
On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo wrote:
> On 02/03/2016 16:49, Scott Marlowe wrote:
>>
>> On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe
>> wrote:
>>>
>>> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk
>>> wrote:
>>>>
On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe wrote:
> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk wrote:
>> Hi.
>>
>> I've noticed that the autovacuum process ran for more than 10 minutes; during this
>> time zabbix logged more than 90% disk IO utilization on the db volume...
On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk wrote:
> Hi.
>
> I've noticed that the autovacuum process ran for more than 10 minutes; during this
> time zabbix logged more than 90% disk IO utilization on the db volume
>
> ===>29237 2016-03-02 15:17:23 EET 0 [24-1]LOG: automatic
> vacuum of table
On Fri, Jan 8, 2016 at 10:07 AM, Nicolas Paris wrote:
>
> 2016-01-08 17:37 GMT+01:00 Mark Zealey :
>> Hi all, I just wrote an article about the postgres performance optimizations
>> I've been working on recently especially compared to our old MongoDB
>> platform
>>
>> https://mark.zealey.org/2016/
On Fri, Nov 13, 2015 at 7:15 AM, Merlin Moncure wrote:
> On Thu, Nov 12, 2015 at 4:26 PM, Tom Lane wrote:
>> Merlin Moncure writes:
>>> On Thu, Nov 12, 2015 at 9:48 AM, Craig James wrote:
What about a warning on creation?
db=> create table foo(i integer primary key);
db=> cr
On Wed, Oct 7, 2015 at 4:29 AM, FattahRozzaq wrote:
> Responses from you all are very precious.
>
> @Merlin,
> I misunderstood the question.
> Yes, I didn't measure it. I only monitor RAM and CPU using htop (I also use
> nmon for disk IO, iftop for the network utilization).
> Did 1 connection nee
On Tue, Oct 6, 2015 at 11:08 PM, ajaykbs wrote:
> I have checked the transparent huge pages and zone reclaim mode and those are
> already disabled.
>
> As a trial and error method, I have reduced the shared buffer size from
> 8500MB to 3000MB.
> The CPU i/o wait is icreased a little. But the perio
On Tue, Oct 6, 2015 at 3:33 AM, FattahRozzaq wrote:
> @Merlin Moncure, I got the calculation using pg_tune. And I modified
> the shared_buffers=24GB and the effective_cache_size=64GB
>
> @Igor Neyman,
> Yes, I had a performance problem where the response time sometimes took
> 11ms, with the exactly
On Tue, Jul 7, 2015 at 11:43 AM, Graeme B. Bell wrote:
>
> The comment on HDDs is true and gave me another thought.
>
> These new 'shingled' HDDs (the 8TB ones) rely on rewriting all the data on
> tracks that overlap your data, any time you change the data. Result: disks
> 8-20x slower during wr
Note that if you still have the settings you showed in your original
post you're just moving the goal posts a few feet further back. Any
heavy load can still trigger this kind of behaviour.
On Tue, Jul 7, 2015 at 5:29 AM, eudald_v wrote:
> Hello guys!
>
> I finally got rid of it.
> It looks that
On Tue, Jun 30, 2015 at 8:52 AM, eudald_v wrote:
> Hello all,
> This is my very first message to the Postgresql community, and I really hope
> you can help me solve the trouble I'm facing.
>
> I've an 80 core server (multithread) with close to 500GB RAM.
>
> My configuration is:
> MaxConn: 1500 (w
On Wed, Jul 1, 2015 at 5:06 PM, Craig James wrote:
> We're buying a new server in the near future to replace an aging system. I'd
> appreciate advice on the best SSD devices and RAID controller cards
> available today.
>
> The database is about 750 GB. This is a "warehouse" server. We load supplie
On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe wrote:
> On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake
> wrote:
>>
>> On 06/03/2015 03:16 PM, Tomas Vondra wrote:
>>
>>> What is more important, though, is the amount of memory. OP reported the
>>>
On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake wrote:
>
> On 06/03/2015 03:16 PM, Tomas Vondra wrote:
>
>> What is more important, though, is the amount of memory. OP reported the
>> query writes ~95GB of temp files (and dies because of full disk, so
>> there may be more). The on-disk format is u
On Wed, Jun 3, 2015 at 1:24 PM, Tomas Vondra
wrote:
>
>
> On 06/03/15 17:09, Scott Marlowe wrote:
>>
>> On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra
>>>
>>>
>>>
>>> I don't see why you think you have less than 3GB used. The output yo
On Wed, Jun 3, 2015 at 8:56 AM, Tomas Vondra
wrote:
>
> On 06/03/15 16:06, ben.play wrote:
>>
>> The query is (unfortunately) generated by Doctrine 2 (Symfony 2).
>> We can’t change the query easily.
>
>
> Well, then you'll probably have to buy more RAM, apparently.
>
>> This is my config :
>>
>>
On Mon, Jun 1, 2015 at 7:20 AM, Merlin Moncure wrote:
> On Mon, Jun 1, 2015 at 12:38 AM, Ashik S L wrote:
>>> On 05/30/2015 09:46 AM, Ashik S L wrote:
We are using PostgreSQL version 8.4.17.
The Postgres DB size is 900 MB and we are inserting 273 rows at once, and
each row is of 60
On Thu, Apr 9, 2015 at 7:35 AM, Graeme B. Bell wrote:
> ext4 settings
>
> ext4, nobarrier
> noatime+nodatime,
> stripe&stride aligned between raid10 & ext4 correctly.
>
>
> Some other useful things to know
>
> -- h710p
> readahead disabled on H710P
> writeback cache enabled on H710P
> Direct IO en
On Mon, Mar 16, 2015 at 6:59 AM, Robert Kaye wrote:
>
> 4. Linux 3.2 apparently has some less than desirable swap behaviours. Once
> we started swapping, everything went nuts.
On older machines I used to just turn off swap altogether. Esp if I
wasn't running out of memory but swap was engaging an
On Sun, Mar 15, 2015 at 11:46 AM, Andres Freund wrote:
> On 2015-03-15 20:42:51 +0300, Ilya Kosmodemiansky wrote:
>> On Sun, Mar 15, 2015 at 8:20 PM, Andres Freund
>> wrote:
>> > On 2015-03-15 11:09:34 -0600, Scott Marlowe wrote:
>> >> shared_mem of 12G is
On Sun, Mar 15, 2015 at 11:09 AM, Scott Marlowe wrote:
Clarification:
> 64MB work_mem AND max_connections = 500 is a recipe for disaster. No
> db can actively process 500 queries at once without going kaboom, and
> having 64MB work_mem means it will go kaboom long before it reaches
>
On Sun, Mar 15, 2015 at 10:43 AM, Scott Marlowe wrote:
> On Sun, Mar 15, 2015 at 7:50 AM, Andres Freund wrote:
>> On 2015-03-15 13:07:25 +0100, Robert Kaye wrote:
>>>
>>> > On Mar 15, 2015, at 12:13 PM, Josh Krupka wrote:
>>> >
>>> > It
On Sun, Mar 15, 2015 at 7:50 AM, Andres Freund wrote:
> On 2015-03-15 13:07:25 +0100, Robert Kaye wrote:
>>
>> > On Mar 15, 2015, at 12:13 PM, Josh Krupka wrote:
>> >
>> > It sounds like you've hit the postgres basics, what about some of the
>> > linux check list items?
>> >
>> > what does free
On Fri, Mar 13, 2015 at 4:03 PM, Varadharajan Mukundan
wrote:
>> We might even consider taking experts' advice on how to tune queries and
>> the server, but if postgres is going to behave like this, I am not sure we would
>> be able to continue with it.
>>
>> Having said that, I would say again that I
On Tue, Nov 25, 2014 at 8:59 PM, Tom Lane wrote:
> I wrote:
>>> Hmm, I don't like the trend here. For the repeat-1000x query, I get
>>> these reported execution times:
>
>>> 8.4 360 ms
>>> 9.0 365 ms
>>> 9.1 440 ms
>>> 9.2 510 ms
>>> 9.3 550 ms
>>> 9.4 570 ms
>>> head 570 ms
>
>> I made a q
On Tue, Nov 25, 2014 at 1:58 PM, Heikki Linnakangas
wrote:
> On 11/25/2014 10:36 PM, Scott Marlowe wrote:
>>
>> OK so there's a simple set of tree functions we use at work. They're
>> quite fast in 8.4 and they've gotten about 40% slower in 9.2. They
OK so there's a simple set of tree functions we use at work. They're
quite fast in 8.4 and they've gotten about 40% slower in 9.2. They're
a simple mix of sql and plpgsql functions which are at
http://pastebin.com/SXTnNhd5 and which I've attached.
Here's a test query:
select tree_ancestor_keys('0
On Thu, Aug 21, 2014 at 5:29 PM, Josh Berkus wrote:
> On 08/21/2014 04:08 PM, Steve Crawford wrote:
>> On 08/21/2014 03:51 PM, Josh Berkus wrote:
>>> On 08/21/2014 02:26 PM, Scott Marlowe wrote:
>>>> I'm running almost the exact same setup in production as a
On Thu, Aug 21, 2014 at 3:26 PM, Scott Marlowe wrote:
> On Thu, Aug 21, 2014 at 3:02 PM, Josh Berkus wrote:
>> On 08/20/2014 07:40 PM, Bruce Momjian wrote:
>>
>>> I am also
>>> unclear exactly what you tested, as I didn't see it mentioned in the
>>&g
On Thu, Aug 21, 2014 at 3:02 PM, Josh Berkus wrote:
> On 08/20/2014 07:40 PM, Bruce Momjian wrote:
>
>> I am also
>> unclear exactly what you tested, as I didn't see it mentioned in the
>> email --- CPU type, CPU count, and operating system would be the minimal
>> information required.
>
> Ooops!
On Thu, Jun 26, 2014 at 5:49 PM, Mark Kirkwood
wrote:
> I have a nice toy to play with: Dell R920 with 60 cores and 1TB ram [1].
>
> The context is the current machine in use by the customer is a 32 core one,
> and due to growth we are looking at something larger (hence 60 cores).
>
> Some initial
Thanks, we'll give that a try.
On Fri, Jun 6, 2014 at 7:38 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> Well it's me again, with another performance regression. We have this query:
>> SELECT *
>> FROM users u
>> WHERE (u.user_group_id IN
>>
Well it's me again, with another performance regression. We have this query:
SELECT *
FROM users u
WHERE (u.user_group_id IN
(SELECT ug.id
FROM user_groups ug, pro_partners p
WHERE ug.pro_partner_id = p.id
AND p.tree_sortkey BETWEEN
E'000101010100010100011
On Thu, May 15, 2014 at 10:52 AM, Tom Lane wrote:
> Scott Marlowe writes:
>> OK so we have a query that does OK in 8.4, goes to absolute crap in
>> 9.2 and then works great in 9.3. Thing is we've spent several months
>> regression testing 9.2 and no time testing 9.3,
OK so we have a query that does OK in 8.4, goes to absolute crap in
9.2 and then works great in 9.3. Thing is we've spent several months
regression testing 9.2 and no time testing 9.3, so we can't just "go
to 9.3" in an afternoon. But we might have to. 9.2 seems hopelessly
broken here.
The query l
On Mon, Mar 31, 2014 at 8:24 AM, Niels Kristian Schjødt
wrote:
>
> Thanks, this seems to persist after a reboot of the server though, and I have
> never in my server's 3-month lifetime experienced anything like it.
Could it be overheating and therefore throttling the cores?
Also another thing
On Wed, Mar 26, 2014 at 6:21 AM, Alexey Vasiliev wrote:
> I read from several sources that the maximum shared_buffers is 8GB.
>
> Is this true? If so, why exactly is this number the maximum amount of
> shared_buffers for good performance (on 64-bit Linux)?
On most machines the limit is higher than y
On Wed, Feb 19, 2014 at 6:10 PM, Tomas Vondra wrote:
> On 19.2.2014 19:09, Scott Marlowe wrote:
>> Right now I'm testing on a machine with 2x Intel E5-2690s
>> (http://ark.intel.com/products/64596/intel-xeon-processor-e5-2690-20m-cache-2_90-ghz-8_00-gts-intel-qpi)
>> 5
On Wed, Feb 19, 2014 at 8:13 AM, Merlin Moncure wrote:
> On Tue, Feb 18, 2014 at 2:41 PM, Tomas Vondra wrote:
>> On 18.2.2014 02:23, KONDO Mitsumasa wrote:
>>> Hi,
>>>
>>> I don't have PERC H710 raid controller, but I think he would like to
>>> know raid striping/chunk size or read/write cache ra
On Mon, Feb 17, 2014 at 8:03 AM, Niels Kristian Schjødt
wrote:
> Hi,
>
> I'm kind of a noob when it comes to setting up RAID controllers and tweaking
> them so I need some advice here.
>
> I'm just about to set up my newly rented DELL R720 12th gen server. It's
> running a single Intel Xeon E5-262
On Thu, Dec 19, 2013 at 4:40 PM, Steve Crawford
wrote:
> On 12/18/2013 12:12 PM, prashant Pandey wrote:
>>
>> Could you tell me each and every hardware parameter and OS parameter that
>> performance depends on?
>> I need the complete list of all the required parameters and how to extract
>> them
On Thu, Dec 19, 2013 at 9:53 AM, Dave Johansen wrote:
>>
> On Sat, Dec 7, 2013 at 1:37 PM, Scott Marlowe
> wrote:
>> I'll add that you can use asymmetric partitioning if you tend to do a
>> lot of more fine-grained queries on recent data and more big roll-up
>&g
On Sat, Dec 7, 2013 at 10:09 AM, desmodemone wrote:
> Hi Dave,
> About the number of partitions, I didn't have many
> problems with hundreds of partitions (like 360 days in a year).
> Moreover, you could bypass the overhead of the trigger with a direct insert on
> the partition, als
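For illustration, a hedged sketch of that direct-insert idea with pre-10 inheritance partitioning (table, column names, and values are made up):

-- Triggered path: the parent's trigger routes the row to the right child.
INSERT INTO measurements (sensor_id, ts, value)
VALUES (17, '2013-12-07 10:00:00', 42.0);

-- Direct path: skip the trigger overhead by targeting the child partition.
INSERT INTO measurements_2013_12_07 (sensor_id, ts, value)
VALUES (17, '2013-12-07 10:00:00', 42.0);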
On Thu, Dec 5, 2013 at 9:13 AM, Skarsol wrote:
> On Thu, Dec 5, 2013 at 9:50 AM, Scott Marlowe
> wrote:
>>
>> On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
>> > psql (PostgreSQL) 9.2.5
>> > Red Hat Enterprise Linux Server release 6.4 (Santiago)
>> &g
On Thu, Dec 5, 2013 at 8:16 AM, Skarsol wrote:
> psql (PostgreSQL) 9.2.5
> Red Hat Enterprise Linux Server release 6.4 (Santiago)
> Linux 2.6.32-358.6.1.el6.x86_64 #1 SMP Fri Mar 29 16:51:51 EDT 2013 x86_64
> x86_64 x86_64 GNU/Linux
> All relevant filesystems are ext4
>
> Changes from defaults:
>
On Fri, Nov 29, 2013 at 3:07 PM, Jan Michel wrote:
> Dear all,
> I have a quite strange problem running an extensive query on geo data
> checking for crossing ways. I don't know if this is a postgres or postgis
> problem, but I hope you can help. Running one thread is no problem, it
> finishes wit
On Wed, Nov 27, 2013 at 7:58 PM, Dong Ye wrote:
> As Heikki commented, VMware recently compared Postgres performance in
> an ESX (5.1) VM versus in a comparable native Linux. We saw 1.
> ESX-level locking causes no vertical scalability degradation, 2.
FYI Vmware has an optimized version of Postg
On Mon, Nov 11, 2013 at 1:09 AM, Евгений Селявка wrote:
> Hi Scott, I calculated all of my JDBC pool sizes. The maximum is 300 connections
> from components which use JDBC. I don't think it is a good idea to use
> pgbouncer, because our application uses the Spring framework, which uses JDBC
> and prepar
Also, the definitive page for postgres and dirty pages etc. is here:
http://www.westnet.com/~gsmith/content/linux-pdflush.htm
Not sure if it's out of date with more modern kernels. Maybe Greg will chime in.
As a follow-up to my previous message, here's a response curve on a 48-core
server I used at my last job.
https://picasaweb.google.com/lh/photo/aPYHPWPivPsS79fG3AKtZNMTjNZETYmyPJy0liipFm0?feat=directlink
Note the peak at around 38 to 48 cores. This is the sweet spot for
connections on this server.
On Wed, Nov 6, 2013 at 1:53 AM, Евгений Селявка wrote:
> Thank you for the advice.
>
> 1)
> First of all, we use a Java app with a JDBC driver which can pool connections;
> that's why I don't think it is a good decision to put one more pooler
> between the app and the DB. Maybe someone has experience with
On Sat, Nov 2, 2013 at 12:54 PM, Евгений Селявка wrote:
SNIP
> max_connections = 350
SNIP
> work_mem = 256MB
These two settings together are quite dangerous (see the sketch below).
1: Look into a db pooler to get the number of connections you need down to no
more than 2x the number of cores in your machine. I recommend pgbouncer.
2: You
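A rough, hedged illustration of why that combination is risky: each sort or hash node in each active query can use up to work_mem, so the worst case with these settings is far beyond typical RAM.

-- Worst case for a single work_mem allocation per active connection:
SELECT 350 * 256 AS worst_case_mb;   -- 89600 MB, roughly 87.5 GB
-- How many backends are actually busy right now:
SELECT count(*) FROM pg_stat_activity;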
On Thu, Aug 29, 2013 at 6:14 AM, bsreejithin wrote:
>
> I am *expecting 1000+ hits to my PostgreSQL DB* and I doubt my standalone DB
> will be able to handle it.
>
> So I want to *scale out by adding more servers to share the load*. For this,
> I want to do clustering.
>
> I am *curious to know ho
On Tue, Aug 13, 2013 at 4:50 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> OK I'm bumping this one last time in the hopes that someone has an
>> idea what to do to fix it.
>
>> Query plan: http://explain.depesz.com/s/kJ54
>
>> This query takes 180 seconds.
OK I'm bumping this one last time in the hopes that someone has an
idea what to do to fix it.
Query plan: http://explain.depesz.com/s/kJ54
This query takes 180 seconds. It loops 17391 times across the lower
index using entries from the upper index. That seems buggy to me.
While the exact estimate
On Fri, Aug 2, 2013 at 3:27 PM, Scott Marlowe wrote:
> On Fri, Aug 2, 2013 at 2:51 PM, Tom Lane wrote:
>> Scott Marlowe writes:
>>> Yep. Added the indexes and performance went right into the dumper. New
>>> plan on new table with old data added in random order now l
On Fri, Aug 2, 2013 at 2:51 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> Yep. Added the indexes and performance went right into the dumper. New
>> plan on new table with old data added in random order now looks like
>> the old table, only worse because it's on
On Fri, Aug 2, 2013 at 1:58 PM, Scott Marlowe wrote:
> On Fri, Aug 2, 2013 at 1:31 PM, Tom Lane wrote:
>> Scott Marlowe writes:
>>> I extracted all the data like so:
>>> select * into dba.pp_test_wide from original table;
>>> and get this query
On Fri, Aug 2, 2013 at 1:31 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> I extracted all the data like so:
>> select * into dba.pp_test_wide from original table;
>> and get this query plan from explain analyze:
>> http://explain.depesz.com/s/EPx which takes 20 minu
On Thu, Aug 1, 2013 at 7:22 PM, Scott Marlowe wrote:
> On Thu, Aug 1, 2013 at 5:44 PM, Tom Lane wrote:
>> Scott Marlowe writes:
>>> I am running 8.4.15 and can try 8.4.17 if some patch has been applied
>>> to it to address this issue. I just want to know should I
On Thu, Aug 1, 2013 at 5:44 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> I am running 8.4.15 and can try 8.4.17 if some patch has been applied
>> to it to address this issue. I just want to know should I
>
>> A: upgrade to 8.4.17
>> or
>> B: create a self c
Howdy. I seem to have inherited this problem:
http://www.postgresql.org/message-id/ef524e81-b815-4122-a337-7e50bced5...@silentmedia.com
Basically a subselect with no offset is resulting in really poor
performance (120s queries), but adding an OFFSET 0 to the inner subselect
results in 0.5s que
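For anyone hitting the same thing, a hedged sketch of the OFFSET 0 trick (tables and columns are made up): the extra OFFSET 0 keeps the planner from flattening the subselect into the outer query, which is what changes the plan here.

SELECT *
FROM orders o
WHERE o.customer_id IN (
    SELECT c.id
    FROM customers c
    WHERE c.region = 'EMEA'
    OFFSET 0   -- acts as an optimization fence
);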