PostgreSQL rsync backups require the DB to be shut down during the 'second'
rsync.
1. rsync the DB onto the backup filesystem (produces e.g. a 95-99.99% consistent
DB on the backup filesystem)
2. shut down the DB
3. rsync the shut-down DB onto the backup filesystem (synchronises the last
few
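A minimal sketch of that two-pass procedure, assuming a 9.3-era data directory
and pg_ctl on the path (paths are illustrative, not from the original post):

# pass 1: DB still running; moves the bulk of the data
rsync -a /var/lib/postgresql/9.3/main/ /backup/pgdata/
# pass 2: stop the DB, rsync again (fast - only the delta), restart
pg_ctl -D /var/lib/postgresql/9.3/main stop -m fast
rsync -a /var/lib/postgresql/9.3/main/ /backup/pgdata/
pg_ctl -D /var/lib/postgresql/9.3/main start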
On 04 Apr 2014, at 18:29, Nicolas Paris nipari...@gmail.com wrote:
Hello,
My question is about multiprocessing and materialized views.
http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
I (will) have something like 3600 materialised views, and I would like to
know the
Nicolas PARIS
2014-04-07 12:29 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
On 04 Apr 2014, at 18:29, Nicolas Paris nipari...@gmail.com wrote:
Hello,
My question is about multiprocessing and materialized views.
http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
These routines will run at night, and need to finish
quickly.
Thanks
Nicolas PARIS
2014-04-07 14:59 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
Hi again Nick.
Glad it helped.
Generally, I would expect that doing all the A's first, then all the B's, and
so on, would
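Not from the thread, but a sketch of one way to fan the refreshes out over
several backends (database name and parallel degree are assumptions):

# list every matview, refresh 8 at a time
# (schema qualification and quoting omitted for brevity)
psql -Atc "SELECT matviewname FROM pg_matviews" mydb |
  xargs -P 8 -I{} psql -d mydb -c 'REFRESH MATERIALIZED VIEW {}'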
HT off is common knowledge for better benchmarking results
It's wise to use the qualifier 'for better benchmarking results'.
It's worth keeping in mind here that a benchmark is not the same as normal
production use.
For example, where I work we do lots of long-running queries in parallel over
>= 48GB
From: Graeme B. Bell [g...@skogoglandskap.no]
Sent: Friday, September 26, 2014 9:55 AM
To: Burgess, Freddie
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query
A good way to start would be to introduce
The existing cost estimation
code effectively assumes that they're perfectly uniformly distributed,
which is a good average-case assumption but can be horribly wrong in
the worst case.
Sorry, just an outsider jumping in with a quick comment.
Every year or two the core count goes up.
like.
Graeme
On 30 Sep 2014, at 18:32, Tom Lane t...@sss.pgh.pa.us wrote:
Graeme B. Bell g...@skogoglandskap.no writes:
Every year or two the core count goes up. Can/should/does postgres ever
attempt two strategies in parallel, in cases where strategy A is generally
good but strategy B
Hi Roberto,
Hardware etc. is a solution, but you have not yet characterised the problem.
You should investigate whether the events are mostly... (a first-pass sketch
follows the list)
- reads
- writes
- computationally intensive
- memory intensive
- I/O intensive
- network I/O intensive
- independent? (e.g. does it matter if you
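A first-pass characterisation sketch on Linux (thresholds are judgement calls,
not hard rules):

iostat -x 5   # high %util / await on the DB device -> I/O bound
vmstat 5      # high us+sy -> CPU bound; nonzero si/so -> memory pressure
sar -n DEV 5  # interface throughput -> network I/O bound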
Very much agree with this. Just because an SSD is fast doesn't make it suited
for certain things, and a streaming sequential 100% write workload is one of
them. I've worked with everything from local disk to high-end SAN, and even
at the high end we've always put any DB logs on spinning disk.
I have a beast of a Dell server with the following specifications:
• 4x Xeon E5-4657LV2 (48 cores total)
• 196GB RAM
• 2x SCSI 900GB in RAID1 (for the OS)
• 8x Intel S3500 SSD 240GB in RAID10
• H710p RAID controller, 1GB cache
CentOS 6.6; the RAID10 SSD array uses XFS
I don't understand the logic behind using drives,
which are best for random I/O, for sequential I/O workloads.
Because they are also best for sequential I/O. I get 1.3-1.4GB/second from 4
SSDs in RAID, or 500MB/s for single-disk systems, even with cheap models.
Are you getting more than that from
I'd suggest you run it on a large ramdisk with fsync turned off on a 32-core
computer, and see what you get; that will be a good indication of a maximum.
Keep in mind though that 'postgres' with fsync (vs. without) is such a
different creature that the comparison isn't meaningful.
Similarly
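For reference, a sketch of that ramdisk experiment (mount point and sizes are
assumptions):

mount -t tmpfs -o size=32G tmpfs /mnt/ramdisk
initdb -D /mnt/ramdisk/pgdata
pg_ctl -D /mnt/ramdisk/pgdata -o '-c fsync=off' -w start
pgbench -i -s 100 postgres
pgbench -c 32 -j 32 -T 60 postgres   # rough ceiling, per the caveat above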
Hi Nico,
No one has mentioned the elephant in the room, but a database can
be very I/O intensive and you may not be getting the performance
you need from your virtual disk running on your VMware disk subsystem.
What do IOmeter or other disk performance evaluation software report?
1. O/S
Under O/S, don't forget to mention the Linux kernel version.
We saw a MASSIVE increase in TPS (I think it was a doubling? Don't have the
data to hand right now) on our multicore RHEL6 servers, when moving from a
stock RHEL6 kernel to an ELREPO 3.18 series kernel. That's what 10 years of
Josh, there seems to be an inconsistency in your blog. You say 3.10.X is
safe, but the graph you show with the poor performance seems to be from
3.13.X which as I understand it is a later kernel. Can you clarify which
3.X kernels are good to use and which are not?
Sorry to cut in -
2015-04-09 13:01 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
From a measurement I took back when we did the upgrade:
performance with kernel 2.6 (pgbench, scale 100, 32 clients):
48,651 transactions per second (read only)
6,504 transactions per second
faster it was?
Przemek Deć
2015-04-09 11:04 GMT+02:00 Graeme B. Bell g...@skogoglandskap.no:
Josh, there seems to be an inconsistency in your blog. You say 3.10.X is
safe, but the graph you show with the poor performance seems to be from
3.13.X which as I understand it is a later
A tangent to the performance testing thread here, but an important issue that
you will see come up in your work this year or next.
'PCIe SSD' may mean either an AHCI PCIe SSD or an NVMe PCIe SSD.
AHCI = old style: basically faster than SATA3, but quite similar in terms
of how the operating system
I believe yes / 0 are the default settings for synchronous_commit and
commit_delay. ** (Interestingly, the manual pages do not specify.) **
Sorry, I've just spotted the settings in the text. The statement marked ** is
incorrect.
Defaults are yes/0.
On Sun, May 31, 2015 at 7:53 PM, Yves Dorfsman y...@zioup.com wrote:
That's the thing, even on an old laptop with a slow IDE disk, 273 individual
inserts should not take more than a second.
I think that would depend on settings such as synchronous_commit, commit_delay,
or whether 2-phase
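For reference, the knobs in question (defaults as documented in the 9.x
manuals; session-level sketch):

SHOW synchronous_commit;        -- 'on' by default
SHOW commit_delay;              -- 0 by default
SET synchronous_commit = off;   -- trades a small window of lost (not corrupted)
                                -- commits for much faster small inserts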
I previously mentioned on the list that nvme drives are going to be a very big
thing this year for DB performance.
This video shows what happens if you get an 'enthusiast'-class motherboard and
5 of the 400GB intel 750 drives.
https://www.youtube.com/watch?v=-hE8Vg1qPSw
Total transfer speed:
Images/data here
http://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked
On 04 Jun 2015, at 13:07, Graeme Bell g...@skogoglandskap.no wrote:
I previously mentioned on the list that nvme drives are going to be a very
big thing this
with unlogged than to just get faster drives + logged tables?)
On Thu, Jun 4, 2015 at 1:23 PM, Graeme B. Bell g...@skogoglandskap.no wrote:
Images/data here
http://www.pcper.com/reviews/Storage/Five-Intel-SSD-750s-Tested-Two-Million-IOPS-and-10-GBsec-Achievement-Unlocked
On 04 Jun 2015
Hi everyone,
I've written a new open source tool for easily parallelising SQL scripts in
postgres. [obligatory plug: https://github.com/gbb/par_psql ]
Using it, I'm seeing a problem that I've also seen in other postgres projects
involving high degrees of parallelisation in the last 12
-Original Message-
From: Graeme B. Bell [mailto:graeme.b...@nibio.no]
Sent: Tuesday, July 07, 2015 8:26 AM
To: Merlin Moncure
Cc: Wes Vaske (wvaske); Craig James; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] New server: SSD/RAID recommendations?
As I have
, at 18:56, Wei Shan weishan@gmail.com wrote:
Hi Graeme,
Why would you think that you don't need RAID for ZFS?
Reason I'm asking is because we are moving to ZFS on FreeBSD for our future
projects.
Regards,
Wei Shan
On 8 July 2015 at 00:46, Graeme B. Bell graeme.b...@nibio.no
RAID controllers are completely unnecessary for SSD as they currently
exist.
Agreed. The best solution is not to buy cheap disks and not to buy RAID
controllers now, imho.
In my own situation, I had a tight budget, high performance demand and a newish
machine with RAID controller and HDDs
The comment on HDDs is true and gave me another thought.
These new 'shingled' HDDs (the 8TB ones) rely on rewriting all the data on
tracks that overlap your data, any time you change the data. Result: disks
8-20x slower during writes, after they fill up.
Do they have power loss protection
Yikes. I would not be able to sleep tonight if it were not for the BBU cache in
front of these disks...
diskchecker.pl consistently reported several examples of corruption
post-power-loss (usually 10-30) on unprotected M500s/M550s, so I think it's
pretty much open to debate what types of
On 07 Jul 2015, at 19:47, Scott Marlowe scott.marl...@gmail.com wrote:
[I know that using a shingled disk sounds crazy (it sounds crazy to me) but
you can bet there are people that just want to max out the disk bays in
their server... ]
Let's just say no online backup companies are using
Cache flushing isn't an atomic operation though. Even if the ordering is right,
you are likely to have a partial fsync on the disk when the lights go out -
isn't your FS still corrupt?
On 07 Jul 2015, at 21:53, Heikki Linnakangas hlinn...@iki.fi wrote:
On 07/07/2015 09:01 PM, Wes Vaske
On 28 Jul 2015, at 22:29, Graeme B. Bell graeme.b...@nibio.no wrote:
Entering production, availability 2016
1000x faster than NAND flash/SSD, e.g. DRAM latency
10x denser than DRAM
1000x the write endurance of NAND
Priced between flash and DRAM
Manufactured by Intel/Micron
Non-volatile
http
QUERY
SELECT COUNT(*) FROM occurrences WHERE (lat >= -27.91550355958 AND lat
<= -27.015680440420002 AND lng >= 152.13307044728307 AND lng <=
153.03137355271693 AND category_id = 1 AND (ST_Intersects(
ST_Buffer(ST_PointFromText('POINT(152.58 -27.465592)')::geography,
Some of you may have had annoying problems in the past with autofreeze or
autovacuum running at unexpected moments and dropping the performance of your
server randomly.
On our SSD-RAID10 based system we found a 20GB table finished its vacuum
freeze in about 100 seconds. There were no
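A sketch of taking the timing into your own hands (table name hypothetical):

VACUUM (FREEZE, VERBOSE) big_table;   -- freeze now, at a quiet moment
-- see which tables are closest to a forced wraparound freeze:
SELECT relname, age(relfrozenxid) FROM pg_class
WHERE relkind = 'r' ORDER BY 2 DESC LIMIT 10;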
, Graeme B. Bell graeme.b...@nibio.no wrote:
Some of you may have had annoying problems in the past with autofreeze or
autovacuum running at unexpected moments and dropping the performance of your
server randomly.
On our SSD-RAID10 based system we found a 20GB table finished its vacuum
Entering production, availability 2016
1000x faster than NAND flash/SSD, e.g. DRAM latency
10x denser than DRAM
1000x the write endurance of NAND
Priced between flash and DRAM
Manufactured by Intel/Micron
Non-volatile
Guess what's going in my 2016 db servers :-)
Please, don't be vapourware...
Hi all,
1. For those that don't like par_psql (http://github.com/gbb/par_psql), here's
an alternative approach that uses the Gnu Parallel command to organise
parallelism for queries that usually take days to run. Short script and
GIS-focused, but may give you a few ideas about how to
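The pattern is roughly this (file layout and job count are assumptions, not
the actual script):

# each chunks/*.sql file must be independent of the others
ls chunks/*.sql | parallel -j 8 psql -d gisdb -f {}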
On 23 Jul 2015, at 13:37, domenico febbo mimmopastic...@gmail.com wrote:
is the problem also in PostgreSQL 9.4.x?
I'm going to buy a production server with 4 sockets E7-4850 12 cores
so 12*4 = 48 cores (and 96 threads using HT).
What do you suggest?
Using or not HT?
BR
1. If you
Sounds like a locking problem, but assuming you aren’t Sherlock Holmes and
simply want to get the thing working as soon as possible:
Stick a fast SSD in there (whether you stay on VM or physical). If you have
enough I/O, you may be able to solve the problem with brute force.
SSDs are a lot
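If you do want to play detective for a minute first, on 9.x something like
this shows who is stuck waiting on a lock (the boolean pg_stat_activity.waiting
column was replaced in 9.6):

SELECT pid, waiting, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE waiting;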
>> First the database was on a partition where compression was enabled, I
>> changed it to an uncompressed one to see if it makes a difference thinking
>> maybe the cpu couldn't handle the load.
> It made little difference in my case.
>
> My regular gmirror partition seems faster:
> dd bs=8k
Seems a bit slow.
1. Can you share the script (the portion that does the file transfer) to the
list? Maybe you’re doing something unusual there by mistake.
Similarly the settings you’re using for scp.
2. What’s the network like?
For example, what if the underlying network is only capable of
>>
>>
> Like this ?
>
> gmirror (iozone -s 4 -a /dev/mirror/gm0s1e) = 806376 (faster drives)
> zfs uncompressed (iozone -s 4 -a /datapool/data) = 650136
> zfs compressed (iozone -s 4 -a /datapool/data) = 676345
If you can get the complete tables (as in the images on the blog post) with
> On 08 Oct 2015, at 11:17, Bram Van Steenlandt wrote:
>
> The database (9.2.9) on the server (freebsd10) runs on a zfs mirror.
> If I copy a file to the mirror using scp I get 37MB/sec
> My script achieves something like 7 or 8MB/sec on large (+100MB) files.
This may help -
> On 08 Oct 2015, at 13:50, Bram Van Steenlandt wrote:
>>> 1. The part is "fobj = lobject(db.db,0,"r",0,fpath)", I don't think there
>>> is anything there
Re: lobject
http://initd.org/psycopg/docs/usage.html#large-objects
"Psycopg large object support *efficient*
>>
>> http://initd.org/psycopg/docs/usage.html#large-objects
>>
>>
>> "Psycopg large object support *efficient* import/export with file system
>> files using the lo_import() and lo_export() libpq functions.”
>>
>> See *
>>
> I was under the impression they meant that the lobject was using
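For what it's worth, a sketch of the fast path being described (the OID and
path are hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=mydb")
my_oid = 12345                   # hypothetical OID of an existing large object
lob = conn.lobject(my_oid, "r")
lob.export("/tmp/blob.bin")      # libpq lo_export(): no Python-level read loop
conn.commit()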
> I don't think inserts can cause contention on the server. Inserts do not lock
> tables during the transaction. You may have contention on a sequence but it
> won't vary with transaction size.
Perhaps there could be a trigger on inserts which creates some lock contention?
https://medium.com/@c2c/nodejs-a-quick-optimization-advice-7353b820c92e
100% performance boost, for mysterious reasons that may be worth knowing about…
Graeme Bell
On 07 Jul 2015, at 22:52, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Jul 7, 2015 at 3:33 PM, Graeme B. Bell graeme.b...@nibio.no wrote:
Hi Merlin,
Long story short - thanks for the reply, but you're not measuring anything
about the parallelism of code running in a pl/pgsql
On 07/07/2015 08:05 PM, Craig James wrote:
No ideas, but I ran into the same thing. I have a set of C/C++ functions
that put some chemistry calculations into Postgres as extensions (things
like, calculate the molecular weight of this molecule). As SQL
functions, the whole thing bogged
suppose any RAID
controller removes data from BBU cache after it has been fsynced by the drive.
As far as I know, there is no other magic command for the drive to tell the
controller that the data is safe now and can be removed from BBU cache.
On Tue, 7 Jul 2015 at 11:59, Graeme B. Bell graeme.b...@nibio.no wrote:
features in some marketing material that were only present on the H710P)
And I see UBER (unrecoverable bit error) rates for SSDs and HDDs, but has
anyone ever seen them for the flash-based cache on their raid controller?
Sleep well, friends.
Graeme.
On 07 Jul 2015, at 18:54, Graeme B. Bell graeme.b
://github.com/gbb/t,
and I'm going to submit it as a bug to the pg bugs list.
Graeme.
On 06 Jul 2015, at 18:40, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jul 3, 2015 at 9:48 AM, Graeme B. Bell graeme.b...@nibio.no wrote:
Hi everyone,
I've written a new open source tool
No, of course it doesn't. It appears that you didn't look at the repo or
read my previous mail before you wrote this.
FFS, I *ran* some of the tests and reported on results. With you in CC.
Just checked back. So you did. I'm sorry, I made the mistake I accused you of.
But... why then
On 09 Jul 2015, at 15:22, Thomas Kellerer spam_ea...@gmx.net wrote:
Graeme B. Bell wrote on 09.07.2015 at 11:44:
I don't recall seeing a clear statement telling me I should mark pl/pgsql
functions nonvolatile wherever possible or throw all performance and
scalability out the window
3. I don't disagree that the benchmark code is objectively 'bad' in the
sense that it is missing an important optimisation.
Particularly as regards documentation, a patch improving things is
much more likely to improve the situation than griping. Also,
conversation on this list gets
On 09 Jul 2015, at 17:42, Merlin Moncure mmonc...@gmail.com wrote:
The community maintains its own mailing list archives on
postgresql.org. Short of an array of tactical nuclear strikes, this is
going to be preserved
Good to know, I've seen a lot of dead software projects throughout my
, Mkrtchyan, Tigran tigran.mkrtch...@desy.de wrote:
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Mkrtchyan, Tigran tigran.mkrtch...@desy.de
Cc: Graeme B. Bell graeme.b...@nibio.no, Steve Crawford
scrawf...@pinpointresearch.com, Wes Vaske (wvaske)
wva
Hi Karl,
Great post, thanks.
Though I don't think it's against conventional wisdom to aggregate writes into
larger blocks rather than rely on 4k performance on SSDs :-)
128kB blocks + compression certainly make sense. But it might make less sense,
I suppose, if you had some incredibly high
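E.g. the knobs in question, on a hypothetical pool/dataset:

zfs set recordsize=128k tank/pgdata
zfs set compression=lz4 tank/pgdata
zfs get compressratio tank/pgdata   # check what the compression is buying you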
, Mkrtchyan, Tigran tigran.mkrtch...@desy.de wrote:
Thanks for the Info.
So if RAID controllers are not an option, what one should use to build
big databases? LVM with xfs? BtrFs? Zfs?
Tigran.
- Original Message -
From: Graeme B. Bell graeme.b...@nibio.no
To: Steve Crawford scrawf
Completely agree with Steve.
1. Intel NVMe looks like the best bet if you have modern enough hardware for
NVMe. Otherwise e.g. S3700 mentioned elsewhere.
2. RAID controllers.
We have e.g. 10-12 of these here and e.g. 25-30 SSDs, among various machines.
This might give people an idea about
Hi everyone,
I've written a new open source tool for easily parallelising SQL scripts in
postgres. [obligatory plug: https://github.com/gbb/par_psql ]
Using it, I'm seeing a problem I've seen in other postgres projects involving
parallelisation in the last 12 months.
Basically:
- I
Thanks, this is very useful to know about the 730. When you say 'tested it with
plug-pulls', you were using diskchecker.pl, right?
Graeme.
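For anyone following along, the plug-pull procedure is roughly this (host name
and file size are placeholders; the tool is Brad Fitzpatrick's diskchecker.pl):

./diskchecker.pl -l                                  # on a second machine
./diskchecker.pl -s otherhost create test_file 500   # on the machine under test
# ...pull the power mid-run, reboot, then:
./diskchecker.pl -s otherhost verify test_file       # reports lost fsync'd writes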
On 07 Jul 2015, at 14:39, Karl Denninger k...@denninger.net wrote:
Incidentally while there are people who have questioned the 730 series power
loss
As I have warned elsewhere,
The M500/M550 from $SOME_COMPANY is NOT SUITABLE for postgres unless you have a
RAID controller with BBU to protect yourself.
The M500/M550 are NOT plug-pull safe despite the 'power loss protection'
claimed on the packaging. Not all fsync'd data is preserved in the
On 08 Jul 2015, at 22:27, Andres Freund and...@anarazel.de wrote:
On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:
On Wed, Jul 8, 2015 at 12:48 PM, Craig James cja...@emolecules.com wrote:
Well, right, which is why I mentioned even with dozens of clients.
Shouldn't that scale to at least
On 08 Jul 2015, at 13:20, Andres Freund and...@anarazel.de wrote:
On 2015-07-08 11:13:04 +, Graeme B. Bell wrote:
I'm guessing you are maybe pressed for time at the moment because I
already clearly included this on the last email, as well as the links
to the alternative benchmarks
On 09 Jul 2015, at 05:38, Tom Lane t...@sss.pgh.pa.us wrote:
If you
write your is_prime function purely in plpgsql, and don't bother to mark
it nonvolatile, *it will not scale*.
much for properly written plpgsql; but there's an awful lot of bad plpgsql
code out there, and it can make a
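To make the point concrete, a sketch of the marking in question (the function
body is mine, not Tom's):

CREATE OR REPLACE FUNCTION is_prime(n integer) RETURNS boolean
LANGUAGE plpgsql IMMUTABLE AS $$   -- IMMUTABLE: the default would be VOLATILE
BEGIN
  IF n < 2 THEN RETURN false; END IF;
  FOR i IN 2 .. floor(sqrt(n))::int LOOP
    IF n % i = 0 THEN RETURN false; END IF;
  END LOOP;
  RETURN true;
END;
$$;
-- a nonvolatile marking spares the executor per-call snapshot work,
-- which is what kills scaling under concurrency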
This is a reply to Andres's post on the #13495 documentation thread in
-bugs.
I am responding to it here because it relates to #13493 only.
Andres wrote, re: #13493
This issue is absolutely critical for performance and scalability of code,
Pft. In most cases it doesn't actually matter
I previously posted about par_psql, but I recently found another PG parallelism
project which can do a few extra things that par_psql can’t:
https://github.com/moat/pmpp
pmpp: Poor Man's Parallel Processing.
Corey Huinker had the idea of using dblink async as a foundation for
distributing
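The underlying dblink-async pattern looks roughly like this (connection string
and query are placeholders):

SELECT dblink_connect('w1', 'dbname=mydb');
SELECT dblink_send_query('w1', 'SELECT count(*) FROM big_table');
-- ...kick off more queries on other named connections, then collect:
SELECT * FROM dblink_get_result('w1') AS t(n bigint);
SELECT dblink_disconnect('w1');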