Re: [GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Jeff Janes
On Fri, Jul 24, 2015 at 12:13 PM, Chris Withers wrote: > On 24/07/2015 19:21, Jan Lentfer wrote: > >> >> I've been doing some lightweight load testing with “pgbench -c8 -j8 -T10”. When run locally on the postgres server I'm testing, this gives around 5000 tps. When I

Re: [GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Chris Withers
On 24/07/2015 19:21, Jan Lentfer wrote: I've been doing some lightweight load testing with “pgbench -c8 -j8 -T10”. When run locally on the postgres server I'm testing, this gives around 5000 tps. When I do it from a server that has a 13ms ping latency, it drops to 37 tps. This is using the defa

Re: [GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Jan Lentfer
>> On 24.07.2015 at 18:59, Chris Withers wrote: >> >> Hi all, >> >> I've been doing some lightweight load testing with >> “pgbench -c8 -j8 -T10” >> >> When run locally on the postgres server I'm testing, this gives around >> 5000 tps >> >> When I do it from a server that has a 13ms ping la

Re: [GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Jan Lentfer
That seems to be a large drop. On the other hand, 13 ms is also a very large network latency; on a LAN you are usually in the sub-ms area, so going from e.g. 0.2 ms to 13 ms is a 65-fold increase. What is the network topology like? Jan Sent from my iPad > On 24.07.2015 at 18:59, Chris
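A rough way to sanity-check the numbers in this thread (a back-of-the-envelope sketch, not from the original emails; the round-trip count of 7 is an assumption based on the default TPC-B-like pgbench script issuing BEGIN, three UPDATEs, a SELECT, an INSERT, and END, each costing one network round trip):

```python
# Latency-bound throughput estimate for a pgbench run.
# Assumes each transaction serializes ~7 client-server round trips
# (the default TPC-B-like script) and that clients run independently.

def estimated_tps(rtt_ms: float, clients: int, round_trips: int = 7) -> float:
    """Upper bound on tps when every statement waits one full round trip."""
    seconds_per_txn = round_trips * rtt_ms / 1000.0
    return clients / seconds_per_txn

print(round(estimated_tps(0.2, 8)))   # sub-ms LAN latency: ~5714
print(round(estimated_tps(13.0, 8)))  # 13 ms away: ~88
```

The observed 37 tps is below even this ceiling, which suggests additional per-statement overhead on top of raw latency, but the order of magnitude matches: at 13 ms the network round trips, not the server, dominate transaction time.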

Re: [GENERAL] Delete rule does not prevent truncate

2015-07-24 Thread Joshua D. Drake
On 07/24/2015 02:32 AM, Andres Freund wrote: On 2015-07-24 10:29:21 +0100, Tim Smith wrote: That's not the point. Backups are important, but so is the concept of various layers of anti-fat-finger protection. Restoring off backups should be last resort, not first. Oh, come on. Install a TRU

Re: [GENERAL] Using the database to validate data

2015-07-24 Thread Jeff Janes
On Fri, Jul 24, 2015 at 5:17 AM, Zdeněk Bělehrádek < zdenek.belehra...@superhosting.cz> wrote: > What about creating a SAVEPOINT before each INSERT, and if the INSERT > returns > an error, then ROLLBACK TO SAVEPOINT? Make sure you release the savepoint if there was no error. Otherwise you will

[GENERAL] pgbench tps drop from 5000 to 37 going from localhost to a server 13ms away

2015-07-24 Thread Chris Withers
Hi all, I've been doing some lightweight load testing with “pgbench -c8 -j8 -T10” When run locally on the postgres server I've testing, this gives around 5000tps When I do it from a server that has a 13ms ping latency, it drops to 37tps. This is using the default pgbench script, is it to be

Re: [GENERAL] Using the database to validate data

2015-07-24 Thread Jeff Janes
On Thu, Jul 23, 2015 at 5:55 AM, JPLapham wrote: > Hello, > > I have an application that occasionally performs large batch inserts of > user > hand-generated data. Input is a tab delimited file with typically hundreds > to a thousand lines of data. > > Because the data is generated by hand, there

Re: [GENERAL] Delete rule does not prevent truncate

2015-07-24 Thread Scott Marlowe
On Fri, Jul 24, 2015 at 3:27 AM, Tim Smith wrote: > On 23 July 2015 at 19:25, Scott Marlowe wrote: >> stick to triggers, they're faster > > > Erm, not according to your beloved manual!!! > > 38.7. Rules Versus Triggers > > "a rule issuing one extra command is likely to be faster than a trigger"

Re: [GENERAL] Delete rule does not prevent truncate

2015-07-24 Thread Adrian Klaver
On 07/24/2015 02:24 AM, Tim Smith wrote: Just in case it has not been made obvious yet, rules are silently deprecated. They still exist because views depend on them, but it is generally considered best practices to not use them outside that realm. Well, the manual doesn't seem to reflect that

Re: [GENERAL] Using the database to validate data

2015-07-24 Thread Zdeněk Bělehrádek
What about creating a SAVEPOINT before each INSERT, and if the INSERT returns an error, then ROLLBACK TO SAVEPOINT? This way you will have all the insertable data in your table, and you can still ROLLBACK the whole transaction, or COMMIT it if there were no errors. It will probably be quite slo
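The per-row SAVEPOINT pattern described above can be sketched with Python's stdlib sqlite3 module (used here only because it is self-contained and runnable; the SAVEPOINT / ROLLBACK TO / RELEASE statements are the same ones you would send to PostgreSQL via a driver such as psycopg2 — the table and sample rows are made up for the illustration):

```python
import sqlite3

# One transaction, one savepoint per INSERT: a failed row is rolled back
# to its savepoint while earlier successful rows stay in the transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT NOT NULL)")

rows = [(1, "ok"), (2, None), (3, "also ok")]  # row 2 violates NOT NULL

conn.execute("BEGIN")
errors = []
for rid, val in rows:
    conn.execute("SAVEPOINT sp")
    try:
        conn.execute("INSERT INTO items VALUES (?, ?)", (rid, val))
    except sqlite3.IntegrityError as exc:
        conn.execute("ROLLBACK TO SAVEPOINT sp")
        errors.append((rid, str(exc)))
    finally:
        # Release in both cases, as Jeff Janes points out in his reply,
        # so savepoints do not pile up inside the transaction.
        conn.execute("RELEASE SAVEPOINT sp")
conn.execute("COMMIT")  # or ROLLBACK the whole thing if errors is non-empty

good = [r[0] for r in conn.execute("SELECT id FROM items ORDER BY id")]
print(good)         # [1, 3]
print(len(errors))  # 1
```

As the author notes, this will be slower than a plain batch INSERT (one extra round trip per row for the savepoint bookkeeping), but it collects every error in one pass instead of aborting on the first bad row.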

Re: [GENERAL] Delete rule does not prevent truncate

2015-07-24 Thread Andrew Sullivan
On Fri, Jul 24, 2015 at 10:24:55AM +0100, Tim Smith wrote: > > If that's how the developers feel about rules, i.e. they can't be > bothered to make any changes to the rules code any more, no matter how > minor (e.g. this TRUNCATE issue) Who is this "they"? As I think I suggested in another mail,

Re: [GENERAL] Delete rule does not prevent truncate

2015-07-24 Thread Andres Freund
On 2015-07-24 10:29:21 +0100, Tim Smith wrote: > That's not the point. Backups are important, but so is the concept of > various layers of anti-fat-finger protection. Restoring off backups > should be last resort, not first. Oh, come on. Install a TRUNCATE trigger and let this thread die. Andre

Re: [GENERAL] Delete rule does not prevent truncate

2015-07-24 Thread Tim Smith
On 24 July 2015 at 01:37, Rob Sargent wrote: > Fair enough but both blackhats and the authorized are just as likely to drop > the database as truncate something (intentionally or not) and backups > stashed everywhere is the first order of business. That's not the point. Backups are important,

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread 林士博
Thanks for sharing the results of your experiment!

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread twoflower
In fact I did not partition by *fk_job* but by the *id* (primary key) instead, though partitioning by *fk_job* was my first idea. I use various columns when querying the table; *fk_job* is not always there. -- View this message in context: http://postgresql.nabble.com/The-fastest-way-to-update-

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread 林士博
Yes, partitioning by fk_job can significantly improve performance of this update, and all the SELECTs with a specific fk_job can be faster. What you should check carefully are those SELECTs without a specific fk_job. 2015-07-24 17:18 GMT+09:00 twoflower : > Thank you, I will look into those sugges

[GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread twoflower
Thank you, I will look into those suggestions. Meanwhile, I started experimenting with partitioning the table into smaller tables, each holding rows with IDs spanning 1 million values; using this approach, the UPDATE takes 300 ms. I have to check if all the SELECTs I am issuing against the origi

Re: [GENERAL] Re: The fastest way to update thousands of rows in moderately sized table

2015-07-24 Thread 林士博
OK. In your case, I cannot see any reason that using a temp table to do a join update would be faster than updating directly. And from the execution plan, the index scan takes very little time; most of the time is spent doing the insert/delete. As you mentioned, fk_assignment is updated frequently, and di