Re: effective_io_concurrency on EBS/gp2

2018-02-23 Thread Vitaliy Garnashevich
> I noticed that the recent round of tests being discussed never
> mentioned the file system used.  Was it XFS?  Does changing the
> agcount change the behaviour?


It was ext4.

Regards,
Vitaliy




Re: effective_io_concurrency on EBS/gp2

2018-02-23 Thread Rick Otten
On Thu, Feb 8, 2018 at 11:40 AM, Vitaliy Garnashevich <
vgarnashev...@gmail.com> wrote:

> Anyway, there are still some strange things happening when
> effective_io_concurrency is non-zero.
>
> ...
>


> Vitaliy
>
>
I was researching whether I could optimize a concatenated lvm2 volume built
from disks of different speeds (concatenated, not striped; I think I can if I
concatenate them in the right order, though I'm still testing that), when I
came across this article from a few years ago:
http://www.techforce.com.br/content/lvm-raid-xfs-and-ext3-file-systems-tuning-small-files-massive-heavy-load-concurrent-parallel

In the article he talks about the performance of parallel I/O on different
file systems.

Since I am already running XFS, that led me to this tunable:
http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure/tmp/en-US/html/Allocation_Groups.html

That brought me back to this discussion about effective_io_concurrency
from a couple of weeks ago.  I noticed that the recent round of tests being
discussed never mentioned the file system used.  Was it XFS?  Does changing
the agcount change the behaviour?
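For reference, agcount is chosen at mkfs time, so trying different values
means re-making the file system. A minimal sketch, assuming a hypothetical
device /dev/xvdf mounted at /mnt/pgdata:

# Show the allocation group count of an existing XFS file system:
xfs_info /mnt/pgdata | grep -o 'agcount=[0-9]*'

# Re-create the file system with an explicit agcount (example value only;
# this destroys the data on the device):
mkfs.xfs -f -d agcount=32 /dev/xvdf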


Re: effective_io_concurrency on EBS/gp2

2018-02-05 Thread Claudio Freire
On Mon, Feb 5, 2018 at 8:26 AM, Vitaliy Garnashevich wrote:
>> I mean that the issue is indeed affected by the order of rows in the
>> table. Random heap access patterns result in sparse bitmap heap scans,
>> whereas less random heap access patterns result in denser bitmap heap
>> scans. Dense scans have large portions of contiguous fetches, a
>> pattern that is quite adversely affected by the current prefetch
>> mechanism in Linux.
>>
>
> Thanks for your input.
>
> How can I test a sparse bitmap scan? Can you think of any SQL commands which
> would generate data and run such scans?
>
> Would a bitmap scan over expression index ((aid%1000)=0) do a sparse bitmap
> scan?

If you have a minimally correlated index (ie: totally random order),
and suppose you have N tuples per page, you need to select less (much
less) than 1/Nth of the table.
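A minimal sketch of such a test, reusing the (aid % 1000) expression index
idea from the quoted question above (the index name and settings are
examples only):

# With dozens of pgbench_accounts tuples per 8 kB page, matching roughly one
# row in a thousand is far below 1/N of the table, so the bitmap heap scan
# should touch widely separated pages (a sparse scan).
psql pgbench -c "create index pgbench_accounts_aid_mod on pgbench_accounts ((aid % 1000));"
# Gather statistics for the new expression index; depending on costs the
# planner may still pick a different plan.
psql pgbench -c "analyze pgbench_accounts;"
psql pgbench -c "set effective_io_concurrency=8; set enable_indexscan=off;
explain (analyze, buffers) select * from pgbench_accounts
where (aid % 1000) = 0 and abalance <> 0;"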



Re: effective_io_concurrency on EBS/gp2

2018-02-05 Thread Vitaliy Garnashevich

I mean that the issue is indeed affected by the order of rows in the
table. Random heap access patterns result in sparse bitmap heap scans,
whereas less random heap access patterns result in denser bitmap heap
scans. Dense scans have large portions of contiguous fetches, a
pattern that is quite adversely affected by the current prefetch
mechanism in Linux.



Thanks for your input.

How can I test a sparse bitmap scan? Can you think of any SQL commands 
which would generate data and run such scans?


Would a bitmap scan over expression index ((aid%1000)=0) do a sparse 
bitmap scan?


Regards,
Vitaliy



Re: effective_io_concurrency on EBS/gp2

2018-02-04 Thread Claudio Freire
On Sat, Feb 3, 2018 at 8:05 PM, Vitaliy Garnashevich wrote:
> Looks like this behavior is not caused by, and does not depend on:
> - variable performance in the cloud
> - order of rows in the table
> - whether the disk is EBS (backed by SSD or HDD), or ordinary SSD
> - kernel version
>
> Does this mean that the default setting for eic on Linux is just inadequate
> for how the modern kernels behave? Or am I missing something else in the
> tests?
>
> Regards,
> Vitaliy

I have analyzed this issue quite extensively in the past, and I can
say with high confidence that your analysis on point 2 is most
likely wrong.

Now, I don't have all the information to make that a categorical
assertion; you might have a point, but I believe you're
misinterpreting the data.

I mean that the issue is indeed affected by the order of rows in the
table. Random heap access patterns result in sparse bitmap heap scans,
whereas less random heap access patterns result in denser bitmap heap
scans. Dense scans have large portions of contiguous fetches, a
pattern that is quite adversely affected by the current prefetch
mechanism in Linux.

This analysis does point to the fact that I should probably revisit
this issue. There's a rather simple workaround for this: pg should
just avoid issuing prefetch requests for sequential block patterns,
since those are already handled much better by the kernel itself.



Re: effective_io_concurrency on EBS/gp2

2018-02-02 Thread Vitaliy Garnashevich
I did some more tests. I made an SQL dump of the table, used the head/tail
commands to cut out the data part, used the shuf command to shuffle the
rows, then joined the pieces back together and restored the table into
the DB.
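A rough in-database alternative to that dump/shuf/restore procedure, for
anyone repeating the test (the name of the shuffled copy is made up):

# Build a physically shuffled copy inside the database; CREATE TABLE AS
# writes rows in the order the query returns them. The TABLESPACE clause
# keeps the copy on the same EBS tablespace as the original benchmark.
psql pgbench -c "create table pgbench_accounts_shuffled tablespace test as
select * from pgbench_accounts order by random();"
psql pgbench -c "alter table pgbench_accounts_shuffled
add primary key (aid) using index tablespace test;"
psql pgbench -c "analyze pgbench_accounts_shuffled;"

The benchmark query can then be pointed at the shuffled copy.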


Before:
select array_agg(aid) from (select aid from pgbench_accounts order by ctid limit 20) _;

{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}

effective_io_concurrency=0 Execution time: 1455.336 ms
effective_io_concurrency=1 Execution time: 8365.070 ms
effective_io_concurrency=2 Execution time: 4791.961 ms
effective_io_concurrency=4 Execution time: 4113.713 ms
effective_io_concurrency=8 Execution time: 1584.862 ms
effective_io_concurrency=16 Execution time: 1533.096 ms
effective_io_concurrency=8 Execution time: 1494.494 ms
effective_io_concurrency=4 Execution time: 3235.892 ms
effective_io_concurrency=2 Execution time: 4624.334 ms
effective_io_concurrency=1 Execution time: 7831.310 ms
effective_io_concurrency=0 Execution time: 1422.203 ms

After:
select array_agg(aid) from (select aid from pgbench_accounts order by ctid limit 20) _;

{6861090,18316007,2361004,11880097,5079470,9859942,13776329,12687163,3793362,18312052,15912971,9928864,10179242,9307499,2737986,13911147,5337329,12582498,3019085,4631617}

effective_io_concurrency=0 Execution time: 71321.723 ms
effective_io_concurrency=1 Execution time: 180230.742 ms
effective_io_concurrency=2 Execution time: 98635.566 ms
effective_io_concurrency=4 Execution time: 91464.375 ms
effective_io_concurrency=8 Execution time: 91048.939 ms
effective_io_concurrency=16 Execution time: 97682.475 ms
effective_io_concurrency=8 Execution time: 91262.404 ms
effective_io_concurrency=4 Execution time: 90945.560 ms
effective_io_concurrency=2 Execution time: 97019.504 ms
effective_io_concurrency=1 Execution time: 180331.474 ms
effective_io_concurrency=0 Execution time: 71469.484 ms

The numbers are not directly comparable with the previous tests, because 
this time I used scale factor 200.


Regards,
Vitaliy

On 2018-02-01 20:39, Claudio Freire wrote:

On Wed, Jan 31, 2018 at 11:21 PM, hzzhangjiazhi wrote:

Hi,

I think this parameter will be useful when the storage uses RAID striping;
otherwise, turning up this parameter is meaningless when there is only one
device.

Not at all. Especially on EBS, where keeping a relatively full queue
is necessary to get max throughput out of the drive.

The problem is, if you're scanning a highly correlated index, the
mechanism is counterproductive. I had worked on some POC patches for
correcting that; I guess I could work something out, but it's
low-priority for me, especially since it's actually a kernel "bug" (or
shortcoming) that could be fixed in the kernel rather than worked
around by postgres.




effective_io_concurrency=0
   QUERY PLAN   


 Bitmap Heap Scan on pgbench_accounts  (cost=12838.24..357966.26 rows=1 
width=97) (actual time=1454.570..1454.570 rows=0 loops=1)
   Recheck Cond: ((aid >= 1000) AND (aid <= 100))
   Filter: (abalance <> 0)
   Rows Removed by Filter: 999001
   Heap Blocks: exact=16378
   Buffers: shared hit=2 read=19109
   ->  Bitmap Index Scan on pgbench_accounts_pkey  (cost=0.00..12838.24 
rows=986230 width=0) (actual time=447.877..447.877 rows=999001 loops=1)
 Index Cond: ((aid >= 1000) AND (aid <= 100))
 Buffers: shared hit=2 read=2731
 Planning time: 15.782 ms
 Execution time: 1455.336 ms
(11 rows)

effective_io_concurrency=1
   QUERY PLAN   


 Bitmap Heap Scan on pgbench_accounts  (cost=12838.24..357966.26 rows=1 
width=97) (actual time=8364.272..8364.272 rows=0 loops=1)
   Recheck Cond: ((aid >= 1000) AND (aid <= 100))
   Filter: (abalance <> 0)
   Rows Removed by Filter: 999001
   Heap Blocks: exact=16378
   Buffers: shared hit=2 read=19109
   ->  Bitmap Index Scan on pgbench_accounts_pkey  (cost=0.00..12838.24 
rows=986230 width=0) (actual time=448.043..448.043 rows=999001 loops=1)
 Index Cond: ((aid >= 1000) AND (aid <= 100))
 Buffers: shared hit=2 read=2731
 Planning time: 15.036 ms
 Execution time: 8365.070 ms
(11 rows)

effective_io_concurrency=2
   QUERY PLAN   


 Bitmap Heap Scan on 

Re: effective_io_concurrency on EBS/gp2

2018-02-01 Thread Claudio Freire
On Wed, Jan 31, 2018 at 11:21 PM, hzzhangjiazhi wrote:
> Hi,
>
> I think this parameter will be useful when the storage uses RAID striping;
> otherwise, turning up this parameter is meaningless when there is only one
> device.

Not at all. Especially on EBS, where keeping a relatively full queue
is necessary to get max throughput out of the drive.
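A quick way to check whether the queue actually stays full during these
scans (the column name differs between sysstat versions):

# Watch the EBS device while the query runs; the average queue size column
# (avgqu-sz or aqu-sz) should sit well above 1 when prefetching is working.
iostat -x 1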

The problem is, if you're scanning a highly correlated index, the
mechanism is counterproductive. I had worked on some POC patches for
correcting that; I guess I could work something out, but it's
low-priority for me, especially since it's actually a kernel "bug" (or
shortcoming) that could be fixed in the kernel rather than worked
around by postgres.



Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread hzzhangjiazhi






Hi,

I think this parameter will be useful when the storage uses RAID striping;
otherwise, turning up this parameter is meaningless when there is only one
device.






Sent from NetEase Mail Master


On 2/1/2018 04:29, Vitaliy Garnashevich wrote:


I've done some more tests. Here they are all:

io1, 100 GB SSD, 1000 IOPS
effective_io_concurrency=0 Execution time: 40333.626 ms
effective_io_concurrency=1 Execution time: 163840.500 ms
effective_io_concurrency=2 Execution time: 162606.330 ms
effective_io_concurrency=4 Execution time: 163670.405 ms
effective_io_concurrency=8 Execution time: 161800.478 ms
effective_io_concurrency=16 Execution time: 161962.319 ms
effective_io_concurrency=32 Execution time: 160451.435 ms
effective_io_concurrency=64 Execution time: 161763.632 ms
effective_io_concurrency=128 Execution time: 161687.398 ms
effective_io_concurrency=256 Execution time: 160945.066 ms
effective_io_concurrency=256 Execution time: 161226.440 ms
effective_io_concurrency=128 Execution time: 161977.954 ms
effective_io_concurrency=64 Execution time: 159122.006 ms
effective_io_concurrency=32 Execution time: 154923.569 ms
effective_io_concurrency=16 Execution time: 160922.819 ms
effective_io_concurrency=8 Execution time: 160577.122 ms
effective_io_concurrency=4 Execution time: 157509.481 ms
effective_io_concurrency=2 Execution time: 161806.713 ms
effective_io_concurrency=1 Execution time: 164026.708 ms
effective_io_concurrency=0 Execution time: 40196.182 ms

gp2, 100 GB SSD
effective_io_concurrency=0 Execution time: 40262.781 ms
effective_io_concurrency=1 Execution time: 98125.987 ms
effective_io_concurrency=2 Execution time: 55343.776 ms
effective_io_concurrency=4 Execution time: 52505.638 ms
effective_io_concurrency=8 Execution time: 54954.024 ms
effective_io_concurrency=16 Execution time: 54346.455 ms
effective_io_concurrency=32 Execution time: 55196.626 ms
effective_io_concurrency=64 Execution time: 55057.956 ms
effective_io_concurrency=128 Execution time: 54963.510 ms
effective_io_concurrency=256 Execution time: 54339.258 ms

io1, 1 TB SSD, 3000 IOPS
effective_io_concurrency=0 Execution time: 40691.396 ms
effective_io_concurrency=1 Execution time: 87524.939 ms
effective_io_concurrency=2 Execution time: 54197.982 ms
effective_io_concurrency=4 Execution time: 55082.740 ms
effective_io_concurrency=8 Execution time: 54838.161 ms
effective_io_concurrency=16 Execution time: 52561.553 ms
effective_io_concurrency=32 Execution time: 54266.847 ms
effective_io_concurrency=64 Execution time: 54683.102 ms
effective_io_concurrency=128 Execution time: 54643.874 ms
effective_io_concurrency=256 Execution time: 42944.938 ms

gp2, 1 TB SSD
effective_io_concurrency=0 Execution time: 40072.880 ms
effective_io_concurrency=1 Execution time: 83528.679 ms
effective_io_concurrency=2 Execution time: 55706.941 ms
effective_io_concurrency=4 Execution time: 55664.646 ms
effective_io_concurrency=8 Execution time: 54699.658 ms
effective_io_concurrency=16 Execution time: 54632.291 ms
effective_io_concurrency=32 Execution time: 54793.305 ms
effective_io_concurrency=64 Execution time: 55227.875 ms
effective_io_concurrency=128 Execution time: 54638.744 ms
effective_io_concurrency=256 Execution time: 54869.761 ms

st1, 500 GB HDD
effective_io_concurrency=0 Execution time: 40542.583 ms
effective_io_concurrency=1 Execution time: 119996.892 ms
effective_io_concurrency=2 Execution time: 51137.998 ms
effective_io_concurrency=4 Execution time: 42301.922 ms
effective_io_concurrency=8 Execution time: 42081.877 ms
effective_io_concurrency=16 Execution time: 42253.782 ms
effective_io_concurrency=32 Execution time: 42087.216 ms
effective_io_concurrency=64 Execution time: 42112.105 ms
effective_io_concurrency=128 Execution time: 42271.850 ms
effective_io_concurrency=256 Execution time: 42213.074 ms

Regards,
Vitaliy




Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Jeff Janes
On Wed, Jan 31, 2018 at 4:03 AM, Vitaliy Garnashevich <
vgarnashev...@gmail.com> wrote:

>
> The results look really confusing to me in two ways. The first one is that
> I've seen recommendations to set effective_io_concurrency=256 (or more) on
> EBS.


I would not expect this to make much of a difference on a table which is
perfectly correlated with the index.  You would have to create an accounts
table which is randomly ordered to have a meaningful benchmark of the eic
parameter.

I don't know why the default for eic is 1.  It seems like that just turns
on the eic mechanism, without any hope of benefiting from it.
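For anyone who wants to experiment above that default, a minimal sketch
(the values are examples only; the tablespace name is the one from the
test script):

# Raise the server-wide default:
psql pgbench -c "alter system set effective_io_concurrency = 32;"
psql pgbench -c "select pg_reload_conf();"
# Or scope it to the EBS tablespace used in these tests (PostgreSQL 9.6+):
psql pgbench -c "alter tablespace test set (effective_io_concurrency = 32);"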

Cheers,

Jeff


Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Vitaliy Garnashevich

I've done some more tests. Here they are all:

io1, 100 GB SSD, 1000 IOPS
effective_io_concurrency=0 Execution time: 40333.626 ms
effective_io_concurrency=1 Execution time: 163840.500 ms
effective_io_concurrency=2 Execution time: 162606.330 ms
effective_io_concurrency=4 Execution time: 163670.405 ms
effective_io_concurrency=8 Execution time: 161800.478 ms
effective_io_concurrency=16 Execution time: 161962.319 ms
effective_io_concurrency=32 Execution time: 160451.435 ms
effective_io_concurrency=64 Execution time: 161763.632 ms
effective_io_concurrency=128 Execution time: 161687.398 ms
effective_io_concurrency=256 Execution time: 160945.066 ms
effective_io_concurrency=256 Execution time: 161226.440 ms
effective_io_concurrency=128 Execution time: 161977.954 ms
effective_io_concurrency=64 Execution time: 159122.006 ms
effective_io_concurrency=32 Execution time: 154923.569 ms
effective_io_concurrency=16 Execution time: 160922.819 ms
effective_io_concurrency=8 Execution time: 160577.122 ms
effective_io_concurrency=4 Execution time: 157509.481 ms
effective_io_concurrency=2 Execution time: 161806.713 ms
effective_io_concurrency=1 Execution time: 164026.708 ms
effective_io_concurrency=0 Execution time: 40196.182 ms

gp2, 100 GB SSD
effective_io_concurrency=0 Execution time: 40262.781 ms
effective_io_concurrency=1 Execution time: 98125.987 ms
effective_io_concurrency=2 Execution time: 55343.776 ms
effective_io_concurrency=4 Execution time: 52505.638 ms
effective_io_concurrency=8 Execution time: 54954.024 ms
effective_io_concurrency=16 Execution time: 54346.455 ms
effective_io_concurrency=32 Execution time: 55196.626 ms
effective_io_concurrency=64 Execution time: 55057.956 ms
effective_io_concurrency=128 Execution time: 54963.510 ms
effective_io_concurrency=256 Execution time: 54339.258 ms

io1, 1 TB SSD, 3000 IOPS
effective_io_concurrency=0 Execution time: 40691.396 ms
effective_io_concurrency=1 Execution time: 87524.939 ms
effective_io_concurrency=2 Execution time: 54197.982 ms
effective_io_concurrency=4 Execution time: 55082.740 ms
effective_io_concurrency=8 Execution time: 54838.161 ms
effective_io_concurrency=16 Execution time: 52561.553 ms
effective_io_concurrency=32 Execution time: 54266.847 ms
effective_io_concurrency=64 Execution time: 54683.102 ms
effective_io_concurrency=128 Execution time: 54643.874 ms
effective_io_concurrency=256 Execution time: 42944.938 ms

gp2, 1 TB SSD
effective_io_concurrency=0 Execution time: 40072.880 ms
effective_io_concurrency=1 Execution time: 83528.679 ms
effective_io_concurrency=2 Execution time: 55706.941 ms
effective_io_concurrency=4 Execution time: 55664.646 ms
effective_io_concurrency=8 Execution time: 54699.658 ms
effective_io_concurrency=16 Execution time: 54632.291 ms
effective_io_concurrency=32 Execution time: 54793.305 ms
effective_io_concurrency=64 Execution time: 55227.875 ms
effective_io_concurrency=128 Execution time: 54638.744 ms
effective_io_concurrency=256 Execution time: 54869.761 ms

st1, 500 GB HDD
effective_io_concurrency=0 Execution time: 40542.583 ms
effective_io_concurrency=1 Execution time: 119996.892 ms
effective_io_concurrency=2 Execution time: 51137.998 ms
effective_io_concurrency=4 Execution time: 42301.922 ms
effective_io_concurrency=8 Execution time: 42081.877 ms
effective_io_concurrency=16 Execution time: 42253.782 ms
effective_io_concurrency=32 Execution time: 42087.216 ms
effective_io_concurrency=64 Execution time: 42112.105 ms
effective_io_concurrency=128 Execution time: 42271.850 ms
effective_io_concurrency=256 Execution time: 42213.074 ms

Regards,
Vitaliy




Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Claudio Freire
On Wed, Jan 31, 2018 at 1:57 PM, Vitaliy Garnashevich wrote:
> More tests:
>
> io1, 100 GB:
>
> effective_io_concurrency=0
>  Execution time: 40333.626 ms
> effective_io_concurrency=1
>  Execution time: 163840.500 ms

In my experience playing with prefetch, e_i_c>0 interferes with kernel
read-ahead. What you've got there would make sense if what postgres
thinks will be random I/O ends up being sequential. With e_i_c=0, the
kernel will optimize the hell out of it, because it's a predictable
pattern. But with e_i_c=1, the kernel's optimization gets disabled but
postgres isn't reading much ahead, so you get the worst possible case.
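For reference, the kernel read-ahead window involved here can be inspected
and tuned per block device; a minimal sketch, assuming a hypothetical EBS
device /dev/xvdf:

# Current read-ahead window, in 512-byte sectors:
blockdev --getra /dev/xvdf

# Enlarge it (8192 sectors = 4 MiB) to see how much of the e_i_c=0 numbers
# comes from kernel read-ahead on the dense, mostly-sequential portions:
blockdev --setra 8192 /dev/xvdf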



Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Vitaliy Garnashevich

More tests:

io1, 100 GB:

effective_io_concurrency=0
 Execution time: 40333.626 ms
effective_io_concurrency=1
 Execution time: 163840.500 ms
effective_io_concurrency=2
 Execution time: 162606.330 ms
effective_io_concurrency=4
 Execution time: 163670.405 ms
effective_io_concurrency=8
 Execution time: 161800.478 ms
effective_io_concurrency=16
 Execution time: 161962.319 ms
effective_io_concurrency=32
 Execution time: 160451.435 ms
effective_io_concurrency=64
 Execution time: 161763.632 ms
effective_io_concurrency=128
 Execution time: 161687.398 ms
effective_io_concurrency=256
 Execution time: 160945.066 ms

effective_io_concurrency=256
 Execution time: 161226.440 ms
effective_io_concurrency=128
 Execution time: 161977.954 ms
effective_io_concurrency=64
 Execution time: 159122.006 ms
effective_io_concurrency=32
 Execution time: 154923.569 ms
effective_io_concurrency=16
 Execution time: 160922.819 ms
effective_io_concurrency=8
 Execution time: 160577.122 ms
effective_io_concurrency=4
 Execution time: 157509.481 ms
effective_io_concurrency=2
 Execution time: 161806.713 ms
effective_io_concurrency=1
 Execution time: 164026.708 ms
effective_io_concurrency=0
 Execution time: 40196.182 ms


st1, 500 GB:

effective_io_concurrency=0
 Execution time: 40542.583 ms
effective_io_concurrency=1
 Execution time: 119996.892 ms
effective_io_concurrency=2
 Execution time: 51137.998 ms
effective_io_concurrency=4
 Execution time: 42301.922 ms
effective_io_concurrency=8
 Execution time: 42081.877 ms
effective_io_concurrency=16
 Execution time: 42253.782 ms
effective_io_concurrency=32
 Execution time: 42087.216 ms
effective_io_concurrency=64
 Execution time: 42112.105 ms
effective_io_concurrency=128
 Execution time: 42271.850 ms
effective_io_concurrency=256
 Execution time: 42213.074 ms

effective_io_concurrency=256
 Execution time: 42255.568 ms
effective_io_concurrency=128
 Execution time: 42030.515 ms
effective_io_concurrency=64
 Execution time: 41713.753 ms
effective_io_concurrency=32
 Execution time: 42035.436 ms
effective_io_concurrency=16
 Execution time: 42221.581 ms
effective_io_concurrency=8
 Execution time: 42203.730 ms
effective_io_concurrency=4
 Execution time: 42236.082 ms
effective_io_concurrency=2
 Execution time: 49531.558 ms
effective_io_concurrency=1
 Execution time: 117160.222 ms
effective_io_concurrency=0
 Execution time: 40059.259 ms

Regards,
Vitaliy

On 31/01/2018 15:46, Gary Doades wrote:


> I've tried to re-run the test for some specific values of
> effective_io_concurrency. The results were the same.

> That's why I don't think the order of tests or variability in
> "hardware" performance affected the results.


We run many MS SQL server VMs in AWS with more than adequate performance.

AWS EBS performance is variable and depends on various factors, mainly 
the size of the volume and the size of the VM it is attached to. The 
bigger the VM, the more EBS “bandwidth” is available, especially if 
the VM is EBS Optimised.


The size of the disk determines the IOPS available, with smaller disks 
naturally getting less. However, even a small disk with (say) 300 IOPS 
is allowed to burst up to 3000 IOPS for a while and then gets 
clobbered. If you want predictable performance then get a bigger disk! 
If you really want maximum, predictable performance get an EBS 
Optimised VM and use Provisioned IOPS EBS volumes…. At a price!


Cheers,

Gary.

On 31/01/2018 15:01, Rick Otten wrote:

We moved our stuff out of AWS a little over a year ago because the
performance was crazy inconsistent and unpredictable.  I think
they do a lot of oversubscribing so you get strange sawtooth
performance patterns depending on who else is sharing your
infrastructure and what they are doing at the time.

The same unit of work would take 20 minutes each for several
hours, and then take 2 1/2 hours each for a day, and then back to
20 minutes, and sometimes anywhere in between for hours or days at
a stretch.  I could never tell the business when the processing
would be done, which made it hard for them to set expectations
with customers, promise deliverables, or manage the business. 
Smaller nodes seemed to be worse than larger nodes, I only have
theories as to why.  I never got good support from AWS to help me
figure out what was happening.

My first thought is to run the same test on different days of the
week and different times of day to see if the numbers change
radically.  Maybe spin up a node in another data center and
availability zone and try the test there too.

My real suggestion is to move to Google Cloud or Rackspace or
Digital Ocean or somewhere other than AWS.   (We moved to Google
Cloud and have been very happy there.  The performance is much
more consistent, the management UI is more intuitive, AND the cost
for equivalent infrastructure is lower too.)

On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich

RE: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Gary Doades
 

 

>  I've tried to re-run the test for some specific values of 
> effective_io_concurrency. The results were the same. 

 > That's why I don't think the order of tests or variability in "hardware" 
 > performance affected the results.



We run many MS SQL server VMs in AWS with more than adequate performance.

 

AWS EBS performance is variable and depends on various factors, mainly the size 
of the volume and the size of the VM it is attached to. The bigger the VM, the 
more EBS “bandwidth” is available, especially if the VM is EBS Optimised.

 

The size of the disk determines the IOPS available, with smaller disks 
naturally getting less. However, even a small disk with (say) 300 IOPS is 
allowed to burst up to 3000 IOPS for a while and then gets clobbered. If you 
want predictable performance then get a bigger disk! If you really want 
maximum, predictable performance get an EBS Optimised VM and use Provisioned 
IOPS EBS volumes…. At a price!

 

Cheers,

Gary.

On 31/01/2018 15:01, Rick Otten wrote:

We moved our stuff out of AWS a little over a year ago because the performance 
was crazy inconsistent and unpredictable.  I think they do a lot of 
oversubscribing so you get strange sawtooth performance patterns depending on 
who else is sharing your infrastructure and what they are doing at the time. 

 

The same unit of work would take 20 minutes each for several hours, and then 
take 2 1/2 hours each for a day, and then back to 20 minutes, and sometimes 
anywhere in between for hours or days at a stretch.  I could never tell the 
business when the processing would be done, which made it hard for them to set 
expectations with customers, promise deliverables, or manage the business.  
Smaller nodes seemed to be worse than larger nodes, I only have theories as to 
why.  I never got good support from AWS to help me figure out what was 
happening.

 

My first thought is to run the same test on different days of the week and 
different times of day to see if the numbers change radically.  Maybe spin up a 
node in another data center and availability zone and try the test there too.

 

My real suggestion is to move to Google Cloud or Rackspace or Digital Ocean or 
somewhere other than AWS.   (We moved to Google Cloud and have been very happy 
there.  The performance is much more consistent, the management UI is more 
intuitive, AND the cost for equivalent infrastructure is lower too.)

 

 

On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich wrote:

Hi,

I've tried to run a benchmark, similar to this one:

https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azs...@mail.gmail.com

CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';

pgbench -i -s 1000 --tablespace=test pgbench

echo "" >test.txt
for i in 0 1 2 4 8 16 32 64 128 256 ; do
  sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart
  echo "effective_io_concurrency=$i" >>test.txt
  psql pgbench -c "set effective_io_concurrency=$i; set enable_indexscan=off; 
explain (analyze, buffers)  select * from pgbench_accounts where aid between 
1000 and 1000 and abalance != 0;" >>test.txt
done

I get the following results:

effective_io_concurrency=0
 Execution time: 40262.781 ms
effective_io_concurrency=1
 Execution time: 98125.987 ms
effective_io_concurrency=2
 Execution time: 55343.776 ms
effective_io_concurrency=4
 Execution time: 52505.638 ms
effective_io_concurrency=8
 Execution time: 54954.024 ms
effective_io_concurrency=16
 Execution time: 54346.455 ms
effective_io_concurrency=32
 Execution time: 55196.626 ms
effective_io_concurrency=64
 Execution time: 55057.956 ms
effective_io_concurrency=128
 Execution time: 54963.510 ms
effective_io_concurrency=256
 Execution time: 54339.258 ms

The test was using 100 GB gp2 SSD EBS. More detailed query plans are attached.

PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

The results look really confusing to me in two ways. The first one is that I've 
seen recommendations to set effective_io_concurrency=256 (or more) on EBS. The 
other one is that effective_io_concurrency=1 (the worst case) is actually the 
default for PostgreSQL on Linux.

Thoughts?

Regards,
Vitaliy

 

 



Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Pavel Stehule
2018-01-31 14:15 GMT+01:00 Vitaliy Garnashevich:

> I've tried to re-run the test for some specific values of
> effective_io_concurrency. The results were the same.
>
> That's why I don't think the order of tests or variability in "hardware"
> performance affected the results.
>

AWS uses some intelligent throttling, so it can be related to hardware.


> Regards,
> Vitaliy
>
>
> On 31/01/2018 15:01, Rick Otten wrote:
>
> We moved our stuff out of AWS a little over a year ago because the
> performance was crazy inconsistent and unpredictable.  I think they do a
> lot of oversubscribing so you get strange sawtooth performance patterns
> depending on who else is sharing your infrastructure and what they are
> doing at the time.
>
> The same unit of work would take 20 minutes each for several hours, and
> then take 2 1/2 hours each for a day, and then back to 20 minutes, and
> sometimes anywhere in between for hours or days at a stretch.  I could
> never tell the business when the processing would be done, which made it
> hard for them to set expectations with customers, promise deliverables, or
> manage the business.  Smaller nodes seemed to be worse than larger nodes, I
> only have theories as to why.  I never got good support from AWS to help me
> figure out what was happening.
>
> My first thought is to run the same test on different days of the week and
> different times of day to see if the numbers change radically.  Maybe spin
> up a node in another data center and availability zone and try the test
> there too.
>
> My real suggestion is to move to Google Cloud or Rackspace or Digital
> Ocean or somewhere other than AWS.   (We moved to Google Cloud and have
> been very happy there.  The performance is much more consistent, the
> management UI is more intuitive, AND the cost for equivalent infrastructure
> is lower too.)
>
>
> On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich <
> vgarnashev...@gmail.com> wrote:
>
>> Hi,
>>
>> I've tried to run a benchmark, similar to this one:
>>
>> https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9
>> cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#
>> CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azs...@mail.gmail.com
>>
>> CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';
>>
>> pgbench -i -s 1000 --tablespace=test pgbench
>>
>> echo "" >test.txt
>> for i in 0 1 2 4 8 16 32 64 128 256 ; do
>>   sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart
>>   echo "effective_io_concurrency=$i" >>test.txt
>>   psql pgbench -c "set effective_io_concurrency=$i; set
>> enable_indexscan=off; explain (analyze, buffers)  select * from
>> pgbench_accounts where aid between 1000 and 1000 and abalance != 0;"
>> >>test.txt
>> done
>>
>> I get the following results:
>>
>> effective_io_concurrency=0
>>  Execution time: 40262.781 ms
>> effective_io_concurrency=1
>>  Execution time: 98125.987 ms
>> effective_io_concurrency=2
>>  Execution time: 55343.776 ms
>> effective_io_concurrency=4
>>  Execution time: 52505.638 ms
>> effective_io_concurrency=8
>>  Execution time: 54954.024 ms
>> effective_io_concurrency=16
>>  Execution time: 54346.455 ms
>> effective_io_concurrency=32
>>  Execution time: 55196.626 ms
>> effective_io_concurrency=64
>>  Execution time: 55057.956 ms
>> effective_io_concurrency=128
>>  Execution time: 54963.510 ms
>> effective_io_concurrency=256
>>  Execution time: 54339.258 ms
>>
>> The test was using 100 GB gp2 SSD EBS. More detailed query plans are
>> attached.
>>
>> PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
>> 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit
>>
>> The results look really confusing to me in two ways. The first one is
>> that I've seen recommendations to set effective_io_concurrency=256 (or
>> more) on EBS. The other one is that effective_io_concurrency=1 (the worst
>> case) is actually the default for PostgreSQL on Linux.
>>
>> Thoughts?
>>
>> Regards,
>> Vitaliy
>>
>>
>
>


Re: effective_io_concurrency on EBS/gp2

2018-01-31 Thread Vitaliy Garnashevich
I've tried to re-run the test for some specific values of 
effective_io_concurrency. The results were the same.


That's why I don't think the order of tests or variability in "hardware" 
performance affected the results.


Regards,
Vitaliy

On 31/01/2018 15:01, Rick Otten wrote:
We moved our stuff out of AWS a little over a year ago because the 
performance was crazy inconsistent and unpredictable.  I think they do 
a lot of oversubscribing so you get strange sawtooth performance 
patterns depending on who else is sharing your infrastructure and what 
they are doing at the time.


The same unit of work would take 20 minutes each for several hours, 
and then take 2 1/2 hours each for a day, and then back to 20 minutes, 
and sometimes anywhere in between for hours or days at a stretch.  I 
could never tell the business when the processing would be done, which 
made it hard for them to set expectations with customers, promise 
deliverables, or manage the business.  Smaller nodes seemed to be 
worse than larger nodes, I only have theories as to why.  I never got 
good support from AWS to help me figure out what was happening.


My first thought is to run the same test on different days of the week 
and different times of day to see if the numbers change radically.  
Maybe spin up a node in another data center and availability zone and 
try the test there too.


My real suggestion is to move to Google Cloud or Rackspace or Digital 
Ocean or somewhere other than AWS.   (We moved to Google Cloud and 
have been very happy there.  The performance is much more consistent, 
the management UI is more intuitive, AND the cost for equivalent 
infrastructure is lower too.)



On Wed, Jan 31, 2018 at 7:03 AM, Vitaliy Garnashevich wrote:


Hi,

I've tried to run a benchmark, similar to this one:


https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azs...@mail.gmail.com



CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';

pgbench -i -s 1000 --tablespace=test pgbench

echo "" >test.txt
for i in 0 1 2 4 8 16 32 64 128 256 ; do
  sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart
  echo "effective_io_concurrency=$i" >>test.txt
  psql pgbench -c "set effective_io_concurrency=$i; set
enable_indexscan=off; explain (analyze, buffers)  select * from
pgbench_accounts where aid between 1000 and 1000 and abalance
!= 0;" >>test.txt
done

I get the following results:

effective_io_concurrency=0
 Execution time: 40262.781 ms
effective_io_concurrency=1
 Execution time: 98125.987 ms
effective_io_concurrency=2
 Execution time: 55343.776 ms
effective_io_concurrency=4
 Execution time: 52505.638 ms
effective_io_concurrency=8
 Execution time: 54954.024 ms
effective_io_concurrency=16
 Execution time: 54346.455 ms
effective_io_concurrency=32
 Execution time: 55196.626 ms
effective_io_concurrency=64
 Execution time: 55057.956 ms
effective_io_concurrency=128
 Execution time: 54963.510 ms
effective_io_concurrency=256
 Execution time: 54339.258 ms

The test was using 100 GB gp2 SSD EBS. More detailed query plans
are attached.

PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit

The results look really confusing to me in two ways. The first one
is that I've seen recommendations to set
effective_io_concurrency=256 (or more) on EBS. The other one is
that effective_io_concurrency=1 (the worst case) is actually the
default for PostgreSQL on Linux.

Thoughts?

Regards,
Vitaliy






effective_io_concurrency on EBS/gp2

2018-01-31 Thread Vitaliy Garnashevich

Hi,

I've tried to run a benchmark, similar to this one:

https://www.postgresql.org/message-id/flat/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com#CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azs...@mail.gmail.com

CREATE TABLESPACE test OWNER postgres LOCATION '/path/to/ebs';

pgbench -i -s 1000 --tablespace=test pgbench

echo "" >test.txt
for i in 0 1 2 4 8 16 32 64 128 256 ; do
  sync; echo 3 > /proc/sys/vm/drop_caches; service postgresql restart
  echo "effective_io_concurrency=$i" >>test.txt
  psql pgbench -c "set effective_io_concurrency=$i; set 
enable_indexscan=off; explain (analyze, buffers)  select * from 
pgbench_accounts where aid between 1000 and 1000 and abalance != 0;" 
>>test.txt

done

I get the following results:

effective_io_concurrency=0
 Execution time: 40262.781 ms
effective_io_concurrency=1
 Execution time: 98125.987 ms
effective_io_concurrency=2
 Execution time: 55343.776 ms
effective_io_concurrency=4
 Execution time: 52505.638 ms
effective_io_concurrency=8
 Execution time: 54954.024 ms
effective_io_concurrency=16
 Execution time: 54346.455 ms
effective_io_concurrency=32
 Execution time: 55196.626 ms
effective_io_concurrency=64
 Execution time: 55057.956 ms
effective_io_concurrency=128
 Execution time: 54963.510 ms
effective_io_concurrency=256
 Execution time: 54339.258 ms

The test was using 100 GB gp2 SSD EBS. More detailed query plans are 
attached.


PostgreSQL 9.6.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609, 64-bit


The results look really confusing to me in two ways. The first one is 
that I've seen recommendations to set effective_io_concurrency=256 (or 
more) on EBS. The other one is that effective_io_concurrency=1 (the 
worst case) is actually the default for PostgreSQL on Linux.


Thoughts?

Regards,
Vitaliy


effective_io_concurrency=0
                                                          QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on pgbench_accounts  (cost=137192.84..1960989.89 rows=1 
width=97) (actual time=40261.322..40261.322 rows=0 loops=1)
   Recheck Cond: ((aid >= 1000) AND (aid <= 1000))
   Rows Removed by Index Recheck: 23
   Filter: (abalance <> 0)
   Rows Removed by Filter: 001
   Heap Blocks: exact=97869 lossy=66050
   Buffers: shared hit=3 read=191240
   ->  Bitmap Index Scan on pgbench_accounts_pkey  (cost=0.00..137192.84 
rows=10540117 width=0) (actual time=3366.623..3366.623 rows=001 loops=1)
 Index Cond: ((aid >= 1000) AND (aid <= 1000))
 Buffers: shared hit=3 read=27321
 Planning time: 17.285 ms
 Execution time: 40262.781 ms
(12 rows)

effective_io_concurrency=1
                                                          QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on pgbench_accounts  (cost=137192.84..1960989.89 rows=1 
width=97) (actual time=98124.607..98124.607 rows=0 loops=1)
   Recheck Cond: ((aid >= 1000) AND (aid <= 1000))
   Rows Removed by Index Recheck: 23
   Filter: (abalance <> 0)
   Rows Removed by Filter: 001
   Heap Blocks: exact=97869 lossy=66050
   Buffers: shared hit=3 read=191240
   ->  Bitmap Index Scan on pgbench_accounts_pkey  (cost=0.00..137192.84 
rows=10540117 width=0) (actual time=3373.380..3373.380 rows=001 loops=1)
 Index Cond: ((aid >= 1000) AND (aid <= 1000))
 Buffers: shared hit=3 read=27321
 Planning time: 18.110 ms
 Execution time: 98125.987 ms
(12 rows)

effective_io_concurrency=2
                                                          QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on pgbench_accounts  (cost=137192.84..1960989.89 rows=1 
width=97) (actual time=55340.663..55340.663 rows=0 loops=1)
   Recheck Cond: ((aid >= 1000) AND (aid <= 1000))
   Rows Removed by Index Recheck: 23
   Filter: (abalance <> 0)
   Rows Removed by Filter: 001
   Heap Blocks: exact=97869 lossy=66050
   Buffers: shared hit=3 read=191240
   ->  Bitmap Index Scan on pgbench_accounts_pkey  (cost=0.00..137192.84 
rows=10540117 width=0) (actual time=3306.896..3306.896 rows=001 loops=1)
 Index Cond: ((aid >= 1000) AND (aid <= 1000))
 Buffers: shared hit=3 read=27321
 Planning time: 30.986 ms
 Execution time: 55343.776 ms
(12 rows)

effective_io_concurrency=4