According to this:

http://lwn.net/Articles/269327/

AES-128 is the fastest cipher on amd64, so that would explain why we
see a performance decrease with Blowfish.
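
A quick way to sanity-check that on the actual hardware is OpenSSL's
built-in benchmark. It only measures the userspace implementations, so
it is indicative rather than authoritative for what the kernel's
crypto code does:

openssl speed aes-128-cbc bf-cbc

If Blowfish comes out well behind AES-128 there, the LWN numbers
probably hold for this Xeon too.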

On Wed, Dec 30, 2009 at 1:19 AM, Andreas Schuldei
<[email protected]> wrote:
> http://marc.info/?l=linux-kernel&m=126155699817914&w=2
>
> But I don't understand what the bottleneck is. Can someone help me out?
>
> On Tue, Dec 29, 2009 at 9:31 PM, Andreas Schuldei
> <[email protected]> wrote:
>> Now I have switched the cipher to Blowfish. As a result, the percentage of
>> time the server spends in the kernel went down to 3.5%-4.9%; my guess
>> is that this is due to the quicker cipher.
>> The apache process serving the file, however, bounces
>> between 20% and 95% CPU usage. How come? How does IPsec change user
>> space so that the application needs more CPU?
>> The achieved bandwidth is even worse than before: the server manages
>> to push only 27.9M/s, slightly more than a quarter of its rated
>> network throughput.
>>
>> I can't see any bottlenecks on the server (like 100% CPU somewhere,
>> swapping because of full RAM, etc.).
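>>
>> (One way to see where that apache CPU goes is a profiler. On
>> transmit, ESP encryption typically runs in the context of the
>> sending process, so kernel crypto time gets billed to apache as
>> system time. A sketch, assuming perf is available for this kernel;
>> "apache2" is a placeholder for however the server process is named:
>>
>> perf top                                   # live view of the hottest symbols
>> perf record -p $(pidof apache2) sleep 10   # sample the serving process for 10s
>> perf report                                # AES/SHA1 symbols here = cipher cost
>>
>> If crypto routines dominate under the apache pid, the "user-space"
>> CPU is really kernel crypto accounted to the process.)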
>>
>> I am starting to suspect packet fragmentation, window-size/flow-control
>> or similar issues. Are there /proc or /sys files to look at to get more
>> information about such issues when doing IPsec? tcpdump is less
>> helpful.
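>>
>> (Two counters that do exist for this: the global xfrm error
>> statistics and the per-SA counters. A sketch, assuming iproute2
>> with xfrm support:
>>
>> cat /proc/net/xfrm_stat    # XfrmInError, XfrmOutNoStates, ... drop counters
>> ip -s xfrm state           # per-SA packet/byte counts and lifetimes
>>
>> Nonzero error counters here would point at drops inside the IPsec
>> stack rather than in TCP.)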
>>
>> BTW, this is what I get from tcpdump:
>>
>> 20:25:03.313692 IP 78.31.14.86 > 78.31.14.93:
>> ESP(spi=0xc929dbe8,seq=0x100a8c), length 1476
>> 20:25:03.313699 IP 78.31.14.93 > 78.31.14.86:
>> ESP(spi=0xc4967810,seq=0x7bcd3), length 68
>> 20:25:03.313734 IP 78.31.14.86 > 78.31.14.93:
>> ESP(spi=0xc929dbe8,seq=0x100a8d), length 1332
>> 20:25:03.313788 IP 78.31.14.86 > 78.31.14.93:
>> ESP(spi=0xc929dbe8,seq=0x100a8e), length 1476
>> 20:25:03.313812 IP 78.31.14.93 > 78.31.14.86:
>> ESP(spi=0xc4967810,seq=0x7bcd4), length 68
>> 20:25:03.313847 IP 78.31.14.86 > 78.31.14.93:
>> ESP(spi=0xc929dbe8,seq=0x100a8f), length 1476
>> 20:25:03.313887 IP 78.31.14.86 > 78.31.14.93:
>> ESP(spi=0xc929dbe8,seq=0x100a90), length 1332
>>
>> Why the odd packet sizes? The MTU is 1500, so there seems to be plenty
>> of space, and this is ONE huge file transmitted via HTTP.
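>>
>> (For what it's worth, the sizes add up once the ESP overhead is
>> counted. A rough per-packet budget for AES_CBC-128/HMAC_SHA1_96,
>> assuming transport mode (tunnel mode adds another 20-byte inner IP
>> header):
>>
>>   20 bytes   outer IP header
>>    8 bytes   ESP header (SPI + sequence number)
>>   16 bytes   AES-CBC IV
>>   2-17 bytes padding + pad-length + next-header trailer
>>   12 bytes   ICV (HMAC-SHA1-96)
>>
>> So "length 1476" plus the 20-byte IP header is a 1496-byte packet,
>> and 1476 - 8 - 16 - 12 = 1440 is a clean multiple of the 16-byte AES
>> block; it can never hit 1500 exactly because the ciphertext has to
>> stay block-aligned.)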
>>
>> On Tue, Dec 29, 2009 at 5:34 PM, Andreas Schuldei
>> <[email protected]> wrote:
>>> I suspect there is some IPsec-internal bottleneck here. In another
>>> long-running test we were not able to push more than 40 MByte/s over
>>> the net, and the CPU did not max out. No kernel threads were burning
>>> massive CPU, and the disks were far from their limit.
>>>
>>> Can someone with more insight into IPsec explain what is happening?
>>>
>>> This is an important point: we need to get more performance out of
>>> these boxes, about twice as much as this.
>>>
>>>
>>>
>>> On Tue, Dec 29, 2009 at 2:55 AM, Andreas Schuldei
>>> <[email protected]> wrote:
>>>> So I configured ssh traffic to bypass IPsec, set ssh to use Blowfish
>>>> encryption, and set up rshd on the test machine (which gave me
>>>> goosebumps).
>>>>
>>>> r...@krista:~# time rcp bigfile teagan:
>>>>
>>>> real    0m8.738s
>>>> user    0m0.008s
>>>> sys     0m7.188s
>>>> r...@krista:~# time scp bigfile teagan:
>>>> bigfile
>>>>                                      100%  325MB  65.1MB/s   00:05
>>>>
>>>> real    0m4.945s
>>>> user    0m3.716s
>>>> sys     0m0.980s
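>>>>
>>>> (For a cipher-free baseline that separates network cost from
>>>> crypto cost, a plain TCP copy through the same bypass policy would
>>>> help. A sketch with netcat; flag syntax varies between netcat
>>>> variants, and port 5001 is arbitrary:
>>>>
>>>> teagan$ nc -l -p 5001 > /dev/null
>>>> krista$ time nc teagan 5001 < bigfile
>>>>
>>>> If that also stays well below wire speed, the cipher is not the
>>>> bottleneck.)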
>>>>
>>>> There is a real chance that rcp is bad on both the security AND the
>>>> performance level, so I retried with HTTP (apache and wget):
>>>>
>>>> 100%[=====================================================================================================>]
>>>> 341,218,664 36.8M/s   in 8.8s
>>>>
>>>> 2009-12-29 01:53:20 (37.1 MB/s) - `bigfile.1' saved [341218664/341218664]
>>>>
>>>> That is the same result. I use a heavier cipher for IPsec than for ssh:
>>>>
>>>> AES_CBC-128/HMAC_SHA1_96, rekeying in 92 seconds, last use: 136s_i 145s_o
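>>>>
>>>> (A lighter proposal can be forced per connection in ipsec.conf; a
>>>> sketch, assuming a strongSwan build with Blowfish support, with
>>>> "myconn" standing in for the real connection name:
>>>>
>>>> conn myconn
>>>>         esp=blowfish128-sha1!
>>>>
>>>> The trailing ! makes the proposal strict, so the peers cannot fall
>>>> back to AES during negotiation.)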
>>>>
>>>> Which faster ciphers could I use? Is it plausible that AES is
>>>> that much slower than Blowfish on an idle server with an up-to-date
>>>> Xeon CPU? These servers are connected to each other via a switch.
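>>>>
>>>> (Since IPsec uses the kernel's crypto implementations, not a
>>>> userspace library, the kernel's tcrypt module benchmarks exactly
>>>> what ESP will use. A sketch, assuming tcrypt is built for this
>>>> kernel; the mode numbers are the speed-test selectors from
>>>> tcrypt.c, and modprobe reporting an error afterwards is expected;
>>>> the results land in dmesg:
>>>>
>>>> modprobe tcrypt mode=200 sec=1   # AES speed tests
>>>> modprobe tcrypt mode=203 sec=1   # Blowfish speed tests
>>>> dmesg | tail -50                 # throughput numbers per block size
>>>>
>>>> That would show directly whether AES loses to Blowfish on this
>>>> Xeon.)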
>>>>
>>>> /andreas
>>>>
>>>
>>
>
_______________________________________________
Users mailing list
[email protected]
https://lists.strongswan.org/mailman/listinfo/users
