[
https://issues.apache.org/jira/browse/PROTON-220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17094467#comment-17094467
]
ASF GitHub Bot commented on PROTON-220:
---------------------------------------
jiridanek commented on pull request #211:
URL: https://github.com/apache/qpid-proton/pull/211#issuecomment-620584826
I am adding the ability to run the tests against previous versions of Proton,
and to use the benchmark code with a separately compiled Proton installation.
Currently I am targeting Proton 0.19.0+.
It is helpful to compile Proton with
`-DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE -DENABLE_WARNING_ERROR=OFF` for
this, if you use a newer compiler (warnings) or if you have OpenSSL and Cyrus
SASL in a nonstandard location (rpath).
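Roughly, the configure step I have in mind looks like this (the checkout and
install paths below are placeholders, not the actual locations I use):
```
# configure and install an older Proton checkout for benchmarking;
# ~/qpid-proton-0.19.0 and ~/proton-install are placeholder paths
cd ~/qpid-proton-0.19.0
mkdir build && cd build
cmake .. \
  -DCMAKE_INSTALL_PREFIX=$HOME/proton-install \
  -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE \
  -DENABLE_WARNING_ERROR=OFF
make install
```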
My CMakeLists.txt for this use case looks like this:
```
cmake_minimum_required(VERSION 3.0)
project(benchmarks)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
# -DCMAKE_INSTALL_RPATH_USE_LINK_PATH=TRUE
find_package(Proton 0.18 REQUIRED COMPONENTS Core Proactor)
find_package(ProtonCpp 0.19 REQUIRED)
include_directories(${Proton_Core_INCLUDE_DIRS})
# link everything everywhere
link_libraries(${Proton_LIBRARIES})
link_libraries(${ProtonCpp_LIBRARIES})
# workaround: the proton cpp .so does not set an rpath for $ORIGIN, so these
# wouldn't be found otherwise
link_libraries(${Proton_Core_LIBRARIES})
link_libraries(${Proton_Proactor_LIBRARIES})
# in-tree benchmarks link these libs, create dummy empty ones
add_library(qpid-proton OBJECT IMPORTED)
add_library(qpid-proton-cpp OBJECT IMPORTED)
add_subdirectory(../c/benchmarks cbenchbin)
add_subdirectory(../cpp/benchmarks cppbenchbin)
```
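To build the benchmarks against one of the separately installed Protons, I
point CMake at that install prefix; something like this, again with
placeholder paths:
```
# build the benchmarks against a separately compiled Proton install;
# ~/benchmarks and ~/proton-install are placeholder paths
cd ~/benchmarks
mkdir build && cd build
cmake .. -DCMAKE_PREFIX_PATH=$HOME/proton-install
make
```
The find_package() calls then pick up the ProtonConfig.cmake and
ProtonCppConfig.cmake files installed under that prefix.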
With this, I can have historical charts (a raw version, simply drawn in an
online spreadsheet). Also, I have pinned the CPU clock at a fixed 1.9 GHz so
that results are stable, though generally less than half of what can be
achieved at maxed-out clock speeds. I am still not very confident about the
intel_pstate CPU frequency settings; I think what I have there now works OK...
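For reference, the clock pinning can be done roughly like this on an
intel_pstate system (requires root; the exact sysfs knobs vary by kernel
version, and 1.9 GHz is just the value matching my machine):
```
# disable turbo boost and pin the frequency range to 1.9 GHz;
# the no_turbo sysfs path is specific to the intel_pstate driver
echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
sudo cpupower frequency-set --min 1.9GHz --max 1.9GHz
```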

> Create a set of "glass box" tests to quantify the performance of the proton
> codebase
> ------------------------------------------------------------------------------------
>
> Key: PROTON-220
> URL: https://issues.apache.org/jira/browse/PROTON-220
> Project: Qpid Proton
> Issue Type: Test
> Components: proton-c, proton-j
> Reporter: Ken Giusti
> Assignee: Jiri Daněk
> Priority: Major
> Labels: perf, testing
> Fix For: proton-j-0.24.0, proton-c-0.32.0
>
>
> The goal of these tests would be to detect any performance degradation
> inadvertently introduced during development. These tests would not be
> intended to provide any metrics regarding the "real world" behavior of
> proton-based applications. Rather, these tests are targeted for use by the
> proton developers to help gauge the effect their code changes may have on
> performance.
> These tests should require no special configuration or setup in order to run.
> It should be easy to run these tests as part of the development process. The
> intent would be to have developers run the tests prior to making any code
> changes, and record the metrics for comparison against the results obtained
> after making changes to the code base.
> As described by Rafi:
> "I think it would be good to include some performance metrics that isolate
> the various components of proton. For example having a metric that simply
> repeatedly encodes/decodes a message would be quite useful in isolating the
> message implementation. Setting up two engines in memory and using them to
> blast zero sized messages back and forth as fast as possible would tell us
> how much protocol overhead the engine is adding. Using the codec directly
> to encode/decode data would also be a useful measure. Each of these would
> probably want to have multiple profiles, different message content,
> different acknowledgement/flow control patterns, and different kinds of
> data.
> I think breaking out the different dimensions of the implementation as
> above would provide a very useful tool to run before/after any performance
> sensitive changes to detect and isolate regressions, or to test potential
> improvements."