Erixonich commented on code in PR #7080:
URL: https://github.com/apache/ignite-3/pull/7080#discussion_r2568234427
##########
modules/platforms/cpp/ignite/client/detail/node_connection.h:
##########
@@ -119,9 +139,14 @@ class node_connection : public std::enable_shared_from_this<node_connection> {
         buffer.write_length_header();
     }
+    std::optional<std::chrono::time_point<std::chrono::steady_clock>> timeout{};
+    if (m_configuration.get_operation_timeout().count() > 0) {
+        timeout = std::chrono::steady_clock::now() + m_configuration.get_operation_timeout();
Review Comment:
TL;DR:
I was under the impression that Java's System.currentTimeMillis() and
steady_clock::now() have similar implementations and drawbacks, but digging a
bit deeper showed me that it is system_clock::now() that is the analogue of
System.currentTimeMillis().
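To illustrate the analogy (my own sketch, not from the PR): system_clock measures wall-clock time since the epoch, which is what System.currentTimeMillis() returns, while steady_clock is monotonic and is the right tool for computing deadlines like the one in this diff.

```cpp
#include <chrono>
#include <cstdint>

// Rough equivalent of Java's System.currentTimeMillis(): wall-clock milliseconds
// since the Unix epoch (guaranteed by the standard since C++20, de facto before).
int64_t current_time_millis() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(system_clock::now().time_since_epoch()).count();
}
```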
******************************************
As I said, I'm not aware of any performance issues with this method. In the
Java world there were some issues with the analogous System.currentTimeMillis()
on the Windows platform, which could be the reason the Ignite codebase has
`org.apache.ignite.internal.util.FastTimestamps#coarseCurrentTimeMillis`.
A possible optimization, if calling std::chrono::steady_clock::now() ever turns
out to be slower than we can afford: spawn a thread that updates an atomic
value every X seconds, and read that atomic instead of calling ::now() on the
hot path (see the sketch below).
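Roughly something like this (a hypothetical sketch only; the class name `coarse_steady_clock`, the refresh resolution, and the placement are all made up for illustration, not a concrete proposal for the PR):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical coarse clock cache: a background thread refreshes a cached
// timestamp every `resolution`, so hot paths read an atomic instead of calling ::now().
class coarse_steady_clock {
public:
    explicit coarse_steady_clock(std::chrono::milliseconds resolution)
        : m_updater([this, resolution] {
            while (!m_stopped.load(std::memory_order_relaxed)) {
                m_cached_ticks.store(
                    std::chrono::steady_clock::now().time_since_epoch().count(),
                    std::memory_order_relaxed);
                std::this_thread::sleep_for(resolution);
            }
        }) {}

    ~coarse_steady_clock() {
        // May block for up to `resolution` while the updater finishes its sleep.
        m_stopped.store(true, std::memory_order_relaxed);
        m_updater.join();
    }

    // Cheap read on the hot path; accuracy is bounded by the refresh resolution.
    std::chrono::steady_clock::time_point now() const {
        return std::chrono::steady_clock::time_point{
            std::chrono::steady_clock::duration{m_cached_ticks.load(std::memory_order_relaxed)}};
    }

private:
    std::atomic<bool> m_stopped{false};
    // Seeded with a real reading so now() never returns zero before the first refresh.
    std::atomic<std::chrono::steady_clock::rep> m_cached_ticks{
        std::chrono::steady_clock::now().time_since_epoch().count()};
    std::thread m_updater;  // declared last so the thread starts after the atomics are initialized
};
```

The trade-off is that deadline precision is bounded by the refresh interval, which should be acceptable for operation timeouts measured in seconds.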
From what I could google, calling steady_clock::now() doesn't require a context
switch from userspace to kernelspace (as opposed to calling system_clock::now()).
I'm pretty sure we don't want to do this right now, but maybe a small task to
research the performance impact of calling ::now() on each request is worth filing.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]