[ 
https://issues.apache.org/jira/browse/IGNITE-22544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-22544:
-----------------------------------
    Description: 
We should benchmark the way we marshal commands using the optimized marshaller and 
make it faster. Some obvious places:
 * byte buffer pool - we can replace the queue with a manual implementation of a 
Treiber stack; it's trivial and uses fewer CAS/volatile operations
 * a new serializer is allocated on every call; serializers can instead be stored 
in static final constants, or cached in fields of the corresponding factories
 * we can create one serialization factory per group instead of per message, 
removing an unnecessary level of indirection. The group factory can use a 
{{switch}}, as in Ignite 2, which effectively turns deserializer construction 
and serializer access into static dispatch instead of dynamic dispatch (a 
virtual call) and should be noticeably faster
 * a profiler might reveal other easy wins; we should also benchmark 
{{OptimizedMarshaller}} against other serialization algorithms
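The first point can be sketched as follows. This is a minimal illustration of a Treiber stack for pooling byte buffers, not the actual Ignite pool API (the class and method names are hypothetical); the point is that a pop or push is a single CAS on the head reference, versus the multiple volatile accesses a concurrent queue performs per operation. ABA is not a concern here because nodes are freshly allocated and reclaimed by the GC.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicReference;

/** Lock-free LIFO pool of byte buffers (Treiber stack sketch). */
final class BufferStack {
    /** Immutable stack node. */
    private static final class Node {
        final ByteBuffer buf;
        final Node next;

        Node(ByteBuffer buf, Node next) {
            this.buf = buf;
            this.next = next;
        }
    }

    private final AtomicReference<Node> head = new AtomicReference<>();

    /** Returns a buffer to the pool: one CAS on the head. */
    void push(ByteBuffer buf) {
        Node h;
        do {
            h = head.get();
        } while (!head.compareAndSet(h, new Node(buf, h)));
    }

    /** Takes a pooled buffer, or {@code null} if the pool is empty. */
    ByteBuffer pop() {
        Node h;
        do {
            h = head.get();
            if (h == null)
                return null;
        } while (!head.compareAndSet(h, h.next));
        return h.buf;
    }
}
```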

EDIT: a quick draft is attached; it addresses points 1 and 2.
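The per-group factory idea (point 3) could look roughly like this. The interfaces and message type constants below are hypothetical, not the real Ignite types; the sketch only shows the shape: stateless serializers held in static finals (point 2) and a {{switch}} over the message type, so serializer lookup is a branch table rather than a per-message factory resolved through a virtual call.

```java
/** Hypothetical serializer interface for the sketch. */
interface MessageSerializer {
    byte[] marshal(Object msg);
}

/** One factory per message group, dispatching by message type. */
final class GroupSerializationFactory {
    // Serializers are stateless, so they are created once and reused.
    private static final MessageSerializer FOO = msg -> new byte[] {0};
    private static final MessageSerializer BAR = msg -> new byte[] {1};

    /** Static dispatch: a switch instead of a virtual factory call. */
    static MessageSerializer serializer(short messageType) {
        switch (messageType) {
            case 0: return FOO;
            case 1: return BAR;
            default: throw new IllegalArgumentException(
                "Unknown message type: " + messageType);
        }
    }
}
```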


> Commands marshalling appears to be slow
> ---------------------------------------
>
>                 Key: IGNITE-22544
>                 URL: https://issues.apache.org/jira/browse/IGNITE-22544
>             Project: Ignite
>          Issue Type: Improvement
>            Reporter: Ivan Bessonov
>            Priority: Major
>              Labels: ignite-3
>         Attachments: IGNITE-22544.patch
>
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
