Github user franz1981 commented on a diff in the pull request:
https://github.com/apache/qpid-proton-j/pull/20#discussion_r234719950
--- Diff:
proton-j/src/main/java/org/apache/qpid/proton/codec/CompositeReadableBuffer.java
---
@@ -834,22 +834,39 @@ public boolean equals(Object
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-proton-j/pull/20
@gemmellr np Robbie! I believe it's better to have it handled separately
:+1:
---
-
To unsubscribe, e-mail: dev
Github user franz1981 commented on a diff in the pull request:
https://github.com/apache/qpid-proton-j/pull/20#discussion_r234421816
--- Diff:
proton-j/src/main/java/org/apache/qpid/proton/codec/CompositeReadableBuffer.java
---
@@ -825,33 +825,67 @@ public int hashCode
GitHub user franz1981 opened a pull request:
https://github.com/apache/qpid-proton-j/pull/20
PROTON-1965 Optimize CompositeReadableBuffer::equals with single chunk
Using the single chunk directly while performing the byte comparison
increases performance.
Master
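The idea behind the PROTON-1965 optimization can be sketched as follows. This is a hypothetical illustration, not the actual `CompositeReadableBuffer` code: when the composite buffer is backed by a single `byte[]` chunk, the comparison can walk that array directly instead of paying the position-tracking accessor path for every byte.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch (names and signatures are illustrative only):
// when the composite buffer degenerates to one backing array chunk,
// equality can read that array directly in a tight loop the JIT can
// optimize, instead of calling a per-byte accessor on the composite.
final class SingleChunkEquals {
    static boolean chunkEquals(byte[] chunk, int offset, ByteBuffer other, int length) {
        if (other.remaining() != length) {
            return false;
        }
        int pos = other.position();
        for (int i = 0; i < length; i++) {
            // Direct array access on 'chunk' vs a get() call per byte on 'other'.
            if (chunk[offset + i] != other.get(pos + i)) {
                return false;
            }
        }
        return true;
    }
}
```

The single-chunk case is the common one in practice, which is why a fast path for it pays off.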
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/26
I have to say, TBH, that I'm not getting as big a boost as I was expecting,
but that's probably because there are other bottlenecks (the consumer side on
the broker) that are not helping the measurement
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/26
The failing test
`JmsConnectionCloseVariationsTest.testCloseBeforeBrokerStoppedRepeated` does
not seem to allocate any `FifoMessageQueue`: I suppose its failure is
independent of this PR
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/26
Just as a reference: I'm getting about 80 M msg/sec with the new
`FifoMessageQueue`, while near 2.5 M msg/sec with the original one.
On an end-to-end test I'm getting about 20% more throughput
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/26
It won't work with JDK < 8 and it is Oracle-specific AFAIK (I could be
wrong, of course!)
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/26
The reason behind the abstract classes is the padding between fields to avoid
false sharing (which could cut performance to 1/10):
```
OFFSET SIZE
```
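The layout dump above is truncated, but the padding idiom it refers to can be sketched as follows, in the style popularized by JCTools queues; this is a minimal illustration, not the actual `FifoMessageQueue` hierarchy:

```java
// Sketch of the padding-by-inheritance idiom (JCTools style, not the
// actual FifoMessageQueue code). Each abstract layer of seven long
// fields adds 56 bytes of padding, so the hot producer and consumer
// indices cannot land on the same 64-byte cache line.
abstract class PadProducerIndex {
    long p01, p02, p03, p04, p05, p06, p07;
}

abstract class ProducerIndexField extends PadProducerIndex {
    volatile long producerIndex;
}

abstract class PadConsumerIndex extends ProducerIndexField {
    long p11, p12, p13, p14, p15, p16, p17;
}

final class PaddedIndices extends PadConsumerIndex {
    volatile long consumerIndex;
}
```

The trick holds because JVMs lay out superclass fields before subclass fields; an alternative on JDK 8+ HotSpot is the `@sun.misc.Contended` annotation.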
GitHub user franz1981 opened a pull request:
https://github.com/apache/qpid-jms/pull/26
QPIDJMS-430 Lock-Free FifoMessageQueue
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/franz1981/qpid-jms lock_free_fifo_q
Alternatively
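The PR body is cut off here; as a rough illustration of what a lock-free FIFO involves (a minimal sketch in the Vyukov multi-producer/single-consumer style, not the actual `FifoMessageQueue` implementation from this PR):

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal lock-free linked FIFO sketch (multi-producer, single-consumer).
// Illustrative only — NOT the actual FifoMessageQueue code.
final class LockFreeFifo<E> {
    private static final class Node<E> {
        final E value;
        volatile Node<E> next;
        Node(E value) { this.value = value; }
    }

    private final AtomicReference<Node<E>> tail;
    private Node<E> head; // touched only by the single consumer

    LockFreeFifo() {
        Node<E> dummy = new Node<>(null);
        head = dummy;
        tail = new AtomicReference<>(dummy);
    }

    // Producers contend only on one atomic swap; no locks, no CAS loop.
    void offer(E e) {
        Node<E> node = new Node<>(e);
        Node<E> prev = tail.getAndSet(node);
        prev.next = node; // until this link runs, the consumer may briefly see "empty"
    }

    // The single consumer walks the next pointers; returns null when empty.
    E poll() {
        Node<E> next = head.next;
        if (next == null) {
            return null;
        }
        head = next;
        return next.value;
    }
}
```

The absence of any lock or retry loop on the hot path is what makes throughput figures like the ones quoted in this thread plausible.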
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/22
@gemmellr
> Can you elaborate on the benefits you measured here? I'd like to
understand the extent to consider against the downside of exposing dep impl
types throughout the code b
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/22
@gemmellr @tabish121 I'm not very proud of having exposed the
ByteBuf streams directly, but I tried to use the right type on the facade and
it makes the code really unreadable (and confusing
GitHub user franz1981 opened a pull request:
https://github.com/apache/qpid-jms/pull/22
QPIDJMS-417 Reduce GC pressure while using BytesMessage
Using ByteBuf-based streams directly avoids
unnecessary creation of intermediate instances to
operate on the underlying buffer
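The principle described here can be illustrated in plain Java, without Netty, by contrasting a copying stream with one that wraps the existing backing array; this `MessageBytes` class is hypothetical, not the qpid-jms code:

```java
import java.io.ByteArrayInputStream;

// Hypothetical illustration: a message hands out a stream over its
// existing backing array instead of materialising an intermediate
// copy on every read.
final class MessageBytes {
    private final byte[] backing;

    MessageBytes(byte[] backing) {
        this.backing = backing;
    }

    // Copying path: one fresh byte[] allocation per call — GC pressure.
    ByteArrayInputStream copyingStream() {
        return new ByteArrayInputStream(backing.clone());
    }

    // Zero-copy path: the stream reads the backing array in place.
    ByteArrayInputStream wrappingStream() {
        return new ByteArrayInputStream(backing);
    }
}
```

With a pooled `ByteBuf` the saving is larger still, since the intermediate array never has to exist at all.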
GitHub user franz1981 opened a pull request:
https://github.com/apache/qpid-proton-j/pull/15
PROTON-1916: Makes StringsBenchmark::encodeStringMessage GC free
It includes a perf improvement on string encoding to simplify
the JVM work to compute bounds checking by using a specific
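The PR text is cut off, but the bounds-checking idea it alludes to can be sketched like this (a hedged illustration, not the actual proton-j encoder): a simple counted loop writing into a plain array is a shape the JIT recognizes, letting it hoist or eliminate the per-element bounds checks that a byte-by-byte accessor path would pay.

```java
// Hedged sketch of the bounds-checking idea (illustrative only):
// encode an ASCII string into a destination array with a counted
// loop the JIT can optimize; returns the index one past the last
// byte written.
final class Ascii {
    static int encodeAscii(String s, byte[] dst, int offset) {
        final int length = s.length();
        for (int i = 0; i < length; i++) {
            dst[offset + i] = (byte) s.charAt(i);
        }
        return offset + length;
    }
}
```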
Github user franz1981 commented on the pull request:
https://github.com/apache/qpid-jms/commit/c66d888114021da31d9032c841c08903dd31cc89#commitcomment-29578801
In
qpid-jms-client/src/main/java/org/apache/qpid/jms/provider/ProviderFuture.java:
Github user franz1981 commented on the pull request:
https://github.com/apache/qpid-jms/commit/7750a1c27589261b197d7b746506e19d8771b145#commitcomment-29578656
In
qpid-jms-client/src/main/java/org/apache/qpid/jms/provider/amqp/AmqpProvider.java:
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/19
The main source of this optimization comes from
http://normanmaurer.me/presentations/2014-facebook-eng-netty/slides.html#8.0
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-jms/pull/19
@tabish121 @gemmellr I have rebased it in order to make this PR compatible
with the latest version :+1:
GitHub user franz1981 opened a pull request:
https://github.com/apache/qpid-jms/pull/19
Save allocation of new promise on each writeAndFlush
Using a void promise on Netty's writeAndFlush makes it possible to save
the allocation of a new promise on each call.
You can merge this pull request
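In Netty the pattern boils down to passing `channel.voidPromise()` to `writeAndFlush` instead of letting each call allocate a fresh `ChannelPromise`. The sketch below shows the allocation saving in plain Java, using a hypothetical `Writer` rather than Netty itself: fire-and-forget callers share one no-op completion object.

```java
// Plain-Java illustration of the "void promise" saving (hypothetical
// Writer, not Netty): callers that ignore the outcome pass one shared
// no-op completion instead of allocating a fresh callback per write.
final class Writer {
    interface Completion {
        void done();
    }

    // Shared singleton, reused for every write whose outcome is ignored.
    static final Completion VOID = () -> { };

    int writes;

    void writeAndFlush(String msg, Completion completion) {
        writes++;           // stand-in for the real I/O
        completion.done();  // no-op when VOID is passed
    }
}
```

On a hot write path this removes one short-lived allocation per message, which is exactly the kind of GC pressure the PR targets.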
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-proton-j/pull/12
@gemmellr @tabish121 I've used the [JMH
visualizer](http://jmh.morethan.io/) to compare the performance of Symbols
before/after the latest improvements from @tabish121, and that's what I've
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-proton-j/pull/12
Thanks @gemmellr, I've tried to fix the PR using your good advice :+1:
And I'd ask you and @tabish121 to help check whether the content of each
benchmark respects some kind of baseline
Github user franz1981 commented on the issue:
https://github.com/apache/qpid-proton-j/pull/12
@gemmellr @tabish121 Guys, let me know if it seems reasonable (in form and
content) and do not hesitate to propose other benchmarks too: I've added the
first ones I've found looking at some
GitHub user franz1981 opened a pull request:
https://github.com/apache/qpid-proton-j/pull/12
PROTON-1690 JMH Benchmarks for baseline performance of Message
encoding/decoding
It adds a module to perform reliable (and repeatable) benchmarks on a basic
Message encoding/decoding