[jira] [Commented] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855105#comment-16855105
 ] 

ASF GitHub Bot commented on DISPATCH-1354:
--

ChugR commented on issue #518: DISPATCH-1354: Annotation processing performance 
improvements
URL: https://github.com/apache/qpid-dispatch/pull/518#issuecomment-498443098
 
 
   Informal quiver tests show about a 10% improvement.
   * Linear three-router network
   * qpid-proton-c arrows
   * Message-routed transfers
   * 1,000,000 messages with 100-byte payloads
   * No user annotations
   * All routers and quiver arrows run on a single laptop
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Interrouter annotation processing uses slow methods
> ---
>
> Key: DISPATCH-1354
> URL: https://issues.apache.org/jira/browse/DISPATCH-1354
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Affects Versions: 1.7.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
>
> Message annotation processing on received messages stages key names byte by 
> byte into a flat buffer and then uses strcmp to check them.
> Easy improvements are:
>  * Use name in raw buffer if it does not cross a buffer boundary
>  * If name crosses a boundary then use memmoves to get the name in chunks
>  * Check the name prefix only once and then check variable parts of name 
> strings
>  * Don't create unnecessary qd_iterators and qd_parsed_fields
>  * Don't check names whose lengths differ from the given keys



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org






[jira] [Commented] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855101#comment-16855101
 ] 

ASF GitHub Bot commented on DISPATCH-1354:
--

ChugR commented on pull request #518: DISPATCH-1354: Annotation processing 
performance improvements
URL: https://github.com/apache/qpid-dispatch/pull/518
 
 
   Message annotation processing on received messages stages key names
   byte by byte into a flat buffer and then uses strcmp to check them.
   
   Easy improvements are:
   
* Use name in raw buffer if it does not cross a buffer boundary
* If name crosses a boundary then use memmoves to get the name in chunks
* Check the name prefix only once and then check variable parts of name 
strings
* Don't create unnecessary qd_iterators and qd_parsed_fields
* Don't check names whose lengths differ from the given keys
 



> Interrouter annotation processing uses slow methods
> ---
>
> Key: DISPATCH-1354
> URL: https://issues.apache.org/jira/browse/DISPATCH-1354
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Affects Versions: 1.7.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
>
> Message annotation processing on received messages stages key names byte by 
> byte into a flat buffer and then uses strcmp to check them.
> Easy improvements are:
>  * Use name in raw buffer if it does not cross a buffer boundary
>  * If name crosses a boundary then use memmoves to get the name in chunks
>  * Check the name prefix only once and then check variable parts of name 
> strings
>  * Don't create unnecessary qd_iterators and qd_parsed_fields
>  * Don't check names whose lengths differ from the given keys









[jira] [Created] (PROTON-2063) [go] Sender auto settlement should only happen after receiving a settlement from the receiver

2019-06-03 Thread Andrew Stitcher (JIRA)
Andrew Stitcher created PROTON-2063:
---

 Summary: [go] Sender auto settlement should only happen after 
receiving a settlement from the receiver
 Key: PROTON-2063
 URL: https://issues.apache.org/jira/browse/PROTON-2063
 Project: Qpid Proton
  Issue Type: Bug
  Components: cpp-binding
Reporter: Andrew Stitcher









[jira] [Updated] (PROTON-2063) [go] Sender auto settlement should only happen after receiving a settlement from the receiver

2019-06-03 Thread Andrew Stitcher (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Stitcher updated PROTON-2063:

Component/s: (was: cpp-binding)
 go-binding

> [go] Sender auto settlement should only happen after receiving a settlement 
> from the receiver
> -
>
> Key: PROTON-2063
> URL: https://issues.apache.org/jira/browse/PROTON-2063
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Reporter: Andrew Stitcher
>Priority: Major
>







[jira] [Created] (PROTON-2062) [ruby] Sender auto settlement should only happen after receiving a settlement from the receiver

2019-06-03 Thread Andrew Stitcher (JIRA)
Andrew Stitcher created PROTON-2062:
---

 Summary: [ruby] Sender auto settlement should only happen after 
receiving a settlement from the receiver
 Key: PROTON-2062
 URL: https://issues.apache.org/jira/browse/PROTON-2062
 Project: Qpid Proton
  Issue Type: Bug
  Components: cpp-binding
Reporter: Andrew Stitcher









[jira] [Updated] (PROTON-2062) [ruby] Sender auto settlement should only happen after receiving a settlement from the receiver

2019-06-03 Thread Andrew Stitcher (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-2062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Stitcher updated PROTON-2062:

Component/s: (was: cpp-binding)
 ruby-binding

> [ruby] Sender auto settlement should only happen after receiving a settlement 
> from the receiver
> ---
>
> Key: PROTON-2062
> URL: https://issues.apache.org/jira/browse/PROTON-2062
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: ruby-binding
>Reporter: Andrew Stitcher
>Priority: Major
>







[jira] [Created] (PROTON-2061) [C++] Sender auto settlement should only happen after receiving a settlement from the receiver

2019-06-03 Thread Andrew Stitcher (JIRA)
Andrew Stitcher created PROTON-2061:
---

 Summary: [C++] Sender auto settlement should only happen after 
receiving a settlement from the receiver
 Key: PROTON-2061
 URL: https://issues.apache.org/jira/browse/PROTON-2061
 Project: Qpid Proton
  Issue Type: Bug
  Components: cpp-binding
Reporter: Andrew Stitcher









[jira] [Created] (PROTON-2060) [Documentation] The documentation for the sender auto-settle option needs clarification

2019-06-03 Thread Andrew Stitcher (JIRA)
Andrew Stitcher created PROTON-2060:
---

 Summary: [Documentation] The documentation for the sender 
auto-settle option needs clarification
 Key: PROTON-2060
 URL: https://issues.apache.org/jira/browse/PROTON-2060
 Project: Qpid Proton
  Issue Type: Improvement
Reporter: Andrew Stitcher


The current documentation is silent about when the sender automatically 
settles. It is significant that sender settlement only happens when the sender 
receives a settlement from the receiver.

The sender *should not* settle after receiving only a terminal-state 
disposition with no settled flag, as there would then be no way to receive 
any further events for that delivery (such as the subsequent on_settled event 
that might be expected when the receiver finally settles the message).

So the documentation for each language binding (and for the Proton API itself) 
should specify that sender auto-settlement only occurs for a delivery after 
the sender receives a settled disposition for that delivery.
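The rule above can be sketched as a tiny state machine (plain Python for 
illustration, not the Proton API; class and event names are made up here):

```python
class SenderDelivery:
    """Tracks one outgoing delivery; auto-settles only on a settled disposition."""

    def __init__(self):
        self.remote_state = None   # e.g. "accepted", "released"
        self.settled = False

    def on_disposition(self, state, settled):
        """Process one incoming disposition frame; return the event to fire."""
        self.remote_state = state
        if settled and not self.settled:
            # Only the settled flag from the receiver settles the delivery.
            self.settled = True
            return "on_settled"
        # A terminal state alone (settled=False) must NOT settle the
        # delivery, or later events for it could never be delivered.
        return "on_" + state
```

With this model, a two-frame disposition (state first, settled flag later) 
still produces exactly one settlement event, which is the behaviour the 
documentation should promise.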






[jira] [Commented] (PROTON-2056) [proton-python] on_settled callback not called when disposition arrives in 2 frames

2019-06-03 Thread Andrew Stitcher (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16855052#comment-16855052
 ] 

Andrew Stitcher commented on PROTON-2056:
-

[~gemmellr] Thank you for the constructive (and persistent) dialogue. You've 
spurred me to really understand this aspect of AMQP message delivery!

I realise that I was misreading the proton-c code: it makes *no* sender-side 
settlements by itself. Thinking that it did tripped me up for quite a while. 
This still leaves the sender-side disposition in [~ganeshmurthy]'s trace in 
https://issues.apache.org/jira/browse/PROTON-2056?focusedCommentId=16852243=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16852243 
unexplained.

So I now agree that the simple fix is actually the correct fix. However:
{quote}I think the patch you proposed is the correct thing to do, its whats 
being done in other cases.
{quote}
I've checked, and all the proton-c based bindings (python, C++, ruby, go) have 
exactly the same behaviour as the python binding. So I'm not sure what "in 
other cases" means here?

Another important thing we need to rectify is the documentation, so that we 
explicitly call out what "automatic sender settlement" means. We should state 
explicitly that the sender settles automatically, with no intervention, when 
it receives a settlement from the receiver. Previously the conditions that 
triggered the automatic settlement were unspecified (but clearly important!).

I will raise some extra JIRAs to cover the non-python fixes and the doc 
improvement.

> [proton-python]  on_settled callback not called when disposition arrives in 2 
> frames
> 
>
> Key: PROTON-2056
> URL: https://issues.apache.org/jira/browse/PROTON-2056
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c, python-binding
>Affects Versions: proton-c-0.28.0
>Reporter: Ganesh Murthy
>Priority: Major
> Attachments: proton-2056.patch
>
>
> When very large anonymous messages are sent to the router and these messages 
> have no receiver, they are immediately released. The router waits for the 
> entire large message to arrive in the router before settling it. Due to this, 
> in some cases, two disposition frames are sent for the same delivery, the 
> first has state=released and the second has settled=true as seen below
>  
> {noformat}
> 0x56330c891430]:0 <- @disposition(21) [role=true, first=315, 
> state=@released(38) []]
> [0x56330c891430]:0 <- @disposition(21) [role=true, first=315, settled=true, 
> state=@released(38) []]{noformat}
>  
> When this case happens, the on_settled is not called for the python binding. 
> The on_released is called. The on_settled must be called when a settlement 
> arrives for every delivery. I observed this behavior in a python system test 
> in Dispatch Router. The test called
> test_51_anon_sender_mobile_address_large_msg_edge_to_edge_two_interior can be 
> found in tests/system_tests_edge_router.py
> The test does not fail all the time but when it does it is due to the 
> on_settled not being called for deliveries that have this two part 
> disposition.
>  
> I tried in vain to write a standalone python reproducer. I could not do it.
>  
> To run the specific system test run the following from the 
> qpid-dispatch/build folder
>  
> {noformat}
>  /usr/bin/python "/home/gmurthy/opensource/qpid-dispatch/build/tests/run.py" 
> "-m" "unittest" "-v" 
> "system_tests_edge_router.RouterTest.test_51_anon_sender_mobile_address_large_msg_edge_to_edge_two_interior"{noformat}
>  
> The following is the test failure
> {noformat}
> test_51_anon_sender_mobile_address_large_msg_edge_to_edge_two_interior 
> (system_tests_edge_router.RouterTest) ... FAIL
> ==
> FAIL: test_51_anon_sender_mobile_address_large_msg_edge_to_edge_two_interior 
> (system_tests_edge_router.RouterTest)
> --
> Traceback (most recent call last):
>   File 
> "/home/gmurthy/opensource/qpid-dispatch/tests/system_tests_edge_router.py", 
> line 964, in 
> test_51_anon_sender_mobile_address_large_msg_edge_to_edge_two_interior
>     self.assertEqual(None, test.error)
> AssertionError: None != u'Timeout Expired - n_sent=350 n_accepted=300 
> n_modified=0 n_released=48'
> --
> Ran 1 test in 17.661s
> FAILED (failures=1)
> {noformat}






[jira] [Created] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-03 Thread Chuck Rolke (JIRA)
Chuck Rolke created DISPATCH-1354:
-

 Summary: Interrouter annotation processing uses slow methods
 Key: DISPATCH-1354
 URL: https://issues.apache.org/jira/browse/DISPATCH-1354
 Project: Qpid Dispatch
  Issue Type: Improvement
  Components: Router Node
Affects Versions: 1.7.0
Reporter: Chuck Rolke
Assignee: Chuck Rolke


Message annotation processing on received messages stages key names byte by 
byte into a flat buffer and then uses strcmp to check them.

Easy improvements are:
 * Use name in raw buffer if it does not cross a buffer boundary
 * If name crosses a boundary then use memmoves to get the name in chunks
 * Check the name prefix only once and then check variable parts of name strings
 * Don't create unnecessary qd_iterators and qd_parsed_fields
 * Don't check names whose lengths differ from the given keys
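The matching strategy in the list above can be sketched in plain Python (the 
real code is C inside qpid-dispatch; the prefix and candidate names here are 
illustrative, not the actual annotation keys):

```python
PREFIX = b"x-opt-qd."   # assumed shared annotation-key prefix, for illustration


def match_key(raw, start, length, candidates):
    """Match an annotation key found in a contiguous buffer view.

    raw        -- bytes-like view of the buffer (a caller would first
                  memmove boundary-crossing names into one chunk)
    start      -- offset of the key within raw
    length     -- encoded length of the key
    candidates -- suffixes to test, e.g. [b"trace", b"ingress"]
    """
    # Check the shared prefix exactly once, not per candidate.
    if raw[start:start + len(PREFIX)] != PREFIX:
        return None
    body = raw[start + len(PREFIX):start + length]
    for cand in candidates:
        # Skip candidates whose length differs from the given key.
        if len(cand) != len(body):
            continue
        if body == cand:
            return cand
    return None
```

The length pre-filter and the single prefix comparison are the two cheap 
rejections; only keys that survive both get a full byte comparison, and no 
intermediate iterator or parsed-field object is created.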






[jira] [Created] (DISPATCH-1353) Document how to configure access policy control on router-initiated connections

2019-06-03 Thread Ben Hardesty (JIRA)
Ben Hardesty created DISPATCH-1353:
--

 Summary: Document how to configure access policy control on 
router-initiated connections
 Key: DISPATCH-1353
 URL: https://issues.apache.org/jira/browse/DISPATCH-1353
 Project: Qpid Dispatch
  Issue Type: Improvement
  Components: Documentation
Reporter: Ben Hardesty


When the router opens a connection to an external AMQP container, you can now 
define policies that restrict the resources that the external container can 
access on the router. This capability should be documented.






[jira] [Commented] (PROTON-2044) Azure IoT Hub local-idle-timeout expired

2019-06-03 Thread Andrew Stitcher (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854965#comment-16854965
 ] 

Andrew Stitcher commented on PROTON-2044:
-

It seems that you're trying to use BlockingConnection when you really want a 
regular event-style application - see the python example code for how to use 
the event-handling API.

But as you have observed, if you want to use BlockingConnection you must use 
it synchronously. It can be used in an application thread, but no other thread 
can be using proton for the same connection at the same time. Sending messages 
from a different thread than the one running the processing loop therefore 
violates one of Proton's threading rules.

Another possibility is simply to recognise the disconnect exception, reconnect, 
and resend the message. As I said, if you stay connected to servicebus long 
enough without sending a message you will get disconnected anyway, so you have 
to handle a very similar case regardless.
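The reconnect-and-resend approach can be sketched as follows (illustrative 
only: `make_sender` is a hypothetical factory, and `ConnectionError` stands in 
for the disconnect exception the binding actually raises):

```python
import time


def send_with_reconnect(make_sender, message, retries=3, delay=1.0):
    """Send a message, opening a fresh connection for each attempt.

    make_sender -- factory that connects and returns an object with
                   .send(msg); not a real Proton API, just a sketch.
    """
    last_exc = None
    for attempt in range(retries):
        sender = make_sender()
        try:
            return sender.send(message)
        except ConnectionError as exc:       # stand-in for the disconnect error
            last_exc = exc
            time.sleep(delay * (attempt + 1))  # simple linear backoff
    raise last_exc
```

Because an idle disconnect can happen at any time, wrapping every send this 
way handles both the idle-timeout case and ordinary network drops with one 
code path.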

> Azure IoT Hub local-idle-timeout expired
> 
>
> Key: PROTON-2044
> URL: https://issues.apache.org/jira/browse/PROTON-2044
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: python-binding
>Affects Versions: proton-c-0.24.0
> Environment: Operating System: Windows
> Python: 3.6.4
> qpid-proton: 0.24.0
>Reporter: Andreas Fendt
>Priority: Major
>
> I'm using following python code to send messages to the devices 
> (*/messages/devicebound*) which are connected to the *azure iot hub*:
> {code}
> import json
> from base64 import b64encode, b64decode
> from hashlib import sha256
> from hmac import HMAC
> from time import time
> from urllib.parse import quote_plus, urlencode
> from proton import ProtonException, Message
> from proton.utils import BlockingConnection
> class IotHub:
>     def __init__(self):
>     self._hostname = f"example-hub.azure-devices.net"
>     self._username = f"iothubow...@sas.root.example-hub.azure-devices.net"
>     self._blocking_connection = None
>     self._sender = None
>     self.connect()
>     @staticmethod
>     def generate_sas_token(uri: str, policy: str, key: str, expiry: float = 
> None):
>     if not expiry:
>     expiry = time() + 3600  # Default to 1 hour.
>     encoded_uri = quote_plus(uri)
>     ttl = int(expiry)
>     sign_key = f"{encoded_uri}\n{ttl}"
>     signature = b64encode(HMAC(b64decode(key), sign_key.encode(), 
> sha256).digest())
>     result = {"sr": uri, "sig": signature, "se": str(ttl)}
>     if policy:
>     result["skn"] = policy
>     return f"SharedAccessSignature {urlencode(result)}"
>     def connect(self):
>     # create credentials
>     password = self.generate_sas_token(self._hostname,
>    "iothubowner", "key",
>    time() + 31557600)  # ttl = 1 Year
>     # establish connection
>     self._blocking_connection = 
> BlockingConnection(f"amqps://{self._hostname}", allowed_mechs="PLAIN",
>    user=self._username, 
> password=password,
>    heartbeat=30)
>     self._sender = 
> self._blocking_connection.create_sender("/messages/devicebound")
>     def send(self, message: dict, serial_number: str):
>     message = 
> Message(address="/devices/{serial_number}/messages/devicebound".format(serial_number=serial_number),
>   body=bytes(json.dumps(message, separators=(",", 
> ":")), "utf-8"))
>     message.inferred = True  # disable message encoding
>     self._sender.send(message, timeout=20)
> {code}
> The problem is that when I don't send any messages for some seconds, I get 
> the following exception while sending a message:
> {code:java}
> Connection amqps://example-hub.azure-devices.net:amqps disconnected: 
> Condition('amqp:resource-limit-exceeded', 'local-idle-timeout expired')
> {code}
> What's the reason for that? How can I solve it?
> Thank you for your help.






[jira] [Comment Edited] (PROTON-2044) Azure IoT Hub local-idle-timeout expired

2019-06-03 Thread Andreas Fendt (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840291#comment-16840291
 ] 

Andreas Fendt edited comment on PROTON-2044 at 6/3/19 6:37 PM:
---

Hello, thank you for the answer.

I have now refactored the code so that another thread just calls the *run()* 
function of *BlockingConnection*:

{code:python}
import ctypes
import inspect
import json
import threading
from asyncio import AbstractEventLoop
from base64 import b64encode, b64decode
from hashlib import sha256
from hmac import HMAC
from threading import Lock
from time import time
from urllib.parse import quote_plus, urlencode

from proton import ProtonException, Message
from proton.utils import BlockingConnection


def _async_raise(tid, exctype):
"""raises the exception, performs cleanup if needed"""
if not inspect.isclass(exctype):
raise TypeError("Only types can be raised (not instances)")
res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 
ctypes.py_object(exctype))
if res == 0:
raise ValueError("invalid thread id")
elif res != 1:
# """if it returns a number greater than one, you're in trouble,
# and you should call it again with exc=NULL to revert the effect"""
ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, 0)
raise SystemError("PyThreadState_SetAsyncExc failed")


class Thread(threading.Thread):
def _get_my_tid(self):
"""determines this (self's) thread id"""
if not self.isAlive():
raise threading.ThreadError("the thread is not active")

# do we have it cached?
if hasattr(self, "_thread_id"):
return self._thread_id

# no, look for it in the _active dict
for tid, tobj in threading._active.items():
if tobj is self:
self._thread_id = tid
return tid

raise AssertionError("could not determine the thread's id")

def raise_exc(self, exctype):
"""raises the given exception type in the context of this thread"""
_async_raise(self._get_my_tid(), exctype)

def terminate(self):
"""raises SystemExit in the context of the given thread, which should
cause the thread to exit silently (unless caught)"""
self.raise_exc(SystemExit)


class IotHub:
def __init__(self):
self._hostname = f"example-hub.azure-devices.net"
self._username = f"iothubow...@sas.root.example-hub.azure-devices.net"

self._blocking_connection = None
self._sender = None
self._thread = None

self.connect()

@staticmethod
def generate_sas_token(uri: str, policy: str, key: str, expiry: float = 
None):
if not expiry:
expiry = time() + 3600  # Default to 1 hour.
encoded_uri = quote_plus(uri)
ttl = int(expiry)
sign_key = f"{encoded_uri}\n{ttl}"
signature = b64encode(HMAC(b64decode(key), sign_key.encode(), 
sha256).digest())
result = {"sr": uri, "sig": signature, "se": str(ttl)}
if policy:
result["skn"] = policy

return f"SharedAccessSignature {urlencode(result)}"

def connect(self):
# create credentials
password = self.generate_sas_token(self._hostname,
   "iothubowner", "key",
   time() + 31557600)  # ttl = 1 Year

# establish connection
self._blocking_connection = 
BlockingConnection(f"amqps://{self._hostname}", allowed_mechs="PLAIN",
   user=self._username, 
password=password,
   heartbeat=30)
self._sender = 
self._blocking_connection.create_sender("/messages/devicebound")

# keep connection active
if self._thread and self._thread.is_alive():
self._thread.terminate()
self._thread = Thread(target=self.worker, daemon=True)
self._thread.start()

def worker(self):
self._blocking_connection.run()

def send(self, message: dict, serial_number: str):
message = 
Message(address="/devices/{serial_number}/messages/devicebound".format(serial_number=serial_number),
  body=bytes(json.dumps(message, separators=(",", 
":")), "utf-8"))
message.inferred = True  # disable message encoding
self._sender.send(message, timeout=20)

{code}

-Now I have the problem, as you already mentioned, that the Service Bus 
disconnects after about 10-15 minutes, which is ok and I have to live with 
that.-


was (Author: andreas.fendt):
Hallo, thank you for the answer.

I have now refactored the code, so that another Thread just calls the *run()* 
function of *BlockingConnection*:

{code:python}
import ctypes
import inspect
import json
import threading
from asyncio 

[jira] [Commented] (PROTON-2044) Azure IoT Hub local-idle-timeout expired

2019-06-03 Thread Andreas Fendt (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854931#comment-16854931
 ] 

Andreas Fendt commented on PROTON-2044:
---

[~astitcher] 

I have now investigated this a bit further and found that it is *not possible* 
to call

{code:python}
self._blocking_connection.run()
{code}

from another thread. Proton, or the entire Python runtime, crashes when I try 
to send a message:

{code:python}
self._sender = self._blocking_connection.create_sender(self._queue)
self._sender.send(message_to_send, timeout=20)
{code}

How can I keep the connection active (sending heartbeats) and send messages 
asynchronously at the same time?

> Azure IoT Hub local-idle-timeout expired
> 
>
> Key: PROTON-2044
> URL: https://issues.apache.org/jira/browse/PROTON-2044
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: python-binding
>Affects Versions: proton-c-0.24.0
> Environment: Operating System: Windows
> Python: 3.6.4
> qpid-proton: 0.24.0
>Reporter: Andreas Fendt
>Priority: Major
>
> I'm using following python code to send messages to the devices 
> (*/messages/devicebound*) which are connected to the *azure iot hub*:
> {code}
> import json
> from base64 import b64encode, b64decode
> from hashlib import sha256
> from hmac import HMAC
> from time import time
> from urllib.parse import quote_plus, urlencode
> from proton import ProtonException, Message
> from proton.utils import BlockingConnection
> class IotHub:
>     def __init__(self):
>     self._hostname = f"example-hub.azure-devices.net"
>     self._username = f"iothubow...@sas.root.example-hub.azure-devices.net"
>     self._blocking_connection = None
>     self._sender = None
>     self.connect()
>     @staticmethod
>     def generate_sas_token(uri: str, policy: str, key: str, expiry: float = 
> None):
>     if not expiry:
>     expiry = time() + 3600  # Default to 1 hour.
>     encoded_uri = quote_plus(uri)
>     ttl = int(expiry)
>     sign_key = f"{encoded_uri}\n{ttl}"
>     signature = b64encode(HMAC(b64decode(key), sign_key.encode(), 
> sha256).digest())
>     result = {"sr": uri, "sig": signature, "se": str(ttl)}
>     if policy:
>     result["skn"] = policy
>     return f"SharedAccessSignature {urlencode(result)}"
>     def connect(self):
>     # create credentials
>     password = self.generate_sas_token(self._hostname,
>    "iothubowner", "key",
>    time() + 31557600)  # ttl = 1 Year
>     # establish connection
>     self._blocking_connection = 
> BlockingConnection(f"amqps://{self._hostname}", allowed_mechs="PLAIN",
>    user=self._username, 
> password=password,
>    heartbeat=30)
>     self._sender = 
> self._blocking_connection.create_sender("/messages/devicebound")
>     def send(self, message: dict, serial_number: str):
>     message = 
> Message(address="/devices/{serial_number}/messages/devicebound".format(serial_number=serial_number),
>   body=bytes(json.dumps(message, separators=(",", 
> ":")), "utf-8"))
>     message.inferred = True  # disable message encoding
>     self._sender.send(message, timeout=20)
> {code}
> The problem is that when I don't send any messages for some seconds, I get 
> the following exception while sending a message:
> {code:java}
> Connection amqps://example-hub.azure-devices.net:amqps disconnected: 
> Condition('amqp:resource-limit-exceeded', 'local-idle-timeout expired')
> {code}
> What's the reason for that? How can I solve it?
> Thank you for your help.






[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854892#comment-16854892
 ] 

Francesco Nigro commented on DISPATCH-1352:
---

[~kgiusti][~ganeshmurthy] The first POC I implemented hasn't worked very well 
(I pooled 3*64-byte qd_buffer_t for each qd_message_pvt_t), because now each 
qd_message allocates 256 bytes more for the original message (which doesn't 
use the embedded buffers to populate its qd_buffer_list_t). I will try to do 
the same for the src buffer and will report what improvement it brings.

> qd_buffer_list_clone cost is dominated by cache misses
> --
>
> Key: DISPATCH-1352
> URL: https://issues.apache.org/jira/browse/DISPATCH-1352
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png
>
>
> qd_buffer_list_clone on qd_message_copy for 
> qd_message_pvt_t.ma_to_override/ma_trace/ma_ingress is dominated by cache 
> misses costs:
> * to "allocate" new qd_buffer_t
> * to reference any qd_buffer_t from the source qd_buffer_list_t
> Such cost is the main reason why the core thread has a very low IPC (< 1 
> instr/cycle). Given the single-threaded nature of the router while dealing 
> with this work, solving it will bring a large performance improvement and 
> help the router scale better.






[jira] [Comment Edited] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854746#comment-16854746
 ] 

Francesco Nigro edited comment on DISPATCH-1352 at 6/3/19 4:00 PM:
---

[~kgiusti] If the POC works, "maybe" we have the chance to do the same while 
allocating the original qd_message_pvt_t too. 
Annotations are extracted by fields AFAIK: if they have a small-ish size 
(< 512 bytes) maybe it would be more beneficial to just copy them into the ones 
embedded into qd_message_pvt_t instead of "moving" them into qd_message_pvt_t. 
That would allow the qd_buffer_t allocated from the field to be short-lived and 
quickly released on the same allocating thread, improving the effectiveness of 
the thread-local qd_buffer_t pool.


was (Author: nigro@gmail.com):
[~kgiusti] If the POC will work "maybe" we have the chance to do the same while 
allocating the original qd_message_pvt_t too: annotations are being extracted 
by fields AFAIK: if they have a small size (< 512 bytes) maybe it would be more 
beneficial to just copy them into the ones embedded into qd_message_pvt_t: it 
would allow the qd_buffer_t allocated from the field to be short-lived on the 
same allocating thread, improving the effectiveness of the thread-local 
qd_buffer_t pool.




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854746#comment-16854746
 ] 

Francesco Nigro commented on DISPATCH-1352:
---

[~kgiusti] If the POC works, "maybe" we have the chance to do the same while 
allocating the original qd_message_pvt_t too: annotations are extracted 
by fields AFAIK: if they have a small size (< 512 bytes) maybe it would be more 
beneficial to just copy them into the ones embedded into qd_message_pvt_t: it 
would allow the qd_buffer_t allocated from the field to be short-lived on the 
same allocating thread, improving the effectiveness of the thread-local 
qd_buffer_t pool.




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854743#comment-16854743
 ] 

Francesco Nigro commented on DISPATCH-1352:
---

[~kgiusti] Uh, nice one...now I got it...that's nice!




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Ken Giusti (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854733#comment-16854733
 ] 

Ken Giusti commented on DISPATCH-1352:
--

{quote}About the code impact, that's a whole different story: we need to 
recognize the "embedded" qd_buffer_t while freeing qd_buffer_list_t to avoid 
deallocating them.
{quote}
That's where the "beauty" of the extra refcount comes in - right now the 
qd_message_send/qd_message_receive code will recognize that and avoid the free. 
Poke around the "add fanout" logic and let me know if you have any Q's.

 




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854704#comment-16854704
 ] 

Francesco Nigro commented on DISPATCH-1352:
---

[~kgiusti] 

{quote}Instead of allocating a qd_message_pvt_t structure we allocate a single 
block of memory large enough to hold the qd_message_pvt_t structure and N 
qd_buffer_t structures and lay them down "cheek to jowl" in the buffer, linking 
the qd_buffer_ts as normal, but incrementing the refcount to prevent freeing 
them individually.{quote}

If we allocate (3*N, with N >= 1) qd_buffer_t upfront it would help: considering 
the context where such list cloning happens, the lists seem to always contain *at 
least* 1 qd_buffer_t, so allocating them upfront makes total sense. 
And the lifecycle of such qd_buffer_t is already bound to the qd_message_pvt_t 
one. 
About the code impact, that's a whole different story: we need to recognize the 
"embedded" qd_buffer_t while freeing a qd_buffer_list_t to avoid deallocating 
them.

{quote}That would avoid the extra calls to qd_buffer_t allocate, make better 
use of the cache (fingers crossed) all without having to touch the iterator 
code (which is everywhere and expects qd_buffer_t based data).{quote}

That's my bet too :) 
Fingers crossed!




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Ken Giusti (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854666#comment-16854666
 ] 

Ken Giusti commented on DISPATCH-1352:
--

Further discussion with [~ganeshmurthy] on this - possible impl detail:

Instead of allocating a qd_message_pvt_t structure, we allocate a single block 
of memory large enough to hold the qd_message_pvt_t structure and N qd_buffer_t 
structures and lay them down "cheek to jowl" in the buffer, linking the 
qd_buffer_ts as normal, but incrementing the refcount to prevent freeing them 
individually.

That would avoid the extra calls to the qd_buffer_t allocator and make better 
use of the cache (fingers crossed), all without having to touch the iterator 
code (which is everywhere and expects qd_buffer_t based data).

 




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Ken Giusti (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854647#comment-16854647
 ] 

Ken Giusti commented on DISPATCH-1352:
--

The first bullet point is similar to something I've wanted to do for a while: 
separate the buffer(s) containing the header info (annotations, properties, 
etc.) from the message body.

The need relates to multi-frame large messages.  In that case we (rightly) free 
inbound buffers once all data has been sent out on all (fanout) links, so we 
don't have to buffer a potentially huge message.

But even in that case the router still references header info while the message 
is being forwarded, so we have to keep those buffers that contain header data 
around until we're done with the message.

Maybe we provide some buffer space in the message private structure itself, 
large enough to hold a "reasonable" amount of message header stuff (and perhaps 
some body as well) and only use qd_buffer_t's when the message size goes beyond 
that threshold?

It's Monday and I can't confidently say my caffeine/blood ratio is at the 
recommended level, so I'm probably overlooking some dreadfully obvious detail 
that makes it infeasible - caveat emptor.

 




[jira] [Closed] (DISPATCH-1349) qd_buffer_list_clone is wasting dst buffers available capacity

2019-06-03 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed DISPATCH-1349.
-
Resolution: Not A Problem

The current usage context of this method makes it impossible to waste dst 
space.

> qd_buffer_list_clone is wasting dst buffers available capacity
> --
>
> Key: DISPATCH-1349
> URL: https://issues.apache.org/jira/browse/DISPATCH-1349
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Priority: Minor
>
> qd_buffer_list_clone does not completely fill the destination buffers' 
> available space (if any) after the first memcpy, allocating unnecessary 
> qd_buffer(s).






[jira] [Commented] (PROTON-2056) [proton-python] on_settled callback not called when disposition arrives in 2 frames

2019-06-03 Thread Robbie Gemmell (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854499#comment-16854499
 ] 

Robbie Gemmell commented on PROTON-2056:


{quote}I think you are confusing sender and receiver semantics.
{quote}
I'm not.
{quote}For a sender, receiving a terminal state disposition is notification 
that it can forget all about the delivery (in other words settle). There is 
nothing further it can do irrespective of the delivery mode. So even though 
this case is presumably at-least-once, the very fact that we've received a 
terminal state update means we can forget the delivery even though the receiver 
has not actually told us it has settled yet.
{quote}
For non-transacted at-least-once cases I can agree that in practical terms 
that's likely effectively true. However it's still going against what the 
protocol describes (settlement is a separate thing) plus what the link has been 
negotiated as doing (receiver settles first), and with no real benefit that I 
can see - just adding the requirement for a more complicated local 
implementation, without real need, that might have side effects.
{quote}So presuming that the purpose of the on_settled callback is for 
application delivery state management, the bug here is that the on_settled 
callback isn't happening when the sender receives the terminal state update.
{quote}
I presume the on_settled callback is for notification of the delivery being 
settled by the receiver, which, if it hasn't yet been, means it shouldn't be 
called. If the app explicitly settles it locally then it also shouldn't be 
called (since we just said we already forgot about it, so what good would that 
be?).

If we start 'making up' local settlements upon receiving unsettled terminal 
states we would then have to distinguish these implicit ones from explicit 
local settlements, which seems like unnecessary complication.
{quote}As for transaction state updates - they aren't terminal state updates in 
general (unless I misunderstand them) and so receiving an update for 
transaction state shouldn't have any implication for settlement.
{quote}
Transacted deliveries tend never to have a terminal state directly applied, 
since the to-be-effective outcome is conveyed inside a non-terminal 
transactional state so that the transaction it relates to can be identified. 
The delivery's settlement plays into the effective behaviour of the state 
application in relation to discharge of the transaction and can significantly 
change the behaviour.

 

I continue to see no reason for adding such complication versus making the 
simple 'add indent' change, which apparently fixes the issue without having the 
sender effectively violate the protocol in the rare cases where Dispatch does 
the multiple-disposition (which could be avoided by fixing the other actual 
bug), and without needing to make distinctions between terminal and 
non-terminal outcomes, or explicit and implicit local settlements, or 
considering any unintended impact on other areas such as transactions.

> [proton-python]  on_settled callback not called when disposition arrives in 2 
> frames
> 
>
> Key: PROTON-2056
> URL: https://issues.apache.org/jira/browse/PROTON-2056
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-c, python-binding
>Affects Versions: proton-c-0.28.0
>Reporter: Ganesh Murthy
>Priority: Major
> Attachments: proton-2056.patch
>
>
> When very large anonymous messages are sent to the router and these messages 
> have no receiver, they are immediately released. The router waits for the 
> entire large message to arrive in the router before settling it. Due to this, 
> in some cases, two disposition frames are sent for the same delivery: the 
> first has state=released and the second has settled=true, as seen below
>  
> {noformat}
> 0x56330c891430]:0 <- @disposition(21) [role=true, first=315, 
> state=@released(38) []]
> [0x56330c891430]:0 <- @disposition(21) [role=true, first=315, settled=true, 
> state=@released(38) []]{noformat}
>  
> When this happens, on_settled is not called for the python binding. 
> on_released is called. on_settled must be called when a settlement 
> arrives for every delivery. I observed this behavior in a python system test 
> in Dispatch Router. The test, called
> test_51_anon_sender_mobile_address_large_msg_edge_to_edge_two_interior, can 
> be found in tests/system_tests_edge_router.py.
> The test does not fail all the time but when it does it is due to the 
> on_settled not being called for deliveries that have this two part 
> disposition.
>  
> I tried in vain to write a standalone python reproducer. I could not do it.
>  
> To run the specific system test run the following from the 
> qpid-dispatch/build 

[jira] [Updated] (DISPATCH-1348) Avoid qdr_error_t allocation if not necessary

2019-06-03 Thread Ted Ross (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Ross updated DISPATCH-1348:
---
Summary: Avoid qdr_error_t allocation if not necessary  (was: Save 
qdr_error_t allocation if not necessary)

> Avoid qdr_error_t allocation if not necessary
> -
>
> Key: DISPATCH-1348
> URL: https://issues.apache.org/jira/browse/DISPATCH-1348
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.8.0
>
>
> qdr_error_from_pn in error.c allocates qdr_error_t on the hot path (i.e. in 
> AMQP_disposition_handler): avoiding those allocations would reduce CPU usage 
> (and cache misses) on both core and worker threads, making the router able to 
> scale better while under load.
> Initial tests have shown some improvements under load (i.e. the core CPU 
> thread at ~97% with the new version vs ~99% with master):
> 12 pairs master (no lock-free queues, no qdr_error_t fix): 285 K msg/sec
> 12 pairs master (no lock-free queues, yes qdr_error_t fix): 402 K msg/sec
> 12 pairs lock-free q (no qdr_error_t fix):  311 K msg/sec
> 12 pairs lock-free q (yes qdr_error_t fix):  510 K msg/sec






[jira] [Closed] (DISPATCH-1348) Save qdr_error_t allocation if not necessary

2019-06-03 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro closed DISPATCH-1348.
-




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854410#comment-16854410
 ] 

Francesco Nigro commented on DISPATCH-1352:
---

[~ganeshmurthy] [~tedross] [~kgiusti] 
I see different solutions to this issue:
* embed some nodes of qd_buffer_list_t, up to a specified depth, directly into 
qd_message to save allocating/freeing them and the pointer chasing
* make qd_buffer_t able to create reference-counted slices sharing the original 
content (but with exclusive fanout, next, prev, and size fields) so the cloned 
qd_buffer_list_t could contain qd_buffer_t nodes whose content is owned by the 
original list and won't be freed until the last reference to them is 
gone

wdyt?




[jira] [Commented] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854399#comment-16854399
 ] 

Francesco Nigro commented on DISPATCH-1352:
---

 !screenshot-1.png! 

This flame graph show that qd_alloc and qd_buffer_size respectively on the dst 
and src qd_buffer_list_t's qd_buffer_t elements are a dominant factors that 
steal precious cpu time.

The overall picture of this is (in violet):
 !screenshot-2.png! 

It shows that a non maxed out core thread spent a significant amount of cpu 
resources to perform such copy: further analysis has shown that this time is 
not actually spent doing somthing, but waiting cache misses.





[jira] [Updated] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated DISPATCH-1352:
--
Attachment: screenshot-2.png




[jira] [Updated] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated DISPATCH-1352:
--
Attachment: screenshot-1.png




[jira] [Updated] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated DISPATCH-1352:
--
Description: 
qd_buffer_list_clone on qd_message_copy for 
qd_message_pvt_t.ma_to_override/ma_trace/ma_ingress is dominated by cache 
miss costs:
* to "allocate" new qd_buffer_t
* to reference any qd_buffer_t from the source qd_buffer_list_t

Such cost is the main reason the core thread has a very low IPC (< 1 
instr/cycle); given the single-threaded nature of the router, solving it 
will bring a huge performance improvement and let the router scale better.

  was:
qd_buffer_list_clone on qd_message_copy for 
qd_message_pvt_t.ma_to_override/ma_trace/ma_ingress is dominated by cache 
misses costs:
* to "allocate" new qd_buffer_t
* to reference any qd_buffer_t from the source qd_buffer_list_t

Such cost is the main reason why the core thread is having a very low IPC (< 1 
instr/cycle) and, given the single threaded nature of the router while dealing 
with it, solving it will bring a huge performance improvement to make the 
router able to scale better.


> qd_buffer_list_clone cost is dominated by cache misses
> --
>
> Key: DISPATCH-1352
> URL: https://issues.apache.org/jira/browse/DISPATCH-1352
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Priority: Major
> Attachments: screenshot-1.png
>
>
> qd_buffer_list_clone on qd_message_copy for 
> qd_message_pvt_t.ma_to_override/ma_trace/ma_ingress is dominated by 
> cache-miss costs:
> * to "allocate" new qd_buffer_t
> * to reference each qd_buffer_t from the source qd_buffer_list_t
> This cost is the main reason the core thread runs at a very low IPC (< 1 
> instruction/cycle). Given the single-threaded nature of the router core, 
> solving it will bring a large performance improvement and let the router 
> scale better.






[jira] [Created] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)
Francesco Nigro created DISPATCH-1352:
-

 Summary: qd_buffer_list_clone cost is dominated by cache misses
 Key: DISPATCH-1352
 URL: https://issues.apache.org/jira/browse/DISPATCH-1352
 Project: Qpid Dispatch
  Issue Type: Improvement
Affects Versions: 1.7.0
Reporter: Francesco Nigro


qd_buffer_list_clone on qd_message_copy for 
qd_message_pvt_t.ma_to_override/ma_trace/ma_ingress is dominated by cache-miss 
costs:
* to "allocate" new qd_buffer_t
* to reference each qd_buffer_t from the source qd_buffer_list_t

This cost is the main reason the core thread runs at a very low IPC (< 1 
instruction/cycle). Given the single-threaded nature of the router core, 
solving it will bring a large performance improvement and let the router 
scale better.
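One direction that avoids both listed costs is to share buffers by reference count instead of copying them, so a "clone" does no allocation and no payload memcpy. This is a hypothetical sketch, not the ticket's proposal: the `rbuf_t` type, field names, and 512-byte capacity are illustrative stand-ins for the real qd_buffer_t API. Note it still touches each buffer's header cache line once per share.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for qd_buffer_t; the real qpid-dispatch type
 * and API differ. Illustrates reference-counted sharing only. */
typedef struct rbuf {
    struct rbuf  *next;
    int           refcount;
    size_t        size;                /* bytes used */
    unsigned char data[512];
} rbuf_t;

static rbuf_t *rbuf_new(void)
{
    rbuf_t *b = calloc(1, sizeof(rbuf_t));   /* assume success in this sketch */
    b->refcount = 1;
    return b;
}

/* "Clone" a list with one refcount bump per buffer: no allocation,
 * no memcpy of payload bytes. Both owners see the same read-only data. */
static rbuf_t *list_share(rbuf_t *head)
{
    for (rbuf_t *b = head; b; b = b->next)
        b->refcount++;
    return head;
}

/* Release one owner's reference; free the buffer when the last goes. */
static void rbuf_release(rbuf_t *b)
{
    if (--b->refcount == 0)
        free(b);
}
```

This only works because the cloned annotation buffers are treated as read-only after the copy; any writer would still need a private copy (copy-on-write).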






[jira] [Updated] (DISPATCH-1352) qd_buffer_list_clone cost is dominated by cache misses

2019-06-03 Thread Francesco Nigro (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francesco Nigro updated DISPATCH-1352:
--
Component/s: Routing Engine

> qd_buffer_list_clone cost is dominated by cache misses
> --
>
> Key: DISPATCH-1352
> URL: https://issues.apache.org/jira/browse/DISPATCH-1352
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Priority: Major
>
> qd_buffer_list_clone on qd_message_copy for 
> qd_message_pvt_t.ma_to_override/ma_trace/ma_ingress is dominated by 
> cache-miss costs:
> * to "allocate" new qd_buffer_t
> * to reference each qd_buffer_t from the source qd_buffer_list_t
> This cost is the main reason the core thread runs at a very low IPC (< 1 
> instruction/cycle). Given the single-threaded nature of the router core, 
> solving it will bring a large performance improvement and let the router 
> scale better.






[jira] [Commented] (DISPATCH-1349) qd_buffer_list_clone is wasting dst buffers available capacity

2019-06-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854380#comment-16854380
 ] 

ASF GitHub Bot commented on DISPATCH-1349:
--

franz1981 commented on issue #514: DISPATCH-1349 qd_buffer_list_clone is 
wasting dst buffers available capacity
URL: https://github.com/apache/qpid-dispatch/pull/514#issuecomment-498169206
 
 
   @ganeshmurthy totally agree: I can close this one and will open another 
issue so we can discuss a better solution to the original problem, i.e. 
qd_buffer_list_clone has to be improved 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> qd_buffer_list_clone is wasting dst buffers available capacity
> --
>
> Key: DISPATCH-1349
> URL: https://issues.apache.org/jira/browse/DISPATCH-1349
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Priority: Minor
>
> qd_buffer_list_clone does not completely fill the destination buffers' 
> available space (if any) after the first memcpy, allocating unnecessary 
> qd_buffer(s).
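The packing behavior the ticket asks for can be sketched as follows. This is a minimal illustration under assumed types, not the real qd_buffer API: `buf_t`, `BUF_CAPACITY`, and the function names are hypothetical. The clone keeps appending into the current destination buffer until it is full, and only then allocates a new one, instead of allocating one destination buffer per source buffer.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for qd_buffer_t; sizes and names are illustrative. */
#define BUF_CAPACITY 512

typedef struct buf {
    struct buf   *next;
    size_t        size;                 /* bytes used */
    unsigned char data[BUF_CAPACITY];
} buf_t;

static buf_t *buf_new(void)
{
    return calloc(1, sizeof(buf_t));    /* assume success in this sketch */
}

/* Clone src into *dst, packing each destination buffer to full capacity
 * before allocating the next one. Returns total bytes copied. */
static size_t list_clone_packed(buf_t **dst, const buf_t *src)
{
    size_t total = 0;
    buf_t *tail = NULL;

    for (; src; src = src->next) {
        size_t off = 0;
        while (off < src->size) {
            /* Allocate a new destination buffer only when the current
             * one is missing or completely full. */
            if (!tail || tail->size == BUF_CAPACITY) {
                buf_t *nb = buf_new();
                if (tail)
                    tail->next = nb;
                else
                    *dst = nb;
                tail = nb;
            }
            size_t room = BUF_CAPACITY - tail->size;
            size_t n    = src->size - off;
            if (n > room)
                n = room;
            memcpy(tail->data + tail->size, src->data + off, n);
            tail->size += n;
            off        += n;
            total      += n;
        }
    }
    return total;
}
```

With this layout, two half-full source buffers collapse into a single destination buffer rather than two, halving the allocations for that case.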






[GitHub] [qpid-dispatch] franz1981 commented on issue #514: DISPATCH-1349 qd_buffer_list_clone is wasting dst buffers available capacity

2019-06-03 Thread GitBox
franz1981 commented on issue #514: DISPATCH-1349 qd_buffer_list_clone is 
wasting dst buffers available capacity
URL: https://github.com/apache/qpid-dispatch/pull/514#issuecomment-498169206
 
 
   @ganeshmurthy totally agree: I can close this one and will open another 
issue so we can discuss a better solution to the original problem, i.e. 
qd_buffer_list_clone has to be improved 




With regards,
Apache Git Services




[jira] [Commented] (DISPATCH-1349) qd_buffer_list_clone is wasting dst buffers available capacity

2019-06-03 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16854381#comment-16854381
 ] 

ASF GitHub Bot commented on DISPATCH-1349:
--

franz1981 commented on pull request #514: DISPATCH-1349 qd_buffer_list_clone is 
wasting dst buffers available capacity
URL: https://github.com/apache/qpid-dispatch/pull/514
 
 
   
 



> qd_buffer_list_clone is wasting dst buffers available capacity
> --
>
> Key: DISPATCH-1349
> URL: https://issues.apache.org/jira/browse/DISPATCH-1349
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Priority: Minor
>
> qd_buffer_list_clone does not completely fill the destination buffers' 
> available space (if any) after the first memcpy, allocating unnecessary 
> qd_buffer(s).






[GitHub] [qpid-dispatch] franz1981 closed pull request #514: DISPATCH-1349 qd_buffer_list_clone is wasting dst buffers available capacity

2019-06-03 Thread GitBox
franz1981 closed pull request #514: DISPATCH-1349 qd_buffer_list_clone is 
wasting dst buffers available capacity
URL: https://github.com/apache/qpid-dispatch/pull/514
 
 
   



