[jira] [Commented] (PROTON-1998) [Proton-J] Add SASL protocol trace

2019-02-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758163#comment-16758163
 ] 

ASF GitHub Bot commented on PROTON-1998:


gemmellr commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459672396
 
 
   Though I may have at the outset, I don't think I'd go this way with 
adjusting TransportFrame at this point.
   
   Given that the ProtocolTracer adds new methods to convey SASL details, and 
the non-tracer output already handles SASL output separately, I think the 
original bits are best left as they are, with a simple addition made for the 
SASL bits (e.g. a new package-private transport log method). SASL frames also 
don't have a channel or payload, so in this case a TransportFrame object would 
only ever be carrying the performative; the new ProtocolTracer methods could 
just pass that directly.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Proton-J] Add SASL protocol trace
> --
>
> Key: PROTON-1998
> URL: https://issues.apache.org/jira/browse/PROTON-1998
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Reporter: Keith Wall
>Priority: Minor
>
> Unlike Proton, Proton-J does not provide a SASL frame trace when the 
> environment variable PN_TRACE_FRM is set.  It would be useful if Proton-J had 
> this ability too, to help diagnose SASL negotiation problems.
> Proton's SASL frame trace looks like this:
> {code:java}
> [0x7fc112c03a00]: -> SASL
> [0x7fc112c03a00]: <- SASL
> [0x7fc112c03a00]:0 <- @sasl-mechanisms(64) 
> [sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
> [0x7fc112c03a00]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
> initial-response=b"guest@Oslo.local"]
> [0x7fc112c03a00]:0 <- @sasl-outcome(68) [code=0]
> [0x7fc112c03a00]: -> AMQP
> [0x7fc112c03a00]:0 -> @open(16) 
> [container-id="be777c26-af6e-4935-a6be-316cc8bbdb35", hostname="127.0.0.1", 
> channel-max=32767]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] jdanekrh opened a new pull request #174: NO-JIRA: [c] Fix typo in sizeof (wrong, bigger, type used)

2019-02-01 Thread GitBox
jdanekrh opened a new pull request #174: NO-JIRA: [c] Fix typo in sizeof 
(wrong, bigger, type used)
URL: https://github.com/apache/qpid-proton/pull/174
 
 
   This is a clang-analyzer warning, and it does look suspicious. Consider this 
more of a suspected bug report (the diff makes it easier to explain than a 
verbal description) than a fix.
   
   > Result of 'calloc' is converted to a pointer of type 'pn_netaddr_t', which 
is incompatible with sizeof operand type 'lsocket_t'
   
   CC @alanconway @astitcher 
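
   For context, the class of issue the analyzer flags looks roughly like the 
sketch below. The struct names are made up (small_t/big_t merely stand in for 
the roles of pn_netaddr_t/lsocket_t); this is illustrative only, not the 
actual proton code.
{code:c++}
#include <cstdlib>

struct small_t { int fd; };          // hypothetical, plays the role of pn_netaddr_t
struct big_t   { char addr[128]; };  // hypothetical, plays the role of lsocket_t

int main() {
    // Suspicious: the allocation is sized by one type but the result is used
    // as another, which is what produces the analyzer warning quoted above.
    small_t* s = static_cast<small_t*>(std::calloc(1, sizeof(big_t)));

    // Likely intended form: size the allocation by the pointee type itself.
    small_t* ok = static_cast<small_t*>(std::calloc(1, sizeof(*ok)));

    std::free(ok);
    std::free(s);
    return 0;
}
{code}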


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] gemmellr commented on issue #30: PROTON-1998: Add SASL protocol trace

2019-02-01 Thread GitBox
gemmellr commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459672396
 
 
   Though I may have at the outset, I don't think I'd go this way with 
adjusting TransportFrame at this point.
   
   Given that the ProtocolTracer adds new methods to convey SASL details, and 
the non-tracer output already handles SASL output separately, I think the 
original bits are best left as they are, with a simple addition made for the 
SASL bits (e.g. a new package-private transport log method). SASL frames also 
don't have a channel or payload, so in this case a TransportFrame object would 
only ever be carrying the performative; the new ProtocolTracer methods could 
just pass that directly.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1998) [Proton-J] Add SASL protocol trace

2019-02-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758390#comment-16758390
 ] 

ASF GitHub Bot commented on PROTON-1998:


k-wall commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459752253
 
 
   Thanks for the feedback, I'll try the suggestions.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Proton-J] Add SASL protocol trace
> --
>
> Key: PROTON-1998
> URL: https://issues.apache.org/jira/browse/PROTON-1998
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Reporter: Keith Wall
>Priority: Minor
>
> Unlike Proton, Proton-J does not provide a SASL frame trace when the 
> environment variable PN_TRACE_FRM is set.  It would be useful if Proton-J had 
> this ability too, to help diagnose SASL negotiation problems.
> Proton's SASL frame trace looks like this:
> {code:java}
> [0x7fc112c03a00]: -> SASL
> [0x7fc112c03a00]: <- SASL
> [0x7fc112c03a00]:0 <- @sasl-mechanisms(64) 
> [sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
> [0x7fc112c03a00]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
> initial-response=b"guest@Oslo.local"]
> [0x7fc112c03a00]:0 <- @sasl-outcome(68) [code=0]
> [0x7fc112c03a00]: -> AMQP
> [0x7fc112c03a00]:0 -> @open(16) 
> [container-id="be777c26-af6e-4935-a6be-316cc8bbdb35", hostname="127.0.0.1", 
> channel-max=32767]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] k-wall commented on issue #30: PROTON-1998: Add SASL protocol trace

2019-02-01 Thread GitBox
k-wall commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459752253
 
 
   Thanks for the feedback, I'll try the suggestions.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1999) [c] Crash in pn_connection_finalize

2019-02-01 Thread Jeremy (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758319#comment-16758319
 ] 

Jeremy commented on PROTON-1999:


Hello [~ODelbeke] and [~cliffjansen],

[~ODelbeke]: Before going into my analysis, can you please attach the gdb 
stacks for the other threads as well? Specifically, what is happening in the 
main thread.

In fact, we are facing the same randomness problem, even though we are using a 
pointer to a work queue. I've been debugging it for a couple of days now, and I 
suspect the problem comes from proton's memory management. When we don't have 
exceptions, everything runs smoothly. As soon as we start having exceptions, we 
start having segfaults. On proton container errors, we stop the container and 
join the thread, and in the meantime the main thread propagates the proton 
error by throwing it as an exception (interrupting the normal flow and rolling 
back). We took care of ensuring the following order of construction/destruction 
of proton objects through a RAII object we created (a rough sketch follows the 
lists below):

Construction:
 * Create the handler
 * Create the container
 * Run the container in a new thread (we only call run in the new thread)
 * Use the handler, which can store proton objects (sender, receiver, trackers, 
and pointers to deliveries)

Destruction:
 * Release stored proton objects from the handler (sender.close(), 
receiver.close(), empty the queue of trackers and deliveries)
 * Join the thread, meaning, wait for the run method to exit
 * Destroy the container
 * Destroy the handler
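
Roughly, the RAII wrapper has the following shape (an illustrative sketch, not 
our actual code; the handler is a stub here, and release() stands in for 
closing the sender/receiver and dropping stored trackers and deliveries):
{code:c++}
#include <proton/container.hpp>
#include <proton/messaging_handler.hpp>
#include <thread>

// Stub handler; the real one stores the sender, receiver, trackers, etc.
struct stub_handler : proton::messaging_handler {
    void release() { /* close sender/receiver, clear trackers and deliveries */ }
};

class scoped_container {
    stub_handler handler_;          // 1. create the handler
    proton::container container_;   // 2. create the container
    std::thread run_thread_;        // 3. run() is only ever called on this thread
public:
    scoped_container()
        : container_(handler_),
          run_thread_([this] { container_.run(); }) {}

    stub_handler& handler() { return handler_; }  // 4. use the handler

    ~scoped_container() {
        handler_.release();                             // release stored proton objects
        container_.stop();                              // ask run() to return (in practice the error path stops it)
        if (run_thread_.joinable()) run_thread_.join(); // wait for run() to exit
        // container_ and handler_ are then destroyed, in reverse declaration order
    }
};

int main() {
    scoped_container sc;  // the teardown order above is enforced by the destructor
    return 0;
}
{code}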

Even then, the segfaults persisted.

Scenario:

We have 3 threads: the Main thread, the proton container thread for the sender, 
and a proton container thread for a broker.

In the proton handler, we have a send message method which looks just like the 
above examples, with the additional twist that our send message can throw an 
exception in the Main thread. We want to keep the tracker for further 
processing later. The code looks like this:
{code}
void SenderHandler::send(proton::message m)
{
...
   std::promise<proton::tracker> messageWillBeSent;
   m_senderWorkQueue->add([&]{
  messageWillBeSent.set_value(m_sender.send(m_messageToSend));
   });
   auto tracker = messageWillBeSent.get_future().get();

   waitForTrackerSettle(timeout); // checks for errors in proton, and throws an 
exception if an error did occur in proton
}
{code}
 In our case, we are simulating a problem with the broker. Therefore, the send 
will take an exception in the waitForTrackerSettle method.

The main thread will start to unwind, starting with the destruction of the 
tracker. The proton container thread, which took an error and propagated it to 
the main thread, was in the meantime finalizing the run method and exiting its 
thread. Both threads are manipulating reference counts of objects, and I 
suspect a race condition. Taking a look at the reference counting mechanism in 
proton 
([object.c|https://github.com/apache/qpid-proton/blob/0.26.0/c/src/core/object/object.c]),
 I see that the operations on reference counters are not atomic. In C++, 
shared_ptr reference counter operations are known to be atomic 
([shared_ptr_base.h|https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/shared_ptr_base.h]).
 I strongly suspect that this is not safe.

We get these cores randomly, along with stacks that look exactly like the one 
you attached (with the main thread waiting on the thread.join()). Replying to 
[~ODelbeke]: "However, I still don't really understand why it solves the 
problem." We noticed that the smallest change in the code results in a 
different stack (sometimes the destructor of the connection, other times the 
destructor of trackers, senders, ...). I'm not sure the result you're getting 
now is not random.

[~cliffjansen] I think you might better understand the inner workings of 
proton's memory management model. Were race conditions on the reference 
counter factored into the design of proton's memory management?

I will be testing a proton patch locally that substitutes proton's plain int 
reference counter with std::atomic.
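
To make the suspicion concrete, here is a minimal sketch of the difference 
between the two counting schemes (illustrative only, assuming the counter in 
object.c behaves like a plain int; this is not the actual patch):
{code:c++}
#include <atomic>
#include <thread>

struct plain_counted {
    int refcount = 2;               // non-atomic, like the suspected counter in object.c
    void decref() {
        // Read-modify-write on a plain int is not atomic: two threads can both
        // read 2 and both write 1, so the object is never freed (or, under
        // other interleavings, freed twice). Shown for illustration only.
        if (--refcount == 0) delete this;
    }
};

struct atomic_counted {
    std::atomic<int> refcount{2};   // the substitution being tested locally
    void decref() {
        // fetch_sub is atomic, so exactly one thread observes the count
        // dropping from 1 to 0 and frees the object.
        if (refcount.fetch_sub(1, std::memory_order_acq_rel) == 1) delete this;
    }
};

int main() {
    auto* a = new atomic_counted();
    std::thread t1([a] { a->decref(); });
    std::thread t2([a] { a->decref(); });
    t1.join();
    t2.join();
    return 0;
}
{code}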

> [c] Crash in pn_connection_finalize
> ---
>
> Key: PROTON-1999
> URL: https://issues.apache.org/jira/browse/PROTON-1999
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding, proton-c
>Affects Versions: proton-c-0.26.0
> Environment: Linux 64-bits (Ubuntu 16.04 and Oracle Linux 7.4)
>Reporter: Olivier Delbeke
>Assignee: Cliff Jansen
>Priority: Major
> Attachments: call_stack.txt, example2.cpp, log.txt, main.cpp, 
> run_qpid-broker.sh
>
>
> Here is my situation : I have several proton::containers (~20). 
> Each one has its own proton::messaging_handler, and handles one 
> proton::connection to a local qpid-broker (everything runs on the same 

[jira] [Resolved] (PROTON-1917) [proton-c] the c++ proton consumer with retry should continue to retry if virtual host not active

2019-02-01 Thread Jeremy (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy resolved PROTON-1917.

Resolution: Fixed

> [proton-c] the c++ proton consumer with retry should continue to retry if 
> virtual host not active
> -
>
> Key: PROTON-1917
> URL: https://issues.apache.org/jira/browse/PROTON-1917
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: cpp-binding
>Affects Versions: proton-c-0.22.0
>Reporter: Jeremy
>Priority: Major
>
> I have a running broker, and a configured queue containing messages.
> I also have a consumer, where I configured the max number of attempts to 0 
> (infinite retry).
> I kill the broker (ctrl-c) and start it on the same port.
>  Upon reconnection, I get the following error message randomly:
> {code:java}
> receive_with_retry on_connection_open
> receive_with_retry: on_error: amqp:not-found: Virtual host 'localhost' is not 
> active
> main: end{code}
> In the case where the consumer is able to connect, the consumer continues to 
> consume the messages normally.
> However, in the broker web interface, I see upon each re-connection, an extra 
> connection (same IP and port) to the queue, as if the old connection is not 
> killed. I wasn't expecting this behavior. This might be a separate issue.
> I was able to reproduce with the following code on Windows 7 (MSVC 12 2013):
> {code:java}
> class receive_with_retry : public proton::messaging_handler {
> private:
>std::string url;
>std::string queueName;
> public:
>receive_with_retry(const std::string& u, const std::string& q) : url(u), 
> queueName(q) {}
>void on_container_start(proton::container& c) override {
>   std::cout << "receive_with_retry on_container_start" << std::endl;
>   c.connect(
>  url,
>  proton::connection_options()
> .idle_timeout(proton::duration(2000))
> .reconnect(proton::reconnect_options()
> .max_attempts(0)
> .delay(proton::duration(3000))
> .delay_multiplier(1)
> .max_delay(proton::duration(3001))));
>}
>void on_connection_open(proton::connection& c) override {
>   std::cout << "receive_with_retry on_connection_open " << std::endl;
>   c.open_receiver(queueName, 
> proton::receiver_options().auto_accept(false));
>}
>void on_session_open(proton::session& session) override {
>   std::cout << "receive_with_retry on_session_open " << std::endl;
>}
>void on_receiver_open(proton::receiver& receiver) override {
>   std::cout << "receive_with_retry on_receiver_open " << std::endl;
>   receiver.open();
>}
>void on_message(proton::delivery& delivery, proton::message& message) 
> override {
>   std::cout << "receive_with_retry on_message " << message.body() << 
> std::endl;
>   // Can be used for throttling
>   // std::this_thread::sleep_for(std::chrono::milliseconds(100));
>   // commented out in order not to exit immediately, but continue on 
> consuming the messages.
>   // delivery.receiver().close();
>   // delivery.receiver().connection().close();
>}
>void on_transport_error(proton::transport& error) override {
>   std::cout << "receive_with_retry: on_transport_error: " << 
> error.error().what() << std::endl;
>   error.connection().close();
>}
>void on_error(const proton::error_condition& error) override {
>   std::cout << "receive_with_retry: on_error: " << error.what() << 
> std::endl;
>}
> };
> void receiveWithRetry(const std::string& url, const std::string& queueName){
>try {
>   std::cout << "main: start" << std::endl;
>   receive_with_retry receiveWithRetry(url, queueName);
>   proton::container(receiveWithRetry).run();
>   std::cout << "main: end" << std::endl;
>}
>catch (const std::exception& cause) {
>   std::cout << "main: caught exception: " << cause.what() << std::endl;
>}
> }
> int main() {
>try {
>   receiveWithRetry("amqp://localhost:5673", "test_queue");
>   return 0;
>}
>catch (const std::exception& e) {
>   std::cerr << e.what() << std::endl;
>}
>return 1;
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Comment Edited] (PROTON-1999) [c] Crash in pn_connection_finalize

2019-02-01 Thread Jeremy (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758319#comment-16758319
 ] 

Jeremy edited comment on PROTON-1999 at 2/1/19 1:55 PM:


Hello [~ODelbeke] and [~cliffjansen],

[~ODelbeke]: Before going into my analysis, can you please attach the gdb 
stacks for the other threads as well? Specifically, what is happening in the 
main thread.

In fact, we are facing the same randomness problem, even though we are using a 
pointer to a work queue. I've been debugging it for a couple of days now, and I 
suspect the problem comes from proton's memory management. When we don't have 
exceptions, everything runs smoothly. As soon as we start having exceptions, we 
start having segfaults. On proton container errors, we stop the container and 
join the thread, and in the meantime the main thread propagates the proton 
error by throwing it as an exception (interrupting the normal flow and rolling 
back). We took care of ensuring the following order of construction/destruction 
of proton objects through a RAII object we created:

Construction:
 * Create the handler
 * Create the container
 * Run the container in a new thread (we only call run in the new thread, to 
ensure the container is destroyed after the run finishes and the thread joins)
 * Use the handler, which can store proton objects (sender, receiver, trackers, 
and pointers to deliveries)

Destruction:
 * Release stored proton objects from the handler (sender.close(), 
receiver.close(), empty the queue of trackers and deliveries)
 * Join the thread, meaning, wait for the run method to exit
 * Destroy the container
 * Destroy the handler

Even then, the segfaults persisted.

Scenario:

We have 3 threads: the Main thread, the proton container thread for the sender, 
and a proton container thread for a broker.

In the proton handler, we have a send message method which looks just like the 
above examples, with the additional twist that our send message can throw an 
exception in the Main thread. We want to keep the tracker for further 
processing later. The code looks like this:
{code:java}
void SenderHandler::send(proton::message m)
{
...
   std::promise<proton::tracker> messageWillBeSent;
   m_senderWorkQueue->add([&]{
  messageWillBeSent.set_value(m_sender.send(m_messageToSend));
   });
   auto tracker = messageWillBeSent.get_future().get();

   waitForTrackerSettle(timeout); // checks for errors in proton, and throws an 
exception if an error did occur in proton
}
{code}
 In our case, we are simulating a problem with the broker. Therefore, the send 
will take an exception in the waitForTrackerSettle method.

The main thread will start to unwind, starting with the destruction of the 
tracker. The proton container thread, which took an error and propagated it to 
the main thread, was in the meantime finalizing the run method and exiting its 
thread. Both threads are manipulating reference counts of objects, and I 
suspect a race condition. Taking a look at the reference counting mechanism in 
proton 
([object.c|https://github.com/apache/qpid-proton/blob/0.26.0/c/src/core/object/object.c]),
 I see that the operations on reference counters are not atomic. In C++, 
shared_ptr reference counter operations are known to be atomic 
([shared_ptr_base.h|https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/shared_ptr_base.h]).
 I strongly suspect that this is not safe.

We get these cores randomly, along with stacks that look exactly like the one 
you attached (with the main thread waiting on the thread.join()). Replying to 
[~ODelbeke]: "However, I still don't really understand why it solves the 
problem." We noticed that the smallest change in the code results in a 
different stack (sometimes the destructor of the connection, other times the 
destructor of trackers, senders, ...). I'm not sure the result you're getting 
now is not random.

[~cliffjansen] I think you might better understand the inner workings of 
proton's memory management model. Were race conditions on the reference 
counter factored into the design of proton's memory management?

I will be testing a proton patch locally that substitutes proton's plain int 
reference counter with std::atomic.


was (Author: jeremy.aouad):
Hello [~ODelbeke] and [~cliffjansen],

[~ODelbeke]: Before going into my analysis, can you please attach the gdb 
stacks for the other threads as well? Specifically, what is happening in the 
main thread.

In fact, we are facing the same randomness problem, even though we are using a 
pointer to a work queue. I've been debugging it for a couple of days now, and I 
suspect the problem comes from proton's memory management. When we don't have 
exceptions, everything runs smoothly. As soon as we start having exceptions we 
start having segfaults. On proton container errors, we stop the container and 
join the thread, and in the mean time, the main 

[jira] [Comment Edited] (PROTON-1999) [c] Crash in pn_connection_finalize

2019-02-01 Thread Jeremy (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758319#comment-16758319
 ] 

Jeremy edited comment on PROTON-1999 at 2/1/19 1:59 PM:


Hello [~ODelbeke] and [~cliffjansen],

[~ODelbeke]: Before going into my analysis, can you please attach the gdb 
stacks for the other threads as well? Specifically, what is happening in the 
main thread.

In fact, we are facing the same randomness problem, even though we are using a 
pointer to a work queue. I've been debugging it for a couple of days now, and I 
suspect the problem comes from proton's memory management. When we don't have 
exceptions, everything runs smoothly. As soon as we start having exceptions, we 
start having segfaults. On proton container errors, we stop the container and 
join the thread, and in the meantime the main thread propagates the proton 
error by throwing it as an exception (interrupting the normal flow and rolling 
back). We took care of ensuring the following order of construction/destruction 
of proton objects through a RAII object we created:

Construction:
 * Create the handler
 * Create the container
 * Run the container in a new thread (we only call run in the new thread, to 
ensure the container is destroyed after the run finishes and the thread joins)
 * Use the handler, which can store proton objects (sender, receiver, trackers, 
and pointers to deliveries)

Destruction:
 * Release stored proton objects from the handler (sender.close(), 
receiver.close(), empty the queue of trackers and deliveries)
 * Join the thread, meaning, wait for the run method to exit
 * Destroy the container
 * Destroy the handler

Even then, the segfaults persisted.

Scenario:

We have 3 threads: the Main thread, the proton container thread for the sender, 
and a proton container thread for a broker.

In the proton handler, we have a send message method which looks just like the 
above examples, with the additional twist that our send message can throw an 
exception in the Main thread. We want to keep the tracker for further 
processing later. The code looks like this:
{code:c++}
void SenderHandler::send(proton::message m)
{
...
   std::promise<proton::tracker> messageWillBeSent;
   m_senderWorkQueue->add([&]{
  messageWillBeSent.set_value(m_sender.send(m_messageToSend));
   });
   auto tracker = messageWillBeSent.get_future().get();

   waitForTrackerSettle(timeout); // checks for errors in proton, and throws an 
exception if an error did occur in proton
}
{code}
 In our case, we are simulating a problem with the broker. Therefore, the send 
will take an exception in the waitForTrackerSettle method.

The main thread will start to unwind, starting with the destruction of the 
tracker. The proton container thread, which took an error and propagated it to 
the main thread, was in the meantime finalizing the run method and exiting its 
thread. Both threads are manipulating reference counts of objects, and I 
suspect a race condition. Taking a look at the reference counting mechanism in 
proton 
([object.c|https://github.com/apache/qpid-proton/blob/0.26.0/c/src/core/object/object.c]),
 I see that the operations on reference counters are not atomic. In C++, 
shared_ptr reference counter operations are known to be atomic 
([shared_ptr_base.h|https://github.com/gcc-mirror/gcc/blob/master/libstdc%2B%2B-v3/include/bits/shared_ptr_base.h]).
 I strongly suspect that this is not safe.

We get these cores randomly, along with stacks that look exactly like the one 
you attached (with the main thread waiting on the thread.join()). Replying to 
[~ODelbeke]: "However, I still don't really understand why it solves the 
problem." We noticed that the smallest change in the code results in a 
different stack (sometimes the destructor of the connection, other times the 
destructor of trackers, senders, ...). I'm not sure the result you're getting 
now is not random.

[~cliffjansen] I think you might better understand the inner workings of 
proton's memory management model. Were race conditions on the reference 
counter factored into the design of proton's memory management?

I will be testing a proton patch locally that substitutes proton's plain int 
reference counter with std::atomic.


was (Author: jeremy.aouad):
Hello [~ODelbeke] and [~cliffjansen],

[~ODelbeke]: Before going into my analysis, can you please attach the gdb 
stacks for the other threads as well? Specifically, what is happening in the 
main thread.

In fact, we are facing the same randomness problem, even though we are using a 
pointer to a work queue. I've been debugging it for a couple of days now, and I 
suspect the problem comes from proton's memory management. When we don't have 
exceptions, everything runs smoothly. As soon as we start having exceptions we 
start having segfaults. On proton container errors, we stop the container and 
join the thread, and in the mean time, the main 

[jira] [Commented] (PROTON-1998) [Proton-J] Add SASL protocol trace

2019-02-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758439#comment-16758439
 ] 

ASF GitHub Bot commented on PROTON-1998:


gemmellr commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459771487
 
 
   Yeah, it's a fair point, and I have actually held off doing this in the past 
precisely because I think so (and also because it couldn't be added fully 
without disruption to existing usage until we moved to Java 8).
   
   It would be more consistent with proton-c to do it, and it is a very 
specific trace-level tool that few should use without expecting such detail, 
which only shows detail that can already be viewed in other ways... but it may, 
as you say, be surprising that it starts happening from proton-j after all 
these years. Having a separate flag for it seems both reasonable in that 
regard, and annoying+inconsistent.
   
   Rock, hard place :)
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Proton-J] Add SASL protocol trace
> --
>
> Key: PROTON-1998
> URL: https://issues.apache.org/jira/browse/PROTON-1998
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Reporter: Keith Wall
>Priority: Minor
>
> Unlike Proton, Proton-J does not provide a SASL frame trace when the 
> environment variable PN_TRACE_FRM is set.  It would be useful if Proton-J had 
> this ability too, to help diagnose SASL negotiation problems.
> Proton's SASL frame trace looks like this:
> {code:java}
> [0x7fc112c03a00]: -> SASL
> [0x7fc112c03a00]: <- SASL
> [0x7fc112c03a00]:0 <- @sasl-mechanisms(64) 
> [sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
> [0x7fc112c03a00]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
> initial-response=b"guest@Oslo.local"]
> [0x7fc112c03a00]:0 <- @sasl-outcome(68) [code=0]
> [0x7fc112c03a00]: -> AMQP
> [0x7fc112c03a00]:0 -> @open(16) 
> [container-id="be777c26-af6e-4935-a6be-316cc8bbdb35", hostname="127.0.0.1", 
> channel-max=32767]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] gemmellr commented on issue #30: PROTON-1998: Add SASL protocol trace

2019-02-01 Thread GitBox
gemmellr commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459771487
 
 
   Yeah, it's a fair point, and I have actually held off doing this in the past 
precisely because I think so (and also because it couldn't be added fully 
without disruption to existing usage until we moved to Java 8).
   
   It would be more consistent with proton-c to do it, and it is a very 
specific trace-level tool that few should use without expecting such detail, 
which only shows detail that can already be viewed in other ways... but it may, 
as you say, be surprising that it starts happening from proton-j after all 
these years. Having a separate flag for it seems both reasonable in that 
regard, and annoying+inconsistent.
   
   Rock, hard place :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] rgodfrey commented on issue #30: PROTON-1998: Add SASL protocol trace

2019-02-01 Thread GitBox
rgodfrey commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459784499
 
 
   Personally I see it as a (longstanding) bug that it doesn't do the same 
thing as proton-c and surprising/(very very) annoying that SASL is omitted 
rather than it being surprising if it suddenly appears.
   
   Given that protocol tracing should only be on in development/debugging 
situations, I really don't see it as a problem (though I'd also be fine if the 
challenge/response parts were masked off by default... normally the things you 
want to see are the SASL mechanisms offered/selected and where (or whether) the 
exchange has hung).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1998) [Proton-J] Add SASL protocol trace

2019-02-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758484#comment-16758484
 ] 

ASF GitHub Bot commented on PROTON-1998:


rgodfrey commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459784499
 
 
   Personally I see it as a (longstanding) bug that it doesn't do the same 
thing as proton-c and surprising/(very very) annoying that SASL is omitted 
rather than it being surprising if it suddenly appears.
   
   Given that protocol tracing should only be on in development/debugging 
situations, I really don't see it as a problem (though I'd also be fine if the 
challenge/response parts were masked off by default... normally the things you 
want to see are the SASL mechanisms offered/selected and where (or whether) the 
exchange has hung).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Proton-J] Add SASL protocol trace
> --
>
> Key: PROTON-1998
> URL: https://issues.apache.org/jira/browse/PROTON-1998
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Reporter: Keith Wall
>Priority: Minor
>
> Unlike Proton, Proton-J does not provide a SASL frame trace when the 
> environment variable PN_TRACE_FRM is set.  It would be useful if Proton-J had 
> this ability too, to help diagnose SASL negotiation problems.
> Proton's SASL frame trace looks like this:
> {code:java}
> [0x7fc112c03a00]: -> SASL
> [0x7fc112c03a00]: <- SASL
> [0x7fc112c03a00]:0 <- @sasl-mechanisms(64) 
> [sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
> [0x7fc112c03a00]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
> initial-response=b"guest@Oslo.local"]
> [0x7fc112c03a00]:0 <- @sasl-outcome(68) [code=0]
> [0x7fc112c03a00]: -> AMQP
> [0x7fc112c03a00]:0 -> @open(16) 
> [container-id="be777c26-af6e-4935-a6be-316cc8bbdb35", hostname="127.0.0.1", 
> channel-max=32767]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1998) [Proton-J] Add SASL protocol trace

2019-02-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758427#comment-16758427
 ] 

ASF GitHub Bot commented on PROTON-1998:


tabish121 commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459768206
 
 
   I'd tend to agree with @gemmellr that the change should be more targeted and 
not alter the TransportFrame at this point.  Also, I'm wondering if we want to 
default to not logging SASL frames unless configured to, as it does expose data 
that was not exposed before, which could be surprising to some.  It probably 
isn't a huge deal, as the bytes can be seen anyway and possibly logged by other 
frameworks...
   
   I guess proton-c just logs all of it now, so being consistent isn't a 
terrible thing.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Proton-J] Add SASL protocol trace
> --
>
> Key: PROTON-1998
> URL: https://issues.apache.org/jira/browse/PROTON-1998
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Reporter: Keith Wall
>Priority: Minor
>
> Unlike Proton, Proton-J does not provide a SASL frame trace when the 
> environment variable PN_TRACE_FRM is set.  It would be useful if Proton-J had 
> this ability too, to help diagnose SASL negotiation problems.
> Proton's SASL frame trace looks like this:
> {code:java}
> [0x7fc112c03a00]: -> SASL
> [0x7fc112c03a00]: <- SASL
> [0x7fc112c03a00]:0 <- @sasl-mechanisms(64) 
> [sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
> [0x7fc112c03a00]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
> initial-response=b"guest@Oslo.local"]
> [0x7fc112c03a00]:0 <- @sasl-outcome(68) [code=0]
> [0x7fc112c03a00]: -> AMQP
> [0x7fc112c03a00]:0 -> @open(16) 
> [container-id="be777c26-af6e-4935-a6be-316cc8bbdb35", hostname="127.0.0.1", 
> channel-max=32767]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] tabish121 commented on issue #30: PROTON-1998: Add SASL protocol trace

2019-02-01 Thread GitBox
tabish121 commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459768206
 
 
   I'd tend to agree with @gemmellr that the change should be more targeted and 
not alter the TransportFrame at this point.  Also, I'm wondering if we want to 
default to not logging SASL frames unless configured to, as it does expose data 
that was not exposed before, which could be surprising to some.  It probably 
isn't a huge deal, as the bytes can be seen anyway and possibly logged by other 
frameworks...
   
   I guess proton-c just logs all of it now, so being consistent isn't a 
terrible thing.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1998) [Proton-J] Add SASL protocol trace

2019-02-01 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758490#comment-16758490
 ] 

ASF GitHub Bot commented on PROTON-1998:


k-wall commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459792031
 
 
   I'd personally prefer consistency with the proton implementation, that is, 
produce the complete frame trace including SASL.
   
   I'm neutral about inclusion of the clear-text challenge/response.  After 
all, if a user has the ability to set an environment variable, then he can 
equally set _JAVA_OPTIONS `-Djavax.net.debug=all` and get the deciphered wire 
bytes, which will reveal the same information.  If he cares, he should use 
SCRAM-SHA :).
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> [Proton-J] Add SASL protocol trace
> --
>
> Key: PROTON-1998
> URL: https://issues.apache.org/jira/browse/PROTON-1998
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Reporter: Keith Wall
>Priority: Minor
>
> Unlike Proton, Proton-J does not provide a SASL frame trace when the 
> environment variable PN_TRACE_FRM is set.  It would be useful if Proton-J had 
> this ability too, to help diagnose SASL negotiation problems.
> Proton's SASL frame trace looks like this:
> {code:java}
> [0x7fc112c03a00]: -> SASL
> [0x7fc112c03a00]: <- SASL
> [0x7fc112c03a00]:0 <- @sasl-mechanisms(64) 
> [sasl-server-mechanisms=@PN_SYMBOL[:ANONYMOUS]]
> [0x7fc112c03a00]:0 -> @sasl-init(65) [mechanism=:ANONYMOUS, 
> initial-response=b"guest@Oslo.local"]
> [0x7fc112c03a00]:0 <- @sasl-outcome(68) [code=0]
> [0x7fc112c03a00]: -> AMQP
> [0x7fc112c03a00]:0 -> @open(16) 
> [container-id="be777c26-af6e-4935-a6be-316cc8bbdb35", hostname="127.0.0.1", 
> channel-max=32767]{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] k-wall commented on issue #30: PROTON-1998: Add SASL protocol trace

2019-02-01 Thread GitBox
k-wall commented on issue #30: PROTON-1998: Add SASL protocol trace
URL: https://github.com/apache/qpid-proton-j/pull/30#issuecomment-459792031
 
 
   I'd personally prefer consistency with the proton implementation, that is, 
produce the complete frame trace including SASL.
   
   I'm neutral about inclusion of the clear-text challenge/response.  After 
all, if a user has the ability to set an environment variable, then he can 
equally set _JAVA_OPTIONS `-Djavax.net.debug=all` and get the deciphered wire 
bytes, which will reveal the same information.  If he cares, he should use 
SCRAM-SHA :).
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (DISPATCH-1260) Closing traffic animation doesn't always work

2019-02-01 Thread Ernest Allen (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ernest Allen resolved DISPATCH-1260.

Resolution: Fixed

> Closing traffic animation doesn't always work
> -
>
> Key: DISPATCH-1260
> URL: https://issues.apache.org/jira/browse/DISPATCH-1260
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Console
>Affects Versions: 1.5.0
>Reporter: Ernest Allen
>Assignee: Ernest Allen
>Priority: Major
>
> Starting a traffic animation on the console's topology page and then stopping 
> the animation doesn't always stop the animation.
>  
> If a request was pending when the animation was stopped, the traffic 
> animation will redraw when the response is received.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (DISPATCH-1260) Closing traffic animation doesn't always work

2019-02-01 Thread Ernest Allen (JIRA)
Ernest Allen created DISPATCH-1260:
--

 Summary: Closing traffic animation doesn't always work
 Key: DISPATCH-1260
 URL: https://issues.apache.org/jira/browse/DISPATCH-1260
 Project: Qpid Dispatch
  Issue Type: Bug
  Components: Console
Affects Versions: 1.5.0
Reporter: Ernest Allen
Assignee: Ernest Allen


Starting a traffic animation on the console's topology page and then stopping 
the animation doesn't always stop the animation.

 

If a request was pending when the animation was stopped, the traffic animation 
will redraw when the response is received.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1261) Builds failing on CentOS7

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758562#comment-16758562
 ] 

ASF subversion and git services commented on DISPATCH-1261:
---

Commit 0310f91fe587d1a7aea266ed0168918dddbdbed8 in qpid-dispatch's branch 
refs/heads/master from Ernest Allen
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=0310f91 ]

DISPATCH-1261 Add new paths to vendor files to fix CentOS7 builds


> Builds failing on CentOS7
> -
>
> Key: DISPATCH-1261
> URL: https://issues.apache.org/jira/browse/DISPATCH-1261
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Console
>Affects Versions: 1.5.0
>Reporter: Ernest Allen
>Assignee: Ernest Allen
>Priority: Major
>
> For the last couple of days, Dispatch has not built successfully on 
>  CentOS7 on the QIT CI machine[1]. The error first appeared on Jan 18. It 
>  appears to be a gulp issue. It works on Fedora, so it may be a 
>  difference between CentOS7 and Fedora 28.
>  
>  [1] [http://rhm-x3550-05.rhm.lab.eng.bos.redhat.com:8080/job/Build_Dispatch/]
>  
>  11:13:45 [11:13:45] Using gulpfile 
>  ~/workspace/Build_Dispatch/build/console/gulpfile.js
>  11:13:46 [11:13:46] Starting 'build'...
>  11:13:46 [11:13:46] Starting 'clean'...
>  11:13:46 [11:13:46] Finished 'clean' after 11 ms
>  11:13:46 [11:13:46] Starting 'lint'...
>  11:13:46 [11:13:46] Finished 'lint' after 40 ms
>  11:13:46 [11:13:46] Starting 'vendor_styles'...
>  11:13:46 [11:13:46] Starting 'vendor_scripts'...
>  11:13:46 [11:13:46] Starting 'styles'...
>  11:13:46 [11:13:46] 'vendor_scripts' errored after 58 ms
>  11:13:46 [11:13:46] Error: File not found with singular glob: 
>  
> /var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/d3-time/build/d3-time.min.js
>  
>  (if this was purposeful, use `allowEmpty` option)
>  11:13:46 at Glob. 
>  
> (/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob-stream/readable.js:84:17)
>  11:13:46 at Glob.g (events.js:292:16)
>  11:13:46 at emitOne (events.js:96:13)
>  11:13:46 at Glob.emit (events.js:188:7)
>  11:13:46 at Glob._finish 
>  
> (/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:197:8)
>  11:13:46 at done 
>  
> (/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:182:14)
>  11:13:46 at Glob._processSimple2 
>  
> (/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:688:12)
>  11:13:46 at 
>  
> /var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:676:10
>  11:13:46 at Glob._stat2 
>  
> (/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:772:12)
>  11:13:46 at lstatcb_ 
>  
> (/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:764:12)
>  11:13:46 [11:13:46] 'build' errored after 115 ms
>  11:13:46 [11:13:46] The following tasks did not complete: vendor_styles, 
>  styles
>  11:13:46 [11:13:46] Did you forget to signal async completion?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (DISPATCH-1260) Closing traffic animation doesn't always work

2019-02-01 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758540#comment-16758540
 ] 

ASF subversion and git services commented on DISPATCH-1260:
---

Commit 37e81fb68b1772514791d1645556abc72da2f935 in qpid-dispatch's branch 
refs/heads/master from Ernest Allen
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=37e81fb ]

DISPATCH-1260 Discard any pending requests when traffic animation is stopped


> Closing traffic animation doesn't always work
> -
>
> Key: DISPATCH-1260
> URL: https://issues.apache.org/jira/browse/DISPATCH-1260
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Console
>Affects Versions: 1.5.0
>Reporter: Ernest Allen
>Assignee: Ernest Allen
>Priority: Major
>
> Starting a traffic animation on the console's topology page and then stopping 
> the animation doesn't always stop the animation.
>  
> If a request was pending when the animation was stopped, the traffic 
> animation will redraw when the response is received.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (DISPATCH-1261) Builds failing on CentOS7

2019-02-01 Thread Ernest Allen (JIRA)
Ernest Allen created DISPATCH-1261:
--

 Summary: Builds failing on CentOS7
 Key: DISPATCH-1261
 URL: https://issues.apache.org/jira/browse/DISPATCH-1261
 Project: Qpid Dispatch
  Issue Type: Bug
  Components: Console
Affects Versions: 1.5.0
Reporter: Ernest Allen
Assignee: Ernest Allen


For the last couple of days, Dispatch has not built successfully on 
 CentOS7 on the QIT CI machine[1]. The error first appeared on Jan 18. It 
 appears to be a gulp issue. It works on Fedora, so it may be a 
 difference between CentOS7 and Fedora 28.
 
 [1] [http://rhm-x3550-05.rhm.lab.eng.bos.redhat.com:8080/job/Build_Dispatch/]
 
 11:13:45 [11:13:45] Using gulpfile 
 ~/workspace/Build_Dispatch/build/console/gulpfile.js
 11:13:46 [11:13:46] Starting 'build'...
 11:13:46 [11:13:46] Starting 'clean'...
 11:13:46 [11:13:46] Finished 'clean' after 11 ms
 11:13:46 [11:13:46] Starting 'lint'...
 11:13:46 [11:13:46] Finished 'lint' after 40 ms
 11:13:46 [11:13:46] Starting 'vendor_styles'...
 11:13:46 [11:13:46] Starting 'vendor_scripts'...
 11:13:46 [11:13:46] Starting 'styles'...
 11:13:46 [11:13:46] 'vendor_scripts' errored after 58 ms
 11:13:46 [11:13:46] Error: File not found with singular glob: 
 
/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/d3-time/build/d3-time.min.js
 
 (if this was purposeful, use `allowEmpty` option)
 11:13:46 at Glob. 
 
(/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob-stream/readable.js:84:17)
 11:13:46 at Glob.g (events.js:292:16)
 11:13:46 at emitOne (events.js:96:13)
 11:13:46 at Glob.emit (events.js:188:7)
 11:13:46 at Glob._finish 
 
(/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:197:8)
 11:13:46 at done 
 
(/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:182:14)
 11:13:46 at Glob._processSimple2 
 
(/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:688:12)
 11:13:46 at 
 
/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:676:10
 11:13:46 at Glob._stat2 
 
(/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:772:12)
 11:13:46 at lstatcb_ 
 
(/var/lib/jenkins/workspace/Build_Dispatch/build/console/node_modules/glob/glob.js:764:12)
 11:13:46 [11:13:46] 'build' errored after 115 ms
 11:13:46 [11:13:46] The following tasks did not complete: vendor_styles, 
 styles
 11:13:46 [11:13:46] Did you forget to signal async completion?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org