Dear All,

Sorry for sending so many emails!
It would be really helpful if someone from the community could suggest how to 
resolve this issue.

Live Test Results:

cloud@imstestserver:~/clearwater-live-test$ sudo rake test[hpn.com] SIGNUP_CODE=secret
Basic Call - Mainline (TCP) - (6505550786, 6505550816) Failed
Endpoint threw exception:
- sip:6505550...@cw-ngv.com timed out waiting for new incoming call
   - /home/cloud/clearwater-live-test/quaff/lib/endpoint.rb:68:in `rescue in 
incoming_call'
   - /home/cloud/clearwater-live-test/quaff/lib/endpoint.rb:65:in 
`incoming_call'
   - /home/cloud/clearwater-live-test/lib/tests/basic-call.rb:55:in `block (2 
levels) in <top (required)>'
Terminating other threads after failure
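
If it helps narrow things down, I can re-run just the failing test on its own. If I am reading the clearwater-live-test README correctly, the TESTS filter for that would look roughly like this:

cloud@imstestserver:~/clearwater-live-test$ sudo rake test[hpn.com] SIGNUP_CODE=secret TESTS="Basic Call - Mainline (TCP)"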

Bono Logs:

19-06-2018 11:20:18.666 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:20:33.677 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:20:40.115 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:20:40.115 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:20:40.115 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:20:40.115 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:20:48.678 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:21:03.687 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:21:10.115 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:21:10.115 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:21:10.115 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:21:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:21:18.700 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:21:33.715 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:21:40.116 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:21:40.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:21:40.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:21:40.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:21:48.724 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:22:03.730 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:22:10.116 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:22:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:22:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:22:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:22:18.736 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:22:33.751 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:22:40.116 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:22:40.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:22:40.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:22:40.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:22:48.763 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:23:03.774 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:23:10.116 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:23:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:23:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:23:10.116 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:23:18.781 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:23:31.700 UTC [7fc2cf4d4700] Status sip_connection_pool.cpp:428: 
Recycle TCP connection slot 23
19-06-2018 11:23:33.797 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:23:40.117 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:23:40.117 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:23:40.117 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:23:40.117 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
19-06-2018 11:23:48.808 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:24:03.818 UTC [7fc2ce4d2700] Warning (Net-SNMP): Warning: Failed 
to connect to the agentx master agent ([NIL]):
19-06-2018 11:24:10.117 UTC [7fc2eadf2700] Status alarm.cpp:244: Reraising all 
alarms with a known state
19-06-2018 11:24:10.117 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1005.1 alarm
19-06-2018 11:24:10.117 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1012.3 alarm
19-06-2018 11:24:10.117 UTC [7fc2eadf2700] Status alarm.cpp:37: sprout issued 
1013.3 alarm
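
As far as I understand, the repeated Net-SNMP warnings above only mean that the local snmpd AgentX master socket is not reachable for statistics reporting, so I am assuming they are unrelated to the call failure. To double-check that, I was going to verify snmpd and its AgentX socket on the bono node along these lines:

sudo service snmpd status
ls -l /var/agentx/master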


Sprout Logs:

--start msg--

OPTIONS sip:poll-sip@20.0.0.5:5054 SIP/2.0
Via: SIP/2.0/TCP 20.0.0.5;rport;branch=z9hG4bK-498767
Max-Forwards: 2
To: <sip:poll-sip@20.0.0.5:5054>
From: poll-sip <sip:poll-sip@20.0.0.5>;tag=498767
Call-ID: poll-sip-498767
CSeq: 498767 OPTIONS
Contact: <sip:20.0.0.5>
Accept: application/sdp
Content-Length: 0
User-Agent: poll-sip


--end msg--
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug uri_classifier.cpp:139: home 
domain: false, local_to_node: true, is_gruu: false, enforce_user_phone: false, 
prefer_sip: true, treat_number_as_phone: false
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug uri_classifier.cpp:173: 
Classified URI sip:poll-sip@20.0.0.5:5054 as 3
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug common_sip_processing.cpp:180: 
Skipping SAS logging for OPTIONS request
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug thread_dispatcher.cpp:568: 
Received message 0x7faa8c03d4d0
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug thread_dispatcher.cpp:585: 
Admitted request 0x7faa8c03d4d0
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug thread_dispatcher.cpp:620: 
Incoming message 0x7faa8c03d4d0 cloned to 0x7faa8c0a4b98
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug thread_dispatcher.cpp:639: 
Queuing cloned received message 0x7faa8c0a4b98 for worker threads with priority 
15
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug 
event_statistic_accumulator.cpp:32: Accumulate 0 for 0x22a03b8
19-06-2018 11:26:01.815 UTC [7faa90cb4700] Debug 
event_statistic_accumulator.cpp:32: Accumulate 0 for 0x22a0430
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug utils.cpp:872: Added IOHook 
0x7faaaccebdf0 to stack. There are now 1 hooks
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug thread_dispatcher.cpp:181: 
Worker thread dequeue message 0x7faa8c0a4b98
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug thread_dispatcher.cpp:186: 
Request latency so far = 227us
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug pjsip: sip_endpoint.c 
Distributing rdata to modules: Request msg OPTIONS/cseq=498767 
(rdata0x7faa8c0a4b98)
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug uri_classifier.cpp:139: home 
domain: false, local_to_node: true, is_gruu: false, enforce_user_phone: false, 
prefer_sip: true, treat_number_as_phone: false
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug uri_classifier.cpp:173: 
Classified URI sip:poll-sip@20.0.0.5:5054 as 3
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug pjsip:       endpoint Response 
msg 200/OPTIONS/cseq=498767 (tdta0x7faa8c05ce80) created
19-06-2018 11:26:01.816 UTC [7faaaccec700] Verbose 
common_sip_processing.cpp:103: TX 266 bytes Response msg 
200/OPTIONS/cseq=498767 (tdta0x7faa8c05ce80) to TCP 20.0.0.5:41266:
--start msg--

SIP/2.0 200 OK
Via: SIP/2.0/TCP 20.0.0.5;rport=41266;received=20.0.0.5;branch=z9hG4bK-498767
Call-ID: poll-sip-498767
From: "poll-sip" <sip:poll-sip@20.0.0.5>;tag=498767
To: <sip:poll-sip@20.0.0.5>;tag=z9hG4bK-498767
CSeq: 498767 OPTIONS
Content-Length:  0


--end msg--
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug common_sip_processing.cpp:275: 
Skipping SAS logging for OPTIONS response
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug pjsip: tdta0x7faa8c05 
Destroying txdata Response msg 200/OPTIONS/cseq=498767 (tdta0x7faa8c05ce80)
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug thread_dispatcher.cpp:273: 
Worker thread completed processing message 0x7faa8c0a4b98
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug thread_dispatcher.cpp:287: 
Request latency = 494us
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug 
event_statistic_accumulator.cpp:32: Accumulate 494 for 0x229c428
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug 
event_statistic_accumulator.cpp:32: Accumulate 494 for 0x229c4a0
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug load_monitor.cpp:341: Not 
recalculating rate as we haven't processed 20 requests yet (only 19).
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug utils.cpp:878: Removed IOHook 
0x7faaaccebdf0 to stack. There are now 0 hooks
19-06-2018 11:26:01.816 UTC [7faaaccec700] Debug thread_dispatcher.cpp:161: 
Attempting to process queue element
19-06-2018 11:26:01.875 UTC [7faa8bfff700] Verbose httpstack.cpp:308: Process 
request for URL /ping, args (null)
19-06-2018 11:26:01.875 UTC [7faa8bfff700] Verbose httpstack.cpp:68: Sending 
response 200 to request for URL /ping, args (null)
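
The sprout log above only shows the local OPTIONS poll and the /ping HTTP poll, both of which are answered with 200, so sprout itself looks healthy to me. If it would help, I can also attach the per-node process status, e.g.:

sudo monit summary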




Kind Regards,
Navdeep


From: Clearwater <clearwater-boun...@lists.projectclearwater.org> On Behalf Of 
Navdeep Uniyal
Sent: 18 June 2018 16:29
To: clearwater@lists.projectclearwater.org
Subject: Re: [Project Clearwater] Clearwater RakeTest Fails

Dear All,

I have been able to resolve some of the issues and make progress, only to get 
stuck on another issue while running the Live Test.
The test is still not successful, and I am getting errors in the dime (ralf) logs:

18-06-2018 15:26:02.068 UTC [7f8ff0f91700] Error diameterstack.cpp:862: No 
Diameter peers have been found
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug diameterresolver.cpp:67: 
DiameterResolver::resolve for realm hpn.com, host , family 2
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug diameterresolver.cpp:72: Do 
NAPTR look-up for hpn.com
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug ttlcache.h:230: Current time 
is 1529335567, expiry time of the entry at the head of the expiry list is 
1529335562
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug ttlcache.h:128: Entry not in 
cache, so create new entry
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug baseresolver.cpp:252: NAPTR 
cache factory called for hpn.com
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug baseresolver.cpp:264: Sending 
DNS NAPTR query for hpn.com
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:250: 
Searching for DNS record matching hpn.com in the static cache
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug static_dns_cache.cpp:303: No 
static records found matching hpn.com
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Verbose static_dns_cache.cpp:327: No 
matching CNAME record found in static cache
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug static_dns_cache.cpp:303: No 
static records found matching hpn.com
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:269: 
hpn.com not found in the static cache
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Verbose dnscachedresolver.cpp:314: 
Check cache for hpn.com type 35
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:424: 
Pulling 2 records from cache for hpn.com NAPTR
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:287: 
Found result for query hpn.com (canonical domain: hpn.com)
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug ttlcache.h:139: DNS query has 
returned, populate the cache entry
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug ttlcache.h:273: Adding entry 
to expiry list, TTL=0, expiry time = 1529335567
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug diameterresolver.cpp:97: NAPTR 
lookup failed, so do SRV lookups for TCP and SCTP
18-06-2018 15:26:07.068 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:250: 
Searching for DNS record matching _diameter._tcp.hpn.com in the static cache
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug static_dns_cache.cpp:303: No 
static records found matching _diameter._tcp.hpn.com
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Verbose static_dns_cache.cpp:327: No 
matching CNAME record found in static cache
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug static_dns_cache.cpp:303: No 
static records found matching _diameter._tcp.hpn.com
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:269: 
_diameter._tcp.hpn.com not found in the static cache
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:250: 
Searching for DNS record matching _diameter._sctp.hpn.com in the static cache
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug static_dns_cache.cpp:303: No 
static records found matching _diameter._sctp.hpn.com
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Verbose static_dns_cache.cpp:327: No 
matching CNAME record found in static cache
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug static_dns_cache.cpp:303: No 
static records found matching _diameter._sctp.hpn.com
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:269: 
_diameter._sctp.hpn.com not found in the static cache
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Verbose dnscachedresolver.cpp:314: 
Check cache for _diameter._tcp.hpn.com type 33
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Verbose dnscachedresolver.cpp:314: 
Check cache for _diameter._sctp.hpn.com type 33
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:424: 
Pulling 0 records from cache for _diameter._tcp.hpn.com SRV
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:424: 
Pulling 0 records from cache for _diameter._sctp.hpn.com SRV
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:287: 
Found result for query _diameter._tcp.hpn.com (canonical domain: 
_diameter._tcp.hpn.com)
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug dnscachedresolver.cpp:287: 
Found result for query _diameter._sctp.hpn.com (canonical domain: 
_diameter._sctp.hpn.com)
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug diameterresolver.cpp:106: TCP 
SRV record _diameter._tcp.hpn.com returned 0 records
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Debug diameterresolver.cpp:109: SCTP 
SRV record _diameter._sctp.hpn.com returned 0 records
18-06-2018 15:26:07.069 UTC [7f8ff0f91700] Error diameterstack.cpp:862: No 
Diameter peers have been found
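
From the trace above, ralf does a NAPTR lookup for hpn.com and then SRV lookups for _diameter._tcp.hpn.com and _diameter._sctp.hpn.com, all of which come back empty. I can confirm what my DNS zone actually returns for those names (taken straight from the log) with:

dig +short hpn.com NAPTR
dig +short _diameter._tcp.hpn.com SRV
dig +short _diameter._sctp.hpn.com SRV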

I am not using any external HSS, and my shared_config is as follows:

# Deployment definitions
home_domain=hpn.com
sprout_hostname=sprout.hpn.com
sprout_registration_store=vellum.hpn.com
hs_hostname=hs.hpn.com:8888
hs_provisioning_hostname=hs.hpn.com:8889
homestead_impu_store=vellum.hpn.com
ralf_hostname=ralf.hpn.com:10888
ralf_session_store=vellum.hpn.com
xdms_hostname=homer.hpn.com:7888
chronos_hostname=vellum.hpn.com
cassandra_hostname=vellum.hpn.com

# Email server configuration
#smtp_smarthost=<smtp server>
#smtp_username=<username>
#smtp_password=<password>
#email_recovery_sender=clearwa...@example.org

# Keys
signup_key=secret
turn_workaround=secret
ellis_api_key=secret
ellis_cookie_key=secret

# Application Servers
#gemini=<gemini port>
memento=5055
memento_auth_store=vellum.hpn.com
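
One thing I am not sure about: is running the config-manager upload script enough to propagate these shared_config changes to all nodes, or do the services need restarting as well? (The path below is my assumption from the install docs, so please correct me if there is a better way.)

sudo /usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config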


Please suggest how I can resolve the issue.

Kind Regards,
Navdeep


From: Clearwater <clearwater-boun...@lists.projectclearwater.org> On Behalf Of 
Navdeep Uniyal
Sent: 18 June 2018 10:34
To: clearwater@lists.projectclearwater.org
Subject: [Project Clearwater] Clearwater RakeTest Fails

Dear All,

I am new to the community and to Clearwater IMS.
I have installed the Clearwater solution manually on OpenStack VMs.
In my setup I have two networks: one is public and the other is private (only 
inside OpenStack).

I have configured the machines and DNS for the private network.
All the tests are failing with "connection refused".
I tried two scenarios:

  1.  Test machine in the public network.
  2.  Test machine in the private network with the correct DNS hostname entries.

Could someone please look at the log excerpt below and suggest what the issue 
might be?

Basic Call - Mainline (TCP) - Failed
  Errno::ECONNREFUSED thrown:
   - Connection refused - connect(2)
     - /usr/lib/ruby/1.9.1/net/http.rb:763:in `initialize'
     - /usr/lib/ruby/1.9.1/net/http.rb:763:in `open'
     - /usr/lib/ruby/1.9.1/net/http.rb:763:in `block in connect'
     - /usr/lib/ruby/1.9.1/timeout.rb:55:in `timeout'
     - /usr/lib/ruby/1.9.1/timeout.rb:100:in `timeout'
     - /usr/lib/ruby/1.9.1/net/http.rb:763:in `connect'
     - /usr/lib/ruby/1.9.1/net/http.rb:756:in `do_start'
     - /usr/lib/ruby/1.9.1/net/http.rb:745:in `start'
     - 
/var/lib/gems/1.9.1/gems/rest-client-1.8.0/lib/restclient/request.rb:413:in 
`transmit'
     - 
/var/lib/gems/1.9.1/gems/rest-client-1.8.0/lib/restclient/request.rb:176:in 
`execute'
     - 
/var/lib/gems/1.9.1/gems/rest-client-1.8.0/lib/restclient/request.rb:41:in 
`execute'
     - /var/lib/gems/1.9.1/gems/rest-client-1.8.0/lib/restclient.rb:69:in `post'
     - /home/cloud/clearwater-live-test/lib/ellis.rb:166:in `rescue in 
get_security_cookie'
     - /home/cloud/clearwater-live-test/lib/ellis.rb:159:in 
`get_security_cookie'
     - /home/cloud/clearwater-live-test/lib/ellis.rb:67:in `initialize'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:354:in `new'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:354:in 
`provision_line'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:181:in 
`add_endpoint'
     - /home/cloud/clearwater-live-test/lib/tests/basic-call.rb:19:in `block in 
<top (required)>'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:256:in `call'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:256:in `run'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:126:in `block (2 
levels) in run_all'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:112:in `collect'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:112:in `block in 
run_all'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:111:in `each'
     - /home/cloud/clearwater-live-test/lib/test-definition.rb:111:in `run_all'
     - /home/cloud/clearwater-live-test/lib/live-test.rb:23:in `run_tests'
     - /home/cloud/clearwater-live-test/Rakefile:18:in `block in <top 
(required)>'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:240:in `call'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:240:in `block in 
execute'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:235:in `each'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:235:in `execute'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:179:in `block in 
invoke_with_call_chain'
     - /usr/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:172:in 
`invoke_with_call_chain'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/task.rb:165:in `invoke'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:150:in 
`invoke_task'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:106:in 
`block (2 levels) in top_level'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:106:in 
`each'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:106:in 
`block in top_level'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:115:in 
`run_with_threads'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:100:in 
`top_level'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:78:in 
`block in run'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:176:in 
`standard_exception_handling'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/lib/rake/application.rb:75:in `run'
     - /var/lib/gems/1.9.1/gems/rake-10.4.2/bin/rake:33:in `<top (required)>'
     - /usr/local/bin/rake:23:in `load'
     - /usr/local/bin/rake:23:in `<main>'
Basic Call - SDP (TCP) - Failed
  Errno::ECONNREFUSED thrown:
   - Connection refused - connect(2)
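
From the backtrace, the ECONNREFUSED is raised in ellis.rb while fetching the security cookie, so it looks like the test machine cannot reach Ellis over HTTP at all. As a basic reachability check from the test machine I was going to run something like the following (ellis.hpn.com is simply the hostname I would expect for Ellis in my zone):

dig +short ellis.hpn.com
curl -v http://ellis.hpn.com/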




--------------------------------------------
Navdeep Uniyal
Email: navdeep.uni...@bristol.ac.uk
Senior Research Associate
High Performance Networks Group
University of Bristol

_______________________________________________
Clearwater mailing list
Clearwater@lists.projectclearwater.org
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org
