Hi,
The installation completed without errors and every node appeared to install
correctly, but something is still wrong and I can't tell what. Can anyone
please help me? The shared_config and dns.json files are missing on the bono
node, and I don't know why. How can I fix this problem?
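As an aside for anyone debugging the "Failed to add the local node to the etcd cluster" messages below: every entry in etcd_cluster must be a valid IPv4 address, and a quick standalone sanity check like this sketch (plain Python, value copied from the local_config quoted later in this thread) will flag any malformed entry:

```python
import ipaddress

# etcd_cluster value as quoted from local_config further down this thread.
etcd_cluster = ("192.168.56.102,192.168.56.103,192.168.56.104,"
                "192.168.56.1052,192.168.56.106,192.168.56.107")

def invalid_members(cluster):
    """Return the comma-separated entries that are not valid IPv4 addresses."""
    bad = []
    for entry in cluster.split(","):
        entry = entry.strip()
        try:
            ipaddress.IPv4Address(entry)
        except ipaddress.AddressValueError:
            bad.append(entry)
    return bad

print(invalid_members(etcd_cluster))  # → ['192.168.56.1052']
```

Any address that fails this check (an octet above 255, a stray digit, etc.) will prevent the node from joining the cluster, so it is worth running against each node's local_config.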

*[bono]ubuntu@bono-A:/etc/clearwater$ cat /var/log/syslog*

Apr 12 22:24:10 bono-A issue-alarm: zmq_msg_recv: Invalid argument
Apr 12 22:24:22 bono-A clearwater-etcd: Failed to add the local node
(192.168.56.105) to the etcd cluster
Apr 12 22:24:28 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9001.1'
Apr 12 22:24:31 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9002.1'
Apr 12 22:24:37 bono-A ntpd_intres[2798]: host name not found:
0.ubuntu.pool.ntp.org
Apr 12 22:24:40 bono-A bono[7165]: 2005 - Description: Application started.
@@Cause: The application is starting. @@Effect: Normal. @@Action: None.
Apr 12 22:24:40 bono-A bono[7165]: 1013 - Description: DNS config file is
missing. @@Cause: The DNS config file /etc/clearwater/dns.json is not
present. @@Effect: The DNS config file will be ignored, and all DNS queries
will be directed at the DNS server rather than using any local overrides.
@@Action: (1). Replace the missing DNS config file if desired.(2). Upload
the corrected config with
/usr/share/clearwater/clearwater-config-manager/scripts/upload_dns_json (if
no config file is present, no DNS overrides will be applied)
Apr 12 22:24:40 bono-A bono[7165]: 1013 - Description: DNS config file is
missing. @@Cause: The DNS config file /etc/clearwater/dns.json is not
present. @@Effect: The DNS config file will be ignored, and all DNS queries
will be directed at the DNS server rather than using any local overrides.
@@Action: (1). Replace the missing DNS config file if desired.(2). Upload
the corrected config with
/usr/share/clearwater/clearwater-config-manager/scripts/upload_dns_json (if
no config file is present, no DNS overrides will be applied)
Apr 12 22:24:40 bono-A bono[7165]: 1015 - Description: The SAS config file
is missing. @@Cause: The SAS config file /etc/clearwater/sas.json is
missing. @@Effect: The SAS config has not been updated.  The last valid
configuration will continue to be used @@Action: The SAS configuration
should be defined in /etc/clearwater/sas.json. Populate this file according
to the documentation.
Apr 12 22:24:40 bono-A bono[7165]: 2013 - Description: The application did
not start a connection to Ralf because Ralf is not enabled. @@Cause: Ralf
was not configured in the /etc/clearwater/config file. @@Effect: Billing
service will not be available. @@Action: Correct the /etc/clearwater/config
file if the billing feature is desired.
Apr 12 22:24:40 bono-A bono[7165]: 2060 - Description: The fallback iFCs
configuration file is not present. @@Cause: The S-CSCF supports fallback
iFCs, but the configuration file for them does not exist. @@Effect: The
S-CSCF will not be able to correctly apply any fallback iFCs. @@Action: The
fallback iFCs should be defined in /etc/clearwater/fallback_ifcs.xml.
Create this file according to the documentation. If you are expecting
clearwater-config-manager to be managing this file, check that it is
running and that there are no ENT logs relating to it or clearwater-etcd.
Apr 12 22:24:40 bono-A bono[7165]: 2067 - Description: The RPH file is not
present. @@Cause: The S-CSCF supports message prioritization based on the
Resource-Priority header, but the configuration file for this does not
exist. @@Effect: The S-CSCF will not be able to prioritize messages based
on a Resource-Priority header. @@Action: The RPH configuration should be
defined in /etc/clearwater/rph.json. Create this file according to the
documentation. If you are expecting clearwater-config-manager to be
managing this file, check that it is running and that there are no ENT logs
relating to it or clearwater-etcd.
Apr 12 22:24:41 bono-A bono[7165]: <analytics>
2018-04-12T16:54:41.141+00:00 Call-Disconnected: CALL_ID=poll-sip-15447
REASON=408
Apr 12 22:24:53 bono-A dnsmasq[10628]: exiting on receipt of SIGTERM
Apr 12 22:24:55 bono-A dnsmasq[7276]: started, version 2.68 cachesize 150
Apr 12 22:24:55 bono-A dnsmasq[7276]: compile time options: IPv6 GNU-getopt
DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth
Apr 12 22:24:55 bono-A dnsmasq[7276]: reading /etc/dnsmasq.resolv.conf
Apr 12 22:24:55 bono-A dnsmasq[7276]: using nameserver 10.224.61.20#53
Apr 12 22:24:55 bono-A dnsmasq[7276]: using nameserver 10.224.61.82#53
Apr 12 22:24:55 bono-A dnsmasq[7276]: read /etc/hosts - 7 addresses
Apr 12 22:24:55 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9002.1'
Apr 12 22:24:57 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9001.1'
Apr 12 22:24:57 bono-A ntpd_intres[2798]: host name not found:
1.ubuntu.pool.ntp.org
Apr 12 22:24:58 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9001.1'
Apr 12 22:25:01 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9002.1'
Apr 12 22:25:01 bono-A CRON[7347]: (root) CMD (/usr/lib/sysstat/sadc 1 1
/var/log/sysstat/clearwater-sa`date +%d` > /dev/null 2>&1)
Apr 12 22:25:01 bono-A CRON[7348]: (root) CMD (command -v debian-sa1 >
/dev/null && debian-sa1 1 1)
Apr 12 22:25:01 bono-A CRON[7349]: (root) CMD (/usr/sbin/iotop -b -o -t -n
1 -k >> /var/log/iotop.log 2>&1)
Apr 12 22:25:17 bono-A ntpd_intres[2798]: host name not found:
2.ubuntu.pool.ntp.org
Apr 12 22:25:28 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9001.1'
Apr 12 22:25:31 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9002.1'
Apr 12 22:25:37 bono-A ntpd_intres[2798]: host name not found:
3.ubuntu.pool.ntp.org
Apr 12 22:25:57 bono-A issue-alarm: zmq_msg_recv: Invalid argument
Apr 12 22:25:57 bono-A issue-alarm: zmq_msg_recv: Invalid argument
Apr 12 22:25:57 bono-A issue-alarm: zmq_msg_recv: Invalid argument
Apr 12 22:25:57 bono-A ntpd_intres[2798]: host name not found:
ntp.ubuntu.com
Apr 12 22:25:58 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9001.1'
Apr 12 22:26:00 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9002.1'
Apr 12 22:26:00 bono-A bono[7165]: 2021 - Description: The application is
ending -- Shutting down. @@Cause: The application has been terminated by
monit or has exited. @@Effect: Application services are no longer
available. @@Action: (1). This occurs normally when the application is
stopped. (2). If the application failed to respond to monit queries in a
timely manner, monit restarts the application.  This can occur if the
application is busy or unresponsive.
Apr 12 22:26:01 bono-A CRON[7468]: (root) CMD (/usr/sbin/iotop -b -o -t -n
1 -k >> /var/log/iotop.log 2>&1)
Apr 12 22:26:01 bono-A CRON[7469]: (root) CMD (/usr/lib/sysstat/sadc 1 1
/var/log/sysstat/clearwater-sa`date +%d` > /dev/null 2>&1)
Apr 12 22:26:01 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9002.1'
Apr 12 22:26:02 bono-A queue-manager[1376]: dropped request: 'issue-alarm
queue-manager 9001.1'
Apr 12 22:26:26 bono-A clearwater-etcd: Failed to add the local node
(192.168.56.105) to the etcd cluster




*[bono]ubuntu@bono-A:/etc/clearwater$ cat /var/log/monit.log*

Connection to 192.168.56.105 5058 port [tcp/*] succeeded!
stdout was:
SIP/2.0 408 Request Timeout
Via: SIP/2.0/TCP
192.168.56.105;rport=44514;received=192.168.56.105;branch=z9hG4bK-17478
Call-ID: poll-sip-17478
From: "poll-sip" <sip:poll-sip@192.168.56.105>;tag=17478
To: <sip:poll-sip@192.168.56.105>;tag=z9hG4bK-17478
CSeq: 17478 OPTIONS
Content-Length:  0
[IST Apr 12 22:59:52] error    : 'restund_process' process is not running
[IST Apr 12 22:59:52] info     : 'restund_process' trying to restart
[IST Apr 12 22:59:52] info     : 'restund_process' restart:
/etc/init.d/restund
[IST Apr 12 23:00:23] error    : 'restund_process' failed to restart (exit
status 0) -- /etc/init.d/restund: error loading configuration:
/etc/clearwater/restund.conf: No such file or directory

[IST Apr 12 23:00:23] error    : 'etcd_process' process is not running
[IST Apr 12 23:00:23] info     : 'etcd_process' trying to restart
[IST Apr 12 23:00:23] info     : 'etcd_process' restart: /bin/bash
[IST Apr 12 23:00:54] error    : 'etcd_process' failed to restart (exit
status 2) -- /bin/bash: zmq_msg_recv: Resource temporarily unavailable
client: etcd cluster is unavailable or misconfigured; error #0: client:
etcd member http://192.168.56.106:4000 has no leader
; error #1: client: etcd member http://192.168.56.103:4000 has no leader
[IST Apr 12 23:00:54] error    : 'poll_bono'
'/usr/share/clearwater/bin/poll_bono.sh' failed with exit status (1) -- SIP
poll failed to 192.168.56.105:5058 with Call-ID poll-sip-17550 at
2018-04-12 17:29:42.988483149+00:00
stderr was:
Connection to 192.168.56.105 5058 port [tcp/*] succeeded!
stdout was:
SIP/2.0 408 Request Timeout
Via: SIP/2.0/TCP
192.168.56.105;rport=44628;received=192.168.56.105;branch=z9hG4bK-17550
Call-ID: poll-sip-17550
From: "poll-sip" <sip:poll-sip@192.168.56.105>;tag=17550
To: <sip:poll-sip@192.168.56.105>;tag=z9hG4bK-17550
CSeq: 17550 OPTIONS
Content-Length:  0
[IST Apr 12 23:01:04] error    : 'restund_process' process is not running
[IST Apr 12 23:01:04] info     : 'restund_process' trying to restart
[IST Apr 12 23:01:04] info     : 'restund_process' restart:
/etc/init.d/restund
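For the 1013 "DNS config file is missing" entries in the syslog above: the log's own Action text says to recreate /etc/clearwater/dns.json and push it with the upload_dns_json script. A minimal sketch of that (the JSON key names "hostnames", "name", "records", "rrtype" and "target" follow my reading of the Clearwater DNS-config documentation — verify against the docs, and write to /etc/clearwater/dns.json on a real node; the hostname values here are placeholders):

```shell
# Recreate a minimal dns.json and check that it parses before uploading it
# with the script named in the log (upload_dns_json). Hostnames are examples.
CONF=/tmp/dns.json          # use /etc/clearwater/dns.json on a real node
cat > "$CONF" <<'EOF'
{
  "hostnames": [
    {
      "name": "sprout.example.local",
      "records": [{"rrtype": "CNAME", "target": "sprout.example.com"}]
    }
  ]
}
EOF
# Validate the JSON before uploading it to the config store.
python3 -m json.tool "$CONF" > /dev/null && echo "dns.json parses OK"
```

If no local DNS overrides are wanted, the log also notes the file can simply be absent — the repeated 1013 entries are then informational rather than fatal.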


On Thu, Apr 12, 2018 at 2:40 PM, Sunil Kumar <skgola1...@gmail.com> wrote:

> Hi,
>
> On bono it gives an error like:
>
> [bono]ubuntu@bono-A:~$* cw-config download shared_config*
> Unable to contact the etcd cluster.
>
> and there is no shared_config file on bono. On the other nodes I am able
> to download the shared config file, but I can only open it on dime.
>
> What should I do? Please suggest a solution.
>
>
> *[bono]ubuntu@bono-A:/var/log/bono$ cat bono_err.log*
>
> 11-04-2018 20:46:11.548 UTC Advanced stack dump (requires gdb):
> sh: 1: /usr/bin/gdb: not found
> gdb failed with return code 32512
>
> 12-04-2018 12:40:51.950 UTC Advanced stack dump (requires gdb):
> sh: 1: /usr/bin/gdb: not found
> gdb failed with return code 32512
>
> 12-04-2018 12:46:19.069 UTC Advanced stack dump (requires gdb):
> sh: 1: /usr/bin/gdb: not found
> gdb failed with return code 32512
>
> 12-04-2018 17:00:01.191 UTC Advanced stack dump (requires gdb):
> sh: 1: /usr/bin/gdb: not found
> gdb failed with return code 32512
>
>
>
> *[bono]ubuntu@bono-A:/var/log/bono$ cat bono_current.log*
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
> 12-04-2018 16:46:25.967 UTC [7fe1ca7fc700] Error
> sip_connection_pool.cpp:190: Failed to resolve icscf. to an IP address -
> Not found (PJ_ENOTFOUND)
>
>
> On Thu, Apr 12, 2018 at 12:35 PM, Sunil Kumar <skgola1...@gmail.com>
> wrote:
>
>> Hi all,
>> I am trying to install Clearwater manually in VirtualBox. Each VM has two
>> adapters: NAT (used for internet access) and a host-only network. I made
>> the host-only network's IP static and used it for local_ip and for DNS.
>> The installation itself completed without problems, but when I try to log
>> in to Ellis the page does not open.
>> When I try to run the stress node it also fails, with an error like:
>>
>> 2018-04-12 18:35:36.672571 1523538336.672571: Unknown remote host
>> '*sprout.imstest.com*' (Name or service not known, Inappropriate ioctl
>> for device).
>> Use 'sipp -h' for details.
>>
>> *local_config:*
>> *local_ip=192.168.56.102*
>> *public_ip= 192.168.56.102 *
>> *public_hostname=dime-A*
>>
>> *etcd_cluster="192.168.56.102,192.168.56.103,192.168.56.104,192.168.56.1052,192.168.56.106,192.168.56.107"*
>>
>>
>>
>> *bono's local_config:*
>>
>> local_ip=192.168.56.105
>> public_ip= 10.224.61.82
>> public_hostname=bono-A
>> etcd_cluster="192.168.56.102,192.168.56.103,192.168.56.104,1
>> 92.168.56.1052,192.168.56.106,192.168.56.107"
>>
>>
>>
>> I used the same (private) address for both local_ip and public_ip in the
>> local_config files, since I just want to run the stress node. *In bono's
>> local_config file I set public_ip to the host machine's IP, because in
>> NAT mode the VM reaches the internet through the host's own IP* (please
>> correct me if this is wrong).
>>
>>
>> *monit summary:*
>>
>>
>> *[bono]ubuntu@bono-A:~$ sudo monit summary*
>> Monit 5.18.1 uptime: 2h 9m
>>  Service Name                     Status                      Type
>>  node-bono-A                      Running                     System
>>  restund_process                  Execution failed | Does...  Process
>>  ntp_process                      Running                     Process
>>  clearwater_queue_manager_pro...  Running                     Process
>>  *etcd_process                     Execution failed | Does...  Process*
>>  clearwater_diags_monitor_pro...  Running                     Process
>>  clearwater_config_manager_pr...  Running                     Process
>>  clearwater_cluster_manager_p...  Running                     Process
>>  bono_process                     Running                     Process
>>  *poll_restund                     Wait parent                 Program*
>>  monit_uptime                     Status ok                   Program
>>  clearwater_queue_manager_uptime  Status ok                   Program
>> * etcd_uptime                      Wait parent                 Program*
>> * poll_etcd_cluster                Wait parent                 Program*
>> * poll_etcd                        Wait parent                 Program*
>> * poll_bono                        Status failed               Program*
>>
>>
>> *[dime]ubuntu@dime-A:~$ sudo monit summary*
>> [sudo] password for ubuntu:
>> Monit 5.18.1 uptime: 2h 9m
>>  Service Name                     Status                      Type
>>  node-dime-A                      Running                     System
>>  snmpd_process                    Running                     Process
>>  ralf_process                     Running                     Process
>>  ntp_process                      Running                     Process
>>  nginx_process                    Running                     Process
>>  homestead_process                Running                     Process
>>  homestead-prov_process           Running                     Process
>>  clearwater_queue_manager_pro...  Running                     Process
>>  etcd_process                     Running                     Process
>>  clearwater_diags_monitor_pro...  Running                     Process
>>  clearwater_config_manager_pr...  Running                     Process
>>  clearwater_cluster_manager_p...  Running                     Process
>>  ralf_uptime                      Status ok                   Program
>>  poll_ralf                        Status ok                   Program
>>  nginx_ping                       Status ok                   Program
>>  nginx_uptime                     Status ok                   Program
>>  monit_uptime                     Status ok                   Program
>>  homestead_uptime                 Status ok                   Program
>>  poll_homestead                   Status ok                   Program
>>  check_cx_health                  Status ok                   Program
>>  poll_homestead-prov              Status ok                   Program
>>  clearwater_queue_manager_uptime  Status ok                   Program
>>  etcd_uptime                      Status ok                   Program
>> * poll_etcd_cluster                Status failed               Program*
>>  poll_etcd                        Status ok                   Program
>> [dime]ubuntu@dime-A:~$
>>
>>
>>
>>
>> *[sprout]ubuntu@sprout-A:~$ sudo monit summary*
>> [sudo] password for ubuntu:
>> Monit 5.18.1 uptime: 2h 9m
>>  Service Name                     Status                      Type
>>  node-sprout-A                    Running                     System
>>  sprout_process                   Running                     Process
>>  ntp_process                      Running                     Process
>>  nginx_process                    Running                     Process
>>  clearwater_queue_manager_pro...  Running                     Process
>>  etcd_process                     Running                     Process
>>  clearwater_diags_monitor_pro...  Running                     Process
>>  clearwater_config_manager_pr...  Running                     Process
>>  clearwater_cluster_manager_p...  Running                     Process
>>  sprout_uptime                    Status ok                   Program
>>  poll_sprout_sip                  Status ok                   Program
>>  poll_sprout_http                 Status ok                   Program
>>  nginx_ping                       Status ok                   Program
>>  nginx_uptime                     Status ok                   Program
>>  monit_uptime                     Status ok                   Program
>>  clearwater_queue_manager_uptime  Status ok                   Program
>>  etcd_uptime                      Status ok                   Program
>> * poll_etcd_cluster                Status failed               Program*
>>  poll_etcd                        Status ok                   Program
>>
>>
>> etc.
>>
>>
>> Please suggest a solution.
>>
>> Thanks in advance.
>>
>> Regards,
>> Sunil
>>
>
>
_______________________________________________
Clearwater mailing list
Clearwater@lists.projectclearwater.org
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org
