Hi Ben,
Thanks for replying. I want to do stress testing: I have created a new node
and provisioned 50,000 numbers using the script given in the documentation,
but all the calls are failing. Do I need to register these numbers with a
client before running the stress-testing command?
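One thing worth doing before scaling to 50,000 subscribers is to confirm that a
single provisioned number can register with an ordinary softphone; if one manual
REGISTER fails, the bulk run will fail too. A rough pre-flight sketch (the bono
address is taken from the zone file quoted later in this thread, so adjust it
for your deployment):

    # Is bono reachable on the standard SIP port from the machine running the stress tool?
    nc -vz 10.224.61.8 5060
    # Then register one of the generated numbers in Zoiper/X-Lite using the home
    # domain, the number as the username, and the password shown in Ellis.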

Thanks

On 13-Mar-2018 10:50 PM, "Bennett Allen" <bennett.al...@metaswitch.com>
wrote:

> Hi Pushpendra,
>
> Firstly, I have CC’d this onto the Clearwater mailing list; please send
> all replies via the list, as otherwise your emails may not be seen by my
> colleagues or me.
>
> In answer to your questions: I don’t think the ns value matters, so what
> you’ve got should be fine, and the second value, using your BIND IP, looks
> good. It also doesn’t look like the DNS server or its queries are the
> problem in your situation, as homestead is trying to contact the vellum
> node and failing there.
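>
> If you want to double-check the DNS side anyway, something like this (a
> sketch, using the BIND server IP from your zone file) should return the SOA
> you defined:
>
>     dig @10.224.61.82 <zone> SOA +short
>     # A non-empty answer means BIND is serving the zone; an empty one means
>     # the zone is not loaded (check /var/log/syslog on the BIND machine).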
>
> If you have a look at the logs for Cassandra, Astaire and Rogers on the
> vellum node, under /var/log/cassandra/, /var/log/astaire and
> /var/log/rogers, they should give a clue as to what is happening; based on
> the monit.log they don’t look like they’re working properly.
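>
> For example, something like this on vellum should show which services monit
> thinks are failing and why (the log file names are assumed from the
> Clearwater defaults, so they may differ slightly on your install):
>
>     sudo monit summary
>     tail -50 /var/log/cassandra/system.log
>     tail -50 /var/log/astaire/astaire_current.txt
>     tail -50 /var/log/rogers/rogers_current.txt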
>
> Let us know how it goes,
>
> Ben
>
>
>
>
>
> *From:* Pushpendra [mailto:pushpendra16mn...@gmail.com]
> *Sent:* 09 March 2018 11:06
> *To:* Bennett Allen <bennett.al...@metaswitch.com>
> *Subject:* Failed to register to client (Zoiper5 and XLite)
>
>
>
> Hi Ben,
>
> I am able to log in and create private identities on Ellis, but when I try
> to register these identities with a client app (Zoiper5 and X-Lite), they
> never register; the client just loops endlessly. I am using <zone> as the
> domain when registering the client. Please find the log details below:
>
> Please let me know what is wrong and how I can fix it; thanks in advance :)
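>
> (One way to see why a client loops on registration is to capture the SIP
> exchange on the edge node; a minimal sketch, assuming the clients reach bono
> on the standard port 5060:)
>
>     # On the bono node: watch the REGISTER requests and the responses
>     # (401 challenges are normal; repeated 403/408/5xx point at the real problem)
>     sudo tcpdump -i any -n -A port 5060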
>
>
>
> [vellum]ubuntu@vellum:/var/log$ tail -40 monit.log
>
> [IST Mar  7 23:54:05] error    : 'cassandra_process' process is not running
>
> [IST Mar  7 23:54:05] info     : 'cassandra_process' trying to restart
>
> [IST Mar  7 23:54:05] info     : 'cassandra_process' restart: /bin/bash
>
> [IST Mar  7 23:54:06] info     : 'cassandra_process' process is running
> with pid 31746
>
> [IST Mar  7 23:54:06] error    : 'cassandra_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-cassandra-uptime' failed with exit
> status (1) -- no output
>
> [IST Mar  7 23:54:06] error    : 'astaire_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-astaire-uptime' failed with exit status
> (1) -- no output
>
> [IST Mar  7 23:54:16] error    : 'rogers_process' process is not running
>
> [IST Mar  7 23:54:16] info     : 'rogers_process' trying to restart
>
> [IST Mar  7 23:54:16] info     : 'rogers_process' restart: /bin/bash
>
> [IST Mar  7 23:54:16] info     : 'rogers_process' process is running with
> pid 31827
>
> [IST Mar  7 23:54:16] error    : 'clearwater_cluster_manager_process'
> process is not running
>
> [IST Mar  7 23:54:16] info     : 'clearwater_cluster_manager_process'
> trying to restart
>
> [IST Mar  7 23:54:16] info     : 'clearwater_cluster_manager_process'
> restart: /bin/bash
>
> [IST Mar  7 23:54:17] error    : 'cassandra_process' process is not running
>
> [IST Mar  7 23:54:17] info     : 'cassandra_process' trying to restart
>
> [IST Mar  7 23:54:17] info     : 'cassandra_process' restart: /bin/bash
>
> [IST Mar  7 23:54:17] info     : 'cassandra_process' process is running
> with pid 32022
>
> [IST Mar  7 23:54:17] error    : 'cassandra_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-cassandra-uptime' failed with exit
> status (1) -- no output
>
> [IST Mar  7 23:54:17] error    : 'astaire_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-astaire-uptime' failed with exit status
> (1) -- no output
>
> [IST Mar  7 23:54:27] error    : 'rogers_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-rogers-uptime' failed with exit status
> (1) -- no output
>
> [IST Mar  7 23:54:27] info     : 'clearwater_cluster_manager_process'
> process is running with pid 31936
>
> [IST Mar  7 23:54:27] error    : 'cassandra_process' process is not running
>
> [IST Mar  7 23:54:27] info     : 'cassandra_process' trying to restart
>
> [IST Mar  7 23:54:27] info     : 'cassandra_process' restart: /bin/bash
>
> [IST Mar  7 23:54:28] info     : 'cassandra_process' process is running
> with pid 32610
>
> [IST Mar  7 23:54:28] error    : 'cassandra_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-cassandra-uptime' failed with exit
> status (1) -- no output
>
> [IST Mar  7 23:54:28] error    : 'astaire_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-astaire-uptime' failed with exit status
> (1) -- no output
>
> [IST Mar  7 23:54:38] error    : 'rogers_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-rogers-uptime' failed with exit status
> (1) -- no output
>
> [IST Mar  7 23:54:38] error    : 'cassandra_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-cassandra-uptime' failed with exit
> status (1) -- no output
>
> [IST Mar  7 23:54:39] info     : 'astaire_uptime' status succeeded
> [status=0] -- zmq_msg_recv: Resource temporarily unavailable
>
> [IST Mar  7 23:54:46] info     : Awakened by the SIGHUP signal
>
> Reinitializing Monit - Control file '/etc/monit/monitrc'
>
> [IST Mar  7 23:54:47] info     : 'node-vellum' Monit reloaded
>
> [IST Mar  7 23:54:47] error    : Cannot create socket to [localhost]:2812
> -- Connection refused
>
> [IST Mar  7 23:54:57] info     : Awakened by the SIGHUP signal
>
> Reinitializing Monit - Control file '/etc/monit/monitrc'
>
> [IST Mar  7 23:54:57] info     : 'node-vellum' Monit reloaded
>
> [IST Mar  7 23:54:57] error    : Cannot create socket to [localhost]:2812
> -- Connection refused
>
> [IST Mar  7 23:55:07] error    : 'cassandra_uptime' '/usr/share/clearwater/
> infrastructure/monit_uptime/check-cassandra-uptime' failed with exit
> status (1) -- no output
>
> [IST Mar  7 23:55:17] info     : 'cassandra_uptime' status succeeded
> [status=0] -- zmq_msg_recv: Resource temporarily unavailable
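>
> (The log above shows monit restarting cassandra_process over and over, so
> Cassandra itself is not staying up; a couple of things worth checking on
> vellum, as a sketch:)
>
>     # Why is Cassandra dying? Its own log usually says (heap/OOM errors are common on small VMs)
>     grep -iE 'error|out of memory' /var/log/cassandra/system.log | tail -20
>     free -m    # confirm the node has enough free memory for Cassandra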
>
>
>
>
>
>
>
> [dime]ubuntu@dime:/var/log/homestead$ tail -30 homestead_err.log
>
> Thrift: Wed Mar  7 23:54:03 2018 TSocket::open() error on socket (after
> THRIFT_POLL) <Host: 10.224.61.24 Port: 9160>Connection refused
>
> Thrift: Wed Mar  7 23:54:03 2018 TSocket::open() error on socket (after
> THRIFT_POLL) <Host: 10.224.61.24 Port: 9160>Connection refused
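>
> (That error means homestead on dime cannot reach Cassandra's Thrift port on
> vellum; a quick check, using the address from the error itself:)
>
>     # On vellum: is anything listening on 9160?
>     sudo ss -ltnp | grep 9160
>     # From dime: is the port reachable over the network?
>     nc -vz 10.224.61.24 9160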
>
>
>
>
>
> [dime]ubuntu@dime:/var/log/ralf$ tail -40 ralf_current.txt
>
> 08-03-2018 14:00:02.739 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:07.740 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:12.740 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:17.744 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:22.745 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:27.745 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:31.591 UTC [7f0837fff700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 14:00:32.745 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:37.746 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:42.746 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:47.747 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:52.747 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:00:57.747 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:01.591 UTC [7f0837fff700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 14:01:02.748 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:07.748 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:12.749 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:17.749 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:22.749 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:27.750 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:01:31.591 UTC [7f0837fff700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 14:03:57.762 UTC [7f07ee7bc700] Error diameterstack.cpp:853: No
> Diameter peers have been found
>
> 08-03-2018 14:04:01.599 UTC [7f0837fff700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 14:04:02.764 UTC [7f07ee7bc700] Warning
> dnscachedresolver.cpp:836: Failed to retrieve record for _diameter._
> tcp.iind.intel.com: Domain name not found
>
> 08-03-2018 14:04:02.764 UTC [7f07ee7bc700] Warning
> dnscachedresolver.cpp:836: Failed to retrieve record for _diameter._
> sctp.iind.intel.com: Domain name not found
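>
> (The last two warnings suggest ralf is looking for its Diameter peers via
> DNS SRV records that the zone does not contain; a quick check against the
> BIND server, with the names taken from the log:)
>
>     dig @10.224.61.82 _diameter._tcp.iind.intel.com SRV +short
>     dig @10.224.61.82 _diameter._sctp.iind.intel.com SRV +short
>     # Empty answers match the "Domain name not found" warnings above.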
>
>
>
>
>
>
>
> [dime]ubuntu@dime:/var/log/homestead$ tail -30 homestead_current.txt
>
> 08-03-2018 16:00:25.983 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 16:00:55.984 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 16:01:25.984 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 16:01:55.984 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 16:02:25.989 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 16:02:55.989 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
> 08-03-2018 16:03:25.989 UTC [7fd5977fe700] Status alarm.cpp:244: Reraising
> all alarms with a known state
>
>
>
>
>
>
>
>
>
> [sprout]ubuntu@sprout:/var/log/sprout$ tail -30 sprout_current.txt
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug pjsip: sip_endpoint.c
> Distributing rdata to modules: Request msg OPTIONS/cseq=97208
> (rdata0x7fd04c092988)
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug uri_classifier.cpp:139:
> home domain: false, local_to_node: true, is_gruu: false,
> enforce_user_phone: false, prefer_sip: true, treat_number_as_phone: false
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug uri_classifier.cpp:172:
> Classified URI as 3
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug pjsip:       endpoint
> Response msg 200/OPTIONS/cseq=97208 (tdta0x7fd0d40829d0) created
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Verbose
> common_sip_processing.cpp:103: TX 277 bytes Response msg
> 200/OPTIONS/cseq=97208 (tdta0x7fd0d40829d0) to TCP 10.224.61.22:52668:
>
> --start msg--
>
>
>
> SIP/2.0 200 OK
>
> Via: SIP/2.0/TCP 10.224.61.22;rport=52668;received=10.224.61.22;branch=
> z9hG4bK-97208
>
> Call-ID: poll-sip-97208
>
> From: "poll-sip" <sip:poll-sip@10.224.61.22>;tag=97208
>
> To: <sip:poll-sip@10.224.61.22>;tag=z9hG4bK-97208
>
> CSeq: 97208 OPTIONS
>
> Content-Length:  0
>
>
>
>
>
> --end msg--
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug
> common_sip_processing.cpp:275: Skipping SAS logging for OPTIONS response
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug pjsip: tdta0x7fd0d408
> Destroying txdata Response msg 200/OPTIONS/cseq=97208 (tdta0x7fd0d40829d0)
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug
> thread_dispatcher.cpp:270: Worker thread completed processing message
> 0x7fd04c092988
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug
> thread_dispatcher.cpp:284: Request latency = 292us
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug
> event_statistic_accumulator.cpp:32: Accumulate 292 for 0x187c788
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug
> event_statistic_accumulator.cpp:32: Accumulate 292 for 0x187c7d0
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug load_monitor.cpp:341: Not
> recalculating rate as we haven't processed 20 requests yet (only 10).
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug utils.cpp:878: Removed
> IOHook 0x7fd0b37bde30 to stack. There are now 0 hooks
>
> 08-03-2018 17:45:38.954 UTC [7fd0b37be700] Debug
> thread_dispatcher.cpp:158: Attempting to process queue element
>
> 08-03-2018 17:45:39.463 UTC [7fd0d17fa700] Warning (Net-SNMP): Warning:
> Failed to connect to the agentx master agent ([NIL]):
>
> 08-03-2018 17:45:40.955 UTC [7fd053eff700] Verbose pjsip: tcps0x7fd04c09
> TCP connection closed
>
> 08-03-2018 17:45:40.955 UTC [7fd053eff700] Debug
> connection_tracker.cpp:67: Connection 0x7fd04c0910f8 has been destroyed
>
> 08-03-2018 17:45:40.955 UTC [7fd053eff700] Verbose pjsip: tcps0x7fd04c09
> TCP transport destroyed with reason 70016: End of file (PJ_EEOF)
>
>
>
>
>
> $TTL 5m ; Default TTL
>
>
>
> ; SOA, NS and A record for DNS server itself
>
> @                 3600 IN SOA  ns admin ( 2014010800 ; Serial
>
>                                           3600       ; Refresh
>
>                                           3600       ; Retry
>
>                                           3600       ; Expire
>
>                                           300 )      ; Minimum TTL
>
> @                 3600 IN NS   ns
>
> ns                3600 IN A    10.224.61.82 ; IPv4 address of BIND server
>
>
>
>
>
> ; bono
>
> ; ====
>
> ;
>
> ; Per-node records - not required to have both IPv4 and IPv6 records
>
>
>
> bono-1                 IN A     10.224.61.8
>
> .
>
> .
>
> .
>
> .
>
>
>
> In the zone file, in front of ns (as you can see above) I am using the IP
> address of the machine where I installed bind9, but on that machine
> /etc/resolv.conf has nameserver 127.0.0.53. (What should I use in place of
> ns?)
>
>
>
> Also, on the Clearwater nodes I have put nameserver 10.224.61.82 (the IP of
> the machine where I installed bind9) in /etc/dnsmasq.resolv.conf. Is that
> what I should write there?
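>
> (A quick way to confirm a Clearwater node is actually resolving through that
> server is to compare a direct query with one through the local dnsmasq; this
> assumes dnsmasq is listening on 127.0.0.1 on the node:)
>
>     dig @10.224.61.82 bono-1.<zone> A +short
>     dig @127.0.0.1 bono-1.<zone> A +short
>     # Both should return 10.224.61.8; if only the first works, dnsmasq is not
>     # forwarding queries to the BIND server.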
>
>
>
> Thanks,
>
> Pushpendra
>
>
>
>
>
>