[jira] [Commented] (TS-4696) CID 1267824: Missing unlock in mgmt/Alarms.cc

2016-07-28 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397262#comment-15397262
 ] 

taoyunxing commented on TS-4696:


Indeed, there are two places that return without releasing the mutex; a hedged sketch of one possible remedy follows the quoted snippet below.

> CID 1267824: Missing unlock in mgmt/Alarms.cc
> -
>
> Key: TS-4696
> URL: https://issues.apache.org/jira/browse/TS-4696
> Project: Traffic Server
>  Issue Type: Bug
>Reporter: Tyler Stroh
>Assignee: Tyler Stroh
> Fix For: 7.0.0
>
>
> {code}
> 2. lock: ink_mutex_acquire locks this->mutex.
>   ink_mutex_acquire();
>   if (!((n + (int)strlen("type: alarm\n")) < max)) {
> if (max >= 1) {
>   message[0] = '\0';
> }
> CID 1267824 (#1 of 1): Missing unlock (LOCK)
> 5. missing_unlock: Returning without unlocking this->mutex.
>return;
>   }
> {code}
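For illustration, a hedged sketch of the usual remedies (assumed names, not necessarily the committed patch): either call ink_mutex_release(&mutex) on each early-return path that Coverity flags, or hold the lock with a scoped guard so every return releases it. A generic guard-based version, with std::mutex standing in for this->mutex and the body reduced to the flagged check:

{code}
// Generic illustration only: Alarms.cc itself uses ink_mutex_acquire() /
// ink_mutex_release(), so the equivalent fix there is an explicit release
// before each early return, or a small RAII wrapper around those calls.
#include <cstring>
#include <mutex>

static std::mutex alarm_mutex; // stand-in for this->mutex

void build_alarm_message(char *message, int max, int n) {
  std::lock_guard<std::mutex> guard(alarm_mutex); // unlocked on every return path
  if (!((n + (int)std::strlen("type: alarm\n")) < max)) {
    if (max >= 1) {
      message[0] = '\0';
    }
    return; // guard releases the lock here, unlike the flagged code path
  }
  // ... format the alarm text into message ...
}

int main() {
  char buf[64];
  build_alarm_message(buf, (int)sizeof(buf), 0);
  return 0;
}
{code}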



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4704) LogObject.cc failed assert `bytes_needed >= bytes_used`

2016-07-27 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4704:
---
Description: 
This affects all ATS versions I have tried (4.3.2, 5.3.2, 6.1.1, etc.). I tested a URL that returns a 502 status code, using 4.3.2.

Records.config:
{code}
CONFIG proxy.config.reverse_proxy.enabled INT 1
CONFIG proxy.config.url_remap.pristine_host_hdr INT 1
CONFIG proxy.config.url_remap.remap_required INT 1
CONFIG proxy.config.diags.show_location INT 1
{code}

logs_xml.config:
{code}

  
  /% % \"%\" % % % 
% % \"%\" % \"%<{Referer}cqh>\" \"%\" 
\"%<{User-agent}cqh>\" % % %"/>



  
  
  
  
  
  

{code}

Description:
I requested the 502 URL on ATS 4.3.2, and first this assertion failed:

{code}
using root directory '/usr'
FATAL: LogAccess.cc:790: failed assert `actual_len < padded_len`
/usr/bin/traffic_server - STACK TRACE: 
/usr/lib64/trafficserver/libtsutil.so.4(ink_fatal_die+0x0)[0x3e5c85db8a]
/usr/lib64/trafficserver/libtsutil.so.4(ink_get_rand()+0x0)[0x3e5c85c7ec]
/usr/bin/traffic_server(LogAccess::marshal_mem(char*, char const*, int, 
int)+0x62)[0x5f948e]
/usr/bin/traffic_server(LogAccessHttp::marshal_client_req_unmapped_url_canon(char*)+0x8e)[0x5fbca2]
/usr/bin/traffic_server(LogField::marshal(LogAccess*, char*)+0x73)[0x6095e7]
/usr/bin/traffic_server(LogFieldList::marshal(LogAccess*, char*)+0x51)[0x60a0ff]
/usr/bin/traffic_server(LogObject::log(LogAccess*, char const*)+0x632)[0x616cac]
/usr/bin/traffic_server(LogObjectManager::log(LogAccess*)+0x75)[0x619223]
/usr/bin/traffic_server(Log::access(LogAccess*)+0x2a1)[0x5f6237]
/usr/bin/traffic_server(HttpBodyTemplate::build_instantiated_buffer(HttpTransact::State*,
 long*)+0xbf)[0x562807]
/usr/bin/traffic_server(HttpBodyFactory::fabricate(StrList*, StrList*, char 
const*, HttpTransact::State*, long*, char const**, char const**, char 
const**)+0x296)[0x560ef0]
/usr/bin/traffic_server(HttpBodyFactory::fabricate_with_old_api(char const*, 
HttpTransact::State*, long, long*, char*, unsigned long, char*, unsigned long, 
char const*, __va_list_tag*)+0x368)[0
x560056]
/usr/bin/traffic_server(HttpTransact::build_error_response(HttpTransact::State*,
 HTTPStatus, char const*, char const*, char const*, ...)+0x75f)[0x5b4c49]
/usr/bin/traffic_server(HttpTransact::handle_server_died(HttpTransact::State*)+0x559)[0x5b3027]
/usr/bin/traffic_server(HttpTransact::handle_server_connection_not_open(HttpTransact::State*)+0x3b3)[0x5a46cd]
/usr/bin/traffic_server(HttpTransact::handle_response_from_server(HttpTransact::State*)+0x783)[0x5a39c5]
/usr/bin/traffic_server(HttpTransact::HandleResponse(HttpTransact::State*)+0x748)[0x5a1e90]
/usr/bin/traffic_server(HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*))+0x84)[0x5885aa]
/usr/bin/traffic_server(HttpSM::handle_server_setup_error(int, 
void*)+0x664)[0x582ade]
/usr/bin/traffic_server(HttpSM::state_send_server_request_header(int, 
void*)+0x31c)[0x576902]
/usr/bin/traffic_server(HttpSM::main_handler(int, void*)+0x270)[0x578cd6]
/usr/bin/traffic_server(Continuation::handleEvent(int, void*)+0x6c)[0x4f402e]
/usr/bin/traffic_server[0x7019f1]
/usr/bin/traffic_server[0x701bb1]
/usr/bin/traffic_server[0x701c69]
/usr/bin/traffic_server[0x702173]
/usr/bin/traffic_server(UnixNetVConnection::net_read_io(NetHandler*, 
EThread*)+0x2b)[0x704611]
/usr/bin/traffic_server(NetHandler::mainNetEvent(int, Event*)+0x700)[0x6fbafa]
/usr/bin/traffic_server(Continuation::handleEvent(int, void*)+0x6c)[0x4f402e]
/usr/bin/traffic_server(EThread::process_event(Event*, int, Queue*)+0x124)[0x724882]
/usr/bin/traffic_server(EThread::execute()+0x4cc)[0x724ef0]
/usr/bin/traffic_server[0x723da9]
/lib64/libpthread.so.0[0x3703207aa1]
/lib64/libc.so.6(clone+0x6d)[0x3702ee893d]
{code}
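For context, a hedged and much-simplified stand-in for the invariant that `actual_len < padded_len` encodes (illustrative only, not the LogAccess.cc source): the caller budgets a padded length with round_strlen(), and the string copied afterwards must fit inside that budget, so a negative or stale length used for the budget trips the assert.

{code}
#include <cassert>
#include <cstring>

// Assumed 8-byte alignment for the padding; this mirrors round_strlen()'s intent.
inline int round_strlen(int len) { return (len + 7) & ~7; }

// Simplified padded copy: dest must have room for padded_len bytes.
void marshal_str(char *dest, const char *src, int padded_len) {
  int actual_len = (int)std::strlen(src);
  assert(actual_len < padded_len); // fails if padded_len came from a bad length
  std::strcpy(dest, src);
  std::memset(dest + actual_len, 0, padded_len - actual_len); // NUL padding
}

int main() {
  char buf[16];
  marshal_str(buf, "example", round_strlen((int)std::strlen("example") + 1));
  return 0;
}
{code}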

I modified the code in LogAccessHttp::marshal_client_req_unmapped_url_canon:

{code}
} else {
  len = round_strlen(m_client_req_unmapped_url_canon_len + 1); // +1 for eos
+ if (m_client_req_unmapped_url_host_len < 0)
+   m_client_req_unmapped_url_host_len = 0;
{code}
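The clamp above keeps a negative host length out of the padding math; the committed fix may differ. Separately, the length used to size a field must match the length used when the field is actually written; that is what the next assert (`bytes_needed >= bytes_used` in LogObject::log, shown below) checks. A hedged, self-contained sketch of that two-pass contract, with hypothetical names rather than the LogObject.cc source:

{code}
#include <cassert>
#include <cstddef>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical stand-in for a log field: it must report a size in pass 1
// that is at least what it writes in pass 2.
struct Field {
  std::string value;
  std::size_t marshal_len() const { return value.size() + 1; }   // pass 1: budget
  std::size_t marshal(char *dst) const {                         // pass 2: write
    std::memcpy(dst, value.c_str(), value.size() + 1);
    return value.size() + 1;
  }
};

std::size_t log_entry(const std::vector<Field> &fields, std::vector<char> &buf) {
  std::size_t bytes_needed = 0;
  for (const auto &f : fields) bytes_needed += f.marshal_len();  // reserve space
  buf.resize(bytes_needed);

  std::size_t bytes_used = 0;
  for (const auto &f : fields) bytes_used += f.marshal(buf.data() + bytes_used);

  assert(bytes_needed >= bytes_used); // fires if any field under-reported in pass 1
  return bytes_used;
}

int main() {
  std::vector<char> buf;
  log_entry({{"GET"}, {"/index.html"}, {"502"}}, buf);
  return 0;
}
{code}

If a field reports one length in the first pass but writes a string measured with a different (unclamped or re-computed) length in the second, bytes_used can exceed bytes_needed even though each pass looks locally consistent.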

Then another assertion failed:

{code}
FATAL: LogObject.cc:634: failed assert `bytes_needed >= bytes_used`
/usr/bin/traffic_server - STACK TRACE:
/usr/lib64/trafficserver/libtsutil.so.4(ink_fatal_die+0x0)[0x2b5da1050b8a]
/usr/lib64/trafficserver/libtsutil.so.4(ink_get_rand()+0x0)[0x2b5da104f7ec]
/usr/bin/traffic_server(LogObject::log(LogAccess*, char const*)+0x65f)[0x616d09]
/usr/bin/traffic_server(LogObjectManager::log(LogAccess*)+0x75)[0x619253]
/usr/bin/traffic_server(Log::access(LogAccess*)+0x2a1)[0x5f6237]
/usr/bin/traffic_server(HttpBodyTemplate::build_instantiated_buffer(HttpTransact::State*,
 long*)+0xbf)[0x562807]
/usr/bin/traffic_server(HttpBodyFactory::fabricate(StrList*, StrList*, char 
const*, HttpTransact::State*, long*, char const**, char const**, char 
const**)+0x296)[0x560ef0]
/usr/bin/traffic_server(HttpBodyFactory::fabricate_with_old_api(char const*, 
HttpTransact::State*, 

[jira] [Updated] (TS-4704) LogObject.cc failed assert `bytes_needed >= bytes_used`

2016-07-27 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4704:
---
Fix Version/s: 6.2.1

> LogObject.cc failed assert `bytes_needed >= bytes_used`
> ---
>
> Key: TS-4704
> URL: https://issues.apache.org/jira/browse/TS-4704
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 6.2.0
>Reporter: taoyunxing
> Fix For: 6.2.1
>
>

[jira] [Updated] (TS-4704) LogObject.cc failed assert `bytes_needed >= bytes_used`

2016-07-27 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4704:
---
Affects Version/s: 6.2.0

> LogObject.cc failed assert `bytes_needed >= bytes_used`
> ---
>
> Key: TS-4704
> URL: https://issues.apache.org/jira/browse/TS-4704
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 6.2.0
>Reporter: taoyunxing
> Fix For: 6.2.1
>
>

[jira] [Created] (TS-4704) LogObject.cc failed assert `bytes_needed >= bytes_used`

2016-07-27 Thread taoyunxing (JIRA)
taoyunxing created TS-4704:
--

 Summary: LogObject.cc failed assert `bytes_needed >= bytes_used`
 Key: TS-4704
 URL: https://issues.apache.org/jira/browse/TS-4704
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: taoyunxing



[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-05-05 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Description: 
CPU: 40 cores, Mem: 120 GB, Disk: 1 x 300 GB system + 11 x 899 GB data (raw devices), OS: CentOS 6.6, ATS: 5.3.1

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I ran into a dead-loop situation with ATS 5.3.1 on two production hosts; for a long time, diags.log shows a sustained burst of warnings like this (a hedged sketch of the spin pattern follows the excerpt):
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
[Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sdc' segment 4, purging...
{code}
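A hedged, self-contained sketch of the spin this pattern suggests (hypothetical helpers, not the ATS cache directory code): if a purge pass cannot free a single entry in the affected segment, the next allocation purges again right away, which would match identical warnings arriving every 2-3 ms.

{code}
#include <cstdio>

static int freelist_len = 0;        // freelist[4] observed as 0 in the snapshot

static bool purge_segment() {       // hypothetical purge pass
  return false;                     // frees nothing: every entry is still in use
}

static bool alloc_dir_entry() {
  int attempts = 0;
  while (freelist_len == 0) {
    std::puts("WARNING: cache directory overflow on segment 4, purging...");
    if (!purge_segment() && ++attempts >= 3) {
      return false;                 // bounded here; the observed spin is not
    }
  }
  --freelist_len;
  return true;
}

int main() {
  return alloc_dir_entry() ? 0 : 1;
}
{code}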
ATS restarts every several hours, and the TIME_WAIT connection count is far above the ESTABLISHED TCP connection count.
The following is the current directory snapshot of disk /dev/sdc on one of the hosts:
{code}
Directory for [cache.disk.1 172032:109741163]
Bytes: 8573600
Segments:  14
Buckets:   15310
Entries:   857360
Full:  852904
Empty: 4085
Stale: 0
Free:  371
Bucket Fullness:   4085158003204441621 
  42175331372223212605 
   
Segment Fullness:  60903 60918 60914 60947 60956 
   60947 60872 60943 60918 60927 
   60858 60917 60927 60957 
Freelist Fullness:45302713 0 
   789 53212 
  83 020 8 
{code}
I wonder why the value of freelist[4] is zero, which causes the ATS dead loop. Can anyone help? Thanks a lot.
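A back-of-the-envelope check of the snapshot numbers (a hedged sketch; the sizing rule is an assumption from ATS documentation, not from this report):

{code}
// Assumes roughly one directory entry per min_average_object_size bytes of
// volume, 4 entries per bucket, and 10 bytes per Dir entry.
#include <cstdio>

int main() {
  const long long segments  = 14;                          // from the snapshot
  const long long buckets   = 15310;                       // from the snapshot
  const long long entries   = segments * buckets * 4;      // 4 entries per bucket
  const long long dir_bytes = entries * 10;                // 10 bytes per Dir entry
  const long long vol_bytes = 899LL * 1000 * 1000 * 1000;  // the ~899 GB data disk
  const long long min_avg   = 1048576;                     // records.config value

  std::printf("entries   = %lld (snapshot: 857360)\n", entries);    // 857360
  std::printf("dir bytes = %lld (snapshot: 8573600)\n", dir_bytes); // 8573600
  std::printf("vol_bytes / min_average_object_size = %lld\n",
              vol_bytes / min_avg);                                 // ~857353
  return 0;
}
{code}

If that sizing rule holds, the directory was provisioned for roughly 1 MiB average objects; with much smaller real objects the directory fills while the disk still has free space, so the overflow/purge path keeps being hit, which is one possible reason the segment's freelist stays at zero. This is a hypothesis, not a confirmed diagnosis.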


[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Affects Version/s: 5.3.1

> ats fallen into dead loop for cache directory overflow
> --
>
> Key: TS-4279
> URL: https://issues.apache.org/jira/browse/TS-4279
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 5.3.1
>Reporter: taoyunxing
> Fix For: 6.2.0
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)
taoyunxing created TS-4279:
--

 Summary: ats fallen into dead loop for cache directory overflow
 Key: TS-4279
 URL: https://issues.apache.org/jira/browse/TS-4279
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: taoyunxing


CPU: 40 cores, Mem: 120 GB, Disk: 1 x 300 GB system + 11 x 899 GB data.

records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 64424509440

storage.config:
/dev/sdc id=cache.disk.1

I ran into a dead-loop situation with ATS 5.3.1 on two production hosts; diags.log shows a burst of warnings like this:
{code}
[Mar 16 13:04:32.730] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.732] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.733] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.735] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.737] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.739] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.742] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.744] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.747] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.750] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.753] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
[Mar 16 13:04:32.756] Server {0x2b8ffc544700} WARNING:  cache directory overflow on '/dev/sde' segment 4, purging...
{code}
ATS restarts every several hours, and the TIME_WAIT connection count is far above the ESTABLISHED TCP connection count.
The following is the current directory snapshot of one of those hosts:
{code}
Directory for [cache.disk.1 172032:109741163]
Bytes: 8573600
Segments:  14
Buckets:   15310
Entries:   857360
Full:  852904
Empty: 4085
Stale: 0
Free:  371
Bucket Fullness:   4085158003204441621 
  42175331372223212605 
   
Segment Fullness:  60903 60918 60914 60947 60956 
   60947 60872 60943 60918 60927 
   60858 60917 60927 60957 
Freelist Fullness:45302713 0 
   789 53212 
  83 020 8 
{code}
I wonder why. Can anyone help? Thanks a lot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Updated] (TS-4279) ats fallen into dead loop for cache directory overflow

2016-03-19 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4279:
---
Fix Version/s: 6.2.0

> ats fallen into dead loop for cache directory overflow
> --
>
> Key: TS-4279
> URL: https://issues.apache.org/jira/browse/TS-4279
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 5.3.1
>Reporter: taoyunxing
> Fix For: 6.2.0
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Updated] (TS-4254) abort failure when enable --enable-linux-native-aio in production host

2016-03-08 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4254:
---
Description: 
ATS: 5.3.2, CentOS 6.6
Compile:
yum -y install libaio libaio-devel
./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
install -j 4

I see the following assert failure when ATS uses Linux native AIO; it occurred many times within 12 hours:
{code}
FATAL: HttpTransactHeaders.cc:180: failed assert `!new_hdr->valid()`
traffic_server: Aborted (Signal sent by tkill() 20759 495)
traffic_server - STACK TRACE:
/opt/ats/bin/traffic_server(crash_logger_invoke(int, siginfo*, 
void*)+0xc3)[0x5036f6]
/lib64/libc.so.6[0x3f1f2326a0]
/lib64/libc.so.6(gsignal+0x35)[0x3f1f232625]
/lib64/libc.so.6(abort+0x175)[0x3f1f233e05]
/opt/ats/lib/libtsutil.so.5(ink_fatal_va(char const*, 
__va_list_tag*)+0x0)[0x2b6a5013a37d]
/opt/ats/lib/libtsutil.so.5(ink_fatal(char const*, ...)+0x0)[0x2b6a5013a434]
/opt/ats/lib/libtsutil.so.5(ink_pfatal(char const*, ...)+0x0)[0x2b6a5013a4f9]
/opt/ats/lib/libtsutil.so.5(+0x39fa2)[0x2b6a50137fa2]
/opt/ats/bin/traffic_server(HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
HTTPHdr*, bool, long)+0x73)[0x62eddb]
/opt/ats/bin/traffic_server(HttpTransact::build_request(HttpTransact::State*, 
HTTPHdr*, HTTPHdr*, HTTPVersion)+0x12b)[0x624c9b]
/opt/ats/bin/traffic_server(HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*)+0x6a4)[0x612e72]
/opt/ats/bin/traffic_server(HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*))+0x71)[0x5f7db1]
/opt/ats/bin/traffic_server(HttpSM::handle_api_return()+0xfa)[0x5e3c82]
/opt/ats/bin/traffic_server(HttpSM::do_api_callout()+0x40)[0x5fe7c6]
/opt/ats/bin/traffic_server(HttpSM::set_next_state()+0x5e)[0x5f7f54]
/opt/ats/bin/traffic_server(HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*))+0x1ae)[0x5f7eee]
/opt/ats/bin/traffic_server(HttpSM::do_hostdb_lookup()+0x71e)[0x5ed4b6]
/opt/ats/bin/traffic_server(HttpSM::set_next_state()+0xb34)[0x5f8a2a]
/opt/ats/bin/traffic_server(HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*))+0x1ae)[0x5f7eee]
/opt/ats/bin/traffic_server(HttpSM::handle_server_setup_error(int, 
void*)+0x65a)[0x5f227e]
/opt/ats/bin/traffic_server(HttpSM::state_send_server_request_header(int, 
void*)+0x57c)[0x5e53b8]
/opt/ats/bin/traffic_server(HttpSM::main_handler(int, void*)+0x270)[0x5e79a4]
/opt/ats/bin/traffic_server(Continuation::handleEvent(int, 
void*)+0x6c)[0x506956]
/opt/ats/bin/traffic_server[0x779c97]
/opt/ats/bin/traffic_server(UnixNetVConnection::mainEvent(int, 
Event*)+0x4de)[0x77d584]
/opt/ats/bin/traffic_server(Continuation::handleEvent(int, 
void*)+0x6c)[0x506956]
/opt/ats/bin/traffic_server(InactivityCop::check_inactivity(int, 
Event*)+0x4d3)[0x773ed9]
/opt/ats/bin/traffic_server(Continuation::handleEvent(int, 
void*)+0x6c)[0x506956]
/opt/ats/bin/traffic_server(EThread::process_event(Event*, int)+0x136)[0x79bee6]
/opt/ats/bin/traffic_server(EThread::execute()+0x289)[0x79c2f5]
/opt/ats/bin/traffic_server[0x79b3ec]
/lib64/libpthread.so.0[0x3f1f6079d1]
/lib64/libc.so.6(clone+0x6d)[0x3f1f2e89dd]
{code}
or
{code}
Thread 20825, [ET_NET 41]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
70x2b6a50137fa2 _ink_assert + 0x32
80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
HTTPHdr*, bool, long) + 0x73
90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
10   0x00612e72 
HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x71
12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
14   0x005f7f54 HttpSM::set_next_state() + 0x5e
15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
17   0x005f8a2a HttpSM::set_next_state() + 0xb34
18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) + 
0x57c
21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
23   0x00779c97 read_signal_and_update(int, UnixNetVConnection*) + 0x5a
24   0x0077d584 UnixNetVConnection::mainEvent(int, 

[jira] [Updated] (TS-4254) abort failure when enable --enable-linux-native-aio in production host

2016-03-08 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4254:
---
Description: 
ATS: 5.3.2, CentOS 6.6
compile:
yum -y install libaio libaio-devel
./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
install -j 4

I found assert errors when ATS uses Linux native AIO; they occur many times in 
12 hours!
{code}
FATAL: HttpTransactHeaders.cc:180: failed assert `!new_hdr->valid()`
Thread 20825, [ET_NET 41]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
70x2b6a50137fa2 _ink_assert + 0x32
80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
HTTPHdr*, bool, long) + 0x73
90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
10   0x00612e72 
HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x71
12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
14   0x005f7f54 HttpSM::set_next_state() + 0x5e
15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
17   0x005f8a2a HttpSM::set_next_state() + 0xb34
18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) + 
0x57c
21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
23   0x00779c97 read_signal_and_update(int, UnixNetVConnection*) + 0x5a
24   0x0077d584 UnixNetVConnection::mainEvent(int, Event*) + 0x4de
25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
26   0x00773ed9 InactivityCop::check_inactivity(int, Event*) + 0x4d3
27   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
28   0x0079bee6 EThread::process_event(Event*, int) + 0x136
29   0x0079c2f5 EThread::execute() + 0x289
30   0x0079b3ec spawn_thread_internal(void*) + 0x75
31   0x003f1f6079d1 start_thread + 0xd1
32   0x003f1f2e89dd clone + 0x6d
33   0x 0x0 + 0x6d
{code}
Can anyone give me some advice? Thanks in advance!

  was:
ATS: 5.3.2, CentOS 6.6
compile:
yum -y install libaio libaio-devel
./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
install -j 4

I found assert errors when ATS uses Linux native AIO; they occur many times in 
12 hours!
{code}
Thread 20825, [ET_NET 41]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
70x2b6a50137fa2 _ink_assert + 0x32
80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
HTTPHdr*, bool, long) + 0x73
90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
10   0x00612e72 
HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x71
12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
14   0x005f7f54 HttpSM::set_next_state() + 0x5e
15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
17   0x005f8a2a HttpSM::set_next_state() + 0xb34
18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) + 
0x57c
21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
23   0x00779c97 read_signal_and_update(int, UnixNetVConnection*) + 0x5a
24   0x0077d584 UnixNetVConnection::mainEvent(int, Event*) + 0x4de
25 

[jira] [Updated] (TS-4254) abort failure when enable --enable-linux-native-aio in production host

2016-03-08 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4254:
---
Description: 
ATS: 5.3.2, CentOS 6.6
compile:
yum -y install libaio libaio-devel
./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
install -j 4

I found assert errors when ATS uses Linux native AIO; they occur many times in 
12 hours!
{code}
Thread 20825, [ET_NET 41]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
70x2b6a50137fa2 _ink_assert + 0x32
80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
HTTPHdr*, bool, long) + 0x73
90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
10   0x00612e72 
HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x71
12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
14   0x005f7f54 HttpSM::set_next_state() + 0x5e
15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
17   0x005f8a2a HttpSM::set_next_state() + 0xb34
18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) + 
0x57c
21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
23   0x00779c97 read_signal_and_update(int, UnixNetVConnection*) + 0x5a
24   0x0077d584 UnixNetVConnection::mainEvent(int, Event*) + 0x4de
25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
26   0x00773ed9 InactivityCop::check_inactivity(int, Event*) + 0x4d3
27   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
28   0x0079bee6 EThread::process_event(Event*, int) + 0x136
29   0x0079c2f5 EThread::execute() + 0x289
30   0x0079b3ec spawn_thread_internal(void*) + 0x75
31   0x003f1f6079d1 start_thread + 0xd1
32   0x003f1f2e89dd clone + 0x6d
33   0x 0x0 + 0x6d
{code}
Can anyone give me some advice? Thanks in advance!

  was:
ATS: 5.3.2, CentOS 6.6
compile:
yum -y install libaio libaio-devel
./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
install -j 4

I found 2 types of assert errors when ATS uses Linux native AIO; they occur 
many times in 12 hours!
{code}
Thread 24898, [ET_NET 38]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b81cd71937d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b81cd719434 ink_fatal(char const*, ...) + (nil)
60x2b81cd7194f9 ink_pfatal(char const*, ...) + (nil)
70x2b81cd716fa2 _ink_assert + 0x32
80x0066a6b6 LogAccess::marshal_mem(char*, char const*, int, int) + 
0x62
90x0066d829 
LogAccessHttp::marshal_client_req_unmapped_url_host(char*) + 0x75
10   0x0067b815 LogField::marshal(LogAccess*, char*) + 0x73
11   0x0067c32f LogFieldList::marshal(LogAccess*, char*) + 0x51
12   0x0068b7dc LogObject::log(LogAccess*, char const*) + 0x6a0
13   0x0068ddf6 LogObjectManager::log(LogAccess*) + 0x72
14   0x006670e2 Log::access(LogAccess*) + 0x2a0
15   0x005f6c37 HttpSM::kill_this() + 0x545
16   0x005e7a92 HttpSM::main_handler(int, void*) + 0x35e
17   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
18   0x00635e81 HttpTunnel::main_handler(int, void*) + 0x15d
19   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
20   0x00779e7b write_signal_and_update(int, UnixNetVConnection*) + 0x5a
21   0x0077a093 write_signal_done(int, NetHandler*, 
UnixNetVConnection*) + 0x32
22   0x0077b206 write_to_net_io(NetHandler*, UnixNetVConnection*, 
EThread*) + 0x8a2
23   0x0077a95d write_to_net(NetHandler*, UnixNetVConnection*, 
EThread*) + 0x80
24   0x00773672 NetHandler::mainNetEvent(int, Event*) + 0x7ae
25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
26   0x0079bee6 EThread::process_event(Event*, int) + 0x136
27   0x0079c507 

[jira] [Commented] (TS-4262) Some imprecise statistics of stats_over_http plugin

2016-03-08 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15184656#comment-15184656
 ] 

taoyunxing commented on TS-4262:


I reviewed the code in DNS.cc and found that the "proxy.process.dns.lookup_successes" 
stat is not precise. I moved the line:
{code}
DNS_INCREMENT_DYN_STAT(dns_total_lookups_stat);
{code}
under the line:
{code}
DNS_INCREMENT_DYN_STAT(dns_in_flight_stat);
{code}
in the file trafficserver-5.3.2/iocore/dns/DNS.cc, then compiled ATS 5.3.2 
again, enabled the stats_over_http plugin, and got the desired result:
{code}
"proxy.process.hostdb.total_entries": "135000",
"proxy.process.hostdb.total_lookups": "21",
"proxy.process.hostdb.total_hits": "10",
"proxy.process.hostdb.ttl": "0.00",
"proxy.process.hostdb.ttl_expires": "2",
"proxy.process.hostdb.re_dns_on_reload": "0",
"proxy.process.hostdb.bytes": "25935872",
"proxy.process.dns.total_dns_lookups": "11",
"proxy.process.dns.lookup_avg_time": "0",
"proxy.process.dns.lookup_successes": "11",
"proxy.process.dns.fail_avg_time": "0",
"proxy.process.dns.lookup_failures": "0",
"proxy.process.dns.retries": "0",
"proxy.process.dns.max_retries_exceeded": "0",
{code}
So, the conclusion: moving one line of code fixes this bug!
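
For clarity, here is a minimal sketch of the reordering described above. The surrounding context in iocore/dns/DNS.cc and the explanation in the comments are my paraphrase of this comment, not the actual upstream code or diff, so treat it as illustrative only:
{code}
// Illustrative sketch only -- not the exact context from DNS.cc.

// Original order (per the comment above): the lookup is counted before the
// entry is actually put in flight.
DNS_INCREMENT_DYN_STAT(dns_total_lookups_stat);
// ... entry setup; some code paths may never reach the in-flight point ...
DNS_INCREMENT_DYN_STAT(dns_in_flight_stat);

// Proposed order: count the lookup only where it really goes in flight, so
// that proxy.process.dns.total_dns_lookups lines up with
// lookup_successes + lookup_failures, as in the result shown above.
DNS_INCREMENT_DYN_STAT(dns_in_flight_stat);
DNS_INCREMENT_DYN_STAT(dns_total_lookups_stat);
{code}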

> Some imprecise statistics of stats_over_http plugin
> ---
>
> Key: TS-4262
> URL: https://issues.apache.org/jira/browse/TS-4262
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 5.3.2
>Reporter: taoyunxing
> Fix For: 6.1.2
>
>
> I found the statistics of HostDB and DNS are not precise, see the following:
> {code}
> proxy.process.hostdb.total_lookups  286951594
> proxy.process.hostdb.total_hits 277568738
> proxy.process.dns.total_dns_lookups23238887
> proxy.process.dns.lookup_successes8952696
> proxy.process.dns.lookup_failures 332447
> {code}
> calculation:
> {code}
> 286951594 - 277568738 = 9382856
> 8952696 +  332447 = 9285143
> 9382856 - 9285143 = 97713
> {code}
> I have two questions:
> 1. Why is it that
> proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?
> 2. Why is it that
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
> proxy.process.dns.total_dns_lookups?
> Can anyone show me the reason? Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4262) Some imprecise statistics of stats_over_http plugin

2016-03-07 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4262:
---
Fix Version/s: 6.1.2

> Some imprecise statistics of stats_over_http plugin
> ---
>
> Key: TS-4262
> URL: https://issues.apache.org/jira/browse/TS-4262
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 5.3.2
>Reporter: taoyunxing
> Fix For: 6.1.2
>
>
> I found the statistics of HostDB and DNS are not precise, see the following:
> {code}
> proxy.process.hostdb.total_lookups  286951594
> proxy.process.hostdb.total_hits 277568738
> proxy.process.dns.total_dns_lookups23238887
> proxy.process.dns.lookup_successes8952696
> proxy.process.dns.lookup_failures 332447
> {code}
> calculation:
> {code}
> 286951594 - 277568738 = 9382856
> 8952696 +  332447 = 9285143
> 9382856 - 9285143 = 97713
> {code}
> I have two questions:
> 1. Why is it that
> proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?
> 2. Why is it that
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
> proxy.process.dns.total_dns_lookups?
> Can anyone show me the reason? Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4262) Some imprecise statistics of stats_over_http plugin

2016-03-07 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4262:
---
Affects Version/s: 5.3.2

> Some imprecise statistics of stats_over_http plugin
> ---
>
> Key: TS-4262
> URL: https://issues.apache.org/jira/browse/TS-4262
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 5.3.2
>Reporter: taoyunxing
>
> I found the statistics of HostDB and DNS are not precise, see the following:
> {code}
> proxy.process.hostdb.total_lookups  286951594
> proxy.process.hostdb.total_hits 277568738
> proxy.process.dns.total_dns_lookups23238887
> proxy.process.dns.lookup_successes8952696
> proxy.process.dns.lookup_failures 332447
> {code}
> calculation:
> {code}
> 286951594 - 277568738 = 9382856
> 8952696 +  332447 = 9285143
> 9382856 - 9285143 = 97713
> {code}
> I have two questions:
> 1. Why is it that
> proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?
> 2. Why is it that
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
> proxy.process.dns.total_dns_lookups?
> Can anyone show me the reason? Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4262) Some imprecise statistics of stats_over_http plugin

2016-03-07 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4262:
---
Description: 
I found the statistics of HostDB and DNS are not precise, see the following:
{code}
proxy.process.hostdb.total_lookups  286951594
proxy.process.hostdb.total_hits 277568738

proxy.process.dns.total_dns_lookups23238887
proxy.process.dns.lookup_successes8952696
proxy.process.dns.lookup_failures 332447
{code}

calculation:
{code}
286951594 - 277568738 = 9382856
8952696 +  332447 = 9285143
9382856 - 9285143 = 97713
{code}

I have two questions:
1. Why is it that
proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?

2. Why is it that
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
proxy.process.dns.total_dns_lookups?

Can anyone show me the reason? Thanks in advance!

  was:
I found the statistics of HostDB and DNS are not precise, see the following:
{code}
proxy.process.hostdb.total_lookups286951594
proxy.process.hostdb.total_hits   277568738

proxy.process.dns.total_dns_lookups23238887
proxy.process.dns.lookup_successes  8952696
proxy.process.dns.lookup_failures 332447
{code}

calculation:
286951594 - 277568738 = 9382856
8952696 +  332447 = 9285143
9382856 - 9285143 = 97713

I have two questions:
1. Why is it that
proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?

2. Why is it that
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
proxy.process.dns.total_dns_lookups?

Can anyone show me the reason? Thanks in advance!


> Some imprecise statistics of stats_over_http plugin
> ---
>
> Key: TS-4262
> URL: https://issues.apache.org/jira/browse/TS-4262
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: taoyunxing
>
> I found the statistics of HostDB and DNS are not precise, see the following:
> {code}
> proxy.process.hostdb.total_lookups  286951594
> proxy.process.hostdb.total_hits 277568738
> proxy.process.dns.total_dns_lookups23238887
> proxy.process.dns.lookup_successes8952696
> proxy.process.dns.lookup_failures 332447
> {code}
> calculation:
> {code}
> 286951594 - 277568738 = 9382856
> 8952696 +  332447 = 9285143
> 9382856 - 9285143 = 97713
> {code}
> I have two questions:
> 1. Why is it that
> proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?
> 2. Why is it that
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
> proxy.process.dns.total_dns_lookups?
> Can anyone show me the reason? Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4262) Some imprecise statistics of stats_over_http plugin

2016-03-07 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4262:
---
Description: 
I found the statistics of HostDB and DNS are not precise, see the following:
{code}
proxy.process.hostdb.total_lookups286951594
proxy.process.hostdb.total_hits   277568738

proxy.process.dns.total_dns_lookups23238887
proxy.process.dns.lookup_successes  8952696
proxy.process.dns.lookup_failures 332447
{code}

calculation:
286951594 - 277568738 = 9382856
8952696 +  332447 = 9285143
9382856 - 9285143 = 97713

I have two questions:
1. Why is it that
proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?

2. Why is it that
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
proxy.process.dns.total_dns_lookups?

Can anyone show me the reason? Thanks in advance!

  was:
I found the statistics of HostDB and DNS are not precise, see the following:
{code}
proxy.process.hostdb.total_lookups 286951594
proxy.process.hostdb.total_hits  277568738

proxy.process.dns.total_dns_lookups 23238887
proxy.process.dns.lookup_successes  8952696
proxy.process.dns.lookup_failures  332447
{code}

calculation:
286951594 - 277568738 = 9382856
8952696 +332447 = 9285143
9382856 - 9285143 = 97713

I have two questions:
1. Why is it that
proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?

2. Why is it that
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
proxy.process.dns.total_dns_lookups?

Can anyone show me the reason? Thanks in advance!


> Some imprecise statistics of stats_over_http plugin
> ---
>
> Key: TS-4262
> URL: https://issues.apache.org/jira/browse/TS-4262
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Plugins
>Reporter: taoyunxing
>
> I found the statistics of HostDB and DNS are not precise, see the following:
> {code}
> proxy.process.hostdb.total_lookups286951594
> proxy.process.hostdb.total_hits   277568738
> proxy.process.dns.total_dns_lookups23238887
> proxy.process.dns.lookup_successes  8952696
> proxy.process.dns.lookup_failures 332447
> {code}
> calculation:
> 286951594 - 277568738 = 9382856
> 8952696 +  332447 = 9285143
> 9382856 - 9285143 = 97713
> I have two questions:
> 1. Why is it that
> proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?
> 2. Why is it that
> proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
> proxy.process.dns.total_dns_lookups?
> Can anyone show me the reason? Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-4262) Some imprecise statistics of stats_over_http plugin

2016-03-07 Thread taoyunxing (JIRA)
taoyunxing created TS-4262:
--

 Summary: Some imprecise statistics of stats_over_http plugin
 Key: TS-4262
 URL: https://issues.apache.org/jira/browse/TS-4262
 Project: Traffic Server
  Issue Type: Bug
  Components: Plugins
Reporter: taoyunxing


I found the statistics of HostDB and DNS are not precise, see the following:
{code}
proxy.process.hostdb.total_lookups 286951594
proxy.process.hostdb.total_hits  277568738

proxy.process.dns.total_dns_lookups 23238887
proxy.process.dns.lookup_successes  8952696
proxy.process.dns.lookup_failures  332447
{code}

calculation:
286951594 - 277568738 = 9382856
8952696 +332447 = 9285143
9382856 - 9285143 = 97713

I have two questions:
1. Why is it that
proxy.process.hostdb.total_lookups - proxy.process.hostdb.total_hits != 
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures?

2. Why is it that
proxy.process.dns.lookup_successes + proxy.process.dns.lookup_failures != 
proxy.process.dns.total_dns_lookups?

Can anyone show me the reason? Thanks in advance!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4254) abort failure when enable --enable-linux-native-aio in production host

2016-03-03 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4254:
---
Fix Version/s: 6.2.0

> abort failure when enable --enable-linux-native-aio in production host
> --
>
> Key: TS-4254
> URL: https://issues.apache.org/jira/browse/TS-4254
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.2
>Reporter: taoyunxing
>  Labels: AIO
> Fix For: 6.2.0
>
>
> ATS: 5.3.2, CentOS 6.6
> compile:
> yum -y install libaio libaio-devel
> ./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
> install -j 4
> I found 2 types of assert errors when ATS uses Linux native AIO; they occur 
> many times in 12 hours!
> {code}
> Thread 24898, [ET_NET 38]:
> 00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
> 10x003f1f2326a0 __restore_rt + (nil)
> 20x003f1f232625 gsignal + 0x35
> 30x003f1f233e05 abort + 0x175
> 40x2b81cd71937d ink_fatal_va(char const*, __va_list_tag*) + (nil)
> 50x2b81cd719434 ink_fatal(char const*, ...) + (nil)
> 60x2b81cd7194f9 ink_pfatal(char const*, ...) + (nil)
> 70x2b81cd716fa2 _ink_assert + 0x32
> 80x0066a6b6 LogAccess::marshal_mem(char*, char const*, int, int) 
> + 0x62
> 90x0066d829 
> LogAccessHttp::marshal_client_req_unmapped_url_host(char*) + 0x75
> 10   0x0067b815 LogField::marshal(LogAccess*, char*) + 0x73
> 11   0x0067c32f LogFieldList::marshal(LogAccess*, char*) + 0x51
> 12   0x0068b7dc LogObject::log(LogAccess*, char const*) + 0x6a0
> 13   0x0068ddf6 LogObjectManager::log(LogAccess*) + 0x72
> 14   0x006670e2 Log::access(LogAccess*) + 0x2a0
> 15   0x005f6c37 HttpSM::kill_this() + 0x545
> 16   0x005e7a92 HttpSM::main_handler(int, void*) + 0x35e
> 17   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 18   0x00635e81 HttpTunnel::main_handler(int, void*) + 0x15d
> 19   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 20   0x00779e7b write_signal_and_update(int, UnixNetVConnection*) + 
> 0x5a
> 21   0x0077a093 write_signal_done(int, NetHandler*, 
> UnixNetVConnection*) + 0x32
> 22   0x0077b206 write_to_net_io(NetHandler*, UnixNetVConnection*, 
> EThread*) + 0x8a2
> 23   0x0077a95d write_to_net(NetHandler*, UnixNetVConnection*, 
> EThread*) + 0x80
> 24   0x00773672 NetHandler::mainNetEvent(int, Event*) + 0x7ae
> 25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 26   0x0079bee6 EThread::process_event(Event*, int) + 0x136
> 27   0x0079c507 EThread::execute() + 0x49b
> 28   0x0079b3ec spawn_thread_internal(void*) + 0x75
> 29   0x003f1f6079d1 start_thread + 0xd1
> 30   0x003f1f2e89dd clone + 0x6d
> 31   0x 0x0 + 0x6d
> {code}
> 
> {code}
> Thread 20825, [ET_NET 41]:
> 00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
> 10x003f1f2326a0 __restore_rt + (nil)
> 20x003f1f232625 gsignal + 0x35
> 30x003f1f233e05 abort + 0x175
> 40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
> 50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
> 60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
> 70x2b6a50137fa2 _ink_assert + 0x32
> 80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
> HTTPHdr*, bool, long) + 0x73
> 90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
> HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
> 10   0x00612e72 
> HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
> 11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
> (*)(HttpTransact::State*)) + 0x71
> 12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
> 13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
> 14   0x005f7f54 HttpSM::set_next_state() + 0x5e
> 15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
> (*)(HttpTransact::State*)) + 0x1ae
> 16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
> 17   0x005f8a2a HttpSM::set_next_state() + 0xb34
> 18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
> (*)(HttpTransact::State*)) + 0x1ae
> 19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
> 20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) 
> + 0x57c
> 21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
> 22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 23   0x00779c97 

[jira] [Updated] (TS-4254) abort failure when enable --enable-linux-native-aio in production host

2016-03-03 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4254:
---
Affects Version/s: 5.3.2

> abort failure when enable --enable-linux-native-aio in production host
> --
>
> Key: TS-4254
> URL: https://issues.apache.org/jira/browse/TS-4254
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.2
>Reporter: taoyunxing
>  Labels: AIO
>
> ATS: 5.3.2, CentOS 6.6
> compile:
> yum -y install libaio libaio-devel
> ./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
> install -j 4
> I found 2 types of assert errors when ATS uses Linux native AIO; they occur 
> many times in 12 hours!
> {code}
> Thread 24898, [ET_NET 38]:
> 00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
> 10x003f1f2326a0 __restore_rt + (nil)
> 20x003f1f232625 gsignal + 0x35
> 30x003f1f233e05 abort + 0x175
> 40x2b81cd71937d ink_fatal_va(char const*, __va_list_tag*) + (nil)
> 50x2b81cd719434 ink_fatal(char const*, ...) + (nil)
> 60x2b81cd7194f9 ink_pfatal(char const*, ...) + (nil)
> 70x2b81cd716fa2 _ink_assert + 0x32
> 80x0066a6b6 LogAccess::marshal_mem(char*, char const*, int, int) 
> + 0x62
> 90x0066d829 
> LogAccessHttp::marshal_client_req_unmapped_url_host(char*) + 0x75
> 10   0x0067b815 LogField::marshal(LogAccess*, char*) + 0x73
> 11   0x0067c32f LogFieldList::marshal(LogAccess*, char*) + 0x51
> 12   0x0068b7dc LogObject::log(LogAccess*, char const*) + 0x6a0
> 13   0x0068ddf6 LogObjectManager::log(LogAccess*) + 0x72
> 14   0x006670e2 Log::access(LogAccess*) + 0x2a0
> 15   0x005f6c37 HttpSM::kill_this() + 0x545
> 16   0x005e7a92 HttpSM::main_handler(int, void*) + 0x35e
> 17   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 18   0x00635e81 HttpTunnel::main_handler(int, void*) + 0x15d
> 19   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 20   0x00779e7b write_signal_and_update(int, UnixNetVConnection*) + 
> 0x5a
> 21   0x0077a093 write_signal_done(int, NetHandler*, 
> UnixNetVConnection*) + 0x32
> 22   0x0077b206 write_to_net_io(NetHandler*, UnixNetVConnection*, 
> EThread*) + 0x8a2
> 23   0x0077a95d write_to_net(NetHandler*, UnixNetVConnection*, 
> EThread*) + 0x80
> 24   0x00773672 NetHandler::mainNetEvent(int, Event*) + 0x7ae
> 25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 26   0x0079bee6 EThread::process_event(Event*, int) + 0x136
> 27   0x0079c507 EThread::execute() + 0x49b
> 28   0x0079b3ec spawn_thread_internal(void*) + 0x75
> 29   0x003f1f6079d1 start_thread + 0xd1
> 30   0x003f1f2e89dd clone + 0x6d
> 31   0x 0x0 + 0x6d
> {code}
> 
> {code}
> Thread 20825, [ET_NET 41]:
> 00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
> 10x003f1f2326a0 __restore_rt + (nil)
> 20x003f1f232625 gsignal + 0x35
> 30x003f1f233e05 abort + 0x175
> 40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
> 50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
> 60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
> 70x2b6a50137fa2 _ink_assert + 0x32
> 80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
> HTTPHdr*, bool, long) + 0x73
> 90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
> HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
> 10   0x00612e72 
> HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
> 11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
> (*)(HttpTransact::State*)) + 0x71
> 12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
> 13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
> 14   0x005f7f54 HttpSM::set_next_state() + 0x5e
> 15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
> (*)(HttpTransact::State*)) + 0x1ae
> 16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
> 17   0x005f8a2a HttpSM::set_next_state() + 0xb34
> 18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
> (*)(HttpTransact::State*)) + 0x1ae
> 19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
> 20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) 
> + 0x57c
> 21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
> 22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
> 23   0x00779c97 read_signal_and_update(int, UnixNetVConnection*) + 

[jira] [Created] (TS-4254) abort failure when enable --enable-linux-native-aio in production host

2016-03-03 Thread taoyunxing (JIRA)
taoyunxing created TS-4254:
--

 Summary: abort failure when enable --enable-linux-native-aio in 
production host
 Key: TS-4254
 URL: https://issues.apache.org/jira/browse/TS-4254
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: taoyunxing


ATS: 5.3.2, CentOS 6.6
compile:
yum -y install libaio libaio-devel
./configure --prefix=/opt/ats --enable-linux-native-aio && make -j 4 && make 
install -j 4

I found 2 types of assert errors when ATS uses Linux native AIO; they occur 
many times in 12 hours!
{code}
Thread 24898, [ET_NET 38]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b81cd71937d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b81cd719434 ink_fatal(char const*, ...) + (nil)
60x2b81cd7194f9 ink_pfatal(char const*, ...) + (nil)
70x2b81cd716fa2 _ink_assert + 0x32
80x0066a6b6 LogAccess::marshal_mem(char*, char const*, int, int) + 
0x62
90x0066d829 
LogAccessHttp::marshal_client_req_unmapped_url_host(char*) + 0x75
10   0x0067b815 LogField::marshal(LogAccess*, char*) + 0x73
11   0x0067c32f LogFieldList::marshal(LogAccess*, char*) + 0x51
12   0x0068b7dc LogObject::log(LogAccess*, char const*) + 0x6a0
13   0x0068ddf6 LogObjectManager::log(LogAccess*) + 0x72
14   0x006670e2 Log::access(LogAccess*) + 0x2a0
15   0x005f6c37 HttpSM::kill_this() + 0x545
16   0x005e7a92 HttpSM::main_handler(int, void*) + 0x35e
17   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
18   0x00635e81 HttpTunnel::main_handler(int, void*) + 0x15d
19   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
20   0x00779e7b write_signal_and_update(int, UnixNetVConnection*) + 0x5a
21   0x0077a093 write_signal_done(int, NetHandler*, 
UnixNetVConnection*) + 0x32
22   0x0077b206 write_to_net_io(NetHandler*, UnixNetVConnection*, 
EThread*) + 0x8a2
23   0x0077a95d write_to_net(NetHandler*, UnixNetVConnection*, 
EThread*) + 0x80
24   0x00773672 NetHandler::mainNetEvent(int, Event*) + 0x7ae
25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
26   0x0079bee6 EThread::process_event(Event*, int) + 0x136
27   0x0079c507 EThread::execute() + 0x49b
28   0x0079b3ec spawn_thread_internal(void*) + 0x75
29   0x003f1f6079d1 start_thread + 0xd1
30   0x003f1f2e89dd clone + 0x6d
31   0x 0x0 + 0x6d
{code}

{code}
Thread 20825, [ET_NET 41]:
00x005036cb crash_logger_invoke(int, siginfo*, void*) + 0x98
10x003f1f2326a0 __restore_rt + (nil)
20x003f1f232625 gsignal + 0x35
30x003f1f233e05 abort + 0x175
40x2b6a5013a37d ink_fatal_va(char const*, __va_list_tag*) + (nil)
50x2b6a5013a434 ink_fatal(char const*, ...) + (nil)
60x2b6a5013a4f9 ink_pfatal(char const*, ...) + (nil)
70x2b6a50137fa2 _ink_assert + 0x32
80x0062eddb HttpTransactHeaders::copy_header_fields(HTTPHdr*, 
HTTPHdr*, bool, long) + 0x73
90x00624c9b HttpTransact::build_request(HttpTransact::State*, 
HTTPHdr*, HTTPHdr*, HTTPVersion) + 0x12b
10   0x00612e72 
HttpTransact::HandleCacheOpenReadMiss(HttpTransact::State*) + 0x6a4
11   0x005f7db1 HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x71
12   0x005e3c82 HttpSM::handle_api_return() + 0xfa
13   0x005fe7c6 HttpSM::do_api_callout() + 0x40
14   0x005f7f54 HttpSM::set_next_state() + 0x5e
15   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
16   0x005ed4b6 HttpSM::do_hostdb_lookup() + 0x71e
17   0x005f8a2a HttpSM::set_next_state() + 0xb34
18   0x005f7eee HttpSM::call_transact_and_set_next_state(void 
(*)(HttpTransact::State*)) + 0x1ae
19   0x005f227e HttpSM::handle_server_setup_error(int, void*) + 0x65a
20   0x005e53b8 HttpSM::state_send_server_request_header(int, void*) + 
0x57c
21   0x005e79a4 HttpSM::main_handler(int, void*) + 0x270
22   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
23   0x00779c97 read_signal_and_update(int, UnixNetVConnection*) + 0x5a
24   0x0077d584 UnixNetVConnection::mainEvent(int, Event*) + 0x4de
25   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
26   0x00773ed9 InactivityCop::check_inactivity(int, Event*) + 0x4d3
27   0x00506956 Continuation::handleEvent(int, void*) + 0x6c
28   0x0079bee6 EThread::process_event(Event*, int) + 0x136
29   0x0079c2f5 EThread::execute() + 0x289
30   

[jira] [Updated] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4246:
---
Description: 
CPU: 40 cores, Mem: 132 GB, Hard Disk: 300 GB system + 11 * 2 TB data (raw), 
Network Card: 4 * GbE bonded, ATS: 5.3.1 or 6.1.1
Recently I found a possible memory leak on a production host with ATS versions 
from 5.3.1 to 6.1.1; the memory tracker log is as follows:

{code}

 allocated  |in-use  | type size  |   free list name
|||--
   67108864 |   33554432 |2097152 | 
memory/ioBufAllocator[14]
41271951360 |40564162560 |1048576 | 
memory/ioBufAllocator[13]
  570425344 |  566755328 | 524288 | 
memory/ioBufAllocator[12]
  276824064 |  270794752 | 262144 | 
memory/ioBufAllocator[11]
  155189248 |  150994944 | 131072 | 
memory/ioBufAllocator[10]
  113246208 |  111607808 |  65536 | memory/ioBufAllocator[9]
 1319108608 | 1278607360 |  32768 | memory/ioBufAllocator[8]
   39845888 |   39501824 |  16384 | memory/ioBufAllocator[7]
   78381056 |   55869440 |   8192 | memory/ioBufAllocator[6]
  228589568 |  212971520 |   4096 | memory/ioBufAllocator[5]
 262144 |  0 |   2048 | memory/ioBufAllocator[4]
 131072 |  0 |   1024 | memory/ioBufAllocator[3]
  65536 |  0 |512 | memory/ioBufAllocator[2]
 163840 |256 |256 | memory/ioBufAllocator[1]
1785856 |768 |128 | memory/ioBufAllocator[0]
  0 |  0 |512 | memory/FetchSMAllocator
  0 |  0 |592 | 
memory/ICPRequestCont_allocator
  0 |  0 |128 | 
memory/ICPPeerReadContAllocator
  0 |  0 |432 | 
memory/PeerReadDataAllocator
  0 |  0 | 32 | 
memory/MIMEFieldSDKHandle
  0 |  0 |240 | memory/INKVConnAllocator
  0 |  0 | 96 | memory/INKContAllocator
 503808 | 492864 | 32 | memory/apiHookAllocator
  0 |  0 |128 | 
memory/socksProxyAllocator
   17211392 |   17169856 |704 | 
memory/httpClientSessionAllocator
  180633600 |  170085888 |   8064 | memory/httpSMAllocator
2236416 |2229024 |224 | 
memory/httpServerSessionAllocator
  0 |  0 | 48 | 
memory/CacheLookupHttpConfigAllocator
  0 |  0 |848 | 
memory/http2ClientSessionAllocator
  0 |  0 |128 | memory/RemapPluginsAlloc
  0 |  0 | 48 | 
memory/CongestRequestParamAllocator
  0 |  0 |160 | 
memory/CongestionDBContAllocator
4325376 |4245248 |256 | 
memory/httpCacheAltAllocator
  155451392 |  146763776 |   2048 | memory/hdrStrHeap
  269221888 |  253577216 |   2048 | memory/hdrHeap
  0 |  0 |128 | 
memory/OneWayTunnelAllocator
  0 |  0 | 96 | 
memory/hostDBFileContAllocator
 296960 |   4640 |   2320 | 
memory/hostDBContAllocator
 135424 |  33856 |  33856 | memory/dnsBufAllocator
 163840 |   1280 |   1280 | memory/dnsEntryAllocator
  0 |  0 | 16 | 
memory/DNSRequestDataAllocator
  0 |  0 |112 | 
memory/inControlAllocator
  0 |  0 |128 | 
memory/outControlAllocator
  0 |  0 | 32 | memory/byteBankAllocator
  0 |  0 |592 | 
memory/clusterVCAllocator
  0 |  0 | 48 | 
memory/ClusterVConnectionCache::Entry
  0 |  0 |576 | 
memory/cacheContAllocator
  0 |  0 | 48 | memory/evacuationKey
  0 |  0 | 64 | memory/cacheRemoveCont
 380928 | 380928 | 96 | memory/evacuationBlock
   20003360 |   19959936 |944 | memory/cacheVConnection
 163840 | 

[jira] [Updated] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4246:
---
Description: 
CPU: 40 cores, Mem: 132 GB, Disk: 300 GB system + 11 * 2 TB data, Network 
Card: 4 * GbE bonded, ATS: 5.3.1 or 6.1.1
Recently I found a possible memory leak on a production host with ATS versions 
from 5.3.1 to 6.1.1; the memory tracker log is as follows:

{code}

 allocated  |in-use  | type size  |   free list name
|||--
   67108864 |   33554432 |2097152 | 
memory/ioBufAllocator[14]
41271951360 |40564162560 |1048576 | 
memory/ioBufAllocator[13]
  570425344 |  566755328 | 524288 | 
memory/ioBufAllocator[12]
  276824064 |  270794752 | 262144 | 
memory/ioBufAllocator[11]
  155189248 |  150994944 | 131072 | 
memory/ioBufAllocator[10]
  113246208 |  111607808 |  65536 | memory/ioBufAllocator[9]
 1319108608 | 1278607360 |  32768 | memory/ioBufAllocator[8]
   39845888 |   39501824 |  16384 | memory/ioBufAllocator[7]
   78381056 |   55869440 |   8192 | memory/ioBufAllocator[6]
  228589568 |  212971520 |   4096 | memory/ioBufAllocator[5]
 262144 |  0 |   2048 | memory/ioBufAllocator[4]
 131072 |  0 |   1024 | memory/ioBufAllocator[3]
  65536 |  0 |512 | memory/ioBufAllocator[2]
 163840 |256 |256 | memory/ioBufAllocator[1]
1785856 |768 |128 | memory/ioBufAllocator[0]
  0 |  0 |512 | memory/FetchSMAllocator
  0 |  0 |592 | 
memory/ICPRequestCont_allocator
  0 |  0 |128 | 
memory/ICPPeerReadContAllocator
  0 |  0 |432 | 
memory/PeerReadDataAllocator
  0 |  0 | 32 | 
memory/MIMEFieldSDKHandle
  0 |  0 |240 | memory/INKVConnAllocator
  0 |  0 | 96 | memory/INKContAllocator
 503808 | 492864 | 32 | memory/apiHookAllocator
  0 |  0 |128 | 
memory/socksProxyAllocator
   17211392 |   17169856 |704 | 
memory/httpClientSessionAllocator
  180633600 |  170085888 |   8064 | memory/httpSMAllocator
2236416 |2229024 |224 | 
memory/httpServerSessionAllocator
  0 |  0 | 48 | 
memory/CacheLookupHttpConfigAllocator
  0 |  0 |848 | 
memory/http2ClientSessionAllocator
  0 |  0 |128 | memory/RemapPluginsAlloc
  0 |  0 | 48 | 
memory/CongestRequestParamAllocator
  0 |  0 |160 | 
memory/CongestionDBContAllocator
4325376 |4245248 |256 | 
memory/httpCacheAltAllocator
  155451392 |  146763776 |   2048 | memory/hdrStrHeap
  269221888 |  253577216 |   2048 | memory/hdrHeap
  0 |  0 |128 | 
memory/OneWayTunnelAllocator
  0 |  0 | 96 | 
memory/hostDBFileContAllocator
 296960 |   4640 |   2320 | 
memory/hostDBContAllocator
 135424 |  33856 |  33856 | memory/dnsBufAllocator
 163840 |   1280 |   1280 | memory/dnsEntryAllocator
  0 |  0 | 16 | 
memory/DNSRequestDataAllocator
  0 |  0 |112 | 
memory/inControlAllocator
  0 |  0 |128 | 
memory/outControlAllocator
  0 |  0 | 32 | memory/byteBankAllocator
  0 |  0 |592 | 
memory/clusterVCAllocator
  0 |  0 | 48 | 
memory/ClusterVConnectionCache::Entry
  0 |  0 |576 | 
memory/cacheContAllocator
  0 |  0 | 48 | memory/evacuationKey
  0 |  0 | 64 | memory/cacheRemoveCont
 380928 | 380928 | 96 | memory/evacuationBlock
   20003360 |   19959936 |944 | memory/cacheVConnection
 163840 | 156960 |160 | 

[jira] [Updated] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4246:
---
Description: 
CPU: 40 cores, Mem: 132 GB, Disk: 300 GB system + 11 * 2 TB data, Network 
Card: 4 * GbE bonded, ATS: 5.3.1 or 6.1.1
Recently I found a possible memory leak on a production host with ATS versions 
from 5.3.1 to 6.1.1; the memory tracker log is as follows:

{quote}
 allocated  |in-use  | type size  |   free list name
|||--
   67108864 |   33554432 |2097152 | 
memory/ioBufAllocator[14]
41271951360 |40564162560 |1048576 | 
memory/ioBufAllocator[13]
  570425344 |  566755328 | 524288 | 
memory/ioBufAllocator[12]
  276824064 |  270794752 | 262144 | 
memory/ioBufAllocator[11]
  155189248 |  150994944 | 131072 | 
memory/ioBufAllocator[10]
  113246208 |  111607808 |  65536 | memory/ioBufAllocator[9]
 1319108608 | 1278607360 |  32768 | memory/ioBufAllocator[8]
   39845888 |   39501824 |  16384 | memory/ioBufAllocator[7]
   78381056 |   55869440 |   8192 | memory/ioBufAllocator[6]
  228589568 |  212971520 |   4096 | memory/ioBufAllocator[5]
 262144 |  0 |   2048 | memory/ioBufAllocator[4]
 131072 |  0 |   1024 | memory/ioBufAllocator[3]
  65536 |  0 |512 | memory/ioBufAllocator[2]
 163840 |256 |256 | memory/ioBufAllocator[1]
1785856 |768 |128 | memory/ioBufAllocator[0]
  0 |  0 |512 | memory/FetchSMAllocator
  0 |  0 |592 | 
memory/ICPRequestCont_allocator
  0 |  0 |128 | 
memory/ICPPeerReadContAllocator
  0 |  0 |432 | 
memory/PeerReadDataAllocator
  0 |  0 | 32 | 
memory/MIMEFieldSDKHandle
  0 |  0 |240 | memory/INKVConnAllocator
  0 |  0 | 96 | memory/INKContAllocator
 503808 | 492864 | 32 | memory/apiHookAllocator
  0 |  0 |128 | 
memory/socksProxyAllocator
   17211392 |   17169856 |704 | 
memory/httpClientSessionAllocator
  180633600 |  170085888 |   8064 | memory/httpSMAllocator
2236416 |2229024 |224 | 
memory/httpServerSessionAllocator
  0 |  0 | 48 | 
memory/CacheLookupHttpConfigAllocator
  0 |  0 |848 | 
memory/http2ClientSessionAllocator
  0 |  0 |128 | memory/RemapPluginsAlloc
  0 |  0 | 48 | 
memory/CongestRequestParamAllocator
  0 |  0 |160 | 
memory/CongestionDBContAllocator
4325376 |4245248 |256 | 
memory/httpCacheAltAllocator
  155451392 |  146763776 |   2048 | memory/hdrStrHeap
  269221888 |  253577216 |   2048 | memory/hdrHeap
  0 |  0 |128 | 
memory/OneWayTunnelAllocator
  0 |  0 | 96 | 
memory/hostDBFileContAllocator
 296960 |   4640 |   2320 | 
memory/hostDBContAllocator
 135424 |  33856 |  33856 | memory/dnsBufAllocator
 163840 |   1280 |   1280 | memory/dnsEntryAllocator
  0 |  0 | 16 | 
memory/DNSRequestDataAllocator
  0 |  0 |112 | 
memory/inControlAllocator
  0 |  0 |128 | 
memory/outControlAllocator
  0 |  0 | 32 | memory/byteBankAllocator
  0 |  0 |592 | 
memory/clusterVCAllocator
  0 |  0 | 48 | 
memory/ClusterVConnectionCache::Entry
  0 |  0 |576 | 
memory/cacheContAllocator
  0 |  0 | 48 | memory/evacuationKey
  0 |  0 | 64 | memory/cacheRemoveCont
 380928 | 380928 | 96 | memory/evacuationBlock
   20003360 |   19959936 |944 | memory/cacheVConnection
 163840 | 156960 |160 | 

[jira] [Updated] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4246:
---
Affects Version/s: 6.1.1

> ATS possible memory leak in CacheVC
> ---
>
> Key: TS-4246
> URL: https://issues.apache.org/jira/browse/TS-4246
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 6.1.1
>Reporter: taoyunxing
>  Labels: CacheVC
> Fix For: 6.1.2
>
>
> CPU: 40 cores, Mem: 132 GB, Disk: 300 GB system + 11 * 2 TB data, Network 
> Card: 4 * GbE bonded, ATS: 5.3.1 or 6.1.1
> Recently I found a possible memory leak on a production host with ATS 
> versions from 5.3.1 to 6.1.1; the memory tracker log is as follows:
> 
>  allocated  |in-use  | type size  |   free list name
> |||--
>67108864 |   33554432 |2097152 | 
> memory/ioBufAllocator[14]
> 41271951360 |40564162560 |1048576 | 
> memory/ioBufAllocator[13]
>   570425344 |  566755328 | 524288 | 
> memory/ioBufAllocator[12]
>   276824064 |  270794752 | 262144 | 
> memory/ioBufAllocator[11]
>   155189248 |  150994944 | 131072 | 
> memory/ioBufAllocator[10]
>   113246208 |  111607808 |  65536 | 
> memory/ioBufAllocator[9]
>  1319108608 | 1278607360 |  32768 | 
> memory/ioBufAllocator[8]
>39845888 |   39501824 |  16384 | 
> memory/ioBufAllocator[7]
>78381056 |   55869440 |   8192 | 
> memory/ioBufAllocator[6]
>   228589568 |  212971520 |   4096 | 
> memory/ioBufAllocator[5]
>  262144 |  0 |   2048 | 
> memory/ioBufAllocator[4]
>  131072 |  0 |   1024 | 
> memory/ioBufAllocator[3]
>   65536 |  0 |512 | 
> memory/ioBufAllocator[2]
>  163840 |256 |256 | 
> memory/ioBufAllocator[1]
> 1785856 |768 |128 | 
> memory/ioBufAllocator[0]
>   0 |  0 |512 | 
> memory/FetchSMAllocator
>   0 |  0 |592 | 
> memory/ICPRequestCont_allocator
>   0 |  0 |128 | 
> memory/ICPPeerReadContAllocator
>   0 |  0 |432 | 
> memory/PeerReadDataAllocator
>   0 |  0 | 32 | 
> memory/MIMEFieldSDKHandle
>   0 |  0 |240 | 
> memory/INKVConnAllocator
>   0 |  0 | 96 | 
> memory/INKContAllocator
>  503808 | 492864 | 32 | 
> memory/apiHookAllocator
>   0 |  0 |128 | 
> memory/socksProxyAllocator
>17211392 |   17169856 |704 | 
> memory/httpClientSessionAllocator
>   180633600 |  170085888 |   8064 | memory/httpSMAllocator
> 2236416 |2229024 |224 | 
> memory/httpServerSessionAllocator
>   0 |  0 | 48 | 
> memory/CacheLookupHttpConfigAllocator
>   0 |  0 |848 | 
> memory/http2ClientSessionAllocator
>   0 |  0 |128 | 
> memory/RemapPluginsAlloc
>   0 |  0 | 48 | 
> memory/CongestRequestParamAllocator
>   0 |  0 |160 | 
> memory/CongestionDBContAllocator
> 4325376 |4245248 |256 | 
> memory/httpCacheAltAllocator
>   155451392 |  146763776 |   2048 | memory/hdrStrHeap
>   269221888 |  253577216 |   2048 | memory/hdrHeap
>   0 |  0 |128 | 
> memory/OneWayTunnelAllocator
>   0 |  0 | 96 | 
> memory/hostDBFileContAllocator
>  296960 |   4640 |   2320 | 
> memory/hostDBContAllocator
>  135424 |  33856 |  33856 | memory/dnsBufAllocator
>  163840 |   1280 |   1280 | 
> memory/dnsEntryAllocator
>   0 |  0 | 16 | 
> memory/DNSRequestDataAllocator
>   0 |  0 |112 | 
> memory/inControlAllocator
>   0 |  0 |128 | 
> memory/outControlAllocator
>   0 |  0 | 32 | 
> memory/byteBankAllocator
>   0 |

[jira] [Updated] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4246:
---
Labels: CacheVC  (was: )

> ATS possible memory leak in CacheVC
> ---
>
> Key: TS-4246
> URL: https://issues.apache.org/jira/browse/TS-4246
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Affects Versions: 6.1.1
>Reporter: taoyunxing
>  Labels: CacheVC
> Fix For: 6.1.2
>
>
> CPU: 40 cores, Mem: 132 GB, Disk: 300 GB system + 11 * 2 TB data, Network 
> Card: 4 * GbE bonded, ATS: 5.3.1 or 6.1.1
> Recently I found a possible memory leak on a production host with ATS 
> versions from 5.3.1 to 6.1.1; the memory tracker log is as follows:
> 
>  allocated  |in-use  | type size  |   free list name
> |||--
>67108864 |   33554432 |2097152 | 
> memory/ioBufAllocator[14]
> 41271951360 |40564162560 |1048576 | 
> memory/ioBufAllocator[13]
>   570425344 |  566755328 | 524288 | 
> memory/ioBufAllocator[12]
>   276824064 |  270794752 | 262144 | 
> memory/ioBufAllocator[11]
>   155189248 |  150994944 | 131072 | 
> memory/ioBufAllocator[10]
>   113246208 |  111607808 |  65536 | 
> memory/ioBufAllocator[9]
>  1319108608 | 1278607360 |  32768 | 
> memory/ioBufAllocator[8]
>39845888 |   39501824 |  16384 | 
> memory/ioBufAllocator[7]
>78381056 |   55869440 |   8192 | 
> memory/ioBufAllocator[6]
>   228589568 |  212971520 |   4096 | 
> memory/ioBufAllocator[5]
>  262144 |  0 |   2048 | 
> memory/ioBufAllocator[4]
>  131072 |  0 |   1024 | 
> memory/ioBufAllocator[3]
>   65536 |  0 |512 | 
> memory/ioBufAllocator[2]
>  163840 |256 |256 | 
> memory/ioBufAllocator[1]
> 1785856 |768 |128 | 
> memory/ioBufAllocator[0]
>   0 |  0 |512 | 
> memory/FetchSMAllocator
>   0 |  0 |592 | 
> memory/ICPRequestCont_allocator
>   0 |  0 |128 | 
> memory/ICPPeerReadContAllocator
>   0 |  0 |432 | 
> memory/PeerReadDataAllocator
>   0 |  0 | 32 | 
> memory/MIMEFieldSDKHandle
>   0 |  0 |240 | 
> memory/INKVConnAllocator
>   0 |  0 | 96 | 
> memory/INKContAllocator
>  503808 | 492864 | 32 | 
> memory/apiHookAllocator
>   0 |  0 |128 | 
> memory/socksProxyAllocator
>17211392 |   17169856 |704 | 
> memory/httpClientSessionAllocator
>   180633600 |  170085888 |   8064 | memory/httpSMAllocator
> 2236416 |2229024 |224 | 
> memory/httpServerSessionAllocator
>   0 |  0 | 48 | 
> memory/CacheLookupHttpConfigAllocator
>   0 |  0 |848 | 
> memory/http2ClientSessionAllocator
>   0 |  0 |128 | 
> memory/RemapPluginsAlloc
>   0 |  0 | 48 | 
> memory/CongestRequestParamAllocator
>   0 |  0 |160 | 
> memory/CongestionDBContAllocator
> 4325376 |4245248 |256 | 
> memory/httpCacheAltAllocator
>   155451392 |  146763776 |   2048 | memory/hdrStrHeap
>   269221888 |  253577216 |   2048 | memory/hdrHeap
>   0 |  0 |128 | 
> memory/OneWayTunnelAllocator
>   0 |  0 | 96 | 
> memory/hostDBFileContAllocator
>  296960 |   4640 |   2320 | 
> memory/hostDBContAllocator
>  135424 |  33856 |  33856 | memory/dnsBufAllocator
>  163840 |   1280 |   1280 | 
> memory/dnsEntryAllocator
>   0 |  0 | 16 | 
> memory/DNSRequestDataAllocator
>   0 |  0 |112 | 
> memory/inControlAllocator
>   0 |  0 |128 | 
> memory/outControlAllocator
>   0 |  0 | 32 | 
> memory/byteBankAllocator
>   0 |

[jira] [Updated] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4246:
---
Fix Version/s: 6.1.2

> ATS possible memory leak in CacheVC
> ---
>
> Key: TS-4246
> URL: https://issues.apache.org/jira/browse/TS-4246
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Cache
>Reporter: taoyunxing
> Fix For: 6.1.2
>
>
> CPU: 40 cores, Mem: 132 GB, Disk: 300 GB system + 11 * 2 TB data, Network 
> Card: 4 * GbE bonded, ATS: 5.3.1 or 6.1.1
> Recently I found a possible memory leak on a production host with ATS 
> versions from 5.3.1 to 6.1.1; the memory tracker log is as follows:
> 
>  allocated  |in-use  | type size  |   free list name
> |||--
>67108864 |   33554432 |2097152 | 
> memory/ioBufAllocator[14]
> 41271951360 |40564162560 |1048576 | 
> memory/ioBufAllocator[13]
>   570425344 |  566755328 | 524288 | 
> memory/ioBufAllocator[12]
>   276824064 |  270794752 | 262144 | 
> memory/ioBufAllocator[11]
>   155189248 |  150994944 | 131072 | 
> memory/ioBufAllocator[10]
>   113246208 |  111607808 |  65536 | 
> memory/ioBufAllocator[9]
>  1319108608 | 1278607360 |  32768 | 
> memory/ioBufAllocator[8]
>39845888 |   39501824 |  16384 | 
> memory/ioBufAllocator[7]
>78381056 |   55869440 |   8192 | 
> memory/ioBufAllocator[6]
>   228589568 |  212971520 |   4096 | 
> memory/ioBufAllocator[5]
>  262144 |  0 |   2048 | 
> memory/ioBufAllocator[4]
>  131072 |  0 |   1024 | 
> memory/ioBufAllocator[3]
>   65536 |  0 |512 | 
> memory/ioBufAllocator[2]
>  163840 |256 |256 | 
> memory/ioBufAllocator[1]
> 1785856 |768 |128 | 
> memory/ioBufAllocator[0]
>   0 |  0 |512 | 
> memory/FetchSMAllocator
>   0 |  0 |592 | 
> memory/ICPRequestCont_allocator
>   0 |  0 |128 | 
> memory/ICPPeerReadContAllocator
>   0 |  0 |432 | 
> memory/PeerReadDataAllocator
>   0 |  0 | 32 | 
> memory/MIMEFieldSDKHandle
>   0 |  0 |240 | 
> memory/INKVConnAllocator
>   0 |  0 | 96 | 
> memory/INKContAllocator
>  503808 | 492864 | 32 | 
> memory/apiHookAllocator
>   0 |  0 |128 | 
> memory/socksProxyAllocator
>17211392 |   17169856 |704 | 
> memory/httpClientSessionAllocator
>   180633600 |  170085888 |   8064 | memory/httpSMAllocator
> 2236416 |2229024 |224 | 
> memory/httpServerSessionAllocator
>   0 |  0 | 48 | 
> memory/CacheLookupHttpConfigAllocator
>   0 |  0 |848 | 
> memory/http2ClientSessionAllocator
>   0 |  0 |128 | 
> memory/RemapPluginsAlloc
>   0 |  0 | 48 | 
> memory/CongestRequestParamAllocator
>   0 |  0 |160 | 
> memory/CongestionDBContAllocator
> 4325376 |4245248 |256 | 
> memory/httpCacheAltAllocator
>   155451392 |  146763776 |   2048 | memory/hdrStrHeap
>   269221888 |  253577216 |   2048 | memory/hdrHeap
>   0 |  0 |128 | 
> memory/OneWayTunnelAllocator
>   0 |  0 | 96 | 
> memory/hostDBFileContAllocator
>  296960 |   4640 |   2320 | 
> memory/hostDBContAllocator
>  135424 |  33856 |  33856 | memory/dnsBufAllocator
>  163840 |   1280 |   1280 | 
> memory/dnsEntryAllocator
>   0 |  0 | 16 | 
> memory/DNSRequestDataAllocator
>   0 |  0 |112 | 
> memory/inControlAllocator
>   0 |  0 |128 | 
> memory/outControlAllocator
>   0 |  0 | 32 | 
> memory/byteBankAllocator
>   0 |  0 |592 | 
> memory/clusterVCAllocator
>   

[jira] [Created] (TS-4246) ATS possible memory leak in CacheVC

2016-03-01 Thread taoyunxing (JIRA)
taoyunxing created TS-4246:
--

 Summary: ATS possible memory leak in CacheVC
 Key: TS-4246
 URL: https://issues.apache.org/jira/browse/TS-4246
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: taoyunxing


CPU: 40 cores, Mem: 132GB, Disk: 300GB + 11*2TB data, Network card: 4*GbE bond, ATS: 5.3.1 or 6.1.1
Recently I found a possible memory leak on a production host with ATS versions from 5.3.1 to 6.1.1; the memory tracker log is as follows:

 allocated  |in-use  | type size  |   free list name
|||--
   67108864 |   33554432 |2097152 | 
memory/ioBufAllocator[14]
41271951360 |40564162560 |1048576 | 
memory/ioBufAllocator[13]
  570425344 |  566755328 | 524288 | 
memory/ioBufAllocator[12]
  276824064 |  270794752 | 262144 | 
memory/ioBufAllocator[11]
  155189248 |  150994944 | 131072 | 
memory/ioBufAllocator[10]
  113246208 |  111607808 |  65536 | memory/ioBufAllocator[9]
 1319108608 | 1278607360 |  32768 | memory/ioBufAllocator[8]
   39845888 |   39501824 |  16384 | memory/ioBufAllocator[7]
   78381056 |   55869440 |   8192 | memory/ioBufAllocator[6]
  228589568 |  212971520 |   4096 | memory/ioBufAllocator[5]
 262144 |  0 |   2048 | memory/ioBufAllocator[4]
 131072 |  0 |   1024 | memory/ioBufAllocator[3]
  65536 |  0 |512 | memory/ioBufAllocator[2]
 163840 |256 |256 | memory/ioBufAllocator[1]
1785856 |768 |128 | memory/ioBufAllocator[0]
  0 |  0 |512 | memory/FetchSMAllocator
  0 |  0 |592 | 
memory/ICPRequestCont_allocator
  0 |  0 |128 | 
memory/ICPPeerReadContAllocator
  0 |  0 |432 | 
memory/PeerReadDataAllocator
  0 |  0 | 32 | 
memory/MIMEFieldSDKHandle
  0 |  0 |240 | memory/INKVConnAllocator
  0 |  0 | 96 | memory/INKContAllocator
 503808 | 492864 | 32 | memory/apiHookAllocator
  0 |  0 |128 | 
memory/socksProxyAllocator
   17211392 |   17169856 |704 | 
memory/httpClientSessionAllocator
  180633600 |  170085888 |   8064 | memory/httpSMAllocator
2236416 |2229024 |224 | 
memory/httpServerSessionAllocator
  0 |  0 | 48 | 
memory/CacheLookupHttpConfigAllocator
  0 |  0 |848 | 
memory/http2ClientSessionAllocator
  0 |  0 |128 | memory/RemapPluginsAlloc
  0 |  0 | 48 | 
memory/CongestRequestParamAllocator
  0 |  0 |160 | 
memory/CongestionDBContAllocator
4325376 |4245248 |256 | 
memory/httpCacheAltAllocator
  155451392 |  146763776 |   2048 | memory/hdrStrHeap
  269221888 |  253577216 |   2048 | memory/hdrHeap
  0 |  0 |128 | 
memory/OneWayTunnelAllocator
  0 |  0 | 96 | 
memory/hostDBFileContAllocator
 296960 |   4640 |   2320 | 
memory/hostDBContAllocator
 135424 |  33856 |  33856 | memory/dnsBufAllocator
 163840 |   1280 |   1280 | memory/dnsEntryAllocator
  0 |  0 | 16 | 
memory/DNSRequestDataAllocator
  0 |  0 |112 | 
memory/inControlAllocator
  0 |  0 |128 | 
memory/outControlAllocator
  0 |  0 | 32 | memory/byteBankAllocator
  0 |  0 |592 | 
memory/clusterVCAllocator
  0 |  0 | 48 | 
memory/ClusterVConnectionCache::Entry
  0 |  0 |576 | 
memory/cacheContAllocator
  0 |  0 | 48 | memory/evacuationKey
  0 |  0 | 64 | memory/cacheRemoveCont
 380928 | 380928 | 96 | memory/evacuationBlock
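
For reference, the allocator table above is the freelist/memory dump that ATS writes to traffic.out. A minimal records.config sketch to produce such a dump periodically, assuming the stock res_track_memory and dump_mem_info_frequency settings (names taken from the standard records.config documentation, not from this report):
{code}
CONFIG proxy.config.res_track_memory INT 1
CONFIG proxy.config.dump_mem_info_frequency INT 3600
{code}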
  

[jira] [Commented] (TS-4215) ATS choke frequently in production host

2016-02-29 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15173144#comment-15173144
 ] 

taoyunxing commented on TS-4215:


I turned on the http debug diags, observed the traffic, and found that in the worst case the chokes may occur at any of the following places:
cache lookup, sending the request, receiving the response, the HTTP tunnel, ...
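
The debug excerpt below was produced with the standard ATS diagnostic tags. A minimal records.config sketch for reproducing this kind of trace, assuming the stock proxy.config.diags settings (the tag list is only illustrative):
{code}
CONFIG proxy.config.diags.debug.enabled INT 1
CONFIG proxy.config.diags.debug.tags STRING http.*|dns
CONFIG proxy.config.diags.show_location INT 1
{code}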
---
 112552 [Feb 23 10:19:44.415] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) Next action 
SM_ACTION_CACHE_LOOKUP; __null
 112553 [Feb 23 10:19:44.418] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] State Transition: 
SM_ACTION_API_POST_REMAP -> SM_ACTION_CACHE_LOOKUP
 112554 [Feb 23 10:19:44.419] Server {0x2b1b75e1d700} DEBUG:  (http_seq) [HttpSM::do_cache_lookup_and_read] 
[24146] Issuing cache lookup for URL http://www.taobao.com/
 
 116043 [Feb 23 10:19:47.402] Server {0x2b1b75e1d700} DEBUG: 
 (http_cache) [24146] 
[::state_cache_open_read, CACHE_EVENT_OPEN_READ_FAILED]
 116044 [Feb 23 10:19:47.403] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] [HttpSM::main_handler, 
CACHE_EVENT_OPEN_READ_FAILED]
 116045 [Feb 23 10:19:47.403] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] [::state_cache_open_read, 
CACHE_EVENT_OPEN_READ_FAILED]
 116046 [Feb 23 10:19:47.403] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] cache_open_read - 
CACHE_EVENT_OPEN_READ_FAILED
 116047 [Feb 23 10:19:47.405] Server {0x2b1b75e1d700} DEBUG:  (http) [state_cache_open_read] open read failed.
 116048 [Feb 23 10:19:47.406] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) 
[HttpTransact::HandleCacheOpenRead]
 116049 [Feb 23 10:19:47.406] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) CacheOpenRead -- miss
 116050 [Feb 23 10:19:47.407] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) Next action 
SM_ACTION_DNS_LOOKUP; OSDNSLookup
 116051 [Feb 23 10:19:47.407] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] State Transition: 
SM_ACTION_CACHE_LOOKUP -> SM_ACTION_DNS_LOOKUP
 116052 [Feb 23 10:19:47.408] Server {0x2b1b75e1d700} DEBUG:  (http_seq) [HttpSM::do_hostdb_lookup] Doing DNS Lookup
 116053 [Feb 23 10:19:47.409] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) [ink_cluster_time] 
local: 1456193987, highest_delta: 0, cluster: 1456193987
 
 116077 [Feb 23 10:19:47.423] Server {0x2b1b75e1d700} DEBUG:  (http_seq) [HttpSM::do_http_server_open] Sending request 
to server
 116078 [Feb 23 10:19:47.424] Server {0x2b1b75e1d700} DEBUG:  (http) calling netProcessor.connect_re
 116079 [Feb 23 10:19:47.427] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] [HttpSM::main_handler, NET_EVENT_OPEN]
 116080 [Feb 23 10:19:47.427] Server {0x2b1b75e1d700} DEBUG:  (http_track) entered inside state_http_server_open
 116081 [Feb 23 10:19:47.430] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] [::state_http_server_open, 
NET_EVENT_OPEN]
 116082 [Feb 23 10:19:47.430] Server {0x2b1b75e1d700} DEBUG: 
 (http_ss) [37085] session born, 
netvc 0x2b1ded0e44c0
 
 117130 [Feb 23 10:19:48.438] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] [HttpSM::main_handler, VC_EVENT_WRITE_COMPLETE]
 117131 [Feb 23 10:19:48.438] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] 
[::state_send_server_request_header, VC_EVENT_WRITE_COMPLETE]
 
 117408 [Feb 23 10:19:48.640] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] [HttpSM::main_handler, VC_EVENT_READ_READY]
 117409 [Feb 23 10:19:48.642] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] 
[::state_read_server_response_header, VC_EVENT_READ_READY]
 117410 [Feb 23 10:19:48.643] Server {0x2b1b75e1d700} DEBUG:  (http_seq) Done parsing server response 
header
 117411 [Feb 23 10:19:48.643] Server {0x2b1b75e1d700} DEBUG: 
 (http_redirect) 
[HttpTunnel::deallocate_postdata_copy_buffers]
 117412 [Feb 23 10:19:48.643] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) 
[HttpTransact::HandleResponse]
 117413 [Feb 23 10:19:48.643] Server {0x2b1b75e1d700} DEBUG: 
 (http_seq) 
[HttpTransact::HandleResponse] Response received
 117414 [Feb 23 10:19:48.643] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) [ink_cluster_time] 
local: 1456193988, highest_delta: 0, cluster: 1456193988
 117415 [Feb 23 10:19:48.643] Server {0x2b1b75e1d700} DEBUG: 
 (http_trans) [HandleResponse] 
response_received_time: 1456193988
  
 117451 [Feb 23 10:19:48.644] Server {0x2b1b75e1d700} DEBUG:  (http) [24146] perform_cache_write_action 
CACHE_DO_NO_ACTION
 117452 [Feb 23 10:19:48.644] Server {0x2b1b75e1d700} DEBUG:  (http_tunnel) tunnel_run started, p_arg is provided
 117453 [Feb 23 10:19:48.644] Server {0x2b1b75e1d700} DEBUG: 
 (http_cs) tcp_init_cwnd_set 0
 117454 [Feb 23 10:19:48.644] Server {0x2b1b75e1d700} DEBUG: 
 (http_cs) desired TCP congestion 
window is 0
 117455 [Feb 23 10:19:48.644] Server {0x2b1b75e1d700} DEBUG:  (http_tunnel) [24146] [tunnel_run] producer already done
 117456 [Feb 23 10:19:48.644] Server {0x2b1b75e1d700} DEBUG: 
 

[jira] [Updated] (TS-4215) ATS choke frequently in production host

2016-02-29 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4215:
---
Description: 
During this period I have been under great pressure at work, because ATS chokes frequently in the production environment and the QoS cannot be guaranteed. I have tried many ATS versions, such as 5.3.x, 6.0.0 and 6.1.1, but they all show the same problem.

HardWare:
CentOS 6.3/6.6/6.7, CPU 8Cores, Memory 64GB, HardDisk: 300GB sys + 24TB data, 

ATS records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 32G

Scenario:
curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
* About to connect() to proxy 127.0.0.1 port 8081 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET http://www.taobao.com/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: www.taobao.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> X-Debug: via,x-cache,x-cache-key,x-milestones
> 
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0< 
HTTP/1.1 302 Found
< Server: ats/1.5.0
< Date: Thu, 18 Feb 2016 05:53:59 GMT
< Content-Type: text/html
< Content-Length: 258
< Location: https://www.taobao.com/
< Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
05:53:59 GMT;
< Strict-Transport-Security: max-age=31536000
< Age: 0
< Proxy-Connection: keep-alive
< Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
< X-Cache-Key: http://www.taobao.com/
< X-Cache: miss
< X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 
SERVER-BEGIN-WRITE=0.000115707, SERVER-CONNECT-END=0.000115707, 
SERVER-CONNECT=0.87664, SERVER-FIRST-CONNECT=0.87664, 
UA-BEGIN-WRITE=0.053633813, UA-READ-HEADER-DONE=0.29354, 
UA-BEGIN=0.0
< 
{ [data not shown]
129   258  129   2580 0 17  0  0:00:15  0:00:15 --:--:--  4777* 
Connection #0 to host 127.0.0.1 left intact

* Closing connection #0

Problem:
ATS may choke for several seconds before the server response is returned; this happens roughly one time in ten, and I don't know why.
Here I use the X-Debug plugin to show each milestone (in seconds) of the current HTTP request.
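
The X-Cache, X-Cache-Key and X-Milestones response headers shown above come from the xdebug plugin; a minimal plugin.config sketch for enabling it, assuming the stock xdebug plugin that ships with ATS:
{code}
xdebug.so
{code}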

  was:
During this period, I am from suffering great pressures on my  job, because ATS 
chokes frequently in production environment, the QoS is not guaranteed. I try 
many ats version, such as 5.3.x, 6.0.0, 6.1.1, etc, but all have the same one.

HardWare:
CentOS 6.3/6.6/6.7, CPU 8Cores, Memory 64GB, HardDisk: 300GB sys + 24TB data, 

ATS records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 32G

Scenario:
curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
* About to connect() to proxy 127.0.0.1 port 8081 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET http://www.taobao.com/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: www.taobao.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> X-Debug: via,x-cache,x-cache-key,x-milestones
> 
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0< 
HTTP/1.1 302 Found
< Server: ats/1.5.0
< Date: Thu, 18 Feb 2016 05:53:59 GMT
< Content-Type: text/html
< Content-Length: 258
< Location: https://www.taobao.com/
< Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
05:53:59 GMT;
< Strict-Transport-Security: max-age=31536000
< Age: 0
< Proxy-Connection: keep-alive
< Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
< X-Cache-Key: http://www.taobao.com/
< X-Cache: miss
< X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 

[jira] [Updated] (TS-4215) ATS choke frequently in production host

2016-02-17 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4215:
---
Fix Version/s: 6.1.2

> ATS choke frequently in production host
> ---
>
> Key: TS-4215
> URL: https://issues.apache.org/jira/browse/TS-4215
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.1
>Reporter: taoyunxing
> Fix For: 6.1.2
>
>
> During this period I have been under great pressure at work, because ATS chokes frequently in the production environment and the QoS cannot be guaranteed. I have tried many ATS versions, such as 5.3.x, 6.0.0 and 6.1.1, but they all show the same problem.
> HardWare:
> CentOS 6.3/6.6/6.7, CPU 8Cores, Memory 64GB, HardDisk: 300GB sys + 24TB data, 
> ATS records.config:
> CONFIG proxy.config.cache.min_average_object_size INT 1048576
> CONFIG proxy.config.cache.ram_cache.algorithm INT 1
> CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
> CONFIG proxy.config.cache.ram_cache.size INT 32G
> Scenario:
> curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
> via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
> * About to connect() to proxy 127.0.0.1 port 8081 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> > GET http://www.taobao.com/ HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: www.taobao.com
> > Accept: */*
> > Proxy-Connection: Keep-Alive
> > X-Debug: via,x-cache,x-cache-key,x-milestones
> > 
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 
> 0< HTTP/1.1 302 Found
> < Server: ats/1.5.0
> < Date: Thu, 18 Feb 2016 05:53:59 GMT
> < Content-Type: text/html
> < Content-Length: 258
> < Location: https://www.taobao.com/
> < Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
> 05:53:59 GMT;
> < Strict-Transport-Security: max-age=31536000
> < Age: 0
> < Proxy-Connection: keep-alive
> < Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
> < X-Cache-Key: http://www.taobao.com/
> < X-Cache: miss
> < X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
> CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
> CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
> SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 
> SERVER-BEGIN-WRITE=0.000115707, SERVER-CONNECT-END=0.000115707, 
> SERVER-CONNECT=0.87664, SERVER-FIRST-CONNECT=0.87664, 
> UA-BEGIN-WRITE=0.053633813, UA-READ-HEADER-DONE=0.29354, 
> UA-BEGIN=0.0
> < 
> { [data not shown]
> 129   258  129   2580 0 17  0  0:00:15  0:00:15 --:--:--  
> 4777* Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> Problem:
> ATS may choke for several seconds before the server response is returned; this happens roughly one time in ten, and I don't know why.
> Here I use the X-Debug plugin to show each milestone (in seconds) of the current HTTP request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4215) ATS choke frequently in production host

2016-02-17 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4215:
---
Affects Version/s: 5.3.1

> ATS choke frequently in production host
> ---
>
> Key: TS-4215
> URL: https://issues.apache.org/jira/browse/TS-4215
> Project: Traffic Server
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 5.3.1
>Reporter: taoyunxing
>
> During this period I have been under great pressure at work, because ATS chokes frequently in the production environment and the QoS cannot be guaranteed. I have tried many ATS versions, such as 5.3.x, 6.0.0 and 6.1.1, but they all show the same problem.
> HardWare:
> CentOS 6.3/6.6/6.7, CPU 8Cores, Memory 64GB, HardDisk: 300GB sys + 24TB data, 
> ATS records.config:
> CONFIG proxy.config.cache.min_average_object_size INT 1048576
> CONFIG proxy.config.cache.ram_cache.algorithm INT 1
> CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
> CONFIG proxy.config.cache.ram_cache.size INT 32G
> Scenario:
> curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
> via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
> * About to connect() to proxy 127.0.0.1 port 8081 (#0)
> *   Trying 127.0.0.1... connected
> * Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> > GET http://www.taobao.com/ HTTP/1.1
> > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> > Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> > Host: www.taobao.com
> > Accept: */*
> > Proxy-Connection: Keep-Alive
> > X-Debug: via,x-cache,x-cache-key,x-milestones
> > 
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 
> 0< HTTP/1.1 302 Found
> < Server: ats/1.5.0
> < Date: Thu, 18 Feb 2016 05:53:59 GMT
> < Content-Type: text/html
> < Content-Length: 258
> < Location: https://www.taobao.com/
> < Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
> 05:53:59 GMT;
> < Strict-Transport-Security: max-age=31536000
> < Age: 0
> < Proxy-Connection: keep-alive
> < Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
> < X-Cache-Key: http://www.taobao.com/
> < X-Cache: miss
> < X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
> CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
> CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
> SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 
> SERVER-BEGIN-WRITE=0.000115707, SERVER-CONNECT-END=0.000115707, 
> SERVER-CONNECT=0.87664, SERVER-FIRST-CONNECT=0.87664, 
> UA-BEGIN-WRITE=0.053633813, UA-READ-HEADER-DONE=0.29354, 
> UA-BEGIN=0.0
> < 
> { [data not shown]
> 129   258  129   2580 0 17  0  0:00:15  0:00:15 --:--:--  
> 4777* Connection #0 to host 127.0.0.1 left intact
> * Closing connection #0
> Problem:
> ATS may choke for several seconds before the server response is returned; this happens roughly one time in ten, and I don't know why.
> Here I use the X-Debug plugin to show each milestone (in seconds) of the current HTTP request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-4215) ATS choke frequently in production host

2016-02-17 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-4215:
---
Description: 
During this period I have been under great pressure at work, because ATS chokes frequently in the production environment and the QoS cannot be guaranteed. I have tried many ATS versions, such as 5.3.x, 6.0.0 and 6.1.1, but they all show the same problem.

HardWare:
CentOS 6.3/6.6/6.7, CPU 8Cores, Memory 64GB, HardDisk: 300GB sys + 24TB data, 

ATS records.config:
CONFIG proxy.config.cache.min_average_object_size INT 1048576
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.ram_cache_cutoff INT 4194304
CONFIG proxy.config.cache.ram_cache.size INT 32G

Scenario:
curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
* About to connect() to proxy 127.0.0.1 port 8081 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET http://www.taobao.com/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: www.taobao.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> X-Debug: via,x-cache,x-cache-key,x-milestones
> 
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0< 
HTTP/1.1 302 Found
< Server: ats/1.5.0
< Date: Thu, 18 Feb 2016 05:53:59 GMT
< Content-Type: text/html
< Content-Length: 258
< Location: https://www.taobao.com/
< Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
05:53:59 GMT;
< Strict-Transport-Security: max-age=31536000
< Age: 0
< Proxy-Connection: keep-alive
< Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
< X-Cache-Key: http://www.taobao.com/
< X-Cache: miss
< X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 
SERVER-BEGIN-WRITE=0.000115707, SERVER-CONNECT-END=0.000115707, 
SERVER-CONNECT=0.87664, SERVER-FIRST-CONNECT=0.87664, 
UA-BEGIN-WRITE=0.053633813, UA-READ-HEADER-DONE=0.29354, 
UA-BEGIN=0.0
< 
{ [data not shown]
129   258  129   2580 0 17  0  0:00:15  0:00:15 --:--:--  4777* 
Connection #0 to host 127.0.0.1 left intact

* Closing connection #0

Problem:
ATS may choke for several seconds before the server response is returned; this happens roughly one time in ten, and I don't know why.
Here I use the X-Debug plugin to show each milestone (in seconds) of the current HTTP request.

  was:
During this period, I am suffering great pressures on my  job, because ATS 
chokes frequently in production environment, the QoS is not guaranteed. I try 
very much ats version, such as 5.3.x, 6.0.0, 6.1.1, etc

Scenario:
curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
* About to connect() to proxy 127.0.0.1 port 8081 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET http://www.taobao.com/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: www.taobao.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> X-Debug: via,x-cache,x-cache-key,x-milestones
> 
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0< 
HTTP/1.1 302 Found
< Server: ics/1.5.0
< Date: Thu, 18 Feb 2016 05:53:59 GMT
< Content-Type: text/html
< Content-Length: 258
< Location: https://www.taobao.com/
< Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
05:53:59 GMT;
< Strict-Transport-Security: max-age=31536000
< Age: 0
< Proxy-Connection: keep-alive
< Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
< X-Cache-Key: http://www.taobao.com/
< X-Cache: miss
< X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 
SERVER-BEGIN-WRITE=0.000115707, SERVER-CONNECT-END=0.000115707, 
SERVER-CONNECT=0.87664, SERVER-FIRST-CONNECT=0.87664, 
UA-BEGIN-WRITE=0.053633813, UA-READ-HEADER-DONE=0.29354, 
UA-BEGIN=0.0
< 
{ [data not shown]
129   258  129   2580 0 17  0  0:00:15  0:00:15 --:--:--  4777* 
Connection #0 to host 127.0.0.1 left intact

* Closing 

[jira] [Created] (TS-4215) ATS choke frequently in production host

2016-02-17 Thread taoyunxing (JIRA)
taoyunxing created TS-4215:
--

 Summary: ATS choke frequently in production host
 Key: TS-4215
 URL: https://issues.apache.org/jira/browse/TS-4215
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: taoyunxing


During this period I have been under great pressure at work, because ATS chokes frequently in the production environment and the QoS cannot be guaranteed. I have tried many ATS versions, such as 5.3.x, 6.0.0 and 6.1.1, but they all show the same problem.

Scenario:
curl -vo /dev/null -x 127.0.0.1:8081 -H 'X-Debug: 
via,x-cache,x-cache-key,x-milestones' 'http://www.taobao.com/'
* About to connect() to proxy 127.0.0.1 port 8081 (#0)
*   Trying 127.0.0.1... connected
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET http://www.taobao.com/ HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 
> Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: www.taobao.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> X-Debug: via,x-cache,x-cache-key,x-milestones
> 
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0< 
HTTP/1.1 302 Found
< Server: ics/1.5.0
< Date: Thu, 18 Feb 2016 05:53:59 GMT
< Content-Type: text/html
< Content-Length: 258
< Location: https://www.taobao.com/
< Set-Cookie: thw=cn; Path=/; Domain=.taobao.com; Expires=Fri, 17-Feb-17 
05:53:59 GMT;
< Strict-Transport-Security: max-age=31536000
< Age: 0
< Proxy-Connection: keep-alive
< Via: http/1.1 ltgj51 (ats [uScMsSf pSeN:t cCMi p sS])
< X-Cache-Key: http://www.taobao.com/
< X-Cache: miss
< X-Milestones: DNS-LOOKUP-END=0.64880, DNS-LOOKUP-BEGIN=0.64880, 
CACHE-OPEN-WRITE-END=0.84858, CACHE-OPEN-WRITE-BEGIN=0.77224, 
CACHE-OPEN-READ-END=0.64880, CACHE-OPEN-READ-BEGIN=0.29354, 
SERVER-READ-HEADER-DONE=0.053633813, SERVER-FIRST-READ=0.053633813, 
SERVER-BEGIN-WRITE=0.000115707, SERVER-CONNECT-END=0.000115707, 
SERVER-CONNECT=0.87664, SERVER-FIRST-CONNECT=0.87664, 
UA-BEGIN-WRITE=0.053633813, UA-READ-HEADER-DONE=0.29354, 
UA-BEGIN=0.0
< 
{ [data not shown]
129   258  129   2580 0 17  0  0:00:15  0:00:15 --:--:--  4777* 
Connection #0 to host 127.0.0.1 left intact

* Closing connection #0

Problem:
ATS may choke for several seconds before the server response is returned; this happens roughly one time in ten, and I don't know why.
Here I use the X-Debug plugin to show each milestone (in seconds) of the current HTTP request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3841) LogAccess.cc assert failed when enabling custom log and stats_over_http plugin

2015-08-14 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696907#comment-14696907
 ] 

taoyunxing commented on TS-3841:


I found that the log field cquuh is the essence of the problem, in:
int
LogAccessHttp::marshal_client_req_unmapped_url_host(char *buf)
{
  int len = INK_MIN_ALIGN;

  validate_unmapped_url();
  validate_unmapped_url_path();

  len = round_strlen(m_client_req_unmapped_url_host_len + 1); // +1 for eos
  if (buf) {
marshal_mem(buf, m_client_req_unmapped_url_host_str, 
m_client_req_unmapped_url_host_len, len);
  }
  return len;
}
m_client_req_unmapped_url_host_len is -1, so len comes out as 0, which is not right; len should be 8!

 LogAccess.cc assert failed when enabling custom log and stats_over_http plugin
 --

 Key: TS-3841
 URL: https://issues.apache.org/jira/browse/TS-3841
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Reporter: taoyunxing

 when I enable both the custom log and stats_over_http plugin, I encounter the 
 following problem:
 FATAL: LogAccess.cc:816: failed assert `actual_len < padded_len`
 Program received signal SIGABRT, Aborted.
 (gdb) bt
 #0  0x003d150328a5 in raise () from /lib64/libc.so.6
 #1  0x003d15034085 in abort () from /lib64/libc.so.6
 #2  0x77dd8751 in ink_die_die_die () at ink_error.cc:43
 #3  0x77dd8808 in ink_fatal_va(const char *, typedef __va_list_tag 
 __va_list_tag *) (fmt=0x77dea738 %s:%d: failed assert `%s`, 
 ap=0x75e1f430) at ink_error.cc:65
 #4  0x77dd88cd in ink_fatal (message_format=0x77dea738 %s:%d: 
 failed assert `%s`) at ink_error.cc:73
 #5  0x77dd6272 in _ink_assert (expression=0x7e653b actual_len < 
 padded_len, file=0x7e64c7 LogAccess.cc, line=816) at ink_assert.cc:37
 #6  0x0066a2e2 in LogAccess::marshal_mem (dest=0x7fffec004358 , 
 source=0x7e6539 -, actual_len=1, padded_len=0) at LogAccess.cc:816
 #7  0x0066d455 in LogAccessHttp::marshal_client_req_unmapped_url_host 
 (this=0x75e1f7d0, buf=0x7fffec004358 ) at LogAccessHttp.cc:472
 #8  0x0067b441 in LogField::marshal (this=0x1110e80, 
 lad=0x75e1f7d0, buf=0x7fffec004358 ) at LogField.cc:276
 #9  0x0067bf5b in LogFieldList::marshal (this=0x110d5b0, 
 lad=0x75e1f7d0, buf=0x7fffec004318 ) at LogField.cc:574
 #10 0x0068b408 in LogObject::log (this=0x110d4e0, lad=0x75e1f7d0, 
 text_entry=0x0) at LogObject.cc:623
 #11 0x0068da22 in LogObjectManager::log (this=0x1116da8, 
 lad=0x75e1f7d0) at LogObject.cc:1331
 #12 0x00666d0e in Log::access (lad=0x75e1f7d0) at Log.cc:927
 #13 0x005f630d in HttpSM::kill_this (this=0x70ef1aa0) at 
 HttpSM.cc:6571
 #14 0x005e7184 in HttpSM::main_handler (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at HttpSM.cc:2567
 #15 0x00506166 in Continuation::handleEvent (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at ../iocore/eventsystem/I_Continuation.h:145
 #16 0x00635ad9 in HttpTunnel::main_handler (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at HttpTunnel.cc:1585
 #17 0x00506166 in Continuation::handleEvent (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at ../iocore/eventsystem/I_Continuation.h:145
 #18 0x00778bdb in write_signal_and_update (event=103, 
 vc=0x718b1f80) at UnixNetVConnection.cc:170
 #19 0x00778df3 in write_signal_done (event=103, nh=0x7622a780, 
 vc=0x718b1f80) at UnixNetVConnection.cc:212
 #20 0x00779f92 in write_to_net_io (nh=0x7622a780, 
 vc=0x718b1f80, thread=0x76227010) at UnixNetVConnection.cc:530
 #21 0x007796e9 in write_to_net (nh=0x7622a780, vc=0x718b1f80, 
 thread=0x76227010) at UnixNetVConnection.cc:384
 #22 0x0077245a in NetHandler::mainNetEvent (this=0x7622a780, 
 event=5, e=0x75a1eb40) at UnixNet.cc:562
 #23 0x00506166 in Continuation::handleEvent (this=0x7622a780, 
 event=5, data=0x75a1eb40) at ../iocore/eventsystem/I_Continuation.h:145
 #24 0x0079aa26 in EThread::process_event (this=0x76227010, 
 e=0x75a1eb40, calling_code=5) at UnixEThread.cc:128
 #25 0x0079b047 in EThread::execute (this=0x76227010) at 
 UnixEThread.cc:252
 #26 0x00799f2c in spawn_thread_internal (a=0x1106a40) at Thread.cc:85
 #27 0x003d15407851 in start_thread () from /lib64/libpthread.so.0
 #28 0x003d150e767d in clone () from /lib64/libc.so.6
 (gdb)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3841) LogAccess.cc assert failed when enabling custom log and stats_over_http plugin

2015-08-14 Thread taoyunxing (JIRA)
taoyunxing created TS-3841:
--

 Summary: LogAccess.cc assert failed when enabling custom log and 
stats_over_http plugin
 Key: TS-3841
 URL: https://issues.apache.org/jira/browse/TS-3841
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Reporter: taoyunxing


when I enable both the custom log and stats_over_http plugin, I encounter the 
following problem:
FATAL: LogAccess.cc:816: failed assert `actual_len < padded_len`
Program received signal SIGABRT, Aborted.

(gdb) bt
#0  0x003d150328a5 in raise () from /lib64/libc.so.6
#1  0x003d15034085 in abort () from /lib64/libc.so.6
#2  0x77dd8751 in ink_die_die_die () at ink_error.cc:43
#3  0x77dd8808 in ink_fatal_va(const char *, typedef __va_list_tag 
__va_list_tag *) (fmt=0x77dea738 %s:%d: failed assert `%s`, 
ap=0x75e1f430) at ink_error.cc:65
#4  0x77dd88cd in ink_fatal (message_format=0x77dea738 %s:%d: 
failed assert `%s`) at ink_error.cc:73
#5  0x77dd6272 in _ink_assert (expression=0x7e653b actual_len < 
padded_len, file=0x7e64c7 LogAccess.cc, line=816) at ink_assert.cc:37
#6  0x0066a2e2 in LogAccess::marshal_mem (dest=0x7fffec004358 , 
source=0x7e6539 -, actual_len=1, padded_len=0) at LogAccess.cc:816
#7  0x0066d455 in LogAccessHttp::marshal_client_req_unmapped_url_host 
(this=0x75e1f7d0, buf=0x7fffec004358 ) at LogAccessHttp.cc:472
#8  0x0067b441 in LogField::marshal (this=0x1110e80, 
lad=0x75e1f7d0, buf=0x7fffec004358 ) at LogField.cc:276
#9  0x0067bf5b in LogFieldList::marshal (this=0x110d5b0, 
lad=0x75e1f7d0, buf=0x7fffec004318 ) at LogField.cc:574
#10 0x0068b408 in LogObject::log (this=0x110d4e0, lad=0x75e1f7d0, 
text_entry=0x0) at LogObject.cc:623
#11 0x0068da22 in LogObjectManager::log (this=0x1116da8, 
lad=0x75e1f7d0) at LogObject.cc:1331
#12 0x00666d0e in Log::access (lad=0x75e1f7d0) at Log.cc:927
#13 0x005f630d in HttpSM::kill_this (this=0x70ef1aa0) at 
HttpSM.cc:6571
#14 0x005e7184 in HttpSM::main_handler (this=0x70ef1aa0, 
event=2301, data=0x70ef2ef8) at HttpSM.cc:2567
#15 0x00506166 in Continuation::handleEvent (this=0x70ef1aa0, 
event=2301, data=0x70ef2ef8) at ../iocore/eventsystem/I_Continuation.h:145
#16 0x00635ad9 in HttpTunnel::main_handler (this=0x70ef2ef8, 
event=103, data=0x718b2110) at HttpTunnel.cc:1585
#17 0x00506166 in Continuation::handleEvent (this=0x70ef2ef8, 
event=103, data=0x718b2110) at ../iocore/eventsystem/I_Continuation.h:145
#18 0x00778bdb in write_signal_and_update (event=103, 
vc=0x718b1f80) at UnixNetVConnection.cc:170
#19 0x00778df3 in write_signal_done (event=103, nh=0x7622a780, 
vc=0x718b1f80) at UnixNetVConnection.cc:212
#20 0x00779f92 in write_to_net_io (nh=0x7622a780, 
vc=0x718b1f80, thread=0x76227010) at UnixNetVConnection.cc:530
#21 0x007796e9 in write_to_net (nh=0x7622a780, vc=0x718b1f80, 
thread=0x76227010) at UnixNetVConnection.cc:384
#22 0x0077245a in NetHandler::mainNetEvent (this=0x7622a780, 
event=5, e=0x75a1eb40) at UnixNet.cc:562
#23 0x00506166 in Continuation::handleEvent (this=0x7622a780, 
event=5, data=0x75a1eb40) at ../iocore/eventsystem/I_Continuation.h:145
#24 0x0079aa26 in EThread::process_event (this=0x76227010, 
e=0x75a1eb40, calling_code=5) at UnixEThread.cc:128
#25 0x0079b047 in EThread::execute (this=0x76227010) at 
UnixEThread.cc:252
#26 0x00799f2c in spawn_thread_internal (a=0x1106a40) at Thread.cc:85
#27 0x003d15407851 in start_thread () from /lib64/libpthread.so.0
#28 0x003d150e767d in clone () from /lib64/libc.so.6
(gdb)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3841) LogAccess.cc assert failed when enabling custom log and stats_over_http plugin

2015-08-14 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696719#comment-14696719
 ] 

taoyunxing commented on TS-3841:


ATS 5.3.0 on CentOS 6.3 64bit

 LogAccess.cc assert failed when enabling custom log and stats_over_http plugin
 --

 Key: TS-3841
 URL: https://issues.apache.org/jira/browse/TS-3841
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Reporter: taoyunxing

 when I enable both the custom log and stats_over_http plugin, I encounter the 
 following problem:
 FATAL: LogAccess.cc:816: failed assert `actual_len < padded_len`
 Program received signal SIGABRT, Aborted.
 (gdb) bt
 #0  0x003d150328a5 in raise () from /lib64/libc.so.6
 #1  0x003d15034085 in abort () from /lib64/libc.so.6
 #2  0x77dd8751 in ink_die_die_die () at ink_error.cc:43
 #3  0x77dd8808 in ink_fatal_va(const char *, typedef __va_list_tag 
 __va_list_tag *) (fmt=0x77dea738 %s:%d: failed assert `%s`, 
 ap=0x75e1f430) at ink_error.cc:65
 #4  0x77dd88cd in ink_fatal (message_format=0x77dea738 %s:%d: 
 failed assert `%s`) at ink_error.cc:73
 #5  0x77dd6272 in _ink_assert (expression=0x7e653b actual_len < 
 padded_len, file=0x7e64c7 LogAccess.cc, line=816) at ink_assert.cc:37
 #6  0x0066a2e2 in LogAccess::marshal_mem (dest=0x7fffec004358 , 
 source=0x7e6539 -, actual_len=1, padded_len=0) at LogAccess.cc:816
 #7  0x0066d455 in LogAccessHttp::marshal_client_req_unmapped_url_host 
 (this=0x75e1f7d0, buf=0x7fffec004358 ) at LogAccessHttp.cc:472
 #8  0x0067b441 in LogField::marshal (this=0x1110e80, 
 lad=0x75e1f7d0, buf=0x7fffec004358 ) at LogField.cc:276
 #9  0x0067bf5b in LogFieldList::marshal (this=0x110d5b0, 
 lad=0x75e1f7d0, buf=0x7fffec004318 ) at LogField.cc:574
 #10 0x0068b408 in LogObject::log (this=0x110d4e0, lad=0x75e1f7d0, 
 text_entry=0x0) at LogObject.cc:623
 #11 0x0068da22 in LogObjectManager::log (this=0x1116da8, 
 lad=0x75e1f7d0) at LogObject.cc:1331
 #12 0x00666d0e in Log::access (lad=0x75e1f7d0) at Log.cc:927
 #13 0x005f630d in HttpSM::kill_this (this=0x70ef1aa0) at 
 HttpSM.cc:6571
 #14 0x005e7184 in HttpSM::main_handler (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at HttpSM.cc:2567
 #15 0x00506166 in Continuation::handleEvent (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at ../iocore/eventsystem/I_Continuation.h:145
 #16 0x00635ad9 in HttpTunnel::main_handler (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at HttpTunnel.cc:1585
 #17 0x00506166 in Continuation::handleEvent (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at ../iocore/eventsystem/I_Continuation.h:145
 #18 0x00778bdb in write_signal_and_update (event=103, 
 vc=0x718b1f80) at UnixNetVConnection.cc:170
 #19 0x00778df3 in write_signal_done (event=103, nh=0x7622a780, 
 vc=0x718b1f80) at UnixNetVConnection.cc:212
 #20 0x00779f92 in write_to_net_io (nh=0x7622a780, 
 vc=0x718b1f80, thread=0x76227010) at UnixNetVConnection.cc:530
 #21 0x007796e9 in write_to_net (nh=0x7622a780, vc=0x718b1f80, 
 thread=0x76227010) at UnixNetVConnection.cc:384
 #22 0x0077245a in NetHandler::mainNetEvent (this=0x7622a780, 
 event=5, e=0x75a1eb40) at UnixNet.cc:562
 #23 0x00506166 in Continuation::handleEvent (this=0x7622a780, 
 event=5, data=0x75a1eb40) at ../iocore/eventsystem/I_Continuation.h:145
 #24 0x0079aa26 in EThread::process_event (this=0x76227010, 
 e=0x75a1eb40, calling_code=5) at UnixEThread.cc:128
 #25 0x0079b047 in EThread::execute (this=0x76227010) at 
 UnixEThread.cc:252
 #26 0x00799f2c in spawn_thread_internal (a=0x1106a40) at Thread.cc:85
 #27 0x003d15407851 in start_thread () from /lib64/libpthread.so.0
 #28 0x003d150e767d in clone () from /lib64/libc.so.6
 (gdb)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3841) LogAccess.cc assert failed when enabling custom log and stats_over_http plugin

2015-08-14 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14696733#comment-14696733
 ] 

taoyunxing commented on TS-3841:


logs_xml.config file:
<LogFormat>
  <Name = "access"/>
  <Format = "%<cqtq> %<ttms> %<pssc> %<sssc> [%<cqtt>] %<phi> %<phn> %<cquuh> %<cquuc> %<{X-Forwarded-For}cqh> \"%<cqtx>\" %<psql> \"%<pqsi>\" %<crc>:%<phr> %<{Referer}cqh> \"%<{User-agent}cqh>\" %<psct>"/>
</LogFormat>
<LogObject>
  <Format = "access"/>
  <Filename = "access"/>
</LogObject>

 LogAccess.cc assert failed when enabling custom log and stats_over_http plugin
 --

 Key: TS-3841
 URL: https://issues.apache.org/jira/browse/TS-3841
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Reporter: taoyunxing

 when I enable both the custom log and stats_over_http plugin, I encounter the 
 following problem:
 FATAL: LogAccess.cc:816: failed assert `actual_len < padded_len`
 Program received signal SIGABRT, Aborted.
 (gdb) bt
 #0  0x003d150328a5 in raise () from /lib64/libc.so.6
 #1  0x003d15034085 in abort () from /lib64/libc.so.6
 #2  0x77dd8751 in ink_die_die_die () at ink_error.cc:43
 #3  0x77dd8808 in ink_fatal_va(const char *, typedef __va_list_tag 
 __va_list_tag *) (fmt=0x77dea738 %s:%d: failed assert `%s`, 
 ap=0x75e1f430) at ink_error.cc:65
 #4  0x77dd88cd in ink_fatal (message_format=0x77dea738 %s:%d: 
 failed assert `%s`) at ink_error.cc:73
 #5  0x77dd6272 in _ink_assert (expression=0x7e653b actual_len < 
 padded_len, file=0x7e64c7 LogAccess.cc, line=816) at ink_assert.cc:37
 #6  0x0066a2e2 in LogAccess::marshal_mem (dest=0x7fffec004358 , 
 source=0x7e6539 -, actual_len=1, padded_len=0) at LogAccess.cc:816
 #7  0x0066d455 in LogAccessHttp::marshal_client_req_unmapped_url_host 
 (this=0x75e1f7d0, buf=0x7fffec004358 ) at LogAccessHttp.cc:472
 #8  0x0067b441 in LogField::marshal (this=0x1110e80, 
 lad=0x75e1f7d0, buf=0x7fffec004358 ) at LogField.cc:276
 #9  0x0067bf5b in LogFieldList::marshal (this=0x110d5b0, 
 lad=0x75e1f7d0, buf=0x7fffec004318 ) at LogField.cc:574
 #10 0x0068b408 in LogObject::log (this=0x110d4e0, lad=0x75e1f7d0, 
 text_entry=0x0) at LogObject.cc:623
 #11 0x0068da22 in LogObjectManager::log (this=0x1116da8, 
 lad=0x75e1f7d0) at LogObject.cc:1331
 #12 0x00666d0e in Log::access (lad=0x75e1f7d0) at Log.cc:927
 #13 0x005f630d in HttpSM::kill_this (this=0x70ef1aa0) at 
 HttpSM.cc:6571
 #14 0x005e7184 in HttpSM::main_handler (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at HttpSM.cc:2567
 #15 0x00506166 in Continuation::handleEvent (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at ../iocore/eventsystem/I_Continuation.h:145
 #16 0x00635ad9 in HttpTunnel::main_handler (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at HttpTunnel.cc:1585
 #17 0x00506166 in Continuation::handleEvent (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at ../iocore/eventsystem/I_Continuation.h:145
 #18 0x00778bdb in write_signal_and_update (event=103, 
 vc=0x718b1f80) at UnixNetVConnection.cc:170
 #19 0x00778df3 in write_signal_done (event=103, nh=0x7622a780, 
 vc=0x718b1f80) at UnixNetVConnection.cc:212
 #20 0x00779f92 in write_to_net_io (nh=0x7622a780, 
 vc=0x718b1f80, thread=0x76227010) at UnixNetVConnection.cc:530
 #21 0x007796e9 in write_to_net (nh=0x7622a780, vc=0x718b1f80, 
 thread=0x76227010) at UnixNetVConnection.cc:384
 #22 0x0077245a in NetHandler::mainNetEvent (this=0x7622a780, 
 event=5, e=0x75a1eb40) at UnixNet.cc:562
 #23 0x00506166 in Continuation::handleEvent (this=0x7622a780, 
 event=5, data=0x75a1eb40) at ../iocore/eventsystem/I_Continuation.h:145
 #24 0x0079aa26 in EThread::process_event (this=0x76227010, 
 e=0x75a1eb40, calling_code=5) at UnixEThread.cc:128
 #25 0x0079b047 in EThread::execute (this=0x76227010) at 
 UnixEThread.cc:252
 #26 0x00799f2c in spawn_thread_internal (a=0x1106a40) at Thread.cc:85
 #27 0x003d15407851 in start_thread () from /lib64/libpthread.so.0
 #28 0x003d150e767d in clone () from /lib64/libc.so.6
 (gdb)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TS-3841) LogAccess.cc assert failed when enabling custom log and stats_over_http plugin

2015-08-14 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14698055#comment-14698055
 ] 

taoyunxing commented on TS-3841:


Solution:
add a condition check in LogAccessHttp::marshal_client_req_unmapped_url_host(char *buf):
+ if (m_client_req_unmapped_url_host_len < 0)
+   m_client_req_unmapped_url_host_len = 0;
len = round_strlen(m_client_req_unmapped_url_host_len + 1); // +1 for eos

then, the world is quiet!
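
Putting the comment's guard together with the function quoted earlier in this thread, the patched routine would look roughly as follows (assembled from the 5.3.x source excerpt above; an illustrative sketch, not the exact committed patch):
{code}
int
LogAccessHttp::marshal_client_req_unmapped_url_host(char *buf)
{
  int len = INK_MIN_ALIGN;

  validate_unmapped_url();
  validate_unmapped_url_path();

  // The host length can be -1 when there is no unmapped URL host (for example
  // on internal requests generated by plugins such as stats_over_http). Without
  // this guard, round_strlen(-1 + 1) returns 0 and marshal_mem() trips the
  // `actual_len < padded_len` assert.
  if (m_client_req_unmapped_url_host_len < 0)
    m_client_req_unmapped_url_host_len = 0;

  len = round_strlen(m_client_req_unmapped_url_host_len + 1); // +1 for eos
  if (buf) {
    marshal_mem(buf, m_client_req_unmapped_url_host_str, m_client_req_unmapped_url_host_len, len);
  }
  return len;
}
{code}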

 LogAccess.cc assert failed when enabling custom log and stats_over_http plugin
 --

 Key: TS-3841
 URL: https://issues.apache.org/jira/browse/TS-3841
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Reporter: taoyunxing

 when I enable both the custom log and stats_over_http plugin, I encounter the 
 following problem:
 FATAL: LogAccess.cc:816: failed assert `actual_len < padded_len`
 Program received signal SIGABRT, Aborted.
 (gdb) bt
 #0  0x003d150328a5 in raise () from /lib64/libc.so.6
 #1  0x003d15034085 in abort () from /lib64/libc.so.6
 #2  0x77dd8751 in ink_die_die_die () at ink_error.cc:43
 #3  0x77dd8808 in ink_fatal_va(const char *, typedef __va_list_tag 
 __va_list_tag *) (fmt=0x77dea738 %s:%d: failed assert `%s`, 
 ap=0x75e1f430) at ink_error.cc:65
 #4  0x77dd88cd in ink_fatal (message_format=0x77dea738 %s:%d: 
 failed assert `%s`) at ink_error.cc:73
 #5  0x77dd6272 in _ink_assert (expression=0x7e653b actual_len < 
 padded_len, file=0x7e64c7 LogAccess.cc, line=816) at ink_assert.cc:37
 #6  0x0066a2e2 in LogAccess::marshal_mem (dest=0x7fffec004358 , 
 source=0x7e6539 -, actual_len=1, padded_len=0) at LogAccess.cc:816
 #7  0x0066d455 in LogAccessHttp::marshal_client_req_unmapped_url_host 
 (this=0x75e1f7d0, buf=0x7fffec004358 ) at LogAccessHttp.cc:472
 #8  0x0067b441 in LogField::marshal (this=0x1110e80, 
 lad=0x75e1f7d0, buf=0x7fffec004358 ) at LogField.cc:276
 #9  0x0067bf5b in LogFieldList::marshal (this=0x110d5b0, 
 lad=0x75e1f7d0, buf=0x7fffec004318 ) at LogField.cc:574
 #10 0x0068b408 in LogObject::log (this=0x110d4e0, lad=0x75e1f7d0, 
 text_entry=0x0) at LogObject.cc:623
 #11 0x0068da22 in LogObjectManager::log (this=0x1116da8, 
 lad=0x75e1f7d0) at LogObject.cc:1331
 #12 0x00666d0e in Log::access (lad=0x75e1f7d0) at Log.cc:927
 #13 0x005f630d in HttpSM::kill_this (this=0x70ef1aa0) at 
 HttpSM.cc:6571
 #14 0x005e7184 in HttpSM::main_handler (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at HttpSM.cc:2567
 #15 0x00506166 in Continuation::handleEvent (this=0x70ef1aa0, 
 event=2301, data=0x70ef2ef8) at ../iocore/eventsystem/I_Continuation.h:145
 #16 0x00635ad9 in HttpTunnel::main_handler (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at HttpTunnel.cc:1585
 #17 0x00506166 in Continuation::handleEvent (this=0x70ef2ef8, 
 event=103, data=0x718b2110) at ../iocore/eventsystem/I_Continuation.h:145
 #18 0x00778bdb in write_signal_and_update (event=103, 
 vc=0x718b1f80) at UnixNetVConnection.cc:170
 #19 0x00778df3 in write_signal_done (event=103, nh=0x7622a780, 
 vc=0x718b1f80) at UnixNetVConnection.cc:212
 #20 0x00779f92 in write_to_net_io (nh=0x7622a780, 
 vc=0x718b1f80, thread=0x76227010) at UnixNetVConnection.cc:530
 #21 0x007796e9 in write_to_net (nh=0x7622a780, vc=0x718b1f80, 
 thread=0x76227010) at UnixNetVConnection.cc:384
 #22 0x0077245a in NetHandler::mainNetEvent (this=0x7622a780, 
 event=5, e=0x75a1eb40) at UnixNet.cc:562
 #23 0x00506166 in Continuation::handleEvent (this=0x7622a780, 
 event=5, data=0x75a1eb40) at ../iocore/eventsystem/I_Continuation.h:145
 #24 0x0079aa26 in EThread::process_event (this=0x76227010, 
 e=0x75a1eb40, calling_code=5) at UnixEThread.cc:128
 #25 0x0079b047 in EThread::execute (this=0x76227010) at 
 UnixEThread.cc:252
 #26 0x00799f2c in spawn_thread_internal (a=0x1106a40) at Thread.cc:85
 #27 0x003d15407851 in start_thread () from /lib64/libpthread.so.0
 #28 0x003d150e767d in clone () from /lib64/libc.so.6
 (gdb)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (TS-3164) why the load of trafficserver occurrs a abrupt rise on a occasion ?

2015-02-10 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing closed TS-3164.
--
Resolution: Unresolved

No one is interested in it, so I am closing this issue.

 why the load of trafficserver occurrs a abrupt rise on a occasion ?
 ---

 Key: TS-3164
 URL: https://issues.apache.org/jira/browse/TS-3164
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.3 64bit, 8 cores, 128G mem 
Reporter: taoyunxing
 Fix For: sometime


 I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
 {code}
 Time   ---cpu-- ---mem-- ---tcp-- -traffic --sda--- --sdb--- 
 --sdc---  ---load- 
 Time util util   retranbytin  bytout util util
  util load1   
 03/11/14-18:20  40.6787.19 3.3624.5M   43.9M13.0294.68
  0.00  5.34   
 03/11/14-18:25  40.3087.20 3.2722.5M   42.6M12.3894.87
  0.00  5.79   
 03/11/14-18:30  40.8484.67 3.4421.4M   42.0M13.2995.37
  0.00  6.28   
 03/11/14-18:35  43.6387.36 3.2123.8M   45.0M13.2393.99
  0.00  7.37   
 03/11/14-18:40  42.2587.37 3.0924.2M   44.8M12.8495.77
  0.00  7.25   
 03/11/14-18:45  42.9687.44 3.4623.3M   46.0M12.9695.84
  0.00  7.10   
 03/11/14-18:50  44.0087.42 3.4922.3M   43.0M14.1794.99
  0.00  6.57   
 03/11/14-18:55  42.2087.44 3.4622.3M   43.6M13.1996.05
  0.00  6.09   
 03/11/14-19:00  44.9087.53 3.6023.6M   46.5M13.6196.67
  0.00  8.06   
 03/11/14-19:05  46.2687.73 3.2425.8M   49.1M15.3994.05
  0.00  9.98   
 03/11/14-19:10  43.8587.69 3.1925.4M   50.9M12.8897.80
  0.00  7.99   
 03/11/14-19:15  45.2887.69 3.3625.6M   49.6M13.1096.86
  0.00  7.47   
 03/11/14-19:20  44.1185.20 3.2924.1M   47.8M14.2496.75
  0.00  5.82   
 03/11/14-19:25  45.2687.78 3.5224.4M   47.7M13.2195.44
  0.00  7.61   
 03/11/14-19:30  44.8387.80 3.6425.7M   50.8M13.2798.02
  0.00  6.85   
 03/11/14-19:35  44.8987.78 3.6123.9M   49.0M13.3497.42
  0.00  7.04   
 03/11/14-19:40  69.2188.88 0.5518.3M   33.7M11.3971.23
  0.00 65.80   
 03/11/14-19:45  72.4788.66 0.2715.4M   31.6M11.5172.31
  0.00 11.56   
 03/11/14-19:50  44.8788.72 4.1122.7M   46.3M12.9997.33
  0.00  8.29
 {code}

 In addition, the top command shows:
 {code}
 hi:0
 ni:0
 si:45.56
 st:0
 sy:13.92
 us:12.58
 wa:14.3
 id:15.96
 {code}
 Who can help me? Thanks in advance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TS-3164) why the load of trafficserver occurrs a abrupt rise on a occasion ?

2014-11-04 Thread taoyunxing (JIRA)
taoyunxing created TS-3164:
--

 Summary: why the load of trafficserver occurrs a abrupt rise on a 
occasion ?
 Key: TS-3164
 URL: https://issues.apache.org/jira/browse/TS-3164
 Project: Traffic Server
  Issue Type: Bug
  Components: Core
Reporter: taoyunxing


I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
Time   ---cpu-- ---mem-- ---tcp-- -traffic --sda--- --sdb--- 
--sdc---  ---load- 
Time util util   retranbytin  bytout util util 
util load1   
03/11/14-18:20  40.6787.19 3.3624.5M   43.9M13.0294.68 
0.00  5.34   
03/11/14-18:25  40.3087.20 3.2722.5M   42.6M12.3894.87 
0.00  5.79   
03/11/14-18:30  40.8484.67 3.4421.4M   42.0M13.2995.37 
0.00  6.28   
03/11/14-18:35  43.6387.36 3.2123.8M   45.0M13.2393.99 
0.00  7.37   
03/11/14-18:40  42.2587.37 3.0924.2M   44.8M12.8495.77 
0.00  7.25   
03/11/14-18:45  42.9687.44 3.4623.3M   46.0M12.9695.84 
0.00  7.10   
03/11/14-18:50  44.0087.42 3.4922.3M   43.0M14.1794.99 
0.00  6.57   
03/11/14-18:55  42.2087.44 3.4622.3M   43.6M13.1996.05 
0.00  6.09   
03/11/14-19:00  44.9087.53 3.6023.6M   46.5M13.6196.67 
0.00  8.06   
03/11/14-19:05  46.2687.73 3.2425.8M   49.1M15.3994.05 
0.00  9.98   
03/11/14-19:10  43.8587.69 3.1925.4M   50.9M12.8897.80 
0.00  7.99   
03/11/14-19:15  45.2887.69 3.3625.6M   49.6M13.1096.86 
0.00  7.47   
03/11/14-19:20  44.1185.20 3.2924.1M   47.8M14.2496.75 
0.00  5.82   
03/11/14-19:25  45.2687.78 3.5224.4M   47.7M13.2195.44 
0.00  7.61   
03/11/14-19:30  44.8387.80 3.6425.7M   50.8M13.2798.02 
0.00  6.85   
03/11/14-19:35  44.8987.78 3.6123.9M   49.0M13.3497.42 
0.00  7.04   
03/11/14-19:40  69.2188.88 0.5518.3M   33.7M11.3971.23 
0.00 65.80   
03/11/14-19:45  72.4788.66 0.2715.4M   31.6M11.5172.31 
0.00 11.56   
03/11/14-19:50  44.8788.72 4.1122.7M   46.3M12.9997.33 
0.00  8.29   
In addition, the top command shows:
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

Who can help me? Thanks in advance.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TS-3164) why the load of trafficserver occurrs a abrupt rise on a occasion ?

2014-11-04 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-3164:
---
Description: 
I use Tsar to monitor the traffic status of ATS 4.2.0, and came across the following problem:
Time   ---cpu-- ---mem-- ---tcp-- -traffic --sda--- --sdb--- 
--sdc---  ---load- 
Time util util   retranbytin  bytout util util 
util load1   
03/11/14-18:20  40.6787.19 3.3624.5M   43.9M13.0294.68 
0.00  5.34   
03/11/14-18:25  40.3087.20 3.2722.5M   42.6M12.3894.87 
0.00  5.79   
03/11/14-18:30  40.8484.67 3.4421.4M   42.0M13.2995.37 
0.00  6.28   
03/11/14-18:35  43.6387.36 3.2123.8M   45.0M13.2393.99 
0.00  7.37   
03/11/14-18:40  42.2587.37 3.0924.2M   44.8M12.8495.77 
0.00  7.25   
03/11/14-18:45  42.9687.44 3.4623.3M   46.0M12.9695.84 
0.00  7.10   
03/11/14-18:50  44.0087.42 3.4922.3M   43.0M14.1794.99 
0.00  6.57   
03/11/14-18:55  42.2087.44 3.4622.3M   43.6M13.1996.05 
0.00  6.09   
03/11/14-19:00  44.9087.53 3.6023.6M   46.5M13.6196.67 
0.00  8.06   
03/11/14-19:05  46.2687.73 3.2425.8M   49.1M15.3994.05 
0.00  9.98   
03/11/14-19:10  43.8587.69 3.1925.4M   50.9M12.8897.80 
0.00  7.99   
03/11/14-19:15  45.2887.69 3.3625.6M   49.6M13.1096.86 
0.00  7.47   
03/11/14-19:20  44.1185.20 3.2924.1M   47.8M14.2496.75 
0.00  5.82   
03/11/14-19:25  45.2687.78 3.5224.4M   47.7M13.2195.44 
0.00  7.61   
03/11/14-19:30  44.8387.80 3.6425.7M   50.8M13.2798.02 
0.00  6.85   
03/11/14-19:35  44.8987.78 3.6123.9M   49.0M13.3497.42 
0.00  7.04   
03/11/14-19:40  69.2188.88 0.5518.3M   33.7M11.3971.23 
0.00 65.80   
03/11/14-19:45  72.4788.66 0.2715.4M   31.6M11.5172.31 
0.00 11.56   
03/11/14-19:50  44.8788.72 4.1122.7M   46.3M12.9997.33 
0.00  8.29   
In addition, the top command shows:
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

Who can help me? Thanks in advance.

  was:
I use Tsar to monitor the traffic status of the ATS 4.2.0, and come across the 
following problem:
Time   ---cpu-- ---mem-- ---tcp-- -traffic --sda--- --sdb--- 
--sdc---  ---load- 
Time util util   retranbytin  bytout util util 
util load1   
03/11/14-18:20  40.6787.19 3.3624.5M   43.9M13.0294.68 
0.00  5.34   
03/11/14-18:25  40.3087.20 3.2722.5M   42.6M12.3894.87 
0.00  5.79   
03/11/14-18:30  40.8484.67 3.4421.4M   42.0M13.2995.37 
0.00  6.28   
03/11/14-18:35  43.6387.36 3.2123.8M   45.0M13.2393.99 
0.00  7.37   
03/11/14-18:40  42.2587.37 3.0924.2M   44.8M12.8495.77 
0.00  7.25   
03/11/14-18:45  42.9687.44 3.4623.3M   46.0M12.9695.84 
0.00  7.10   
03/11/14-18:50  44.0087.42 3.4922.3M   43.0M14.1794.99 
0.00  6.57   
03/11/14-18:55  42.2087.44 3.4622.3M   43.6M13.1996.05 
0.00  6.09   
03/11/14-19:00  44.9087.53 3.6023.6M   46.5M13.6196.67 
0.00  8.06   
03/11/14-19:05  46.2687.73 3.2425.8M   49.1M15.3994.05 
0.00  9.98   
03/11/14-19:10  43.8587.69 3.1925.4M   50.9M12.8897.80 
0.00  7.99   
03/11/14-19:15  45.2887.69 3.3625.6M   49.6M13.1096.86 
0.00  7.47   
03/11/14-19:20  44.1185.20 3.2924.1M   47.8M14.2496.75 
0.00  5.82   
03/11/14-19:25  45.2687.78 3.5224.4M   47.7M13.2195.44 
0.00  7.61   
03/11/14-19:30  44.8387.80 3.6425.7M   50.8M13.2798.02 
0.00  6.85   
03/11/14-19:35  44.8987.78 3.6123.9M   49.0M13.3497.42 
0.00  7.04   
03/11/14-19:40  69.2188.88 0.5518.3M   33.7M11.3971.23 
0.00 65.80   
03/11/14-19:45  72.4788.66 0.2715.4M   31.6M11.5172.31 
0.00 11.56   
03/11/14-19:50  44.8788.72 4.1122.7M   46.3M12.9997.33 
0.00  8.29   
in addition, top command show
hi:0
ni:0
si:45.56
st:0
sy:13.92
us:12.58
wa:14.3
id:15.96

who help me ? think in advance.

Environment: CentOS 6.3 64bit, 8 cores, 128G mem 

 why the load of trafficserver occurrs a abrupt rise on a occasion ?
 ---

 Key: TS-3164

[jira] [Updated] (TS-1305) unable to get a complete http response body each time using a extended state machine !

2012-06-18 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-1305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing updated TS-1305:
---

Description: 
I wrote a third state machine, FetchPageSM, by modifying the SimpleCont in 
trafficserver-3.0.4\proxy\SimpleHttp.cc, and I create an HTTP request like this:

+ Request Header created for ProxyFetchPageSM +
-- State Machine Id: 0
GET http://www.cnbeta.com/articles/192944.htm HTTP/1.1
Host: www.cnbeta.com
Proxy-Connection: close
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) 
Chrome/17.0.963.83 Safari/535.11
Accept: \*/\*

Then DNS resolves the URL, I connect to the origin server, send the above request, and finally receive the response header and body. Surprisingly, sometimes I cannot get the complete response body, because the actual length of the response body does not match the Content-Length field in the response header, while at other times I do get the complete one.

It seems random in this case. Why?
I'm eager to get some help or a hint from the experts!
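
As a side note, here is a minimal standalone sketch of the consistency check described above (this is not the ATS state machine itself; the names and numbers are purely illustrative):
{code}
#include <cstddef>
#include <cstdio>

// Returns true when the accumulated body matches the Content-Length header.
// In a custom state machine such as FetchPageSM this check would run once the
// read side reports completion (e.g. on VC_EVENT_READ_COMPLETE / VC_EVENT_EOS);
// a mismatch means the body was truncated or the read finished too early.
static bool
body_is_complete(size_t bytes_received, long content_length)
{
  if (content_length < 0) {
    // No Content-Length header (e.g. chunked encoding): cannot be checked this way.
    return true;
  }
  return bytes_received == static_cast<size_t>(content_length);
}

int
main()
{
  const size_t received       = 18432; // bytes actually read from the origin
  const long   content_length = 20480; // value parsed from the response header

  if (!body_is_complete(received, content_length)) {
    std::printf("truncated body: got %zu of %ld bytes\n", received, content_length);
  }
  return 0;
}
{code}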

  was:
I wrote a third state machine FetchPageSM by modifying the SimpleCont in 
trafficserver-3.0.4\proxy\SimpleHttp.cc, and I create a http request like this

+ Request Header created for ProxyFetchPageSM +
-- State Machine Id: 0
GET http://www.cnbeta.com/articles/192944.htm HTTP/1.1
Host: www.cnbeta.com
Proxy-Connection: close
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like Gecko) 
Chrome/17.0.963.83 Safari/535.11
Accept: */*

, then dns parse url, connect to os, send the above request, finally receive 
the resposne header and body, surprisely enough I couldn't get the complete 
response body sometime, because the actual length of response body don't match 
the Content-Length field in response header, but sometime I get the complete 
one.

It seems random in this case, why ??
I'm very sad,and eager to get some help or hint from experts,help me!


 unable to get a complete http response body each time using an extended state 
 machine !
 --

 Key: TS-1305
 URL: https://issues.apache.org/jira/browse/TS-1305
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.4
 Environment: Ubuntu 10.10
Reporter: taoyunxing
  Labels: fullresponse, httptransaction
 Fix For: sometime


 I wrote a third state machine FetchPageSM by modifying the SimpleCont in 
 trafficserver-3.0.4\proxy\SimpleHttp.cc, and I create a http request like this
 + Request Header created for ProxyFetchPageSM +
 -- State Machine Id: 0
 GET http://www.cnbeta.com/articles/192944.htm HTTP/1.1
 Host: www.cnbeta.com
 Proxy-Connection: close
 User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.11 (KHTML, like 
 Gecko) Chrome/17.0.963.83 Safari/535.11
 Accept: \*/\*
 , then dns parse url, connect to os, send the above request, finally receive 
 the resposne header and body, surprisely enough I couldn't get the complete 
 response body sometime, because the actual length of response body don't 
 match the Content-Length field in response header, but sometime I get the 
 complete one.
 It seems random in this case, why ??
 I'm very sad,and eager to get some help or hint from experts,help me!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (TS-921) TSIOBufferBlockReadStart() couldn't return the corresponding chunk data length when the origin server uses the Transfer-Encoding: chunked http header !

2011-08-18 Thread taoyunxing (JIRA)
TSIOBufferBlockReadStart() couldn't return the corresponding chunk data length 
when the origin server uses the Transfer-Encoding: chunked http header !


 Key: TS-921
 URL: https://issues.apache.org/jira/browse/TS-921
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.0.1
 Environment: OS: Ubuntu 10.10 32bit, Traffic Server version:.3.0.1, 
Web Browser:firefox 6.0,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
500G
Reporter: taoyunxing
 Fix For: 3.0.2


Recently I met with a strange problem when trying to get the server response 
body in Transfer-Encoding: chunked mode: I could not get the corresponding 
chunk length in hexadecimal! The details are as follows:

I use the null-transform plugin, modified to just output the server response 
body. When the web server uses the Transfer-Encoding: chunked header to 
transfer chunked data to the user agent, e.g. when I request the web page 
http://www.qq.com/, I only get the chunk body data, but no chunk length in 
hexadecimal format ("383cb\r\n") before the chunk body data. The chunk body 
data is uncompressed and looks like <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 
1.0 Transitional//EN" 
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\r\n...

In addition, I captured the packets with Wireshark, and I am sure I can see 
the chunk length "383cb\r\n" before the chunk body data <!DOCTYPE html PUBLIC 
"-//W3C//DTD XHTML 1.0 Transitional//EN" 
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">\r\n..., so it is a 
complete chunked response!

I continued to check the code of null-transform.c, set a breakpoint at the 
function TSIOBufferBlockReadStart(), then ran ATS under the debugger. When ATS 
stopped at the breakpoint in TSIOBufferBlockReadStart(), I printed the string 
it returns; there is no chunk data length at the head of the output string, 
which implies the API TSIOBufferBlockReadStart() has a bug! (See also the 
sketch after the gdb session below.)


Breakpoint 1, TSIOBufferBlockReadStart (blockp=0x89d6570, readerp=0x89d5330, 
avail=0xb6c8e288) at InkIOCoreAPI.cc:659
659 {
(gdb) s
660   sdk_assert(sdk_sanity_check_iocore_structure(blockp) == TS_SUCCESS);
(gdb) n
667   p = blk->start();
(gdb) 
668   if (avail)
(gdb) p p
$3 = 0xb1312000 "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 
Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\r\n<html 
xmlns=\"http://www.w3.org/1999/xhtml\">\r\n<head>\r\n<meta 
http-equiv=\"Conten"...
(gdb)
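
One likely explanation (not confirmed anywhere in this ticket) is that ATS removes the 
chunked transfer-coding before handing the response body to transform plugins, so the 
IOBuffer a transform sees never contains the hex chunk-size lines; TSIOBufferBlockReadStart() 
would then correctly be returning the start of the de-chunked body. For reference, a minimal 
sketch of the usual block-walking loop in a transform, where reader is assumed to come from 
TSVIOReaderGet(input_vio) as in null-transform.c:

{code}
/* Hedged sketch: walk the IOBuffer blocks a transform can see. The data the
 * reader yields is the (already de-chunked) body, so no "383cb\r\n" size
 * lines are expected here. */
#include <ts/ts.h>
#include <inttypes.h>

static void
dump_blocks(TSIOBufferReader reader)
{
  TSIOBufferBlock blk = TSIOBufferReaderStart(reader);

  while (blk != NULL) {
    int64_t avail = 0;
    const char *p = TSIOBufferBlockReadStart(blk, reader, &avail);

    TSDebug("null-transform", "block of %" PRId64 " bytes, starts with: %.*s",
            avail, (int)(avail < 20 ? avail : 20), p);
    blk = TSIOBufferBlockNext(blk);
  }
}
{code}

To see the raw chunk framing itself, one would have to look at the wire (as in the 
Wireshark capture) rather than inside a transform.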


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (TS-922) TSVIONTodoGet() has a bug!

2011-08-18 Thread taoyunxing (JIRA)
TSVIONTodoGet() has a bug!


 Key: TS-922
 URL: https://issues.apache.org/jira/browse/TS-922
 Project: Traffic Server
  Issue Type: Bug
  Components: TS API
Affects Versions: 3.0.1
 Environment: OS: Ubuntu 10.10 32bit, Traffic Server version:.3.0, Web 
Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
500G
Reporter: taoyunxing
 Fix For: 3.0.2


When I use the null-transform plugin and print some debug info, I find that 
TSVIONTodoGet() returns a huge number, so I suspect it may have a bug! 
The details are shown below:

code in null-transform.c:

  TSVIO input_vio;
  int64_t towrite = 0;
  int64_t avail = 0;

  towrite = TSVIONTodoGet(input_vio);
  TSDebug("null-transform", "\ttoWrite is %" PRId64, towrite);

  if (towrite > 0) {
    /* The amount of data left to read needs to be truncated by
     * the amount of data actually in the read buffer.
     */
    avail = TSIOBufferReaderAvail(TSVIOReaderGet(input_vio));
    TSDebug("null-transform", "\tavail is %" PRId64, avail);
#if MDSN_LOG
    if (log) {
      TSTextLogObjectWrite(log, "handle_transform() with data to write length: %" PRId64
                           ", IOBufferReader available data length: %" PRId64,
                           towrite, avail);
    }
#endif

log info:

20110818.13h51m11s handle_transform() with data to write  length: 
9223372036854775807, IOBufferReader available data length: 1388
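
9223372036854775807 is INT64_MAX, so TSVIONTodoGet() is reporting the todo of a VIO whose 
nbytes was set to INT64_MAX (meaning "read/write until EOS") rather than returning a 
corrupted value. The usual pattern, which the stock null-transform.c applies a few lines 
further down (not shown in the fragment above), is to clamp the value by what is actually 
buffered. A minimal sketch:

{code}
/* Hedged sketch: treat an INT64_MAX "todo" as "until EOS" and clamp it by
 * what the reader actually holds before copying anything. */
int64_t towrite = TSVIONTodoGet(input_vio);

if (towrite > 0) {
  int64_t avail = TSIOBufferReaderAvail(TSVIOReaderGet(input_vio));
  if (towrite > avail) {
    towrite = avail; /* only this much can be moved right now */
  }
  /* ... copy `towrite` bytes from the input reader to the output buffer ... */
}
{code}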
 

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (TS-878) ATS3.0 frequently shows Broken pipe and exit !

2011-07-14 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13065136#comment-13065136
 ] 

taoyunxing commented on TS-878:
---

Leif, thank you! It was just a gdb configuration issue; I did as you said, and 
everything is OK now.
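
The ticket does not record the exact gdb setting, but the usual way to keep gdb from 
stopping on the SIGPIPE raised when a client drops the connection while traffic_server 
is writing is to pass the signal through (an assumption about the fix, not a quote of 
the advice given here):

{code}
(gdb) handle SIGPIPE nostop noprint pass
{code}

The same line can also go into ~/.gdbinit so every debugging session picks it up.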

 ATS3.0 frequently shows Broken pipe and exit !
 --

 Key: TS-878
 URL: https://issues.apache.org/jira/browse/TS-878
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
 Environment: OS: Ubuntu 10.10 32bit, Traffic Server version:.3.0, Web 
 Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
 500G
Reporter: taoyunxing
   Original Estimate: 96h
  Remaining Estimate: 96h

 when I cliclk some website using the debug mode of ATS 3.0, the web page 
 shows well, and I can see most content, but suddenly the following warning 
 occurs: 
 Program received signal SIGPIPE, Broken pipe.
 0x0012e416 in __kernel_vsyscall ()
 (gdb) bt
 #0  0x0012e416 in __kernel_vsyscall ()
 #1  0x0016754b in write () from /lib/libpthread.so.0
 #2  0x082be866 in write (this=0x9562bb0, towrite=5552, 
 wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
 ../../iocore/eventsystem/P_UnixSocketManager.h:207
 #3  UnixNetVConnection::load_buffer_and_write (this=0x9562bb0, towrite=5552, 
 wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
 UnixNetVConnection.cc:834
 #4  0x082c31a6 in write_to_net_io (nh=0xb759c528, vc=0x9562bb0, 
 thread=0xb759b008) at UnixNetVConnection.cc:439
 #5  0x082b977b in NetHandler::mainNetEvent (this=0xb759c528, event=5, 
 e=0x88af7a0) at UnixNet.cc:413
 #6  0x082e86c2 in handleEvent (this=0xb759b008, e=0x88af7a0, calling_code=5) 
 at I_Continuation.h:146
 #7  EThread::process_event (this=0xb759b008, e=0x88af7a0, calling_code=5) at 
 UnixEThread.cc:140
 #8  0x082e8fe3 in EThread::execute (this=0xb759b008) at UnixEThread.cc:262
 #9  0x080f68d6 in main (argc=1, argv=0xb424) at Main.cc:1958
 (gdb)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Closed] (TS-878) ATS3.0 frequently shows Broken pipe and exit !

2011-07-14 Thread taoyunxing (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

taoyunxing closed TS-878.
-

Backport to Version: 3.0.0

This is a bug, just a gdb configuration issue, and has been resolved !

 ATS3.0 frequently shows Broken pipe and exit !
 --

 Key: TS-878
 URL: https://issues.apache.org/jira/browse/TS-878
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
 Environment: OS: Ubuntu 10.10 32bit, Traffic Server version:.3.0, Web 
 Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
 500G
Reporter: taoyunxing
   Original Estimate: 96h
  Remaining Estimate: 96h

 when I cliclk some website using the debug mode of ATS 3.0, the web page 
 shows well, and I can see most content, but suddenly the following warning 
 occurs: 
 Program received signal SIGPIPE, Broken pipe.
 0x0012e416 in __kernel_vsyscall ()
 (gdb) bt
 #0  0x0012e416 in __kernel_vsyscall ()
 #1  0x0016754b in write () from /lib/libpthread.so.0
 #2  0x082be866 in write (this=0x9562bb0, towrite=5552, 
 wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
 ../../iocore/eventsystem/P_UnixSocketManager.h:207
 #3  UnixNetVConnection::load_buffer_and_write (this=0x9562bb0, towrite=5552, 
 wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
 UnixNetVConnection.cc:834
 #4  0x082c31a6 in write_to_net_io (nh=0xb759c528, vc=0x9562bb0, 
 thread=0xb759b008) at UnixNetVConnection.cc:439
 #5  0x082b977b in NetHandler::mainNetEvent (this=0xb759c528, event=5, 
 e=0x88af7a0) at UnixNet.cc:413
 #6  0x082e86c2 in handleEvent (this=0xb759b008, e=0x88af7a0, calling_code=5) 
 at I_Continuation.h:146
 #7  EThread::process_event (this=0xb759b008, e=0x88af7a0, calling_code=5) at 
 UnixEThread.cc:140
 #8  0x082e8fe3 in EThread::execute (this=0xb759b008) at UnixEThread.cc:262
 #9  0x080f68d6 in main (argc=1, argv=0xb424) at Main.cc:1958
 (gdb)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Issue Comment Edited] (TS-878) ATS3.0 frequently shows Broken pipe and exit !

2011-07-14 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13065626#comment-13065626
 ] 

taoyunxing edited comment on TS-878 at 7/15/11 12:13 AM:
-

This is not a bug, just a gdb configuration issue, and has been resolved !

  was (Author: tao_627):
This is a bug, just a gdb configuration issue, and has been resolved !
  
 ATS3.0 frequently shows Broken pipe and exit !
 --

 Key: TS-878
 URL: https://issues.apache.org/jira/browse/TS-878
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
 Environment: OS: Ubuntu 10.10 32bit, Traffic Server version:.3.0, Web 
 Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
 500G
Reporter: taoyunxing
   Original Estimate: 96h
  Remaining Estimate: 96h

 when I cliclk some website using the debug mode of ATS 3.0, the web page 
 shows well, and I can see most content, but suddenly the following warning 
 occurs: 
 Program received signal SIGPIPE, Broken pipe.
 0x0012e416 in __kernel_vsyscall ()
 (gdb) bt
 #0  0x0012e416 in __kernel_vsyscall ()
 #1  0x0016754b in write () from /lib/libpthread.so.0
 #2  0x082be866 in write (this=0x9562bb0, towrite=5552, 
 wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
 ../../iocore/eventsystem/P_UnixSocketManager.h:207
 #3  UnixNetVConnection::load_buffer_and_write (this=0x9562bb0, towrite=5552, 
 wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
 UnixNetVConnection.cc:834
 #4  0x082c31a6 in write_to_net_io (nh=0xb759c528, vc=0x9562bb0, 
 thread=0xb759b008) at UnixNetVConnection.cc:439
 #5  0x082b977b in NetHandler::mainNetEvent (this=0xb759c528, event=5, 
 e=0x88af7a0) at UnixNet.cc:413
 #6  0x082e86c2 in handleEvent (this=0xb759b008, e=0x88af7a0, calling_code=5) 
 at I_Continuation.h:146
 #7  EThread::process_event (this=0xb759b008, e=0x88af7a0, calling_code=5) at 
 UnixEThread.cc:140
 #8  0x082e8fe3 in EThread::execute (this=0xb759b008) at UnixEThread.cc:262
 #9  0x080f68d6 in main (argc=1, argv=0xb424) at Main.cc:1958
 (gdb)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (TS-872) ATS 3.0 shows a http 502 error as a forward proxy server !

2011-07-13 Thread taoyunxing (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13064514#comment-13064514
 ] 

taoyunxing commented on TS-872:
---

I did a fresh configure, make, and install, with a small revision of 
records.config to run as a forward proxy, and then opened the web pages 
successfully. Maybe I made some mistakes last time. Thanks Leif, ATS 3.0 is 
good! This is not a bug!
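
The comment does not say which records.config lines were changed; for readers hitting 
the same 502, a hedged example of minimal forward-proxy settings (an assumption, not 
the reporter's actual diff) would be:

{code}
CONFIG proxy.config.url_remap.remap_required INT 0
CONFIG proxy.config.reverse_proxy.enabled INT 0
{code}

With remap_required set to 0, requests that do not match a remap.config rule are 
forwarded to the origin named in the request instead of being rejected.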

 ATS 3.0 shows a http 502 error as a forward proxy server !
 --

 Key: TS-872
 URL: https://issues.apache.org/jira/browse/TS-872
 Project: Traffic Server
  Issue Type: Bug
  Components: DNS
Affects Versions: 3.0.0
 Environment: OS: Ubuntu 10.10, Traffic Server version:.3.0, Web 
 Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
 500G
Reporter: taoyunxing
  Labels: patch
 Fix For: sometime

   Original Estimate: 48h
  Remaining Estimate: 48h

 when I set up a forward proxy with ATS 3.0 and firefox 4.0.1 and start the 
 proxy server as a root, it shows me the following info:
 root@tyx-System-Product-Name:/usr/local/bin# ./traffic_server 
 [TrafficServer] using root directory '/usr/local'
 [Jul  6 08:56:36.765] {3077691088} STATUS: opened 
 /usr/local/var/log/trafficserver/diags.log
 [Jul  6 08:56:36.765] {3077691088} NOTE: updated diags config
 [Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
 [HttpConfig::init_aeua_filter] - Config: 
 /usr/local/etc/trafficserver/ae_ua.config
 [Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
 [HttpConfig::init_aeua_filter] - Opening config 
 /usr/local/etc/trafficserver/ae_ua.config
 [Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
 [HttpConfig::init_aeua_filter] - Added 0 REGEXP filters
 [Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
 [init_http_aeua_filter] - Total loaded 0 REGEXP for 
 Accept-Enconding/User-Agent filtering
 [Jul  6 08:56:36.768] Server {3077691088} NOTE: cache clustering disabled
 [Jul  6 08:56:36.768] Server {3077691088} NOTE: clearing statistics
 [Jul  6 08:56:36.770] Server {3077691088} DEBUG: (dns) ink_dns_init: called 
 with init_called = 0
 [Jul  6 08:56:36.779] Server {3077691088} DEBUG: (dns) 
 localhost=tyx-System-Product-Name
 [Jul  6 08:56:36.779] Server {3077691088} DEBUG: (dns) Round-robin 
 nameservers = 0
 [Jul  6 08:56:36.779] Server {3077691088} DEBUG: (hostdb) Storage path is 
 /usr/local/var/trafficserver
 [Jul  6 08:56:36.779] Server {3077691088} DEBUG: (hostdb) Opening host.db, 
 size=20
 [Jul  6 08:56:36.779] Server {3077691088} WARNING: configuration changed: 
 [hostdb.config] : reinitializing database
 [Jul  6 08:56:36.779] Server {3077691088} NOTE: reconfiguring host database
 [Jul  6 08:56:36.779] Server {3077691088} DEBUG: (hostdb) unable to unlink 
 /usr/local/etc/trafficserver/internal/hostdb.config
 [Jul  6 08:56:36.779] Server {3077691088} WARNING: Configured store too 
 small, unable to reconfigure
 [Jul  6 08:56:36.779] Server {3077691088} WARNING: unable to initialize 
 database (too little storage)
 : [hostdb.config] : disabling database
 You may need to 'reconfigure' your cache manually.  Please refer to
 the 'Configuration' chapter in the manual.
 [Jul  6 08:56:36.779] Server {3077691088} WARNING: could not initialize host 
 database. Host database will be disabled
 [Jul  6 08:56:36.779] Server {3077691088} WARNING: bad hostdb or storage 
 configuration, hostdb disabled
 [Jul  6 08:56:36.780] Server {3077691088} NOTE: cache clustering disabled
 [Jul  6 08:56:36.834] Server {3057408880} WARNING: disk header different for 
 disk /usr/local/var/trafficserver/cache.db: clearing the disk
 [Jul  6 08:56:36.884] Server {3077691088} NOTE: logging initialized[7], 
 logging_mode = 3
 [Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_init) 
 proxy.config.http.redirection_enabled = 0
 [Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_init) 
 proxy.config.http.number_of_redirections = 1
 [Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_init) 
 proxy.config.http.post_copy_size = 2048
 [Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_tproxy) Primary listen 
 socket transparency is off
 [Jul  6 08:56:36.890] Server {3077691088} NOTE: traffic server running
 [Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) 
 DNSHandler::startEvent: on thread 0
 [Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) open_con: opening 
 connection 8.8.8.8:53
 [Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) random port = 42595
 [Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) opening connection 
 8.8.8.8:53 SUCCEEDED for 0
 [Jul  6 08:56:36.918] Server {3058461552} NOTE: Clearing Disk: 
 /usr/local/var/trafficserver/cache.db
 [Jul  6 08:56:36.919] Server {3058461552} NOTE: clearing cache 

[jira] [Created] (TS-878) ATS3.0 frequently shows Broken pipe and exit !

2011-07-13 Thread taoyunxing (JIRA)
ATS3.0 frequently shows Broken pipe and exit !
--

 Key: TS-878
 URL: https://issues.apache.org/jira/browse/TS-878
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.0.0
 Environment: OS: Ubuntu 10.10 32bit, Traffic Server version:.3.0, Web 
Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
500G
Reporter: taoyunxing


When I click through some websites using the debug mode of ATS 3.0, the web 
pages show up well and I can see most of the content, but suddenly the 
following warning occurs: 

Program received signal SIGPIPE, Broken pipe.
0x0012e416 in __kernel_vsyscall ()
(gdb) bt
#0  0x0012e416 in __kernel_vsyscall ()
#1  0x0016754b in write () from /lib/libpthread.so.0
#2  0x082be866 in write (this=0x9562bb0, towrite=5552, wattempted=@0xbfffdb20, 
total_wrote=@0xbfffdb28, buf=...) at 
../../iocore/eventsystem/P_UnixSocketManager.h:207
#3  UnixNetVConnection::load_buffer_and_write (this=0x9562bb0, towrite=5552, 
wattempted=@0xbfffdb20, total_wrote=@0xbfffdb28, buf=...) at 
UnixNetVConnection.cc:834
#4  0x082c31a6 in write_to_net_io (nh=0xb759c528, vc=0x9562bb0, 
thread=0xb759b008) at UnixNetVConnection.cc:439
#5  0x082b977b in NetHandler::mainNetEvent (this=0xb759c528, event=5, 
e=0x88af7a0) at UnixNet.cc:413
#6  0x082e86c2 in handleEvent (this=0xb759b008, e=0x88af7a0, calling_code=5) at 
I_Continuation.h:146
#7  EThread::process_event (this=0xb759b008, e=0x88af7a0, calling_code=5) at 
UnixEThread.cc:140
#8  0x082e8fe3 in EThread::execute (this=0xb759b008) at UnixEThread.cc:262
#9  0x080f68d6 in main (argc=1, argv=0xb424) at Main.cc:1958
(gdb)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (TS-872) ATS 3.0 shows a http 502 error as a forward proxy server !

2011-07-08 Thread taoyunxing (JIRA)
ATS 3.0 shows a http 502 error as a forward proxy server !
--

 Key: TS-872
 URL: https://issues.apache.org/jira/browse/TS-872
 Project: Traffic Server
  Issue Type: Bug
  Components: DNS
Affects Versions: 3.0.0
 Environment: OS: Ubuntu 10.10, Traffic Server version:.3.0, Web 
Browser:firefox 4.0.1,CPU: Intel core i3-2100 3.10GHz, Memory: 2G, HardDisk: 
500G
Reporter: taoyunxing
 Fix For: sometime


When I set up a forward proxy with ATS 3.0 and Firefox 4.0.1 and start the 
proxy server as root, it shows me the following info:

root@tyx-System-Product-Name:/usr/local/bin# ./traffic_server 
[TrafficServer] using root directory '/usr/local'
[Jul  6 08:56:36.765] {3077691088} STATUS: opened 
/usr/local/var/log/trafficserver/diags.log
[Jul  6 08:56:36.765] {3077691088} NOTE: updated diags config
[Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
[HttpConfig::init_aeua_filter] - Config: 
/usr/local/etc/trafficserver/ae_ua.config
[Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
[HttpConfig::init_aeua_filter] - Opening config 
/usr/local/etc/trafficserver/ae_ua.config
[Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
[HttpConfig::init_aeua_filter] - Added 0 REGEXP filters
[Jul  6 08:56:36.766] Server {3077691088} DEBUG: (http_aeua) 
[init_http_aeua_filter] - Total loaded 0 REGEXP for Accept-Enconding/User-Agent 
filtering
[Jul  6 08:56:36.768] Server {3077691088} NOTE: cache clustering disabled
[Jul  6 08:56:36.768] Server {3077691088} NOTE: clearing statistics
[Jul  6 08:56:36.770] Server {3077691088} DEBUG: (dns) ink_dns_init: called 
with init_called = 0
[Jul  6 08:56:36.779] Server {3077691088} DEBUG: (dns) 
localhost=tyx-System-Product-Name
[Jul  6 08:56:36.779] Server {3077691088} DEBUG: (dns) Round-robin nameservers 
= 0
[Jul  6 08:56:36.779] Server {3077691088} DEBUG: (hostdb) Storage path is 
/usr/local/var/trafficserver
[Jul  6 08:56:36.779] Server {3077691088} DEBUG: (hostdb) Opening host.db, 
size=20
[Jul  6 08:56:36.779] Server {3077691088} WARNING: configuration changed: 
[hostdb.config] : reinitializing database
[Jul  6 08:56:36.779] Server {3077691088} NOTE: reconfiguring host database
[Jul  6 08:56:36.779] Server {3077691088} DEBUG: (hostdb) unable to unlink 
/usr/local/etc/trafficserver/internal/hostdb.config
[Jul  6 08:56:36.779] Server {3077691088} WARNING: Configured store too small, 
unable to reconfigure
[Jul  6 08:56:36.779] Server {3077691088} WARNING: unable to initialize 
database (too little storage)
: [hostdb.config] : disabling database
You may need to 'reconfigure' your cache manually.  Please refer to
the 'Configuration' chapter in the manual.
[Jul  6 08:56:36.779] Server {3077691088} WARNING: could not initialize host 
database. Host database will be disabled
[Jul  6 08:56:36.779] Server {3077691088} WARNING: bad hostdb or storage 
configuration, hostdb disabled
[Jul  6 08:56:36.780] Server {3077691088} NOTE: cache clustering disabled
[Jul  6 08:56:36.834] Server {3057408880} WARNING: disk header different for 
disk /usr/local/var/trafficserver/cache.db: clearing the disk
[Jul  6 08:56:36.884] Server {3077691088} NOTE: logging initialized[7], 
logging_mode = 3
[Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_init) 
proxy.config.http.redirection_enabled = 0
[Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_init) 
proxy.config.http.number_of_redirections = 1
[Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_init) 
proxy.config.http.post_copy_size = 2048
[Jul  6 08:56:36.887] Server {3077691088} DEBUG: (http_tproxy) Primary listen 
socket transparency is off
[Jul  6 08:56:36.890] Server {3077691088} NOTE: traffic server running
[Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) DNSHandler::startEvent: 
on thread 0
[Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) open_con: opening 
connection 8.8.8.8:53
[Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) random port = 42595
[Jul  6 08:56:36.890] Server {3077691088} DEBUG: (dns) opening connection 
8.8.8.8:53 SUCCEEDED for 0
[Jul  6 08:56:36.918] Server {3058461552} NOTE: Clearing Disk: 
/usr/local/var/trafficserver/cache.db
[Jul  6 08:56:36.919] Server {3058461552} NOTE: clearing cache directory 
'/usr/local/var/trafficserver/cache.db 16384:24575'
[Jul  6 08:56:37.056] Server {3055303536} NOTE: cache enabled
[Jul  6 08:56:45.632] Server {3002059632} DEBUG: (http_tproxy) Marking accepted 
connect on b328c6e8 as not outbound transparent.
[Jul  6 08:56:45.632] Server {3077691088} DEBUG: (http_seq) 
[HttpAccept:mainEvent] accepted connection
[Jul  6 08:56:45.632] Server {3077691088} DEBUG: (http_cs) [0] session born, 
netvc 0xa431d60
[Jul  6 08:56:45.633] Server {3077691088} DEBUG: (http_cs) [0] using accept 
inactivity timeout [120 seconds]
[Jul  6 08:56:45.633] Server {3077691088} DEBUG: (http_cs) [0] Starting