[jira] [Created] (TS-848) Crash Report: ShowNet::showConnectionsOnThread - ShowCont::show

2011-06-21 Thread Zhao Yongming (JIRA)
Crash Report: ShowNet::showConnectionsOnThread - ShowCont::show


 Key: TS-848
 URL: https://issues.apache.org/jira/browse/TS-848
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.1.0
Reporter: Zhao Yongming


When we use the {net} http_ui network interface, it crashes with the following 
information:

{code}
NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/bin/traffic_server - STACK TRACE: 
/usr/bin/traffic_server[0x51ba3e]
/lib64/libpthread.so.0[0x3f89c0e7c0]
[0x7fffd20544f8]
/lib64/libc.so.6(vsnprintf+0x9a)[0x3f8906988a]
/usr/bin/traffic_server(ShowCont::show(char const*, ...)+0x262)[0x638184]
/usr/bin/traffic_server(ShowNet::showConnectionsOnThread(int, 
Event*)+0x481)[0x6ec7bf]
/usr/bin/traffic_server(Continuation::handleEvent(int, void*)+0x6f)[0x4d302f]
/usr/bin/traffic_server(EThread::process_event(Event*, int)+0x11e)[0x6f9978]
/usr/bin/traffic_server(EThread::execute()+0x94)[0x6f9b6a]
/usr/bin/traffic_server(main+0x10c7)[0x4ff74d]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x3f8901d994]
/usr/bin/traffic_server(__gxx_personality_v0+0x491)[0x4b2149]
/usr/bin/traffic_server(__gxx_personality_v0+0x491)[0x4b2149]


[New process 31182]
#0  0x003f890796d0 in strlen () from /lib64/libc.so.6
(gdb) bt
#0  0x003f890796d0 in strlen () from /lib64/libc.so.6
#1  0x003f89046b69 in vfprintf () from /lib64/libc.so.6
#2  0x003f8906988a in vsnprintf () from /lib64/libc.so.6
#3  0x00638184 in ShowCont::show (this=0x2aaab44af600, 
    s=0x7732b8 "<tr><td>%d</td><td>%s</td><td>%d</td><td>%d</td><td>%s</td><td>%d</td><td>%d secs ago</td><td>%d</td><td>%d</td><td>%d</td><td>%d</td><td>%d</td><td>%d</td><td>%d</td><td>%d secs</td><td>%d secs</td>"...) at ../../proxy/Show.h:62
#4  0x006ec7bf in ShowNet::showConnectionsOnThread 
(this=0x2aaab44af600, event=1, e=0x2aaab5cc2080) at UnixNetPages.cc:75
#5  0x004d302f in Continuation::handleEvent (this=0x2aaab44af600, 
event=1, data=0x2aaab5cc2080) at I_Continuation.h:146
#6  0x006f9978 in EThread::process_event (this=0x2ae29010, 
e=0x2aaab5cc2080, calling_code=1) at UnixEThread.cc:140
#7  0x006f9b6a in EThread::execute (this=0x2ae29010) at 
UnixEThread.cc:189
#8  0x004ff74d in main (argc=3, argv=0x7fffd2054d88) at Main.cc:1958

{code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (TS-849) proxy.config.http.slow.log.threshold cannot be set from traffic_line -s

2011-06-21 Thread Zhao Yongming (JIRA)
proxy.config.http.slow.log.threshold cannot be set from traffic_line -s
--

 Key: TS-849
 URL: https://issues.apache.org/jira/browse/TS-849
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration
Reporter: Zhao Yongming
Assignee: Zhao Yongming


I wonder how many config items from records have the same situation.

{code}
[root@cache164 trafficserver]# traffic_line -s 
proxy.config.http.slow.log.threshold -v 30
Layout configuration
  --prefix = '/usr'
 --exec_prefix = '/usr'
  --bindir = '/usr/bin'
 --sbindir = '/usr/sbin'
  --sysconfdir = '/etc/trafficserver'
 --datadir = '/usr/share/trafficserver'
  --includedir = '/usr/include/trafficserver'
  --libdir = '/usr/lib64/trafficserver'
  --libexecdir = '/usr/lib64/trafficserver/plugins'
   --localstatedir = '/var/trafficserver'
  --runtimedir = '/var/run/trafficserver'
  --logdir = '/var/log/trafficserver'
  --mandir = '/usr/share/man'
 --infodir = '/usr/share/info'
--cachedir = '/var/cache/trafficserver'
traffic_line: Only configuration vars can be set
{code}





[jira] [Updated] (TS-844) ReadFromWriter fail in CacheRead.cc

2011-06-21 Thread mohan_zl (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mohan_zl updated TS-844:


Attachment: TS-844.patch

Applied TS-844.patch and tested again; the problem no longer occurs.

 ReadFromWriter fail in CacheRead.cc
 ---

 Key: TS-844
 URL: https://issues.apache.org/jira/browse/TS-844
 Project: Traffic Server
  Issue Type: Bug
Reporter: mohan_zl
 Attachments: TS-844.patch


 {code}
 #6  0x006ab4d7 in CacheVC::openReadChooseWriter (this=0x2aaaf81523d0, 
 event=1, e=0x0) at CacheRead.cc:320
 #7  0x006abdc9 in CacheVC::openReadFromWriter (this=0x2aaaf81523d0, 
 event=1, e=0x0) at CacheRead.cc:411
 #8  0x004d302f in Continuation::handleEvent (this=0x2aaaf81523d0, 
 event=1, data=0x0) at I_Continuation.h:146
 #9  0x006ae2b9 in Cache::open_read (this=0x2aaab0001c40, 
 cont=0x2aaab4472aa0, key=0x42100b10, request=0x2aaab44710f0, 
 params=0x2aaab4470928, type=CACHE_FRAG_TYPE_HTTP,
 hostname=0x2aab09581049 
 "js.tongji.linezing.com" (followed by "icon1.gif", "js.tongji.linezing.com", 
 and a run of non-printable garbage bytes, elided)..., 
  host_len=22) at CacheRead.cc:228
 #10 0x0068da30 in Cache::open_read (this=0x2aaab0001c40, 
 cont=0x2aaab4472aa0, url=0x2aaab4471108, request=0x2aaab44710f0, 
 params=0x2aaab4470928,
 type=CACHE_FRAG_TYPE_HTTP) at P_CacheInternal.h:1068
 #11 0x0067d32f in CacheProcessor::open_read (this=0xf2c030, 
 cont=0x2aaab4472aa0, url=0x2aaab4471108, request=0x2aaab44710f0, 
 params=0x2aaab4470928, pin_in_cache=0,
 type=CACHE_FRAG_TYPE_HTTP) at Cache.cc:3011
 #12 0x0054e058 in HttpCacheSM::do_cache_open_read 
 (this=0x2aaab4472aa0) at HttpCacheSM.cc:220
 #13 0x0054e1a7 in HttpCacheSM::open_read (this=0x2aaab4472aa0, 
 url=0x2aaab4471108, hdr=0x2aaab44710f0, params=0x2aaab4470928, 
 pin_in_cache=0) at HttpCacheSM.cc:252
 #14 0x00568404 in HttpSM::do_cache_lookup_and_read 
 (this=0x2aaab4470830) at HttpSM.cc:3893
 #15 0x005734b5 in HttpSM::set_next_state (this=0x2aaab4470830) at 
 HttpSM.cc:6436
 #16 0x0056115a in HttpSM::call_transact_and_set_next_state 
 (this=0x2aaab4470830, f=0) at HttpSM.cc:6328
 #17 0x00574b78 in HttpSM::handle_api_return (this=0x2aaab4470830) at 
 HttpSM.cc:1516
 #18 0x0056dbe7 in HttpSM::state_api_callout (this=0x2aaab4470830, 
 event=0, data=0x0) at HttpSM.cc:1448
 #19 0x0056de77 in HttpSM::do_api_callout_internal 
 (this=0x2aaab4470830) at HttpSM.cc:4345
 #20 0x00578c89 in HttpSM::do_api_callout (this=0x2aaab4470830) at 
 HttpSM.cc:497
 #21 0x00572e93 in HttpSM::set_next_state (this=0x2aaab4470830) at 
 HttpSM.cc:6362
 #22 0x0056115a in HttpSM::call_transact_and_set_next_state 
 (this=0x2aaab4470830, f=0) at HttpSM.cc:6328
 #23 0x00572faf in HttpSM::set_next_state (this=0x2aaab4470830) at 
 HttpSM.cc:6378
 #24 0x0056115a in HttpSM::call_transact_and_set_next_state 
 (this=0x2aaab4470830, f=0) at HttpSM.cc:6328
 #25 0x00574b78 in HttpSM::handle_api_return (this=0x2aaab4470830) at 
 HttpSM.cc:1516
 #26 0x0056dbe7 in HttpSM::state_api_callout (this=0x2aaab4470830, 
 event=0, data=0x0) at HttpSM.cc:1448
 #27 0x0056de77 in HttpSM::do_api_callout_internal 
 (this=0x2aaab4470830) at HttpSM.cc:4345
 #28 0x00578c89 in HttpSM::do_api_callout (this=0x2aaab4470830) at 
 HttpSM.cc:497
 #29 0x00572e93 in HttpSM::set_next_state (this=0x2aaab4470830) at 
 HttpSM.cc:6362
 #30 0x0056115a in HttpSM::call_transact_and_set_next_state 
 (this=0x2aaab4470830, f=0) at HttpSM.cc:6328
 #31 0x00574b78 in HttpSM::handle_api_return (this=0x2aaab4470830) at 
 HttpSM.cc:1516
 #32 0x0056dbe7 in HttpSM::state_api_callout (this=0x2aaab4470830, 
 event=0, data=0x0) at HttpSM.cc:1448
 #33 0x0056de77 in HttpSM::do_api_callout_internal 
 (this=0x2aaab4470830) at HttpSM.cc:4345
 #34 0x00578c89 in HttpSM::do_api_callout (this=0x2aaab4470830) at 
 HttpSM.cc:497
 #35 0x00572e93 in HttpSM::set_next_state (this=0x2aaab4470830) at 
 HttpSM.cc:6362
 #36 0x0056115a in HttpSM::call_transact_and_set_next_state 
 (this=0x2aaab4470830, f=0x59e52e 
 <HttpTransact::ModifyRequest(HttpTransact::State*)>) at HttpSM.cc:6328
 #37 0x0057490c in HttpSM::state_read_client_request_header 
 (this=0x2aaab4470830, event=100, data=0x2049f5e8) at HttpSM.cc:780
 #38 0x0056e49f in HttpSM::main_handler (this=0x2aaab4470830, 
 event=100, data=0x2049f5e8) at HttpSM.cc:2436
 #39 

[jira] [Work started] (TS-813) http_ui /stat/ should respond with content type

2011-06-21 Thread Zhao Yongming (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on TS-813 started by Zhao Yongming.

 http_ui /stat/ should respond with content type
 

 Key: TS-813
 URL: https://issues.apache.org/jira/browse/TS-813
 Project: Traffic Server
  Issue Type: Improvement
  Components: HTTP
Affects Versions: 2.1.8
Reporter: Zhao Yongming
Assignee: Zhao Yongming
Priority: Minor
  Labels: http_ui
 Fix For: 3.1.0


 when requesting /stat/, 
 the response header is missing a Content-Type, which will confuse browsers:
 {code}
 HTTP/1.0 200 Ok^M
 Date: Wed, 01 Jun 2011 02:34:02 GMT^M
 Connection: close^M
 Server: ATS/2.1.9-unstable^M
 Content-Length: 54952^M
 ^M
 <pre>
 proxy.config.proxy_name=
 {code}
 the other pages should be checked, too.





[jira] [Commented] (TS-847) Forward proxy: Can't create SSL connection to older Subversion Servers.

2011-06-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/TS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052450#comment-13052450
 ] 

Igor Galić commented on TS-847:
---

Tested trunk with multiple different servers: it works every time.

 Forward proxy: Can't create SSL connection to older Subversion Servers.
 ---

 Key: TS-847
 URL: https://issues.apache.org/jira/browse/TS-847
 Project: Traffic Server
  Issue Type: Bug
Affects Versions: 3.1.0, 3.0.0
Reporter: Igor Galić
 Fix For: 3.1.0

 Attachments: 01_fail_ats_sfnet.cap, 02_pass_squid_sfnet.cap, 
 03_pass_ats_asf.cap, 04_pass_squid_asf.cap, TS-847.diff


 When trying to access older Subversion (1.6.9, 1.5.1 verified) servers 
 through SSL via the Forward proxy, I'll get a failure such as:
 {noformat}
 igalic@knock ~/src % svn co 
 https://gar.svn.sourceforge.net/svnroot/gar/csw/mgar/gar/
 svn: PROPFIND of '/svnroot/gar/!svn/bln/14844': Could not create SSL 
 connection through proxy server: 502 Tunnel Connection Failed 
 (https://gar.svn.sourceforge.net)
 1 igalic@knock ~/src %
 {noformat}
 The squid log says:
 {noformat}
 1308609250.117 1004 127.0.0.1 TCP_MISS/200 4664 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609250.642 524 127.0.0.1 TCP_MISS/200 1335 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609251.167 525 127.0.0.1 TCP_MISS/200 1031 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609251.689 522 127.0.0.1 TCP_MISS/200 1095 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609252.231 541 127.0.0.1 TCP_MISS/200 1335 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609252.756 524 127.0.0.1 TCP_MISS/200 1031 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609253.285 528 127.0.0.1 TCP_MISS/200 1095 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609253.814 528 127.0.0.1 TCP_MISS/200 1335 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609254.345 530 127.0.0.1 TCP_MISS/200  CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net - -
 1308609254.416 70 127.0.0.1 ERR_CONNECT_FAIL/502 454 CONNECT 
 gar.svn.sourceforge.net:443/ - DIRECT/gar.svn.sourceforge.net text/html -
 {noformat}
 While the error log says:
 {noformat}
 20110621.00h25m14s RESPONSE: sent 127.0.0.1 status 502 (Tunnel Connection 
 Failed) for 'gar.svn.sourceforge.net:443/'
 {noformat}
 With newer versions of the Subversion server this works fine, for example the 
 ASF's server:
 {noformat}
 igalic@knock ~/src % svn co 
 https://svn.apache.org/repos/asf/trafficserver/plugins/header_filter/
 A    header_filter/example.conf
 A    header_filter/rules.h
 A    header_filter/NOTICE
 A    header_filter/header_filter.cc
 A    header_filter/LICENSE
 A    header_filter/STATUS
 A    header_filter/lulu.h
 A    header_filter/CHANGES
 A    header_filter/Makefile
 A    header_filter/README
 A    header_filter/rules.cc
 Checked out revision 1137808.
 igalic@knock ~/src %
 {noformat}
 I wouldn't have submitted this bug in the first place if it didn't work with 
 Squid either. Alas, Squid passes with flying colours! Attached you can find 
 Wireshark captures for the four scenarios:
 * Failure with ATS (old subversion server: sf.net)
 * Success with Squid (same old subversion server: sf.net)
 * Success with ATS (new Subversion server: ASF)
 * Success with Squid (same new Subversion server: ASF)
 To force Subversion through a proxy, edit ~/.subversion/servers:
 {noformat}
 [global]
 http-proxy-host = localhost
 http-proxy-port = 8080
 {noformat}
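For orientation, the failing piece is the proxy CONNECT handshake; schematically (a sketch of the shape only; the actual bytes are in the attached captures):

```
C: CONNECT gar.svn.sourceforge.net:443 HTTP/1.0

S: HTTP/1.0 200 Connection established    <- Squid: TLS handshake proceeds
S: HTTP/1.0 502 Tunnel Connection Failed  <- ATS against the old server
```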





[jira] [Updated] (TS-847) Forward proxy: Can't create SSL connection to older Subversion Servers.

2011-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić updated TS-847:
--

Backport to Version:   (was: 3.0.1)
  Fix Version/s: 3.0.1

 Forward proxy: Can't create SSL connection to older Subversion Servers.
 ---

 Key: TS-847
 URL: https://issues.apache.org/jira/browse/TS-847
 Project: Traffic Server
  Issue Type: Bug
Affects Versions: 3.1.0, 3.0.0
Reporter: Igor Galić
 Fix For: 3.1.0, 3.0.1

 Attachments: 01_fail_ats_sfnet.cap, 02_pass_squid_sfnet.cap, 
 03_pass_ats_asf.cap, 04_pass_squid_asf.cap, TS-847.diff







[jira] [Closed] (TS-840) Regression checks fail (again) due to faulty assert use

2011-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić closed TS-840.
-


 Regression checks fail (again) due to faulty assert use
 ---

 Key: TS-840
 URL: https://issues.apache.org/jira/browse/TS-840
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Affects Versions: 3.0.0
Reporter: Arno Toell
Priority: Minor
 Fix For: 3.1.0, 3.0.1

 Attachments: ink_assert.patch


 When trying to compile the regression checks, they fail again for ATS 3.0. They 
 were fixed in TS-738 some time ago, but are now failing again, this time 
 because of a build failure. The problem is the use of _assert()_:
 {code}
 libtool: link: g++ -g -O2 -pipe -Wall -Werror -O3 
 -feliminate-unused-debug-symbols -fno-strict-aliasing -Wno-invalid-offsetof 
 -o .libs/test_Map test_Map.o  ./.libs/libtsutil.so -L/usr/lib -lpcre -lssl 
 -lcrypto -ltcl8.4 -lresolv -lrt -lnsl -lcap -lz -Wl,-rpath 
 -Wl,/usr/lib/trafficserver
 g++ -DHAVE_CONFIG_H -I.   -D_LARGEFILE64_SOURCE=1 -D_COMPILE64BIT_SOURCE=1 
 -D_GNU_SOURCE -D_REENTRANT -Dlinux -I/usr/include/tcl8.4  -g -O2 -pipe -Wall 
 -Werror -O3 -feliminate-unused-debug-symbols -fno-strict-aliasing 
 -Wno-invalid-offsetof -c -o test_Vec.o test_Vec.cc
 test_Vec.cc: In function 'int main(int, char**)':
 test_Vec.cc:38: error: '__DONT_USE_BARE_assert_USE_ink_assert__' was not 
 declared in this scope
 {code}
 This is due to the following definition in _lib/ts/ink_assert.h_:
 {code}
 #undef assert
 #define assert __DONT_USE_BARE_assert_USE_ink_assert__
 {code}
 I have attached a patch that fixes this issue.





[jira] [Updated] (TS-840) Regression checks fail (again) due to faulty assert use

2011-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić updated TS-840:
--

Fix Version/s: 3.0.1

 Regression checks fail (again) due to faulty assert use
 ---

 Key: TS-840
 URL: https://issues.apache.org/jira/browse/TS-840
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Affects Versions: 3.0.0
Reporter: Arno Toell
Priority: Minor
 Fix For: 3.1.0, 3.0.1

 Attachments: ink_assert.patch







[jira] [Updated] (TS-839) Build errors when specifying lzma location

2011-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić updated TS-839:
--

Backport to Version:   (was: 3.0.1)
  Fix Version/s: 3.0.1

 Build errors when specifying lzma location
 --

 Key: TS-839
 URL: https://issues.apache.org/jira/browse/TS-839
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Reporter: Leif Hedstrom
Assignee: Leif Hedstrom
 Fix For: 3.1.0, 3.0.1


 When using a specific lzma location with ./configure, we have a typo that 
 prevents proper building (it ends up passing a -R option with an empty 
 argument).





[jira] [Commented] (TS-849) proxy.config.http.slow.log.threshold cannot be set from traffic_line -s

2011-06-21 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052569#comment-13052569
 ] 

Leif Hedstrom commented on TS-849:
--

There are lots of configs that cannot be updated on a live system. 
RecordsConfig.cc should indicate whether or not an option is reloadable.

 proxy.config.http.slow.log.threshold cannot be set from traffic_line -s
 --

 Key: TS-849
 URL: https://issues.apache.org/jira/browse/TS-849
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration
Reporter: Zhao Yongming
Assignee: Zhao Yongming






[jira] [Updated] (TS-849) proxy.config.http.slow.log.threshold cannot be set from traffic_line -s

2011-06-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/TS-849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Galić updated TS-849:
--

Affects Version/s: 3.0.1
   3.1.0

 proxy.config.http.slow.log.threshold cannot be set from traffic_line -s
 --

 Key: TS-849
 URL: https://issues.apache.org/jira/browse/TS-849
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration
Affects Versions: 3.1.0, 3.0.1
Reporter: Zhao Yongming
Assignee: Zhao Yongming






[jira] [Created] (TS-850) Update scheduling uses the hour offset as a start hour.

2011-06-21 Thread Alan M. Carroll (JIRA)
Update scheduling uses the hour offset as a start hour.
---

 Key: TS-850
 URL: https://issues.apache.org/jira/browse/TS-850
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration
Affects Versions: 3.0.0
Reporter: Alan M. Carroll
Assignee: Alan M. Carroll
Priority: Minor
 Fix For: 3.1.0


For update.config the current logic uses the hour offset as a starting hour, 
such that if ATS is started after that hour of the day, the next scheduled 
event is deferred until the next day. E.g., if the offset is 6 and ATS is 
started after 6AM, the next event won't occur until after 6AM the next day. 
Yet after the first day, events happen at every interval.

Also, blank lines affect parsing, in that any configuration line following a 
blank line will be handled incorrectly.





[jira] [Updated] (TS-850) Update scheduling uses the hour offset as a start hour.

2011-06-21 Thread Alan M. Carroll (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan M. Carroll updated TS-850:
---

Attachment: patch-850.txt

Changes scheduling so the next event is the next slot as determined by the 
offset and interval.

 Update scheduling uses the hour offset as a start hour.
 ---

 Key: TS-850
 URL: https://issues.apache.org/jira/browse/TS-850
 Project: Traffic Server
  Issue Type: Bug
  Components: Configuration
Affects Versions: 3.0.0
Reporter: Alan M. Carroll
Assignee: Alan M. Carroll
Priority: Minor
 Fix For: 3.1.0

 Attachments: patch-850.txt







[jira] [Created] (TS-851) unable to run TS without a real interface

2011-06-21 Thread Zhao Yongming (JIRA)
unable to run TS without a real interface
-

 Key: TS-851
 URL: https://issues.apache.org/jira/browse/TS-851
 Project: Traffic Server
  Issue Type: Bug
  Components: Management
Affects Versions: 3.1.0
 Environment: trunk after TS-845
Reporter: Zhao Yongming


It seems that start_HttpProxyServerBackDoor on port 8084 
(proxy.config.process_manager.mgmt_port) does some dirty work; we need to 
track it down.

{code}
[Jun 21 22:51:02.915] Server {47475602368256} NOTE: cache clustering disabled
[Jun 21 22:51:02.915] Server {47475602368256} NOTE: clearing statistics
[Jun 21 22:51:02.916] Server {47475602368256} DEBUG: (dns) ink_dns_init: called 
with init_called = 0
[Jun 21 22:51:02.926] Server {47475602368256} DEBUG: (dns) localhost=zym6400
[Jun 21 22:51:02.927] Server {47475602368256} DEBUG: (dns) Round-robin 
nameservers = 0
[Jun 21 22:51:02.932] Server {47475602368256} NOTE: cache clustering disabled
[Jun 21 22:51:02.984] Server {47475602368256} NOTE: logging initialized[7], 
logging_mode = 3
[Jun 21 22:51:02.989] Server {47475602368256} DEBUG: (http_init) 
proxy.config.http.redirection_enabled = 0
[Jun 21 22:51:02.989] Server {47475602368256} DEBUG: (http_init) 
proxy.config.http.number_of_redirections = 1
[Jun 21 22:51:02.989] Server {47475602368256} DEBUG: (http_init) 
proxy.config.http.post_copy_size = 2048
[Jun 21 22:51:02.989] Server {47475602368256} DEBUG: (http_tproxy) Primary 
listen socket transparency is off
[Jun 21 22:51:02.992] Server {47475602368256} ERROR: getaddrinfo error -2: Name 
or service not known
[Jun 21 22:51:02.992] Server {47475602368256} WARNING: unable to listen on port 
8084: -1 2, No such file or directory
[Jun 21 22:51:02.993] Server {47475602368256} NOTE: traffic server running
[Jun 21 22:51:02.993] Server {47475602368256} DEBUG: (dns) 
DNSHandler::startEvent: on thread 0
[Jun 21 22:51:02.993] Server {47475602368256} DEBUG: (dns) open_con: opening 
connection 127.0.0.1:53
[Jun 21 22:51:02.993] Server {47475602368256} DEBUG: (dns) random port = 28547
[Jun 21 22:51:02.993] Server {47475602368256} DEBUG: (dns) opening connection 
127.0.0.1:53 SUCCEEDED for 0
[Jun 21 22:51:03.308] Server {47475617863424} NOTE: cache enabled
[Jun 21 22:51:21.870] Manager {140119188903712} FATAL: 
[LocalManager::pollMgmtProcessServer] Error in read (errno: 104)
[Jun 21 22:51:21.870] Manager {140119188903712} FATAL:  (last system error 104: 
Connection reset by peer)
[Jun 21 22:51:21.870] Manager {140119188903712} NOTE: 
[LocalManager::mgmtShutdown] Executing shutdown request.
[Jun 21 22:51:21.870] Manager {140119188903712} NOTE: 
[LocalManager::processShutdown] Executing process shutdown request.
[Jun 21 22:51:21.870] Manager {140119188903712} ERROR: 
[LocalManager::sendMgmtMsgToProcesses] Error writing message
[Jun 21 22:51:21.870] Manager {140119188903712} ERROR:  (last system error 32: 
Broken pipe)
[E. Mgmt] log == [TrafficManager] using root directory '/usr'
{code}
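A minimal reproduction of the "getaddrinfo error -2" line above (assuming glibc, where -2 is EAI_NONAME, "Name or service not known": the lookup of the local hostname fails when no real interface is configured; the hostname below is a stand-in guaranteed not to resolve):

```cpp
#include <netdb.h>
#include <cstdio>
#include <cstring>

int main() {
    struct addrinfo hints, *res = nullptr;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    // ".invalid" is a reserved TLD, so this lookup cannot succeed --
    // mirroring a host whose own name has no resolvable address.
    int rc = getaddrinfo("no-such-host.invalid", "8084", &hints, &res);
    if (rc != 0)
        printf("getaddrinfo error %d: %s\n", rc, gai_strerror(rc));
    else
        freeaddrinfo(res);
    return 0;
}
```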





[jira] [Commented] (TS-833) Crash Report: Continuation::handleEvent, event=2, 0xdeadbeef, ink_freelist_free related

2011-06-21 Thread Zhao Yongming (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13053038#comment-13053038
 ] 

Zhao Yongming commented on TS-833:
--

John:
I hit the case you commented on at 17/Jun/11 22:07; gdb shows:

{code}

[New process 12674]
#0  0x0063f9f5 in get_dns (h=0x18ea3070, id=27816) at DNS.cc:752
752         if (e->once_written_flag)
(gdb) bt
#0  0x0063f9f5 in get_dns (h=0x18ea3070, id=27816) at DNS.cc:752
#1  0x00643e33 in dns_process (handler=0x18ea3070, buf=0x2aaab1292010, 
len=159) at DNS.cc:1170
#2  0x00645cfc in DNSHandler::recv_dns (this=0x18ea3070, event=5, 
e=0x18e7df50) at DNS.cc:690
#3  0x0064655f in DNSHandler::mainEvent (this=0x18ea3070, event=5, 
e=0x18e7df50) at DNS.cc:703
#4  0x004d302f in Continuation::handleEvent (this=0x18ea3070, event=5, 
data=0x18e7df50) at I_Continuation.h:146
#5  0x006f9978 in EThread::process_event (this=0x2ae29010, 
e=0x18e7df50, calling_code=5) at UnixEThread.cc:140
#6  0x006f9e96 in EThread::execute (this=0x2ae29010) at 
UnixEThread.cc:262
#7  0x004ff74d in main (argc=3, argv=0x7fff21439ac8) at Main.cc:1958
(gdb) print e
$1 = (DNSEntry *) 0xefbeaddeefbeadde
(gdb) print h->in_flight
$2 = 4
(gdb) 
{code}

and traffic.out looks like this:
{code}
[TrafficServer] using root directory '/usr'
[Jun 20 10:47:48.118] Manager {47791815218176} NOTE: 
[LocalManager::pollMgmtProcessServer] New process connecting fd '9'
[Jun 20 10:47:48.118] Manager {47791815218176} NOTE: [Alarms::signalAlarm] 
Server Process born
[Jun 20 10:47:49.141] {47286116713584} STATUS: opened 
/var/log/trafficserver/diags.log
[Jun 20 10:47:49.142] {47286116713584} NOTE: updated diags config
[Jun 20 10:47:49.146] Server {47286116713584} NOTE: cache clustering disabled
[Jun 20 10:47:49.169] Server {47286116713584} NOTE: cache clustering disabled
[Jun 20 10:47:49.639] Server {47286116713584} NOTE: logging initialized[7], 
logging_mode = 3
[Jun 20 10:47:49.680] Server {47286116713584} NOTE: traffic server running
[Jun 20 10:47:49.735] Server {1099794752} WARNING: failover: connection to DNS 
server 127.0.0.1 lost, move to 121.14.89.156
[Jun 20 10:47:50.243] Server {1124858176} WARNING: Access logging to local log 
directory suspended - configured space allocation exhausted.
[Jun 20 10:47:55.446] Server {47286116713584} NOTE: cache enabled
[Jun 20 10:47:56.001] Server {47286116713584} NOTE: [log-coll] host up 
[121.14.89.156:8085]
[Jun 21 00:00:00.001] Server {1124858176} STATUS: The logfile 
/var/log/trafficserver/error.log was rolled to 
/var/log/trafficserver/error.log_cache174.cn62.20110620.10h47m49s-20110621.00h00m00s.old.
[Jun 21 00:00:05.001] Server {1124858176} STATUS: The rolled logfile, 
/var/log/trafficserver/error.log_cache174.cn62.20110620.10h47m49s-20110621.00h00m00s.old,
 was auto-deleted; 0 bytes were reclaimed.
NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/bin/traffic_server - STACK TRACE: 
/usr/bin/traffic_server[0x51ba3e]
/lib64/libpthread.so.0[0x36af20e7c0]
[0x2aaab015ac70]
/usr/bin/traffic_server(_ZN10DNSHandler8recv_dnsEiP5Event+0x6a0)[0x645cfc]
/usr/bin/traffic_server(_ZN10DNSHandler9mainEventEiP5Event+0x39)[0x64655f]
/usr/bin/traffic_server(_ZN12Continuation11handleEventEiPv+0x6f)[0x4d302f]
/usr/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x11e)[0x6f9978]
/usr/bin/traffic_server(_ZN7EThread7executeEv+0x3c0)[0x6f9e96]
/usr/bin/traffic_server(main+0x10c7)[0x4ff74d]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x36ae61d994]
/usr/bin/traffic_server(__gxx_personality_v0+0x491)[0x4b2149]
/usr/bin/traffic_server(__gxx_personality_v0+0x491)[0x4b2149]
[Jun 21 01:02:34.760] Manager {47791815218176} FATAL: 
[LocalManager::pollMgmtProcessServer] Error in read (errno: 104)
[Jun 21 01:02:36.778] Manager {47791815218176} FATAL:  (last system error 104: 
Connection reset by peer)
[Jun 21 01:02:36.778] Manager {47791815218176} NOTE: 
[LocalManager::mgmtShutdown] Executing shutdown request.
[Jun 21 01:02:36.778] Manager {47791815218176} NOTE: 
[LocalManager::processShutdown] Executing process shutdown request.
[Jun 21 01:02:36.784] Manager {47791815218176} ERROR: 
[LocalManager::sendMgmtMsgToProcesses] Error writing message
[Jun 21 01:02:36.785] Manager {47791815218176} ERROR:  (last system error 32: 
Broken pipe)
[E. Mgmt] log == [TrafficManager] using root directory '/usr'
[Jun 21 01:02:40.791] {46942772802560} NOTE: updated diags config
[Jun 21 01:02:51.503] Manager {46942772802560} NOTE: [ClusterCom::ClusterCom] 
Node running on OS: 'Linux' Release: '2.6.18-164.11.1.el5'
[Jun 21 01:02:51.531] Manager {46942772802560} NOTE: 
[LocalManager::listenForProxy] Listening on port: 8080
[Jun 21 01:02:51.531] Manager {46942772802560} NOTE: 
[LocalManager::listenForProxy] Listening on port: 80
[Jun 21 01:02:51.531] Manager {46942772802560} NOTE: [TrafficManager] Setup 
complete
[Jun 21 01:02:52.594] Manager