[jira] [Created] (TS-2374) http_sm hung when all consumers aborted but the producer still alive

2013-11-20 Thread weijin (JIRA)
weijin created TS-2374:
--

 Summary: http_sm hung when all consumers aborted but the producer 
still alive
 Key: TS-2374
 URL: https://issues.apache.org/jira/browse/TS-2374
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: weijin


HttpSM calls kill_this only when the tunnel is no longer alive (neither the 
producer nor any consumer is alive). But when the cache_vc (opened for write) is 
the only consumer of the server_session and that consumer aborts because its 
write failed, the HttpSM hangs.
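
A minimal illustrative sketch of the liveness logic described above (hypothetical 
types, not HttpSM/HttpTunnel's real API): because "tunnel alive" means "the 
producer or any consumer is still alive", a still-running origin producer keeps 
the state machine from reaching kill_this() after its last consumer aborts. One 
way to avoid the hang is to shut the producer down as soon as its last consumer 
is gone.

{code}
#include <vector>

// Hypothetical stand-ins for the tunnel's endpoints.
struct Consumer { bool alive; };
struct Producer { bool alive; };

struct Tunnel {
  Producer producer;
  std::vector<Consumer> consumers;

  bool any_consumer_alive() const {
    for (const Consumer &c : consumers)
      if (c.alive) return true;
    return false;
  }

  // The check described in this report: the SM only cleans up when this is false.
  bool is_alive() const { return producer.alive || any_consumer_alive(); }

  // Called when a consumer (e.g. the cache write VC) aborts. Without the
  // producer shutdown below, is_alive() stays true forever and the SM never
  // reaches kill_this() -- the hang reported here.
  void consumer_aborted(Consumer &c) {
    c.alive = false;
    if (!any_consumer_alive() && producer.alive)
      producer.alive = false;   // abort/close the producer's server session
  }
};
{code}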



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2373) the cache is not initialized properly

2013-11-20 Thread Zhao Yongming (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao Yongming updated TS-2373:
--

Labels: T  (was: )

 the cache is not initialized properly
 -

 Key: TS-2373
 URL: https://issues.apache.org/jira/browse/TS-2373
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: xiongzongtao
Assignee: xiongzongtao
  Labels: T

 In recent days several of our production ATS servers failed to initialize the 
 cache system on restart, so all requests were simply forwarded to the origins.
 Note: the storage is configured to use raw devices, like this:
 /dev/raw/raw1 146163105792
 /dev/raw/raw2 146163105792
 /dev/raw/raw3 146163105792
 /dev/raw/raw4 146163105792
 /dev/raw/raw5 146163105792
 volume.config is not configured, so there is one vol per raw device.
 1.  checking traffic.out, we found that the "cache enabled" line that 
 should follow the "traffic server running" line never appears
 2.  echo show:cache-stats|traffic_shell shows that all stats are ZEROs
 3.  dumped the memory of traffic_server with gdb gcore for convenience of 
 debugging
 4.  used gdb to debug:
  a) all vols are set in the global gvol except /dev/raw/raw4.
  b) one vol (/dev/raw/raw4) hung in the function handle_recover_from_data
  c) checking aio_req, we found that one io event is queued in 
   aio_req[4]'s aio_temp_list
  d) checking the io threads, all are in pthread_cond_wait status.
 Here we can see that handle_recover_from_data issued an io request 
 and queued it in aio_req[4]'s aio_temp_list, but all io threads are 
 in pthread_cond_wait status, so no thread will ever process the request and 
 handle_recover_from_data waits forever. The cache will not be enabled 
 until all the vols are initialized successfully.
 Inspecting the code, we found the following.
 Code snippet 1, from aio_thread_main:
 1  if (!INK_ATOMICLIST_EMPTY(my_aio_req->aio_temp_list))
 2    aio_move(my_aio_req);
 3  if (!(op = my_aio_req->aio_todo.pop()) && !(op = my_aio_req->http_aio_todo.pop()))
 4    break;
 Code snippet 2, from aio_queue_req:
 1. if (!ink_mutex_try_acquire(&req->aio_mutex)) {
      ink_atomiclist_push(&req->aio_temp_list, op);
 2. } else {
      /* check if any pending requests on the atomic list */
      if (!INK_ATOMICLIST_EMPTY(req->aio_temp_list))
        aio_move(req);
      /* now put the new request */
      aio_insert(op, req);
      ink_cond_signal(&req->aio_cond);
      ink_mutex_release(&req->aio_mutex);
    }
 Now suppose aio_temp_list is empty, thread A is running code snippet 1, and 
 thread B is running code snippet 2. Just as thread A is between lines 1 and 2 
 of snippet 1, thread B executes line 1 of snippet 2: thread B pushes the io 
 request onto aio_temp_list, but sends no notification. Thread A continues to 
 line 3, then line 4 (break), and finally falls into pthread_cond_wait. Thus no 
 io thread will ever process this request.
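
A minimal standalone sketch of the lost-wakeup-safe pattern that the two 
snippets above deviate from (illustrative C++ only, not ATS's actual AIO code 
or its eventual fix): the producer always enqueues and signals while holding 
the queue mutex, and the worker evaluates its wait predicate under that same 
mutex, so there is no window between "checked the list" and "went to sleep". 
ATS's fast path pushes onto aio_temp_list without the mutex and without 
signalling, which is exactly where the wakeup described above is lost.

{code}
#include <condition_variable>
#include <deque>
#include <mutex>

struct Op { int fd; };  // hypothetical request type

struct AioQueue {
  std::mutex mtx;
  std::condition_variable cond;
  std::deque<Op *> todo;

  // Producer: enqueue and signal under the mutex, so a sleeping worker is
  // always woken and a worker about to sleep will still see the new entry.
  void queue_req(Op *op) {
    std::lock_guard<std::mutex> lock(mtx);
    todo.push_back(op);
    cond.notify_one();
  }

  // Worker loop body: the predicate is re-evaluated with the mutex held on
  // every wakeup, so a request can never be queued "between" the emptiness
  // check and the cond_wait, as happens in the snippets above.
  Op *next_req() {
    std::unique_lock<std::mutex> lock(mtx);
    cond.wait(lock, [this] { return !todo.empty(); });
    Op *op = todo.front();
    todo.pop_front();
    return op;
  }
};
{code}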
 Something about my debugging:
 (gdb) p theCache
 $1 = (Cache *) 0xd828cb0
 (gdb) p *theCache
 $2 = {cache_read_done = 1, total_good_nvol = 4, total_nvol = 5, ready = 0, 
 cache_size = 89169900, 
   hosttable = 0x0, total_initialized_vol = 4, scheme = 1}
 (gdb) set $i=0
 (gdb) p gvol[$i++]->path
 $3 = 0xd826ca0 "/dev/raw/raw2"
 (gdb) p gvol[$i++]->path
 $4 = 0xd8267a0 "/dev/raw/raw1"
 (gdb) p gvol[$i++]->path
 $5 = 0xd826a00 "/dev/raw/raw5"
 (gdb) p gvol[$i++]->path
 $6 = 0xd8269c0 "/dev/raw/raw3"
 (gdb) p gvol[$i++]->path
 Cannot access memory at address 0x0
 (gdb) p cp_list->head->vols[0]
 $7 = (Vol *) 0xd828d10
 (gdb) set $i=0
 (gdb) p *cp_list->head->vols[4]
 $8 = {<Continuation> = {force_VFPT_to_top = {_vptr.force_VFPT_to_top = 0x713010},
     handler = 0x672070 <Vol::aggWrite(int, void*)>, mutex = {m_ptr = 0xd806ac0},
     link = {<SLink<Continuation>> = {next = 0x0}, prev = 0x0}}, path = 0xd826a00 "/dev/raw/raw5",
   hash_id = 0xd825db0 "/dev/raw/raw5 40960:17833980", hash_id_md5 = {b = {16306325817679019090,
       17491422742868856741}}, fd = 38, raw_dir = 0x2aaaf0efb000 "\r\025", dir = 0x2aaaf0efd000,
   header = 0x2aaaf0efb000, footer = 0x2aaafbcb9000, segments = 278, buckets = 16382,
   recover_pos = 68007976448, prev_recover_pos = 67994133504, scan_pos = 76968569856, skip = 40960,
   start = 364421120, len = 146095964160, data_blocks = 17789500, hit_evacuate_window = 0,
   io = {<AIOCallback> = {<Continuation> = {force_VFPT_to_top = {_vptr.force_VFPT_to_top = 0x712d70},
         handler = 0x647640 <AIOCallbackInternal::io_complete(int, void*)>, mutex = {m_ptr = 0xd806ac0},
         link = {<SLink<Continuation>> = {next = 0x0}, prev = 0x0}}, aiocb = {aio_fildes = 0,
         aio_buf = 0x2aaabc47e000, aio_nbytes = 8388608, aio_offset = 67994133504, aio_reqprio = 0,
         aio_lio_opcode = 1, aio_state = 0, aio__pad = {0}}, action = {_vptr.Action = 0x0,
         continuation = 0xd8bc010, mutex = {m_ptr = 0xd806ac0}, cancelled = 

[jira] [Updated] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Carlin updated TS-2375:
-

Affects Version/s: 4.1.1
   4.0.2

 Error when using ascii logging format: There are more field markers than 
 fields
 ---

 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 4.0.2, 4.1.1
Reporter: David Carlin
 Fix For: 4.2.0


 I noticed the following on an ats 4.0.2 host in diags.log:
 [Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more field 
 markers than fields; cannot process log entry
 It only happens about every fifth time you start ATS.  That message will fill 
 diags.log and nothing is written to squid.log
 I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
 additional error information:
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more field 
 markers than fields; cannot process log entry
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
 LogBuffer to ascii, have dropped (232) bytes.
 The convert to ascii message tipped me off that this could be the source of 
 the problem.  Up until now we've been using the binary log format, so perhaps 
 why I didn't run into this in the past.  I then changed the log format back 
 to binary, and I was unable to reproduce the issue - so it seems related to 
 ascii logging.
 Here is our logs_xml.config:
 {noformat}
 <LogFormat>
   <Name = "ats_generic_config"/>
   <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
 </LogFormat>
 <LogObject>
   <Format = "ats_generic_config"/>
   <Filename = "squid"/>
   <Mode = "ascii"/>
 </LogObject>
 {noformat}
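
A rough illustration of what the NOTE above is complaining about (illustrative 
C++ only; the function names here are hypothetical, this is not ATS's actual 
LogBuffer/LogFormat code): the ASCII converter finds more %<...> field markers 
in the configured format than there are fields stored in the buffered entry, so 
the entry cannot be rendered and is dropped.

{code}
#include <cstddef>
#include <string>

// Count "%<...>" field markers in a logs_xml.config-style format string.
std::size_t count_field_markers(const std::string &fmt) {
  std::size_t count = 0;
  for (std::size_t i = 0; i + 1 < fmt.size(); ++i)
    if (fmt[i] == '%' && fmt[i + 1] == '<')
      ++count;
  return count;
}

// Returns false in the situation the NOTE describes: the format names more
// fields than the buffered log entry actually carries.
bool can_render_entry(const std::string &fmt, std::size_t fields_in_entry) {
  return count_field_markers(fmt) <= fields_in_entry;
}
{code}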



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Carlin updated TS-2375:
-

Fix Version/s: 4.2.0

 Error when using ascii logging format: There are more field markers than 
 fields
 ---

 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 4.0.2, 4.1.1
Reporter: David Carlin
 Fix For: 4.2.0


 I noticed the following on an ats 4.0.2 host in diags.log:
 [Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more field 
 markers than fields; cannot process log entry
 It only happens about every fifth time you start ATS.  That message will fill 
 diags.log and nothing is written to squid.log
 I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
 additional error information:
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more field 
 markers than fields; cannot process log entry
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
 LogBuffer to ascii, have dropped (232) bytes.
 The convert to ascii message tipped me off that this could be the source of 
 the problem.  Up until now we've been using the binary log format, so perhaps 
 why I didn't run into this in the past.  I then changed the log format back 
 to binary, and I was unable to reproduce the issue - so it seems related to 
 ascii logging.
 Here is our logs_xml.config:
 {noformat}
 <LogFormat>
   <Name = "ats_generic_config"/>
   <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
 </LogFormat>
 <LogObject>
   <Format = "ats_generic_config"/>
   <Filename = "squid"/>
   <Mode = "ascii"/>
 </LogObject>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)
David Carlin created TS-2375:


 Summary: Error when using ascii logging format: There are more 
field markers than fields
 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Reporter: David Carlin


I noticed the following on an ats 4.0.2 host in diags.log:

[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more field 
markers than fields; cannot process log entry

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
additional error information:

[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more field 
markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps why I 
didn't run into this in the past.  I then changed the log format back to 
binary, and I was unable to reproduce the issue - so it seems related to ascii 
logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Carlin updated TS-2375:
-

Description: 
I noticed the following on an ats 4.0.2 host in diags.log:

{noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
field markers than fields; cannot process log entry{noformat}

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
additional error information:

{noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
field markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.{noformat}

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps why I 
didn't run into this in the past.  I then changed the log format back to 
binary, and I was unable to reproduce the issue - so it seems related to ascii 
logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}

  was:
I noticed the following on an ats 4.0.2 host in diags.log:

[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more field 
markers than fields; cannot process log entry

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
additional error information:

[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more field 
markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps why I 
didn't run into this in the past.  I then changed the log format back to 
binary, and I was unable to reproduce the issue - so it seems related to ascii 
logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}


 Error when using ascii logging format: There are more field markers than 
 fields
 ---

 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 4.0.2, 4.1.1
Reporter: David Carlin
Assignee: Leif Hedstrom
 Fix For: 4.2.0


 I noticed the following on an ats 4.0.2 host in diags.log:
 {noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
 field markers than fields; cannot process log entry{noformat}
 It only happens about every fifth time you start ATS.  That message will fill 
 diags.log and nothing is written to squid.log
 I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
 additional error information:
 {noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
 field markers than fields; cannot process log entry
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
 LogBuffer to ascii, have dropped (232) bytes.{noformat}
 The convert to ascii message tipped me off that this could be the source of 
 the problem.  Up until now we've been using the binary log format, so perhaps 
 why I didn't run into this in the past.  I then changed the log format back 
 to binary, and I was unable to reproduce the issue - so it seems related to 
 ascii logging.
 Here is our logs_xml.config:
 {noformat}
 <LogFormat>
   <Name = "ats_generic_config"/>
   <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
 </LogFormat>
 <LogObject>
   <Format = "ats_generic_config"/>
   <Filename = "squid"/>
   <Mode = "ascii"/>
 </LogObject>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Carlin updated TS-2375:
-

Assignee: Leif Hedstrom

 Error when using ascii logging format: There are more field markers than 
 fields
 ---

 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 4.0.2, 4.1.1
Reporter: David Carlin
Assignee: Leif Hedstrom
 Fix For: 4.2.0


 I noticed the following on an ats 4.0.2 host in diags.log:
 [Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more field 
 markers than fields; cannot process log entry
 It only happens about every fifth time you start ATS.  That message will fill 
 diags.log and nothing is written to squid.log
 I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
 additional error information:
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more field 
 markers than fields; cannot process log entry
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
 LogBuffer to ascii, have dropped (232) bytes.
 The convert to ascii message tipped me off that this could be the source of 
 the problem.  Up until now we've been using the binary log format, so perhaps 
 why I didn't run into this in the past.  I then changed the log format back 
 to binary, and I was unable to reproduce the issue - so it seems related to 
 ascii logging.
 Here is our logs_xml.config:
 {noformat}
 <LogFormat>
   <Name = "ats_generic_config"/>
   <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
 </LogFormat>
 <LogObject>
   <Format = "ats_generic_config"/>
   <Filename = "squid"/>
   <Mode = "ascii"/>
 </LogObject>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Carlin updated TS-2375:
-

Description: 
I noticed the following on an ats 4.0.2 host in diags.log:

{noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
field markers than fields; cannot process log entry{noformat}

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened except there 
was additional error information:

{noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
field markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.{noformat}

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps this 
is why I didn't run into this in the past.  I then changed the log format back 
to binary, and I was unable to reproduce the issue - so it seems related to 
ascii logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}

  was:
I noticed the following on an ats 4.0.2 host in diags.log:

{noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
field markers than fields; cannot process log entry{noformat}

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened except there 
was additional error information:

{noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
field markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.{noformat}

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps why I 
didn't run into this in the past.  I then changed the log format back to 
binary, and I was unable to reproduce the issue - so it seems related to ascii 
logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}


 Error when using ascii logging format: There are more field markers than 
 fields
 ---

 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 4.0.2, 4.1.1
Reporter: David Carlin
Assignee: Leif Hedstrom
 Fix For: 4.2.0


 I noticed the following on an ats 4.0.2 host in diags.log:
 {noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
 field markers than fields; cannot process log entry{noformat}
 It only happens about every fifth time you start ATS.  That message will fill 
 diags.log and nothing is written to squid.log
 I then upgraded to 4.1.1 as a test, and the same thing happened except there 
 was additional error information:
 {noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
 field markers than fields; cannot process log entry
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
 LogBuffer to ascii, have dropped (232) bytes.{noformat}
 The convert to ascii message tipped me off that this could be the source of 
 the problem.  Up until now we've been using the binary log format, so perhaps 
 this is why I didn't run into this in the past.  I then changed the log 
 format back to binary, and I was unable to reproduce the issue - so it seems 
 related to ascii logging.
 Here is our logs_xml.config:
 {noformat}
 <LogFormat>
   <Name = "ats_generic_config"/>
   <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
 </LogFormat>
 <LogObject>
   <Format = "ats_generic_config"/>
   <Filename = "squid"/>
   <Mode = "ascii"/>
 </LogObject>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2375) Error when using ascii logging format: There are more field markers than fields

2013-11-20 Thread David Carlin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Carlin updated TS-2375:
-

Description: 
I noticed the following on an ats 4.0.2 host in diags.log:

{noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
field markers than fields; cannot process log entry{noformat}

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened except there 
was additional error information:

{noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
field markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.{noformat}

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps why I 
didn't run into this in the past.  I then changed the log format back to 
binary, and I was unable to reproduce the issue - so it seems related to ascii 
logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}

  was:
I noticed the following on an ats 4.0.2 host in diags.log:

{noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
field markers than fields; cannot process log entry{noformat}

It only happens about every fifth time you start ATS.  That message will fill 
diags.log and nothing is written to squid.log

I then upgraded to 4.1.1 as a test, and the same thing happened on there was 
additional error information:

{noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
field markers than fields; cannot process log entry
[Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
LogBuffer to ascii, have dropped (232) bytes.{noformat}

The convert to ascii message tipped me off that this could be the source of the 
problem.  Up until now we've been using the binary log format, so perhaps why I 
didn't run into this in the past.  I then changed the log format back to 
binary, and I was unable to reproduce the issue - so it seems related to ascii 
logging.

Here is our logs_xml.config:

{noformat}
<LogFormat>
  <Name = "ats_generic_config"/>
  <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
</LogFormat>

<LogObject>
  <Format = "ats_generic_config"/>
  <Filename = "squid"/>
  <Mode = "ascii"/>
</LogObject>
{noformat}


 Error when using ascii logging format: There are more field markers than 
 fields
 ---

 Key: TS-2375
 URL: https://issues.apache.org/jira/browse/TS-2375
 Project: Traffic Server
  Issue Type: Bug
  Components: Logging
Affects Versions: 4.0.2, 4.1.1
Reporter: David Carlin
Assignee: Leif Hedstrom
 Fix For: 4.2.0


 I noticed the following on an ats 4.0.2 host in diags.log:
 {noformat}[Nov 20 15:08:29.627] Server {0x2b732fe9c700} NOTE: There are more 
 field markers than fields; cannot process log entry{noformat}
 It only happens about every fifth time you start ATS.  That message will fill 
 diags.log and nothing is written to squid.log
 I then upgraded to 4.1.1 as a test, and the same thing happened except there 
 was additional error information:
 {noformat}[Nov 20 15:40:53.656] Server {0x2b568aac8700} NOTE: There are more 
 field markers than fields; cannot process log entry
 [Nov 20 15:40:53.656] Server {0x2b568aac8700} ERROR: Failed to convert 
 LogBuffer to ascii, have dropped (232) bytes.{noformat}
 The convert to ascii message tipped me off that this could be the source of 
 the problem.  Up until now we've been using the binary log format, so perhaps 
 why I didn't run into this in the past.  I then changed the log format back 
 to binary, and I was unable to reproduce the issue - so it seems related to 
 ascii logging.
 Here is our logs_xml.config:
 {noformat}
 <LogFormat>
   <Name = "ats_generic_config"/>
   <Format = "%<cqtq> %<ttms> %<chi> %<crc> %<pssc> %<psql> %<cqhm> %<cquc> %<caun> %<phr>/%<pqsn> %<psct> %<cquuc> f1 f2 f3 f4"/>
 </LogFormat>
 <LogObject>
   <Format = "ats_generic_config"/>
   <Filename = "squid"/>
   <Mode = "ascii"/>
 </LogObject>
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2373) the cache is not initialized properly

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-2373:


Description: 
In recent days several of our production ATS servers failed to initialize the 
cache system on restart, so all requests were simply forwarded to the origins.
Note: the storage is configured to use raw devices, like this:
/dev/raw/raw1 146163105792
/dev/raw/raw2 146163105792
/dev/raw/raw3 146163105792
/dev/raw/raw4 146163105792
/dev/raw/raw5 146163105792
volume.config is not configured, so there is one vol per raw device.

1.  checking traffic.out, we found that the "cache enabled" line that 
should follow the "traffic server running" line never appears
2.  echo show:cache-stats|traffic_shell shows that all stats are ZEROs
3.  dumped the memory of traffic_server with gdb gcore for convenience of debugging
4.  used gdb to debug:
 a) all vols are set in the global gvol except /dev/raw/raw4.
 b) one vol (/dev/raw/raw4) hung in the function handle_recover_from_data
 c) checking aio_req, we found that one io event is queued in 
aio_req[4]'s aio_temp_list
 d) checking the io threads, all are in pthread_cond_wait status.
Here we can see that handle_recover_from_data issued an io request 
and queued it in aio_req[4]'s aio_temp_list, but all io threads are 
in pthread_cond_wait status, so no thread will ever process the request and 
handle_recover_from_data waits forever. The cache will not be enabled 
until all the vols are initialized successfully.

Inspecting the code, we found the following.
Code snippet 1, from aio_thread_main:
1  if (!INK_ATOMICLIST_EMPTY(my_aio_req->aio_temp_list))
2    aio_move(my_aio_req);
3  if (!(op = my_aio_req->aio_todo.pop()) && !(op = my_aio_req->http_aio_todo.pop()))
4    break;

Code snippet 2, from aio_queue_req:
{code}
1. if (!ink_mutex_try_acquire(&req->aio_mutex)) {
     ink_atomiclist_push(&req->aio_temp_list, op);
2. } else {
     /* check if any pending requests on the atomic list */
     if (!INK_ATOMICLIST_EMPTY(req->aio_temp_list))
       aio_move(req);
     /* now put the new request */
     aio_insert(op, req);
     ink_cond_signal(&req->aio_cond);
     ink_mutex_release(&req->aio_mutex);
   }
{code}

Now suppose aio_temp_list is empty, thread A is running code snippet 1, and 
thread B is running code snippet 2. Just as thread A is between lines 1 and 2 
of snippet 1, thread B executes line 1 of snippet 2: thread B pushes the io 
request onto aio_temp_list, but sends no notification. Thread A continues to 
line 3, then line 4 (break), and finally falls into pthread_cond_wait. Thus no 
io thread will ever process this request.



something about my debugging:
{code}
(gdb) p theCache
$1 = (Cache *) 0xd828cb0
(gdb) p *theCache
$2 = {cache_read_done = 1, total_good_nvol = 4, total_nvol = 5, ready = 0, 
cache_size = 89169900, 
  hosttable = 0x0, total_initialized_vol = 4, scheme = 1}
(gdb) set $i=0
(gdb) p gvol[$i++]->path
$3 = 0xd826ca0 "/dev/raw/raw2"
(gdb) p gvol[$i++]->path
$4 = 0xd8267a0 "/dev/raw/raw1"
(gdb) p gvol[$i++]->path
$5 = 0xd826a00 "/dev/raw/raw5"
(gdb) p gvol[$i++]->path
$6 = 0xd8269c0 "/dev/raw/raw3"
(gdb) p gvol[$i++]->path
Cannot access memory at address 0x0
(gdb) p cp_list->head->vols[0]
$7 = (Vol *) 0xd828d10
(gdb) set $i=0
(gdb) p *cp_list->head->vols[4]
$8 = {<Continuation> = {force_VFPT_to_top = {_vptr.force_VFPT_to_top = 0x713010},
    handler = 0x672070 <Vol::aggWrite(int, void*)>, mutex = {m_ptr = 0xd806ac0},
    link = {<SLink<Continuation>> = {next = 0x0}, prev = 0x0}}, path = 0xd826a00 "/dev/raw/raw5",
  hash_id = 0xd825db0 "/dev/raw/raw5 40960:17833980", hash_id_md5 = {b = {16306325817679019090,
      17491422742868856741}}, fd = 38, raw_dir = 0x2aaaf0efb000 "\r\025", dir = 0x2aaaf0efd000,
  header = 0x2aaaf0efb000, footer = 0x2aaafbcb9000, segments = 278, buckets = 16382,
  recover_pos = 68007976448, prev_recover_pos = 67994133504, scan_pos = 76968569856, skip = 40960,
  start = 364421120, len = 146095964160, data_blocks = 17789500, hit_evacuate_window = 0,
  io = {<AIOCallback> = {<Continuation> = {force_VFPT_to_top = {_vptr.force_VFPT_to_top = 0x712d70},
        handler = 0x647640 <AIOCallbackInternal::io_complete(int, void*)>, mutex = {m_ptr = 0xd806ac0},
        link = {<SLink<Continuation>> = {next = 0x0}, prev = 0x0}}, aiocb = {aio_fildes = 0,
        aio_buf = 0x2aaabc47e000, aio_nbytes = 8388608, aio_offset = 67994133504, aio_reqprio = 0,
        aio_lio_opcode = 1, aio_state = 0, aio__pad = {0}}, action = {_vptr.Action = 0x0,
        continuation = 0xd8bc010, mutex = {m_ptr = 0xd806ac0}, cancelled = 0}, thread = 0x0, then = 0x0,
      aio_result = 8388608}, first = 0x0, aio_req = 0xc6532f0, sleep_time = 0},
  agg = {<DLL<CacheVC, Continuation::Link_link>> = {head = 0x0}, tail = 0x0},
  stat_cache_vcs = {<DLL<CacheVC, Continuation::Link_link>> = {head = 0x0}, tail = 0x0},
  sync = {<DLL<CacheVC, Continuation::Link_link>> = {head = 0x0}, tail = 0x0}, agg_buffer = 0x2aaaf07be000 "",
  agg_todo_size = 0, agg_buf_pos = 0, trigger = 0x0, open_dir = 

[jira] [Updated] (TS-2299) ATS seg faults

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-2299:


Description: 
I'm seeing segmentation faults in ATS 4.0.1, these seem to happen randomly:

{code}
NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0[0x2aafe810eca0]
/usr/local/bin/traffic_server(_Z16mime_scanner_getP11MIMEScannerPPKcS2_S3_S3_Pbbi+0x2c2)[0x5cf752]
/usr/local/bin/traffic_server(_Z21http_parser_parse_reqP10HTTPParserP7HdrHeapP11HTTPHdrImplPPKcS6_bb+0x113)[0x5c4e73]
/usr/local/bin/traffic_server(_ZN7HTTPHdr9parse_reqEP10HTTPParserP14IOBufferReaderPib+0x1a7)[0x5c11d7]
/usr/local/bin/traffic_server(_ZN6HttpSM32state_read_client_request_headerEiPv+0x100)[0x5311c0]
/usr/local/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xcc)[0x5383ac]
/usr/local/bin/traffic_server(_ZN8PluginVC17process_read_sideEb+0x425)[0x4d1535]
/usr/local/bin/traffic_server(_ZN8PluginVC18process_write_sideEb+0x5ac)[0x4d1e6c]
/usr/local/bin/traffic_server(_ZN8PluginVC12main_handlerEiPv+0x46e)[0x4d389e]
/usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x238)[0x6cb258]
/usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x3a9)[0x6cb979]
/usr/local/bin/traffic_server[0x6cad1e]
/lib64/libpthread.so.0[0x2aafe810683d]
/lib64/libc.so.6(clone+0x6d)[0x313bad503d]
{code}

(demangled)
{code}
NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0[0x2aafe810eca0]
/usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, char 
const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
/usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, 
HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/bin/traffic_server - STACK TRACE: 
/usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, IOBufferReader*, 
int*, bool)+0x1a7)[0x5c11d7]
/lib64/libpthread.so.0[0x2ba86e67aca0]
/usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, char 
const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
/usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, 
void*)+0x100)[0x5311c0]
/usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, 
HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
/usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
/usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, IOBufferReader*, 
int*, bool)+0x1a7)[0x5c11d7]
/usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
/usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, 
void*)+0x100)[0x5311c0]
/usr/local/bin/traffic_server(PluginVC::main_handler(int, 
void*)+0x39f)[0x4d37cf]
/usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
/usr/local/bin/traffic_server(_ZN8Plu
ginVC17process_read_sideEb+0x425)[0x4d1535]
/usr/local/bin/traffic_server(EThread::process_event(Event*, 
int)+0x238)[0x6cb258]
/usr/local/bin/traffic_server(EThread::execute()+0x707)[0x6cbcd7]
/usr/local/bin/traffic_server[0x6cad1e]
/lib64/libpthread.so.0[0x2ba86e67283d]
/usr/local/bin/traffic_server(PluginVC::process_write_side(bool)+0x5ac)[0x4d1e6c]
/usr/local/bin/traffic_server(PluginVC::main_handler(int, 
void*)+0x46e)[0x4d389e]
/lib64/libc.so.6(clone+0x6d)[0x313bad503d]
{code}

  was:
I'm seeing segmentation faults in ATS 4.0.1, these seem to happen randomly:

NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0[0x2aafe810eca0]
/usr/local/bin/traffic_server(_Z16mime_scanner_getP11MIMEScannerPPKcS2_S3_S3_Pbbi+0x2c2)[0x5cf752]
/usr/local/bin/traffic_server(_Z21http_parser_parse_reqP10HTTPParserP7HdrHeapP11HTTPHdrImplPPKcS6_bb+0x113)[0x5c4e73]
/usr/local/bin/traffic_server(_ZN7HTTPHdr9parse_reqEP10HTTPParserP14IOBufferReaderPib+0x1a7)[0x5c11d7]
/usr/local/bin/traffic_server(_ZN6HttpSM32state_read_client_request_headerEiPv+0x100)[0x5311c0]
/usr/local/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xcc)[0x5383ac]
/usr/local/bin/traffic_server(_ZN8PluginVC17process_read_sideEb+0x425)[0x4d1535]
/usr/local/bin/traffic_server(_ZN8PluginVC18process_write_sideEb+0x5ac)[0x4d1e6c]
/usr/local/bin/traffic_server(_ZN8PluginVC12main_handlerEiPv+0x46e)[0x4d389e]
/usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x238)[0x6cb258]
/usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x3a9)[0x6cb979]
/usr/local/bin/traffic_server[0x6cad1e]
/lib64/libpthread.so.0[0x2aafe810683d]
/lib64/libc.so.6(clone+0x6d)[0x313bad503d]

(demangled)

NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/local/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0[0x2aafe810eca0]

[jira] [Commented] (TS-2373) the cache is not initialized properly

2013-11-20 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13827806#comment-13827806
 ] 

James Peach commented on TS-2373:
-

We had a bug with similar symptoms. I took a cache diagnostic trace. Let me 
see whether it is the same issue.

 the cache is not initialized properly
 -

 Key: TS-2373
 URL: https://issues.apache.org/jira/browse/TS-2373
 Project: Traffic Server
  Issue Type: Bug
  Components: Cache
Reporter: xiongzongtao
Assignee: xiongzongtao
  Labels: T

 In recent days several of our production ATS servers failed to initialize the 
 cache system on restart, so all requests were simply forwarded to the origins.
 Note: the storage is configured to use raw devices, like this:
 /dev/raw/raw1 146163105792
 /dev/raw/raw2 146163105792
 /dev/raw/raw3 146163105792
 /dev/raw/raw4 146163105792
 /dev/raw/raw5 146163105792
 volume.config is not configured, so there is one vol per raw device.
 1.  checking traffic.out, we found that the "cache enabled" line that 
 should follow the "traffic server running" line never appears
 2.  echo show:cache-stats|traffic_shell shows that all stats are ZEROs
 3.  dumped the memory of traffic_server with gdb gcore for convenience of 
 debugging
 4.  used gdb to debug:
  a) all vols are set in the global gvol except /dev/raw/raw4.
  b) one vol (/dev/raw/raw4) hung in the function handle_recover_from_data
  c) checking aio_req, we found that one io event is queued in 
   aio_req[4]'s aio_temp_list
  d) checking the io threads, all are in pthread_cond_wait status.
 Here we can see that handle_recover_from_data issued an io request 
 and queued it in aio_req[4]'s aio_temp_list, but all io threads are 
 in pthread_cond_wait status, so no thread will ever process the request and 
 handle_recover_from_data waits forever. The cache will not be enabled 
 until all the vols are initialized successfully.
 Inspecting the code, we found the following.
 Code snippet 1, from aio_thread_main:
 1  if (!INK_ATOMICLIST_EMPTY(my_aio_req->aio_temp_list))
 2    aio_move(my_aio_req);
 3  if (!(op = my_aio_req->aio_todo.pop()) && !(op = my_aio_req->http_aio_todo.pop()))
 4    break;
 Code snippet 2, from aio_queue_req:
 {code}
 1. if (!ink_mutex_try_acquire(&req->aio_mutex)) {
      ink_atomiclist_push(&req->aio_temp_list, op);
 2. } else {
      /* check if any pending requests on the atomic list */
      if (!INK_ATOMICLIST_EMPTY(req->aio_temp_list))
        aio_move(req);
      /* now put the new request */
      aio_insert(op, req);
      ink_cond_signal(&req->aio_cond);
      ink_mutex_release(&req->aio_mutex);
    }
 {code}
 Now suppose aio_temp_list is empty, thread A is running code snippet 1, and 
 thread B is running code snippet 2. Just as thread A is between lines 1 and 2 
 of snippet 1, thread B executes line 1 of snippet 2: thread B pushes the io 
 request onto aio_temp_list, but sends no notification. Thread A continues to 
 line 3, then line 4 (break), and finally falls into pthread_cond_wait. Thus no 
 io thread will ever process this request.
 something about my debugging:
 {code}
 (gdb) p theCache
 $1 = (Cache *) 0xd828cb0
 (gdb) p *theCache
 $2 = {cache_read_done = 1, total_good_nvol = 4, total_nvol = 5, ready = 0, 
 cache_size = 89169900, 
   hosttable = 0x0, total_initialized_vol = 4, scheme = 1}
 (gdb) set $i=0
 (gdb) p gvol[$i++]->path
 $3 = 0xd826ca0 "/dev/raw/raw2"
 (gdb) p gvol[$i++]->path
 $4 = 0xd8267a0 "/dev/raw/raw1"
 (gdb) p gvol[$i++]->path
 $5 = 0xd826a00 "/dev/raw/raw5"
 (gdb) p gvol[$i++]->path
 $6 = 0xd8269c0 "/dev/raw/raw3"
 (gdb) p gvol[$i++]->path
 Cannot access memory at address 0x0
 (gdb) p cp_list->head->vols[0]
 $7 = (Vol *) 0xd828d10
 (gdb) set $i=0
 (gdb) p *cp_list->head->vols[4]
 $8 = {<Continuation> = {force_VFPT_to_top = {_vptr.force_VFPT_to_top = 0x713010},
     handler = 0x672070 <Vol::aggWrite(int, void*)>, mutex = {m_ptr = 0xd806ac0},
     link = {<SLink<Continuation>> = {next = 0x0}, prev = 0x0}}, path = 0xd826a00 "/dev/raw/raw5",
   hash_id = 0xd825db0 "/dev/raw/raw5 40960:17833980", hash_id_md5 = {b = {16306325817679019090,
       17491422742868856741}}, fd = 38, raw_dir = 0x2aaaf0efb000 "\r\025", dir = 0x2aaaf0efd000,
   header = 0x2aaaf0efb000, footer = 0x2aaafbcb9000, segments = 278, buckets = 16382,
   recover_pos = 68007976448, prev_recover_pos = 67994133504, scan_pos = 76968569856, skip = 40960,
   start = 364421120, len = 146095964160, data_blocks = 17789500, hit_evacuate_window = 0,
   io = {<AIOCallback> = {<Continuation> = {force_VFPT_to_top = {_vptr.force_VFPT_to_top = 0x712d70},
         handler = 0x647640 <AIOCallbackInternal::io_complete(int, void*)>, mutex = {m_ptr = 0xd806ac0},
         link = {<SLink<Continuation>> = {next = 0x0}, prev = 0x0}}, aiocb = {aio_fildes = 0,
         aio_buf = 0x2aaabc47e000, aio_nbytes = 8388608, aio_offset = 67994133504, aio_reqprio = 0, 
  

[jira] [Updated] (TS-2299) ATS seg faults

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-2299:


Component/s: MIME
 HTTP

 ATS seg faults
 --

 Key: TS-2299
 URL: https://issues.apache.org/jira/browse/TS-2299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, HTTP, MIME
Affects Versions: 4.0.1
 Environment: RHEL 5.9
Reporter: John Paul Vasicek

 I'm seeing segmentation faults in ATS 4.0.1, these seem to happen randomly:
 {code}
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0[0x2aafe810eca0]
 /usr/local/bin/traffic_server(_Z16mime_scanner_getP11MIMEScannerPPKcS2_S3_S3_Pbbi+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(_Z21http_parser_parse_reqP10HTTPParserP7HdrHeapP11HTTPHdrImplPPKcS6_bb+0x113)[0x5c4e73]
 /usr/local/bin/traffic_server(_ZN7HTTPHdr9parse_reqEP10HTTPParserP14IOBufferReaderPib+0x1a7)[0x5c11d7]
 /usr/local/bin/traffic_server(_ZN6HttpSM32state_read_client_request_headerEiPv+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(_ZN8PluginVC17process_read_sideEb+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(_ZN8PluginVC18process_write_sideEb+0x5ac)[0x4d1e6c]
 /usr/local/bin/traffic_server(_ZN8PluginVC12main_handlerEiPv+0x46e)[0x4d389e]
 /usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x3a9)[0x6cb979]
 /usr/local/bin/traffic_server[0x6cad1e]
 /lib64/libpthread.so.0[0x2aafe810683d]
 /lib64/libc.so.6(clone+0x6d)[0x313bad503d]
 {code}
 (demangled)
 {code}
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0[0x2aafe810eca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, 
 char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, 
 HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, 
 IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /lib64/libpthread.so.0[0x2ba86e67aca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, 
 char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, 
 void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, 
 HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, 
 IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, 
 void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, 
 void*)+0x39f)[0x4d37cf]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(_ZN8Plu
 ginVC17process_read_sideEb+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(EThread::process_event(Event*, 
 int)+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(EThread::execute()+0x707)[0x6cbcd7]
 /usr/local/bin/traffic_server[0x6cad1e]
 /lib64/libpthread.so.0[0x2ba86e67283d]
 /usr/local/bin/traffic_server(PluginVC::process_write_side(bool)+0x5ac)[0x4d1e6c]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, 
 void*)+0x46e)[0x4d389e]
 /lib64/libc.so.6(clone+0x6d)[0x313bad503d]
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2371) crash in mime_scanner_get

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-2371:


Summary: crash in mime_scanner_get  (was: ATS Segmentation fault)

 crash in mime_scanner_get
 -

 Key: TS-2371
 URL: https://issues.apache.org/jira/browse/TS-2371
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, HTTP, MIME
Affects Versions: 4.0.1
Reporter: John Paul Vasicek
  Labels: T, crash

 We have seen some segmentation faults in our logs:
 {code}
 /usr/local/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0[0x2ba86e67aca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /lib64/libpthread.so.0[0x2ba86e67aca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, void*)+0x39f)[0x4d37cf]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(EThread::process_event(Event*, int)+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(EThread::execute()+0x707)[0x6cbcd7]
 /usr/local/bin/traffic_server[0x6cad1e]
 /lib64/libpthread.so.0[0x2ba86e67283d]
 /usr/local/bin/traffic_server(PluginVC::process_write_side(bool)+0x5ac)[0x4d1e6c]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, void*)+0x46e)[0x4d389e]
 /lib64/libc.so.6(clone+0x6d)[0x313bad503d]
 /usr/local/bin/traffic_server(EThread::process_event(Event*, int)+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(EThread::execute()+0x3a9)[0x6cb979]
 [E. Mgmt] log ==> [TrafficManager] using root directory '/usr/local'
 [TrafficServer] using root directory '/usr/local'
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (TS-2371) ATS Segmentation fault

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-2371.
-

Resolution: Duplicate

Duplicate of TS-2299

 ATS Segmentation fault
 --

 Key: TS-2371
 URL: https://issues.apache.org/jira/browse/TS-2371
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, HTTP, MIME
Affects Versions: 4.0.1
Reporter: John Paul Vasicek
  Labels: T, crash

 We have seen some segmentation faults in our logs:
 {code}
 /usr/local/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0[0x2ba86e67aca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /lib64/libpthread.so.0[0x2ba86e67aca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, void*)+0x39f)[0x4d37cf]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(EThread::process_event(Event*, int)+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(EThread::execute()+0x707)[0x6cbcd7]
 /usr/local/bin/traffic_server[0x6cad1e]
 /lib64/libpthread.so.0[0x2ba86e67283d]
 /usr/local/bin/traffic_server(PluginVC::process_write_side(bool)+0x5ac)[0x4d1e6c]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, void*)+0x46e)[0x4d389e]
 /lib64/libc.so.6(clone+0x6d)[0x313bad503d]
 /usr/local/bin/traffic_server(EThread::process_event(Event*, int)+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(EThread::execute()+0x3a9)[0x6cb979]
 [E. Mgmt] log ==> [TrafficManager] using root directory '/usr/local'
 [TrafficServer] using root directory '/usr/local'
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2299) ATS seg faults in MIMEScanner::mime_scanner_get

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-2299:


Summary: ATS seg faults in MIMEScanner::mime_scanner_get  (was: ATS seg 
faults)

 ATS seg faults in MIMEScanner::mime_scanner_get
 ---

 Key: TS-2299
 URL: https://issues.apache.org/jira/browse/TS-2299
 Project: Traffic Server
  Issue Type: Bug
  Components: Core, HTTP, MIME
Affects Versions: 4.0.1
 Environment: RHEL 5.9
Reporter: John Paul Vasicek

 I'm seeing segmentation faults in ATS 4.0.1, these seem to happen randomly:
 {code}
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0[0x2aafe810eca0]
 /usr/local/bin/traffic_server(_Z16mime_scanner_getP11MIMEScannerPPKcS2_S3_S3_Pbbi+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(_Z21http_parser_parse_reqP10HTTPParserP7HdrHeapP11HTTPHdrImplPPKcS6_bb+0x113)[0x5c4e73]
 /usr/local/bin/traffic_server(_ZN7HTTPHdr9parse_reqEP10HTTPParserP14IOBufferReaderPib+0x1a7)[0x5c11d7]
 /usr/local/bin/traffic_server(_ZN6HttpSM32state_read_client_request_headerEiPv+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(_ZN8PluginVC17process_read_sideEb+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(_ZN8PluginVC18process_write_sideEb+0x5ac)[0x4d1e6c]
 /usr/local/bin/traffic_server(_ZN8PluginVC12main_handlerEiPv+0x46e)[0x4d389e]
 /usr/local/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(_ZN7EThread7executeEv+0x3a9)[0x6cb979]
 /usr/local/bin/traffic_server[0x6cad1e]
 /lib64/libpthread.so.0[0x2aafe810683d]
 /lib64/libc.so.6(clone+0x6d)[0x313bad503d]
 {code}
 (demangled)
 {code}
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0[0x2aafe810eca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, 
 char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, 
 HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/local/bin/traffic_server - STACK TRACE: 
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, 
 IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /lib64/libpthread.so.0[0x2ba86e67aca0]
 /usr/local/bin/traffic_server(mime_scanner_get(MIMEScanner*, char const**, 
 char const*, char const**, char const**, bool*, bool, int)+0x2c2)[0x5cf752]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, 
 void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(http_parser_parse_req(HTTPParser*, HdrHeap*, 
 HTTPHdrImpl*, char const**, char const*, bool, bool)+0x113)[0x5c4e73]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(HTTPHdr::parse_req(HTTPParser*, 
 IOBufferReader*, int*, bool)+0x1a7)[0x5c11d7]
 /usr/local/bin/traffic_server(PluginVC::process_read_side(bool)+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(HttpSM::state_read_client_request_header(int, 
 void*)+0x100)[0x5311c0]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, 
 void*)+0x39f)[0x4d37cf]
 /usr/local/bin/traffic_server(HttpSM::main_handler(int, void*)+0xcc)[0x5383ac]
 /usr/local/bin/traffic_server(_ZN8Plu
 ginVC17process_read_sideEb+0x425)[0x4d1535]
 /usr/local/bin/traffic_server(EThread::process_event(Event*, 
 int)+0x238)[0x6cb258]
 /usr/local/bin/traffic_server(EThread::execute()+0x707)[0x6cbcd7]
 /usr/local/bin/traffic_server[0x6cad1e]
 /lib64/libpthread.so.0[0x2ba86e67283d]
 /usr/local/bin/traffic_server(PluginVC::process_write_side(bool)+0x5ac)[0x4d1e6c]
 /usr/local/bin/traffic_server(PluginVC::main_handler(int, 
 void*)+0x46e)[0x4d389e]
 /lib64/libc.so.6(clone+0x6d)[0x313bad503d]
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (TS-2376) some makefiles don't get LDFLAGS right

2013-11-20 Thread Daniel Vitor Morilha (JIRA)
Daniel Vitor Morilha created TS-2376:


 Summary: some makefiles don't get LDFLAGS right
 Key: TS-2376
 URL: https://issues.apache.org/jira/browse/TS-2376
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Reporter: Daniel Vitor Morilha


I am trying to set a global LDFLAGS -rpath=...

some of the libraries get it right, most of the binaries spit:

c++: unrecognized option '-rpath=...'

I believe it is lacking the -Wl, prefix in front, which the compiler needs in 
order to pass the argument through to the linker.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2376) some makefiles don't get LDFLAGS right

2013-11-20 Thread James Peach (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828004#comment-13828004
 ] 

James Peach commented on TS-2376:
-

Please include the exact configure command you are using, along with a log of 
the {{make V=1}} output.

 some makefiles don't get LDFLAGS right
 --

 Key: TS-2376
 URL: https://issues.apache.org/jira/browse/TS-2376
 Project: Traffic Server
  Issue Type: Bug
  Components: Build
Reporter: Daniel Vitor Morilha

 I am trying to set a global LDFLAGS -rpath=...
 some of the libraries get it right, most of the binaries spit:
 c++: unrecognized option '-rpath=...'
 I believe it is lacking the -Wl in front, so the compiler can properly pass 
 the argument to the linker.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (TS-617) document proxy.config.plugin.extensions_dir

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach resolved TS-617.


Resolution: Invalid

The extensions.config mechanism was removed in TS-1778. The 
{{proxy.config.plugin.extensions_dir}} setting no longer exists.

 document proxy.config.plugin.extensions_dir
 ---

 Key: TS-617
 URL: https://issues.apache.org/jira/browse/TS-617
 Project: Traffic Server
  Issue Type: Improvement
  Components: Documentation
 Environment: any
Reporter: Igor Galić
Assignee: Igor Galić
 Fix For: Docs


 Document {{extensions.config}} -- it allows a plugin to expose its symbols as 
 globals, thereby exposing them to other plugins as an API.
 Add an empty default config file, with comments depicting example usage.
 Similar to plugin.config, {{extensions.config}} lists one plugin per line, 
 with an additional boolean flag regarding how to open the .so file.
 {noformat}
 # Example
 someplugin.so true
 {noformat}
 Would pass {{true}} as {{argv[1]}} to {{someplugin.so}}'s {{plugin_init()}}.
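 For illustration only (the mechanism no longer exists, per the resolution above): a 
 minimal sketch of how such a flag would arrive at a plugin entry point, assuming the 
 standard TSPluginInit signature; the plugin name and the flag handling are 
 hypothetical:
 {code}
 #include <string.h>
 #include <ts/ts.h>
 
 // Hypothetical sketch: an extensions.config-style boolean flag delivered as argv[1].
 void
 TSPluginInit(int argc, const char *argv[])
 {
   bool expose_globally = (argc > 1 && strcmp(argv[1], "true") == 0);
   TSDebug("someplugin", "expose symbols globally: %s", expose_globally ? "yes" : "no");
 }
 {code}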



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2377) DOC: install man pages as part of the build

2013-11-20 Thread James Peach (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Peach updated TS-2377:


Fix Version/s: 4.2.0
 Assignee: James Peach

I pretty much have this working.

 DOC: install man pages as part of the build
 ---

 Key: TS-2377
 URL: https://issues.apache.org/jira/browse/TS-2377
 Project: Traffic Server
  Issue Type: Bug
  Components: Documentation, Quality
Reporter: James Peach
Assignee: James Peach
 Fix For: 4.2.0


 We should install the sphinx-generated man pages as part of the build. This 
 would make it easy for both vendors and users to package the documentation 
 with the binary products.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (TS-2377) DOC: install man pages as part of the build

2013-11-20 Thread James Peach (JIRA)
James Peach created TS-2377:
---

 Summary: DOC: install man pages as part of the build
 Key: TS-2377
 URL: https://issues.apache.org/jira/browse/TS-2377
 Project: Traffic Server
  Issue Type: Bug
  Components: Documentation, Quality
Reporter: James Peach


We should install the sphinx-generated man pages as part of the build. This 
would make it easy for both vendors and users to package the documentation with 
the binary products.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (TS-2378) merge_response_header_with_cached_header

2013-11-20 Thread bettydramit (JIRA)
bettydramit created TS-2378:
---

 Summary: merge_response_header_with_cached_header
 Key: TS-2378
 URL: https://issues.apache.org/jira/browse/TS-2378
 Project: Traffic Server
  Issue Type: Bug
Reporter: bettydramit


ats 4.0.1 centos x86_64
NOTE: Traffic Server received Sig 11: Segmentation fault
/usr/bin/traffic_server - STACK TRACE: 
/lib64/libpthread.so.0(+0xf500)[0x2b98fc525500]
/lib64/libc.so.6(memcpy+0x35)[0x2b98fd162895]
/usr/bin/traffic_server(_ZN11MIMEHdrImpl12move_stringsEP10HdrStrHeap+0x8a)[0x5bf29a]
/usr/bin/traffic_server(_ZN7HdrHeap18coalesce_str_heapsEi+0xd3)[0x5b1cc3]
/usr/bin/traffic_server(_ZN7HdrHeap12allocate_strEi+0x9e)[0x5b224e]
/usr/bin/traffic_server(_ZN7HdrHeap13duplicate_strEPKci+0x5e)[0x5b246e]
/usr/bin/traffic_server(_Z20mime_field_value_setP7HdrHeapP11MIMEHdrImplP9MIMEFieldPKcib+0x37)[0x5c0597]
/usr/bin/traffic_server(_ZN12HttpTransact40merge_response_header_with_cached_headerEP7HTTPHdrS1_+0x2de)[0x545c5e]
/usr/bin/traffic_server(_ZN12HttpTransact41merge_and_update_headers_for_cache_updateEPNS_5StateE+0xe3)[0x548993]
/usr/bin/traffic_server(_ZN12HttpTransact49handle_cache_operation_on_forward_server_responseEPNS_5StateE+0x892)[0x555712]
/usr/bin/traffic_server(_ZN12HttpTransact14HandleResponseEPNS_5StateE+0x191)[0x5609d1]
/usr/bin/traffic_server(_ZN6HttpSM32call_transact_and_set_next_stateEPFvPN12HttpTransact5StateEE+0x66)[0x51afb6]
/usr/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3ba)[0x53579a]
/usr/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52b8f0]
/usr/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x5318eb]
/usr/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b8d44]
/usr/lib64/trafficserver/plugins/localdown.so(+0x2ebdc)[0x2b996c02ebdc]
/usr/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x116)[0x52b756]
/usr/bin/traffic_server(_ZN6HttpSM33state_read_server_response_headerEiPv+0x390)[0x52ef60]
/usr/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x5316f8]
/usr/bin/traffic_server[0x6805bb]
/usr/bin/traffic_server[0x682f54]
/usr/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1f2)[0x67bb92]
/usr/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8f)[0x6a356f]
/usr/bin/traffic_server(_ZN7EThread7executeEv+0x4a3)[0x6a3f53]
/usr/bin/traffic_server[0x6a240a]
/lib64/libpthread.so.0(+0x7851)[0x2b98fc51d851]
/lib64/libc.so.6(clone+0x6d)[0x2b98fd1c194d]




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread Yunkai Zhang (JIRA)
Yunkai Zhang created TS-2379:


 Summary: Add a new field: '%chp', client_host_port to LogFormat
 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang


Add a new field: '%chp', client_host_port to LogFormat.

In China, the government's network supervision bureau requires us to report access 
logs that include the client host port :(.

I wonder whether other ATS users in China will need this patch.
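
As a sketch only, assuming the logs_xml.config style of this era and stock fields 
such as %<chi>, %<cqu> and %<pssc>; the format name, filename, and field ordering 
below are hypothetical, and %<chp> is the field proposed here:

{noformat}
<LogFormat>
  <Name = "access_with_port"/>
  <Format = "%<chi> %<chp> %<cqu> %<pssc>"/>
</LogFormat>

<LogObject>
  <Format = "access_with_port"/>
  <Filename = "access_with_port"/>
</LogObject>
{noformat}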



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-1468) vary and accept* are ignored in cache for non-200 responses from the origin - webdav

2013-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-1468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828427#comment-13828427
 ] 

ASF subversion and git services commented on TS-1468:
-

Commit 46adeb3d0e4b8a927c24a479f2c1a91b28358c6e in branch refs/heads/master 
from [~bcall]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=46adeb3 ]

TS-1468: Check vary and accept headers on non-200 responses in cache


 vary and accept* are ignored in cache for non-200 responses from the origin - 
 webdav
 

 Key: TS-1468
 URL: https://issues.apache.org/jira/browse/TS-1468
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Affects Versions: 3.3.0, 3.2.0
Reporter: Bryan Call
Assignee: Bryan Call
 Fix For: 4.2.0

 Attachments: ts1468.diff


 ATS doesn't look at the Accept* and Vary headers when trying to find an 
 alternate in cache for non-200 responses from the origin.
 WebDAV is affected because the origin returns a 207 response and ATS doesn't 
 check the Accept* and Vary headers.  If there is gzipped data in cache and a 
 non-gzipped request comes in, the gzipped version will be handed back to the 
 client.
 There is an option for YTS to always look at the Accept* and Vary headers, 
 but it is not in ATS.
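 To illustrate the scenario (a hypothetical exchange, not taken from this report): a 
 gzipped 207 alternate stored in cache would be handed back even to a client that did 
 not ask for gzip, because the Accept-Encoding / Vary pair is not consulted for the 
 non-200 alternate:
 {noformat}
 # cached alternate (origin response, stored gzipped)
 HTTP/1.1 207 Multi-Status
 Content-Encoding: gzip
 Vary: Accept-Encoding
 
 # later client request that cannot handle gzip
 PROPFIND /dav/collection/ HTTP/1.1
 Host: example.com
 Accept-Encoding: identity
 {noformat}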



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread Yunkai Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunkai Zhang updated TS-2379:
-

Attachment: 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch

 Add a new field: '%chp', client_host_port to LogFormat
 --

 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang
 Attachments: 
 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch


 Add a new field: '%chp', client_host_port to LogFormat.
 In China, the Net supervisor bureau of government request us to report access 
 log with client host port:(.
 I wonder other users of ATS in China will need this patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread Yunkai Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunkai Zhang updated TS-2379:
-

Description: 
Add a new field: '%chp', client_host_port to LogFormat.

In China, the government's network supervision bureau requires us to report access 
logs that include the client host port :(.

I wonder whether other ATS users in China will need this patch.

  was:
Add a new field: '%chp', client_host_port to LogFormat.

In China, the Net supervisor bureau of government request us to report access 
log with client host port:(.

I wonder other users of ATS in China will need patch.


 Add a new field: '%chp', client_host_port to LogFormat
 --

 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang
 Attachments: 
 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch


 Add a new field: '%chp', client_host_port to LogFormat.
 In China, the Net supervisor bureau of government request us to report access 
 log with client host port:(.
 I wonder other users of ATS in China will need this patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread Yunkai Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunkai Zhang updated TS-2379:
-

Description: 
Add a new field: '%chp', client_host_port to LogFormat.

You may ask: who would care about the client host port?

But in China, which is a special case, the government's network supervision bureau 
will sometimes request us to report some access logs with the client host port (for 
unknown reasons).

I wonder whether other ATS users in China will need this patch.

  was:
Add a new field: '%chp', client_host_port to LogFormat.

In China, the Net supervisor bureau of government request us to report access 
log with client host port:(.

I wonder other users of ATS in China will need this patch.


 Add a new field: '%chp', client_host_port to LogFormat
 --

 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang
 Attachments: 
 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch


 Add a new field: '%chp', client_host_port to LogFormat.
 You may ask, who will care about client host port?
 But In China, a special country, the Net supervisor bureau of government 
 sometimes will request us to report some access logs with client host 
 port(for unknown reason).
 I wonder other users of ATS in China will need this patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2327) TSRedirectUrlSet does not perform DNS lookup of redirected OS

2013-11-20 Thread Bryan Call (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Call updated TS-2327:
---

Fix Version/s: 4.2.0

 TSRedirectUrlSet does not perform DNS lookup of redirected OS
 -

 Key: TS-2327
 URL: https://issues.apache.org/jira/browse/TS-2327
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Ron Barber
Assignee: Bryan Call
 Fix For: 4.2.0

 Attachments: ts2327.diff


 TSRedirectUrlSet does not perform a DNS lookup of the redirected OS; instead, it 
 reuses the connection to the original OS and makes a request for the 
 redirected object.  This, of course, would normally result in a 404 but could 
 easily just return the wrong content.
 I will attach my attempt at a patch momentarily; it works, however it may 
 not be the *best* way to make ATS perform the DNS lookup for the redirected 
 OS.
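 As context, a hedged sketch of how a plugin typically reaches this API, calling it 
 from the read-response hook; the hook choice, the TSstrdup ownership convention, and 
 the exact TSRedirectUrlSet(txn, url, len) signature are assumptions about the 
 4.x-era API, and the redirect target is hypothetical:
 {code}
 #include <string.h>
 #include <ts/ts.h>
 
 // Hedged sketch: point the transaction at a different origin ("OS") when the first
 // origin's response calls for it. Hook and signature are assumptions; the target
 // URL is hypothetical.
 static int
 handle_read_response(TSCont contp, TSEvent event, void *edata)
 {
   (void)contp;
   (void)event;
   TSHttpTxn txn = (TSHttpTxn)edata;
   const char *target = "http://other-origin.example.com/object";
   char *url = TSstrdup(target);  // assumed: the core takes ownership of this copy
   TSRedirectUrlSet(txn, url, (int)strlen(url));
   TSHttpTxnReenable(txn, TS_EVENT_HTTP_CONTINUE);
   return 0;
 }
 
 void
 TSPluginInit(int argc, const char *argv[])
 {
   (void)argc;
   (void)argv;
   TSCont cont = TSContCreate(handle_read_response, NULL);
   TSHttpHookAdd(TS_HTTP_READ_RESPONSE_HDR_HOOK, cont);
 }
 {code}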



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread Yunkai Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunkai Zhang reassigned TS-2379:


Assignee: Yunkai Zhang

 Add a new field: '%chp', client_host_port to LogFormat
 --

 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang
Assignee: Yunkai Zhang
 Attachments: 
 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch


 Add a new field: '%chp', client_host_port to LogFormat.
 You may ask, who will care about client host port?
 But In China, a special country, the Net supervisor bureau of government 
 sometimes will request us to report some access logs with client host 
 port(for unknown reason).
 I wonder other users of ATS in China will need this patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2327) TSRedirectUrlSet does not perform DNS lookup of redirected OS

2013-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828438#comment-13828438
 ] 

ASF subversion and git services commented on TS-2327:
-

Commit 9f05ebe81f5ceca844ae34bb51691afa198a39db in branch refs/heads/master 
from [~rwbarber2]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=9f05ebe ]

TS-2327: TSRedirectUrlSet does not perform DNS lookup of redirected OS


 TSRedirectUrlSet does not perform DNS lookup of redirected OS
 -

 Key: TS-2327
 URL: https://issues.apache.org/jira/browse/TS-2327
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: Ron Barber
Assignee: Bryan Call
 Fix For: 4.2.0

 Attachments: ts2327.diff


 TSRedirectUrlSet does not perform a DNS lookup of the redirected OS; instead, it 
 reuses the connection to the original OS and makes a request for the 
 redirected object.  This, of course, would normally result in a 404 but could 
 easily just return the wrong content.
 I will attach my attempt at a patch momentarily; it works, however it may 
 not be the *best* way to make ATS perform the DNS lookup for the redirected 
 OS.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread Yunkai Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yunkai Zhang updated TS-2379:
-

Fix Version/s: 4.2.0

 Add a new field: '%chp', client_host_port to LogFormat
 --

 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang
Assignee: Yunkai Zhang
 Fix For: 4.2.0

 Attachments: 
 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch


 Add a new field: '%chp', client_host_port to LogFormat.
 You may ask, who will care about client host port?
 But In China, a special country, the Net supervisor bureau of government 
 sometimes will request us to report some access logs with client host 
 port(for unknown reason).
 I wonder other users of ATS in China will need this patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2374) http_sm hung when all consumers aborted but the producer still alive

2013-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828454#comment-13828454
 ] 

ASF subversion and git services commented on TS-2374:
-

Commit ae51d4ed7b1bf4e162b2c26819b5a88c5cf9ece4 in branch refs/heads/master 
from [~taorui]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=ae51d4e ]

TS-2374: abort the producer to avoid httpsm hung if none of it`s
consumer is alive.


 http_sm hung when all consumers aborted but the producer still alive
 

 Key: TS-2374
 URL: https://issues.apache.org/jira/browse/TS-2374
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: weijin

 HttpSM will kill_this when the tunnel is not alive (none of consumer alive, 
 nor the producer). But when the cache_vc (for write) is the only consumer of 
 the server_session, and the consumer is aborted because of write failed, the 
 httpsm hung.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (TS-2374) http_sm hung when all consumers aborted but the producer still alive

2013-11-20 Thread weijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/TS-2374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weijin updated TS-2374:
---

Backport to Version: 4.0.2
  Fix Version/s: 4.2.0
   Assignee: weijin

 http_sm hung when all consumers aborted but the producer still alive
 

 Key: TS-2374
 URL: https://issues.apache.org/jira/browse/TS-2374
 Project: Traffic Server
  Issue Type: Bug
  Components: HTTP
Reporter: weijin
Assignee: weijin
 Fix For: 4.2.0


 HttpSM will kill_this when the tunnel is not alive (none of consumer alive, 
 nor the producer). But when the cache_vc (for write) is the only consumer of 
 the server_session, and the consumer is aborted because of write failed, the 
 httpsm hung.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2378) merge_response_header_with_cached_header

2013-11-20 Thread Leif Hedstrom (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828463#comment-13828463
 ] 

Leif Hedstrom commented on TS-2378:
---

Can you please update the bug with a demangled trace, a descriptive summary, 
the version of ATS you are using, and perhaps some details on what could 
possibly have triggered this.

 merge_response_header_with_cached_header
 

 Key: TS-2378
 URL: https://issues.apache.org/jira/browse/TS-2378
 Project: Traffic Server
  Issue Type: Bug
Reporter: bettydramit

 ats 4.0.1 centos x86_64
 NOTE: Traffic Server received Sig 11: Segmentation fault
 /usr/bin/traffic_server - STACK TRACE: 
 /lib64/libpthread.so.0(+0xf500)[0x2b98fc525500]
 /lib64/libc.so.6(memcpy+0x35)[0x2b98fd162895]
 /usr/bin/traffic_server(_ZN11MIMEHdrImpl12move_stringsEP10HdrStrHeap+0x8a)[0x5bf29a]
 /usr/bin/traffic_server(_ZN7HdrHeap18coalesce_str_heapsEi+0xd3)[0x5b1cc3]
 /usr/bin/traffic_server(_ZN7HdrHeap12allocate_strEi+0x9e)[0x5b224e]
 /usr/bin/traffic_server(_ZN7HdrHeap13duplicate_strEPKci+0x5e)[0x5b246e]
 /usr/bin/traffic_server(_Z20mime_field_value_setP7HdrHeapP11MIMEHdrImplP9MIMEFieldPKcib+0x37)[0x5c0597]
 /usr/bin/traffic_server(_ZN12HttpTransact40merge_response_header_with_cached_headerEP7HTTPHdrS1_+0x2de)[0x545c5e]
 /usr/bin/traffic_server(_ZN12HttpTransact41merge_and_update_headers_for_cache_updateEPNS_5StateE+0xe3)[0x548993]
 /usr/bin/traffic_server(_ZN12HttpTransact49handle_cache_operation_on_forward_server_responseEPNS_5StateE+0x892)[0x555712]
 /usr/bin/traffic_server(_ZN12HttpTransact14HandleResponseEPNS_5StateE+0x191)[0x5609d1]
 /usr/bin/traffic_server(_ZN6HttpSM32call_transact_and_set_next_stateEPFvPN12HttpTransact5StateEE+0x66)[0x51afb6]
 /usr/bin/traffic_server(_ZN6HttpSM17handle_api_returnEv+0x3ba)[0x53579a]
 /usr/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x2b0)[0x52b8f0]
 /usr/bin/traffic_server(_ZN6HttpSM18state_api_callbackEiPv+0x8b)[0x5318eb]
 /usr/bin/traffic_server(TSHttpTxnReenable+0x404)[0x4b8d44]
 /usr/lib64/trafficserver/plugins/localdown.so(+0x2ebdc)[0x2b996c02ebdc]
 /usr/bin/traffic_server(_ZN6HttpSM17state_api_calloutEiPv+0x116)[0x52b756]
 /usr/bin/traffic_server(_ZN6HttpSM33state_read_server_response_headerEiPv+0x390)[0x52ef60]
 /usr/bin/traffic_server(_ZN6HttpSM12main_handlerEiPv+0xd8)[0x5316f8]
 /usr/bin/traffic_server[0x6805bb]
 /usr/bin/traffic_server[0x682f54]
 /usr/bin/traffic_server(_ZN10NetHandler12mainNetEventEiP5Event+0x1f2)[0x67bb92]
 /usr/bin/traffic_server(_ZN7EThread13process_eventEP5Eventi+0x8f)[0x6a356f]
 /usr/bin/traffic_server(_ZN7EThread7executeEv+0x4a3)[0x6a3f53]
 /usr/bin/traffic_server[0x6a240a]
 /lib64/libpthread.so.0(+0x7851)[0x2b98fc51d851]
 /lib64/libc.so.6(clone+0x6d)[0x2b98fd1c194d]



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (TS-2379) Add a new field: '%chp', client_host_port to LogFormat

2013-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/TS-2379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13828465#comment-13828465
 ] 

ASF subversion and git services commented on TS-2379:
-

Commit 51a9252dcd6fa76fb7846c3ca10cf1651fa55f83 in branch refs/heads/master 
from [~yunkai]
[ https://git-wip-us.apache.org/repos/asf?p=trafficserver.git;h=51a9252 ]

Add CHANGES for TS-2379


 Add a new field: '%chp', client_host_port to LogFormat
 --

 Key: TS-2379
 URL: https://issues.apache.org/jira/browse/TS-2379
 Project: Traffic Server
  Issue Type: New Feature
  Components: Logging
Reporter: Yunkai Zhang
Assignee: Yunkai Zhang
 Fix For: 4.2.0

 Attachments: 
 0001-Add-a-new-field-chp-client_host_port-to-LogFormat.patch


 Add a new field: '%chp', client_host_port to LogFormat.
 You may ask, who will care about client host port?
 But In China, a special country, the Net supervisor bureau of government 
 sometimes will request us to report some access logs with client host 
 port(for unknown reason).
 I wonder other users of ATS in China will need this patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)