[trafficserver] branch master updated: cachekey/cacheurl key string compatibility enhancements.

2017-08-02 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 244288f  cachekey/cacheurl key string compatibility enhancements.
244288f is described below

commit 244288fab01bdad823f9de19dcece62a7e2a0c11
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Mon Jul 31 14:43:33 2017 -0700

cachekey/cacheurl key string compatibility enhancements.

- Added a parameter to override the default cache key element separator.
  --separator=

- Added flags allowing path and prefix not to be appended by default.
  --remove-path=<true|false|yes|no|0|1>
  --remove-prefix=<true|false|yes|no|0|1>

- Documented the new features and provided cacheurl to cachekey
  migration examples.

- If /replacement/ in the capture definition is empty, don't add any
  captures, i.e.
  --capture-path=/.*// - would remove the whole path (nothing captured);
  --capture-prefix=/(.*)// - would remove the whole prefix (nothing
  captured);
---
 doc/admin-guide/plugins/cachekey.en.rst   | 52 +++
 plugins/experimental/cachekey/cachekey.cc | 13 
 plugins/experimental/cachekey/cachekey.h  |  5 +--
 plugins/experimental/cachekey/configs.cc  | 40 +++-
 plugins/experimental/cachekey/configs.h   | 26 +++-
 plugins/experimental/cachekey/pattern.cc  | 24 --
 plugins/experimental/cachekey/pattern.h   |  5 ++-
 plugins/experimental/cachekey/plugin.cc   | 12 ---
 8 files changed, 151 insertions(+), 26 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst 
b/doc/admin-guide/plugins/cachekey.en.rst
index 75e134d..39f7c55 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -74,6 +74,8 @@ Cache key structure and related plugin parameters
 * ``--capture-prefix=`` (default: empty string) - if 
specified and not empty then strings are captured from ``host:port`` based on 
the capture definition and are added to the cache key.
 * ``--capture-prefix-uri=`` (default: empty string) - if 
specified and not empty then strings are captured from the entire URI based on 
the capture definition and are added to the cache key.
 * If any of the "Prefix" related plugin parameters are used together in the 
plugin configuration they are added to the cache key in the order shown in the 
diagram.
+* ``--remove-prefix=<true|false|yes|no|0|1>`` (default: false) - if specified 
the prefix elements (host, port) are neither processed nor appended to the 
cache key. All prefix related plugin parameters are ignored if this parameter 
is ``true``, ``yes`` or ``1``.
+
 
 
 "User-Agent" section
@@ -139,6 +141,7 @@ Cache key structure and related plugin parameters
 * if no path related plugin parameters are used, the URI path string is 
included in the cache key.
 * ``--capture-path=`` (default: empty string) - if 
specified and not empty then strings are captured from URI path based on the 
the capture definition and are added to the cache key.
 * ``--capture-path-uri=`` (default: empty string) - if 
specified and not empty then strings are captured from the entire URI based on 
the capture definition and are added to the cache key.
+* ``--remove-path=<true|false|yes|no|0|1>`` (default: false) - if specified 
the HTTP URI path element is neither processed nor appended to the cache key. 
All path related plugin parameters are ignored if this parameter is ``true``, 
``yes`` or ``1``.
 
"Query" section
^^^^^^^^^^^^^^^
@@ -162,6 +165,11 @@ All parameters are optional, and if not used, their 
default values are as mentio
 * ``/<regex>/<replacement>/`` - <regex> defines regex capturing 
groups, <replacement> defines a pattern where the captured strings 
referenced with ``$0`` ... ``$9`` will be substituted and the result will be 
added to the cache key.
 
 
+Cache key elements separator
+============================
+* ``--separator=`` - the cache key is constructed by extracting 
elements from the HTTP URI and headers or by using the UA classifiers; the 
elements are appended during key construction and separated by ``/`` (by 
default). This option allows the default separator to be overridden with any 
string (including an empty string).
+
+
Detailed examples and troubleshooting
=====================================
 
@@ -504,3 +512,47 @@ and if ``tool_agents.config`` contains: ::
   ^curl.*
 
 then ``browser`` will be used when constructing the key.
+
+
+Cacheurl plugin to cachekey plugin migration
+============================================
+The plugin `cachekey` was not meant to replace the cacheurl plugin in terms of 
generating exactly the same cache key strings. It just allows the operator to 
extract elements from the HTTP URI in the same way `cacheurl` does (through a 
regular expression, please see the capture definition description above).
+
+The following examples demonstra

[trafficserver] branch master updated: Coverity 1022122, 1022123, 1022125: Dereference before null check

2017-05-10 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  d7e85e5   Coverity 1022122, 1022123, 1022125: Dereference before 
null check
d7e85e5 is described below

commit d7e85e51e79e5c326e255d6f0caab0a7c9238e5e
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Wed May 10 12:38:00 2017 -0700

Coverity 1022122, 1022123, 1022125: Dereference before null check

Problem:
CID 1022122 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking txn_sm->q_client_request_buffer suggests 
that it may be null, but it has already been dereferenced on all paths leading 
to the check.
CID 1022123 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking txn_sm->q_client_response_buffer suggests 
that it may be null, but it has already been dereferenced on all paths leading 
to the check.
CID 1022125 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking txn_sm->q_cache_read_buffer suggests that 
it may be null, but it has already been dereferenced on all paths leading to 
the check

Fix:
Checked the return values on each step instead of all at the end.
---
 example/protocol/TxnSM.c | 27 ++-
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/example/protocol/TxnSM.c b/example/protocol/TxnSM.c
index db3005f..316fd72 100644
--- a/example/protocol/TxnSM.c
+++ b/example/protocol/TxnSM.c
@@ -160,10 +160,12 @@ state_start(TSCont contp, TSEvent event ATS_UNUSED, void 
*data ATS_UNUSED)
 return prepare_to_die(contp);
   }
 
-  txn_sm->q_client_request_buffer= TSIOBufferCreate();
+  txn_sm->q_client_request_buffer = TSIOBufferCreate();
+  if (!txn_sm->q_client_request_buffer) {
+return prepare_to_die(contp);
+  }
   txn_sm->q_client_request_buffer_reader = 
TSIOBufferReaderAlloc(txn_sm->q_client_request_buffer);
-
-  if (!txn_sm->q_client_request_buffer || 
!txn_sm->q_client_request_buffer_reader) {
+  if (!txn_sm->q_client_request_buffer_reader) {
 return prepare_to_die(contp);
   }
 
@@ -286,13 +288,20 @@ state_handle_cache_lookup(TSCont contp, TSEvent event, 
TSVConn vc)
 response_size = TSVConnCacheObjectSizeGet(txn_sm->q_cache_vc);
 
 /* Allocate IOBuffer to store data from the cache. */
-txn_sm->q_client_response_buffer= TSIOBufferCreate();
+txn_sm->q_client_response_buffer = TSIOBufferCreate();
+if (!txn_sm->q_client_response_buffer) {
+  return prepare_to_die(contp);
+}
 txn_sm->q_client_response_buffer_reader = 
TSIOBufferReaderAlloc(txn_sm->q_client_response_buffer);
-txn_sm->q_cache_read_buffer = TSIOBufferCreate();
-txn_sm->q_cache_read_buffer_reader  = 
TSIOBufferReaderAlloc(txn_sm->q_cache_read_buffer);
-
-if (!txn_sm->q_client_response_buffer || 
!txn_sm->q_client_response_buffer_reader || !txn_sm->q_cache_read_buffer ||
-!txn_sm->q_cache_read_buffer_reader) {
+if (!txn_sm->q_client_response_buffer_reader) {
+  return prepare_to_die(contp);
+}
+txn_sm->q_cache_read_buffer = TSIOBufferCreate();
+if (!txn_sm->q_cache_read_buffer) {
+  return prepare_to_die(contp);
+}
+txn_sm->q_cache_read_buffer_reader = 
TSIOBufferReaderAlloc(txn_sm->q_cache_read_buffer);
+if (!txn_sm->q_cache_read_buffer_reader) {
   return prepare_to_die(contp);
 }
 

-- 
To stop receiving notification emails like this one, please contact
"commits@trafficserver.apache.org" <commits@trafficserver.apache.org>.


[trafficserver] branch master updated: Coverity 1022126 and 1022127: Dereference before null check

2017-05-10 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  1f4831c   Coverity 1022126 and 1022127: Dereference before null 
check
1f4831c is described below

commit 1f4831cbfa2c9416a9e8f32a06a2d945521fd342
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Wed May 10 12:07:40 2017 -0700

Coverity 1022126 and 1022127: Dereference before null check

Problem:
  CID 1022126 (#1 of 1): Dereference before null check (REVERSE_INULL)
  check_after_deref: Null-checking txn_sm->q_server_response_buffer 
suggests that it may be null, but it has already been dereferenced on all paths 
leading to the check.
  CID 1022127 (#1 of 1): Dereference before null check (REVERSE_INULL)
  check_after_deref: Null-checking txn_sm->q_server_request_buffer suggests 
that it may be null, but it has already been dereferenced on all paths leading 
to the check.

Fix:
  Checked the return values on each step instead of all at the end.
---
 example/protocol/TxnSM.c | 18 --
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/example/protocol/TxnSM.c b/example/protocol/TxnSM.c
index 316fd72..80f72df 100644
--- a/example/protocol/TxnSM.c
+++ b/example/protocol/TxnSM.c
@@ -441,14 +441,20 @@ state_build_and_send_request(TSCont contp, TSEvent event 
ATS_UNUSED, void *data
 
   txn_sm->q_pending_action = NULL;
 
-  txn_sm->q_server_request_buffer= TSIOBufferCreate();
+  txn_sm->q_server_request_buffer = TSIOBufferCreate();
+  if (!txn_sm->q_server_request_buffer) {
+return prepare_to_die(contp);
+  }
   txn_sm->q_server_request_buffer_reader = 
TSIOBufferReaderAlloc(txn_sm->q_server_request_buffer);
-
-  txn_sm->q_server_response_buffer   = TSIOBufferCreate();
+  if (!txn_sm->q_server_request_buffer_reader) {
+return prepare_to_die(contp);
+  }
+  txn_sm->q_server_response_buffer = TSIOBufferCreate();
+  if (!txn_sm->q_server_response_buffer) {
+return prepare_to_die(contp);
+  }
   txn_sm->q_cache_response_buffer_reader = 
TSIOBufferReaderAlloc(txn_sm->q_server_response_buffer);
-
-  if (!txn_sm->q_server_request_buffer || 
!txn_sm->q_server_request_buffer_reader || !txn_sm->q_server_response_buffer ||
-  !txn_sm->q_cache_response_buffer_reader) {
+  if (!txn_sm->q_cache_response_buffer_reader) {
 return prepare_to_die(contp);
   }
 



[trafficserver] branch master updated: Coverity 1022124: Dereference before null check

2017-05-10 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  3b3a379   Coverity 1022124: Dereference before null check
3b3a379 is described below

commit 3b3a37985164b077fed8a6ce4b0554941a17090e
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Wed May 10 12:59:28 2017 -0700

Coverity 1022124: Dereference before null check

Problem:
  CID 1022124 (#1 of 1): Dereference before null check (REVERSE_INULL)
  check_after_deref: Null-checking txn_sm suggests that it may be null, but 
it has already been dereferenced on all paths leading to the check.

Fix:
  Removed the check for NULL.
---
 example/protocol/TxnSM.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/example/protocol/TxnSM.c b/example/protocol/TxnSM.c
index 7978d31..db3005f 100644
--- a/example/protocol/TxnSM.c
+++ b/example/protocol/TxnSM.c
@@ -873,10 +873,9 @@ state_done(TSCont contp, TSEvent event ATS_UNUSED, TSVIO 
vio ATS_UNUSED)
 txn_sm->q_server_response = NULL;
   }
 
-  if (txn_sm) {
-txn_sm->q_magic = TXN_SM_DEAD;
-TSfree(txn_sm);
-  }
+  txn_sm->q_magic = TXN_SM_DEAD;
+  TSfree(txn_sm);
+
   TSContDestroy(contp);
   return TS_EVENT_NONE;
 }



[trafficserver] branch master updated: Coverity 1373289: Explicit null dereferenced

2017-05-10 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  1d173a7   Coverity 1373289: Explicit null dereferenced
1d173a7 is described below

commit 1d173a7cc8f2c82608d4151c12b07593296a73f2
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Tue May 9 07:48:32 2017 -0700

Coverity 1373289: Explicit null dereferenced

Problem:
  CID 1373289 (#1 of 1): Explicit null dereferenced (FORWARD_NULL)
  21. var_deref_model: Passing null pointer token to strcmp, which 
dereferences it.

Fix:
  If token is NULL, treat it as not matching hash (hash itself is never
  NULL), so strcmp is only called with valid pointers.
---
 example/secure_link/secure_link.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/example/secure_link/secure_link.c 
b/example/secure_link/secure_link.c
index 3fe417d..7116021 100644
--- a/example/secure_link/secure_link.c
+++ b/example/secure_link/secure_link.c
@@ -118,7 +118,7 @@ TSRemapDoRemap(void *ih, TSHttpTxn rh, TSRemapRequestInfo 
*rri)
  time(&t);
   e = (NULL == expire ? 0 : strtol(expire, NULL, 16));
   i = TSREMAP_DID_REMAP;
-  if (e < t || strcmp(hash, token) != 0) {
+  if (e < t || (NULL == token || 0 != strcmp(hash, token))) {
 if (e < t) {
   TSDebug(PLUGIN_NAME, "link expired: [%lu] vs [%lu]", t, e);
 } else {



[trafficserver] branch master updated: Coverity 1200018 & 1200017

2017-05-09 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  da94824   Coverity 1200018 & 1200017
da94824 is described below

commit da948249f541812a5d6fea6cd93fed45ea1ad52f
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Tue May 9 11:26:46 2017 -0700

Coverity 1200018 & 1200017

Problem:
  CID 1200018 (#1 of 1): Resource leak (RESOURCE_LEAK)
  13. overwrite_var: Overwriting token in token = _TSstrdup(val, -1L, 
"memory/secure-link/secure-link.c:70") leaks the storage that token points to.

  CID 1200017 (#1 of 1): Resource leak (RESOURCE_LEAK)
  19. overwrite_var: Overwriting expire in expire = _TSstrdup(val, -1L, 
"memory/secure-link/secure-link.c:72") leaks the storage that expire points to

Fix:
  Took the TSstrdup() for expire and token outside the loop to avoid the 
leak.
---
 example/secure-link/secure-link.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/example/secure-link/secure-link.c 
b/example/secure-link/secure-link.c
index 7837bf1..d2b691a 100644
--- a/example/secure-link/secure-link.c
+++ b/example/secure-link/secure-link.c
@@ -50,7 +50,7 @@ TSRemapDoRemap(void *ih, TSHttpTxn rh, TSRemapRequestInfo 
*rri)
   const char *qh, *ph, *ip;
   unsigned char md[MD5_DIGEST_LENGTH];
   secure_link_info *sli = (secure_link_info *)ih;
-  char *token = NULL, *expire = NULL, *path = NULL;
+  char *token = NULL, *tokenptr = NULL, *expire = NULL, *expireptr = NULL, 
*path = NULL;
   char *s, *ptr, *saveptr = NULL, *val, hash[32] = "";
 
   in = (struct sockaddr_in *)TSHttpTxnClientAddrGet(rh);
@@ -67,15 +67,17 @@ TSRemapDoRemap(void *ih, TSHttpTxn rh, TSRemapRequestInfo 
*rri)
 if ((val = strchr(ptr, '=')) != NULL) {
   *val++ = '\0';
   if (strcmp(ptr, "st") == 0) {
-token = TSstrdup(val);
+tokenptr = val;
   } else if (strcmp(ptr, "ex") == 0) {
-expire = TSstrdup(val);
+expireptr = val;
   }
 } else {
   TSError("[secure_link] Invalid parameter [%s]", ptr);
   break;
 }
  } while ((ptr = strtok_r(NULL, "&", &saveptr)) != NULL);
+  token  = (NULL == tokenptr ? NULL : TSstrdup(tokenptr));
+  expire = (NULL == expireptr ? NULL : TSstrdup(expireptr));
 } else {
   TSError("[secure_link] strtok didn't find a & in the query string");
   /* this is just example, so set fake params to prevent plugin crash */



[trafficserver] branch master updated: Coverity 1373288: Dereference after null check

2017-05-09 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  440d290   Coverity 1373288: Dereference after null check
440d290 is described below

commit 440d290c53a74d517f3b3d73548fd0e6967336a3
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Tue May 9 08:44:27 2017 -0700

Coverity 1373288: Dereference after null check

Problem:
  CID 1373288 (#1 of 1): Dereference after null check (FORWARD_NULL)
  20. var_deref_model: Passing null pointer expire to strtol, which 
dereferences it.

Fix:
  A missing expiration query parameter and a missing expiration query
  parameter value should be treated the same (expire=0).
---
 example/secure-link/secure-link.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/example/secure-link/secure-link.c 
b/example/secure-link/secure-link.c
index d2b691a..b38d9d6 100644
--- a/example/secure-link/secure-link.c
+++ b/example/secure-link/secure-link.c
@@ -116,7 +116,7 @@ TSRemapDoRemap(void *ih, TSHttpTxn rh, TSRemapRequestInfo 
*rri)
 sprintf(&hash[i * 2], "%02x", md[i]);
   }
   time(&t);
-  e = strtol(expire, NULL, 16);
+  e = (NULL == expire ? 0 : strtol(expire, NULL, 16));
   i = TSREMAP_DID_REMAP;
   if (e < t || strcmp(hash, token) != 0) {
 if (e < t) {



[trafficserver] branch master updated: Coverity 1356971: Unchecked return value

2017-05-10 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  182754d   Coverity 1356971: Unchecked return value
182754d is described below

commit 182754d0f7257a1dcbe208f9d759d2fa81037444
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Wed May 10 09:49:32 2017 -0700

Coverity 1356971: Unchecked return value

Problem:
  CID 1356971 (#1 of 1): Unchecked return value (CHECKED_RETURN)
  13. check_return: Calling TSUrlCreate without checking return value

Fix:
  Added the result check and also fixed a resource leak.
---
 example/cache_scan/cache_scan.cc | 23 ++-
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/example/cache_scan/cache_scan.cc b/example/cache_scan/cache_scan.cc
index 3a96c34..22f457a 100644
--- a/example/cache_scan/cache_scan.cc
+++ b/example/cache_scan/cache_scan.cc
@@ -444,16 +444,21 @@ setup_request(TSCont contp, TSHttpTxn txnp)
 TSMBuffer urlBuf = TSMBufferCreate();
 TSMLoc urlLoc;
 
-TSUrlCreate(urlBuf, );
-if (TSUrlParse(urlBuf, urlLoc, (const char **), end) != 
TS_PARSE_DONE ||
-TSCacheKeyDigestFromUrlSet(cstate->key_to_delete, urlLoc) != 
TS_SUCCESS) {
-  TSError("[%s] CacheKeyDigestFromUrlSet failed", PLUGIN_NAME);
-  TSCacheKeyDestroy(cstate->key_to_delete);
-  TSfree(cstate);
-  TSHandleMLocRelease(urlBuf, nullptr, urlLoc);
-  goto Ldone;
+if (TS_SUCCESS == TSUrlCreate(urlBuf, )) {
+  if (TSUrlParse(urlBuf, urlLoc, (const char **), end) != 
TS_PARSE_DONE ||
+  TSCacheKeyDigestFromUrlSet(cstate->key_to_delete, urlLoc) != 
TS_SUCCESS) {
+TSError("[%s] CacheKeyDigestFromUrlSet failed", PLUGIN_NAME);
+TSCacheKeyDestroy(cstate->key_to_delete);
+TSfree(cstate);
+TSHandleMLocRelease(urlBuf, TS_NULL_MLOC, urlLoc);
+TSMBufferDestroy(urlBuf);
+goto Ldone;
+  }
+  TSHandleMLocRelease(urlBuf, TS_NULL_MLOC, urlLoc);
+} else {
+  TSError("[%s] TSUrlCreate failed", PLUGIN_NAME);
 }
-TSHandleMLocRelease(urlBuf, nullptr, urlLoc);
+TSMBufferDestroy(urlBuf);
   }
 }
 



[trafficserver] branch master updated: Fix version and virtualhost config cache sync.

2017-05-15 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  13804cc   Fix version and virtualhost config cache sync.
13804cc is described below

commit 13804cc4d863cabefdbf2ef4bb5d15d13d6339e8
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Mon May 15 10:25:34 2017 -0700

Fix version and virtualhost config cache sync.
---
 plugins/s3_auth/s3_auth.cc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/plugins/s3_auth/s3_auth.cc b/plugins/s3_auth/s3_auth.cc
index 772bb55..d00a87b 100644
--- a/plugins/s3_auth/s3_auth.cc
+++ b/plugins/s3_auth/s3_auth.cc
@@ -131,10 +131,10 @@ public:
   _keyid_len = src->_keyid_len;
 }
 
-if (_version_modified) {
+if (src->_version_modified) {
   _version = src->_version;
 }
-if (_virt_host_modified) {
+if (src->_virt_host_modified) {
   _virt_host = src->_virt_host;
 }
   }



[trafficserver] branch master updated: Coverity 1363659: Dereference before null check

2017-05-11 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  51ba47b   Coverity 1363659: Dereference before null check
51ba47b is described below

commit 51ba47b122e0e78e39aae7dd06d1ac91bbb9f411
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Thu May 11 12:23:31 2017 -0700

Coverity 1363659: Dereference before null check

Problem:
CID 1363659 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking sp->server.vconn suggests that it may be 
null, but it has already been dereferenced on all paths leading to the check.

Fix:
Removed the check; the TSReleaseAssert(sp->server.vconn != nullptr) should 
make sure we don't dereference nullptr.
---
 example/passthru/passthru.cc | 42 --
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/example/passthru/passthru.cc b/example/passthru/passthru.cc
index a42ad20..14a0a2c 100644
--- a/example/passthru/passthru.cc
+++ b/example/passthru/passthru.cc
@@ -198,35 +198,33 @@ PassthruSessionEvent(TSCont cont, TSEvent event, void 
*edata)
   sp->server.writeio.write(sp->server.vconn, sp->contp);
 }
 
-if (sp->server.vconn != nullptr) {
-  int64_t nbytes;
-
-  nbytes = sp->client.readio.transfer_to(sp->server.writeio);
-  PassthruSessionDebug(sp, "proxied %" PRId64 " bytes from client vconn=%p 
to server vconn=%p", nbytes, sp->client.vconn,
-   sp->server.vconn);
-  if (nbytes) {
-TSVIOReenable(sp->client.readio.vio);
-TSVIOReenable(sp->server.writeio.vio);
-  }
-
-  nbytes = sp->server.readio.transfer_to(sp->client.writeio);
-  PassthruSessionDebug(sp, "proxied %" PRId64 " bytes from server vconn=%p 
to client vconn=%p", nbytes, sp->server.vconn,
-   sp->client.vconn);
-  if (nbytes) {
-TSVIOReenable(sp->server.readio.vio);
-TSVIOReenable(sp->client.writeio.vio);
-  }
+int64_t nbytes;
+
+nbytes = sp->client.readio.transfer_to(sp->server.writeio);
+PassthruSessionDebug(sp, "proxied %" PRId64 " bytes from client vconn=%p 
to server vconn=%p", nbytes, sp->client.vconn,
+ sp->server.vconn);
+if (nbytes) {
+  TSVIOReenable(sp->client.readio.vio);
+  TSVIOReenable(sp->server.writeio.vio);
 }
 
-if (PassthruSessionIsFinished(sp)) {
-  delete sp;
-  return TS_EVENT_NONE;
+nbytes = sp->server.readio.transfer_to(sp->client.writeio);
+PassthruSessionDebug(sp, "proxied %" PRId64 " bytes from server vconn=%p 
to client vconn=%p", nbytes, sp->server.vconn,
+ sp->client.vconn);
+if (nbytes) {
+  TSVIOReenable(sp->server.readio.vio);
+  TSVIOReenable(sp->client.writeio.vio);
 }
+  }
 
-TSVIOReenable(arg.vio);
+  if (PassthruSessionIsFinished(sp)) {
+delete sp;
 return TS_EVENT_NONE;
   }
 
+  TSVIOReenable(arg.vio);
+  return TS_EVENT_NONE;
+
   if (event == TS_EVENT_VCONN_WRITE_READY) {
 if (PassthruSessionIsFinished(sp)) {
   delete sp;



[trafficserver] branch master updated: Issue #1605 AWS Signature Version 4

2017-05-30 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  fa73136   Issue #1605 AWS Signature Version 4
fa73136 is described below

commit fa731369b474cb3c46b7b8ddf98490c198cd4605
Author: Gancho Tenev <gtte...@gmail.com>
AuthorDate: Tue Apr 25 13:02:44 2017 -0700

Issue #1605 AWS Signature Version 4

Signature Calculations for the Authorization Header:
Transferring Payload in a Single Chunk (Unsigned payload option)

http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
---
 doc/admin-guide/plugins/s3_auth.en.rst | 114 --
 plugins/s3_auth/Makefile.inc   |   2 +-
 plugins/s3_auth/aws_auth_v4.cc | 691 +
 plugins/s3_auth/aws_auth_v4.h  | 207 ++
 plugins/s3_auth/s3_auth.cc | 273 -
 5 files changed, 1247 insertions(+), 40 deletions(-)

diff --git a/doc/admin-guide/plugins/s3_auth.en.rst 
b/doc/admin-guide/plugins/s3_auth.en.rst
index d6f5709..7293188 100644
--- a/doc/admin-guide/plugins/s3_auth.en.rst
+++ b/doc/admin-guide/plugins/s3_auth.en.rst
@@ -27,57 +27,119 @@ to use ``S3`` as your origin server, yet want to avoid 
direct user access to
 the content.
 
 Using the plugin
-
+
 
-There are three configuration options for this plugin::
-
---access_key
---secret_key
---virtual_host
---config
 
+For example, using the plugin in a remap rule::
 
-Using the first two in a remap rule would be e.g.::
+   # remap.config
 
...  @plugin=s3_auth @pparam=--access_key @pparam=my-key \
 @pparam=--secret_key @pparam=my-secret \
@pparam=--virtual_host
 
 
-Alternatively, you can store the access key and secret in an external
-configuration file, and point the remap rule(s) to it:
+Alternatively, you can store the access key and secret in an external 
configuration file, and point the remap rule(s) to it::
 
-   ...  @plugin=s3_auth @pparam=--config @pparam=s3.config
+   # remap.config
 
+   ...  @plugin=s3_auth @pparam=--config @pparam=s3_auth_v2.config
 
-Where s3.config would look like::
 
-# AWS S3 authentication
-access_key=my-key
-secret_key=my-secret
-virtual_host=yes
+Where ``s3_auth_v2.config`` could look like::
 
+# s3_auth_v2.config
 
-For more details on the S3 auth, see::
+access_key=my-key
+secret_key=my-secret
+version=2
+virtual_host=yes
 
-  http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html
+Both ways can also be combined.
 
 
-ToDo
-
+AWS Authentication version 4
+
 
-This is a pretty barebone start for the S3 services, it is missing a number of 
features:
+The s3_auth plugin fully implements: `AWS Signing Version 4 
<http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html>`_
 / `Authorization Header 
<http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html>`_
 / `Transferring Payload in a Single Chunk 
<http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html>`_
 / Unsigned Payload Option
 
-- It does not do UTF8 encoding (as required)
+Configuration options::
 
-- It only implements the v2 authentication mechanism. For details on v4, see
+# Mandatory options
+--access_key=
+--secret_key=
+--version=4
 
-  
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
+# Optional
+--v4-include-headers=
+--v4-exclude-headers=
+--v4-region-map=region_map.config
 
-- It does not deal with canonicalization of AMZ headers.
 
-- It does not handle POST requests (but do we need to ?)
+If the following option is used then the options could be specified in a file::
+
+--config=s3_auth_v4.config
+
+
+The ``s3_auth_v4.config`` config file could look like this::
+
+# s3_auth_v4.config
+
+access_key=
+secret_key=
+version=4
+v4-include-headers=
+v4-exclude-headers=
+v4-region-map=region_map.config
+
+Where the ``region_map.config`` defines the entry-point hostname to region 
mapping i.e.::
+
+# region_map.config
+
+# "us-east-1"
+s3.amazonaws.com : us-east-1
+s3-external-1.amazonaws.com  : us-east-1
+s3.dualstack.us-east-1.amazonaws.com : us-east-1
+
+# us-west-1
+s3-us-west-1.amazonaws.com   : us-west-1
+s3.dualstack.us-west-1.amazonaws.com : us-west-1
+
+# Default region if no entry-point matches:
+: s3.amazonaws.com
+
+If ``--v4-region-map`` is not specified the plugin defaults to the mapping 
defined in `"Regions and Endpoints - S3" 
<http://docs.aws.amazon.com/general/latest/gr/rande.

[trafficserver] branch master updated: cid 1375841: Replace strncpy with memcpy

2017-05-31 Thread gancho

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  9a4aa33   cid 1375841: Replace strncpy with memcpy
9a4aa33 is described below

commit 9a4aa33c713e68dc4c10d60f49f6a4e58d2c11f1
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Wed May 31 07:09:58 2017 -0700

cid 1375841: Replace strncpy with memcpy

Issue:
  CID 1375841 (#1 of 1): Buffer not null terminated (BUFFER_SIZE)
  1. buffer_size: Calling strncpy with a source string whose length
  (4 chars) is greater than or equal to the size argument (4) will
  fail to null-terminate key.

Fix:
  The code is correct. The buffer is not meant to be NULL-terminated.
  It seems Coverity assumes that because strncpy is used, a NULL-terminated
  buffer is expected. Changing strncpy to memcpy.

Also removing unnecessary #undef
---
 plugins/s3_auth/aws_auth_v4.cc | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/plugins/s3_auth/aws_auth_v4.cc b/plugins/s3_auth/aws_auth_v4.cc
index 65c65b9..386bf69 100644
--- a/plugins/s3_auth/aws_auth_v4.cc
+++ b/plugins/s3_auth/aws_auth_v4.cc
@@ -29,7 +29,6 @@
 #include   /* SHA(), sha256_Update(), SHA256_Final, etc. */
 #include  /* HMAC() */
 
-#undef AWS_AUTH_V4_DETAILED_DEBUG_OUTPUT
 #ifdef AWS_AUTH_V4_DETAILED_DEBUG_OUTPUT
 #include 
 #endif
@@ -565,8 +564,8 @@ getSignature(const char *awsSecret, size_t awsSecretLen, 
const char *awsRegion,
 
   size_t keyLen = 4 + awsSecretLen;
   char key[keyLen];
-  strncpy(key, "AWS4", 4);
-  strncpy(key + 4, awsSecret, awsSecretLen);
+  memcpy(key, "AWS4", 4);
+  memcpy(key + 4, awsSecret, awsSecretLen);
 
   unsigned int len = signatureLen;
   if (HMAC(EVP_sha256(), key, keyLen, (unsigned char *)dateTime, dateTimeLen, 
dateKey, &len) &&

-- 
To stop receiving notification emails like this one, please contact
commits@trafficserver.apache.org.


[trafficserver] branch master updated: Coverity 1022060: Unchecked return value

2017-05-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  cf27627   Coverity 1022060: Unchecked return value
cf27627 is described below

commit cf2762719e8dfc4d68f5e267ee2ec033ccb95f6b
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Mon May 8 11:03:30 2017 -0700

Coverity 1022060: Unchecked return value

Problem:
  CID 1022060 (#1 of 1): Unchecked return value (CHECKED_RETURN)

Solution:
  Added a check for the return value.
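The shape of the fix generalizes beyond the TS API. Below is a minimal sketch with hypothetical names (``lookupUrl`` stands in for ``TSHttpHdrUrlGet``): the out-parameter is only read, and its cleanup only owed, inside the success branch.

```cpp
#include <string>

// Hypothetical sketch of the checked-return pattern applied in the fix: an
// API reports success via its return value and yields its result through an
// out-parameter, so the out-parameter is valid only when the call succeeded.
enum Status { SUCCESS, ERROR_STATUS };

Status lookupUrl(bool available, std::string &out)
{
  if (!available) {
    return ERROR_STATUS; // out is left untouched on failure
  }
  out = "http://example.com/";
  return SUCCESS;
}

std::string renderLine(bool available)
{
  std::string url;
  std::string line;
  if (SUCCESS == lookupUrl(available, url)) { // guard before using the result
    line = "URL: " + url + "\n";              // use (and release) only inside the guard
  }
  return line;
}
```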
---
 example/cache_scan/cache_scan.cc | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/example/cache_scan/cache_scan.cc b/example/cache_scan/cache_scan.cc
index f14e96e..3a96c34 100644
--- a/example/cache_scan/cache_scan.cc
+++ b/example/cache_scan/cache_scan.cc
@@ -131,15 +131,16 @@ handle_scan(TSCont contp, TSEvent event, void *edata)
 const char s1[] = "URL: ", s2[] = "\n";
 cstate->total_bytes += TSIOBufferWrite(cstate->resp_buffer, s1, sizeof(s1) 
- 1);
TSCacheHttpInfoReqGet(cache_infop, &req_bufp, &req_hdr_loc);
-TSHttpHdrUrlGet(req_bufp, req_hdr_loc, &url_loc);
-url = TSUrlStringGet(req_bufp, url_loc, &url_len);
+if (TS_SUCCESS == TSHttpHdrUrlGet(req_bufp, req_hdr_loc, &url_loc)) {
+  url = TSUrlStringGet(req_bufp, url_loc, &url_len);
 
-cstate->total_bytes += TSIOBufferWrite(cstate->resp_buffer, url, url_len);
-cstate->total_bytes += TSIOBufferWrite(cstate->resp_buffer, s2, sizeof(s2) 
- 1);
+  cstate->total_bytes += TSIOBufferWrite(cstate->resp_buffer, url, 
url_len);
+  cstate->total_bytes += TSIOBufferWrite(cstate->resp_buffer, s2, 
sizeof(s2) - 1);
 
-TSfree(url);
-TSHandleMLocRelease(req_bufp, req_hdr_loc, url_loc);
-TSHandleMLocRelease(req_bufp, TS_NULL_MLOC, req_hdr_loc);
+  TSfree(url);
+  TSHandleMLocRelease(req_bufp, req_hdr_loc, url_loc);
+  TSHandleMLocRelease(req_bufp, TS_NULL_MLOC, req_hdr_loc);
+}
 
 // print the response headers
TSCacheHttpInfoRespGet(cache_infop, &resp_bufp, &resp_hdr_loc);



[trafficserver] branch master updated: coverity 1021925: Missing break in switch

2017-05-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  31be131   coverity 1021925: Missing break in switch
31be131 is described below

commit 31be1319223d111e9a5509300b0d463e832f2567
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Mon May 8 10:33:33 2017 -0700

coverity 1021925: Missing break in switch

Problem:
  CID 1021925 (#1 of 1): Missing break in switch (MISSING_BREAK)
  unterminated_case: The case for value TS_EVENT_VCONN_WRITE_COMPLETE is 
not terminated by a 'break' statement.

Solution:
  It was intended not to have a break (not a bug), so refactored the code 
to make coverity happy.
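The refactor pattern, reduced to a standalone sketch (the event names and state struct are illustrative): each case ends in its own ``break`` and the two shared teardown statements are duplicated rather than reached by fallthrough.

```cpp
// Hypothetical sketch of the refactor: instead of letting the WRITE_COMPLETE
// case fall through into the EOS/default arm, every case gets its own body
// and break. The two shared statements are duplicated, which makes the
// control flow explicit for both readers and Coverity.
enum Event { WRITE_COMPLETE, EOS, OTHER };

struct State {
  bool done     = false;
  int  cleanups = 0;
};

static void cleanup(State &s) { ++s.cleanups; }

State handleEvent(Event e)
{
  State s;
  switch (e) {
  case WRITE_COMPLETE: {
    s.done = true; // same teardown as EOS, but spelled out per-case
    cleanup(s);
  } break;
  case EOS:
  default: {
    s.done = true;
    cleanup(s);
  } break;
  }
  return s;
}
```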
---
 example/cache_scan/cache_scan.cc | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/example/cache_scan/cache_scan.cc b/example/cache_scan/cache_scan.cc
index 9172aca..f14e96e 100644
--- a/example/cache_scan/cache_scan.cc
+++ b/example/cache_scan/cache_scan.cc
@@ -281,7 +281,7 @@ handle_io(TSCont contp, TSEvent event, void * /* edata 
ATS_UNUSED */)
 }
 
 return 0;
-  }
+  } break;
   case TS_EVENT_VCONN_WRITE_READY: {
 TSDebug(PLUGIN_NAME, "ndone: %" PRId64 " total_bytes: % " PRId64, 
TSVIONDoneGet(cstate->write_vio), cstate->total_bytes);
 cstate->write_pending = false;
@@ -289,13 +289,17 @@ handle_io(TSCont contp, TSEvent event, void * /* edata 
ATS_UNUSED */)
 // available data
 // TSVIOReenable(cstate->write_vio);
 return 0;
-  }
-  case TS_EVENT_VCONN_WRITE_COMPLETE:
+  } break;
+  case TS_EVENT_VCONN_WRITE_COMPLETE: {
 TSDebug(PLUGIN_NAME, "write complete");
+cstate->done = 1;
+cleanup(contp);
+  } break;
   case TS_EVENT_VCONN_EOS:
-  default:
+  default: {
 cstate->done = 1;
 cleanup(contp);
+  } break;
   }
 
   return 0;



[trafficserver] branch master updated: coverity 1021924: Missing break in switch

2017-05-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

The following commit(s) were added to refs/heads/master by this push:
   new  98860a7   coverity 1021924: Missing break in switch
98860a7 is described below

commit 98860a747d1d660c1d996330b8e155a887872517
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Mon May 8 08:43:34 2017 -0700

coverity 1021924: Missing break in switch

Problem:
  CID 1021924 (#1 of 1): Missing break in switch (MISSING_BREAK)
  unterminated_case: The case for value TS_EVENT_VCONN_WRITE_READY is not 
terminated by a 'break' statement.

Solution:
  It was intended not to have a break (not a bug), so refactored the code 
to make coverity happy.
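For code bases that can assume C++17, an alternative to restructuring is to annotate the fallthrough explicitly. This is a hypothetical sketch, not what the commit does (the tree moved the shared call into each case instead):

```cpp
// Hypothetical alternative: C++17's [[fallthrough]] attribute documents an
// intentional fallthrough for both human readers and analyzers such as
// Coverity or -Wimplicit-fallthrough, without duplicating the shared code.
int classify(int event)
{
  int handled = 0;
  switch (event) {
  case 1:            // case-specific work first (think WRITE_READY logging)
    ++handled;
    [[fallthrough]]; // explicit: the shared handling below also applies
  default:
    ++handled;       // shared "transform more data" step
    break;
  }
  return handled;
}
```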
---
 example/null_transform/null_transform.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/example/null_transform/null_transform.c 
b/example/null_transform/null_transform.c
index 7a55525..cf60424 100644
--- a/example/null_transform/null_transform.c
+++ b/example/null_transform/null_transform.c
@@ -232,14 +232,17 @@ null_transform(TSCont contp, TSEvent event, void *edata 
ATS_UNUSED)
*/
   TSVConnShutdown(TSTransformOutputVConnGet(contp), 0, 1);
   break;
+
+/* If we get a WRITE_READY event or any other type of
+ * event (sent, perhaps, because we were re-enabled) then
+ * we'll attempt to transform more data.
+ */
 case TS_EVENT_VCONN_WRITE_READY:
   TSDebug(PLUGIN_NAME, "\tEvent is TS_EVENT_VCONN_WRITE_READY");
+  handle_transform(contp);
+  break;
 default:
   TSDebug(PLUGIN_NAME, "\t(event is %d)", event);
-  /* If we get a WRITE_READY event or any other type of
-   * event (sent, perhaps, because we were reenabled) then
-   * we'll attempt to transform more data.
-   */
   handle_transform(contp);
   break;
 }



[trafficserver] branch master updated (0d1ebfa -> 3e20b07)

2017-08-30 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from 0d1ebfa  Ran clang-tidy with modernize-use-nullptr
 add 3e20b07  AWS auth v4: fixed query param value URI-encoding

No new revisions were added by this update.

Summary of changes:
 plugins/s3_auth/aws_auth_v4.cc | 46 -
 plugins/s3_auth/unit-tests/test_aws_auth_v4.cc | 91 ++
 plugins/s3_auth/unit-tests/test_aws_auth_v4.h  |  1 +
 3 files changed, 137 insertions(+), 1 deletion(-)



[trafficserver] branch master updated: Move Cachekey plugin to stable plugins.

2018-05-09 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new f4bf7e5  Move Cachekey plugin to stable plugins.
f4bf7e5 is described below

commit f4bf7e56117a0994dfee40f93c4910f72fab139f
Author: Gancho Tenev <gan...@apache.org>
AuthorDate: Mon May 7 14:11:43 2018 +0100

Move Cachekey plugin to stable plugins.
---
 doc/admin-guide/plugins/index.en.rst  |  8 
 plugins/Makefile.am   |  2 +-
 plugins/{experimental => }/cachekey/Makefile.inc  | 14 +++---
 plugins/{experimental => }/cachekey/README.md |  2 +-
 plugins/{experimental => }/cachekey/cachekey.cc   |  0
 plugins/{experimental => }/cachekey/cachekey.h|  0
 plugins/{experimental => }/cachekey/common.cc |  0
 plugins/{experimental => }/cachekey/common.h  |  0
 plugins/{experimental => }/cachekey/configs.cc|  0
 plugins/{experimental => }/cachekey/configs.h |  0
 plugins/{experimental => }/cachekey/pattern.cc|  0
 plugins/{experimental => }/cachekey/pattern.h |  0
 plugins/{experimental => }/cachekey/plugin.cc |  0
 plugins/{experimental => }/cachekey/tests/pattern_test.cc |  0
 14 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/doc/admin-guide/plugins/index.en.rst 
b/doc/admin-guide/plugins/index.en.rst
index bc1c4ec..6891d4e 100644
--- a/doc/admin-guide/plugins/index.en.rst
+++ b/doc/admin-guide/plugins/index.en.rst
@@ -48,6 +48,7 @@ Plugins that are considered stable are installed by default 
in |TS| releases.
AWS S3 Authentication 
AuthProxy 
Background Fetch 
+   Cache Key Manipulation 
Combo Handler 
Configuration Remap 
ESI 
@@ -71,6 +72,9 @@ Plugins that are considered stable are installed by default 
in |TS| releases.
 :doc:`Background Fetch `
Proactively fetch content from Origin in a way that it will fill the object 
into cache.
 
+:doc:`Cache Key Manipulation `
+   Allows some common cache key manipulations based on various HTTP request 
elements.
+
 :doc:`Combo Handler `
Provides an intelligent way to combine multiple URLs into a single URL, and 
have Apache Traffic Server combine the components into one response.
 
@@ -125,7 +129,6 @@ directory of the |TS| source tree. Experimental plugins can 
be compiled by passi
 
Balancer 
Buffer Upload 
-   Cache Key Manipulation 
Cache Promote 
Collapsed-Forwarding 
Epic 
@@ -153,9 +156,6 @@ directory of the |TS| source tree. Experimental plugins can 
be compiled by passi
 :doc:`Buffer Upload `
Buffers POST data before connecting to the Origin server.
 
-:doc:`Cache Key Manipulation `
-   Allows some common cache key manipulations based on various HTTP request 
elements.
-
 :doc:`Cache Promote `
Provides additional control over when an object should be allowed into the 
cache.
 
diff --git a/plugins/Makefile.am b/plugins/Makefile.am
index 5c08e8e..d634754 100644
--- a/plugins/Makefile.am
+++ b/plugins/Makefile.am
@@ -29,6 +29,7 @@ AM_LDFLAGS = $(TS_PLUGIN_LD_FLAGS)
 
 include authproxy/Makefile.inc
 include background_fetch/Makefile.inc
+include cachekey/Makefile.inc
 include conf_remap/Makefile.inc
 include esi/Makefile.inc
 include generator/Makefile.inc
@@ -51,7 +52,6 @@ include experimental/balancer/Makefile.inc
 include experimental/buffer_upload/Makefile.inc
 include experimental/cache_promote/Makefile.inc
 include experimental/cache_range_requests/Makefile.inc
-include experimental/cachekey/Makefile.inc
 include experimental/collapsed_connection/Makefile.inc
 include experimental/collapsed_forwarding/Makefile.inc
 include experimental/custom_redirect/Makefile.inc
diff --git a/plugins/experimental/cachekey/Makefile.inc 
b/plugins/cachekey/Makefile.inc
similarity index 73%
rename from plugins/experimental/cachekey/Makefile.inc
rename to plugins/cachekey/Makefile.inc
index 6bc2e33..e0da967 100644
--- a/plugins/experimental/cachekey/Makefile.inc
+++ b/plugins/cachekey/Makefile.inc
@@ -14,10 +14,10 @@
 #  See the License for the specific language governing permissions and
 #  limitations under the License.
 
-pkglib_LTLIBRARIES += experimental/cachekey/cachekey.la
-experimental_cachekey_cachekey_la_SOURCES = \
-  experimental/cachekey/cachekey.cc \
-  experimental/cachekey/common.cc \
-  experimental/cachekey/configs.cc \
-  experimental/cachekey/pattern.cc \
-  experimental/cachekey/plugin.cc
+pkglib_LTLIBRARIES += cachekey/cachekey.la
+cachekey_cachekey_la_SOURCES = \
+  cachekey/cachekey.cc \
+  cachekey/common.cc \
+  cachekey/configs.cc \
+  cachekey/pattern.cc \
+  cachekey/plugin.cc
diff --git a/plugins/experimental/cachekey/README.md 
b/plugins/cachekey/README.md
similarity index 92%

[trafficserver] branch master updated: background_fetch heap-buffer-overflow fix

2018-06-19 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 9a8ecf4  background_fetch heap-buffer-overflow fix
9a8ecf4 is described below

commit 9a8ecf4a17e0c2ed58286198064798820f1a2060
Author: Gancho Tenev 
AuthorDate: Tue Jun 19 13:00:58 2018 -0700

background_fetch heap-buffer-overflow fix
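The bug class behind the fix: ``strstr()`` assumes a NUL-terminated string, but header values from the TS API arrive as a pointer plus length with no terminator guarantee, so the search can read past the end of the buffer. Bounding the search with ``std::string_view``, as the diff below does, removes the overread. A reduced sketch:

```cpp
#include <string_view>

// Sketch of the fix's idea: wrap the (pointer, length) bytes in a
// std::string_view so the search is bounded to exactly val_len bytes,
// regardless of whether a NUL terminator follows the buffer.
bool headerContains(const char *val_str, size_t val_len, const char *needle)
{
  return std::string_view(val_str, val_len).find(needle) != std::string_view::npos;
}
```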
---
 plugins/background_fetch/rules.cc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/plugins/background_fetch/rules.cc 
b/plugins/background_fetch/rules.cc
index 3e7bfa5..e513424 100644
--- a/plugins/background_fetch/rules.cc
+++ b/plugins/background_fetch/rules.cc
@@ -23,6 +23,7 @@
 */
 
 #include 
+#include <string_view>
 
 #include "configs.h"
 #include "rules.h"
@@ -131,7 +132,7 @@ BgFetchRule::check_field_configured(TSHttpTxn txnp) const
   TSDebug(PLUGIN_NAME, "invalid field");
 } else {
   TSDebug(PLUGIN_NAME, "comparing with %s", _value);
-  if (nullptr != strstr(val_str, _value)) {
+  if (std::string_view::npos != std::string_view(val_str, 
val_len).find(_value)) {
 hdr_found = true;
   }
 }



[trafficserver] branch master updated: Option to enable stand-alone Leak Sanitizer

2018-05-02 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 09ea78f  Option to enable stand-alone Leak Sanitizer
09ea78f is described below

commit 09ea78fa6c55b793c46c616d1365ba68b0f18374
Author: Gancho Tenev <gan...@apache.org>
AuthorDate: Wed May 2 19:29:34 2018 -0700

Option to enable stand-alone Leak Sanitizer

If leak detection is needed without the Address Sanitizer slowdown.
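A hypothetical demonstration of what the new switch enables. Built with ``./configure --enable-lsan`` (which adds ``-fsanitize=leak`` per the diff below), LeakSanitizer reports the lost allocation at process exit, without AddressSanitizer's run-time overhead:

```cpp
#include <cstdlib>

// Hypothetical demo target for stand-alone LeakSanitizer: the allocation
// below is never freed, so a -fsanitize=leak build prints a leak report at
// exit. Without the sanitizer the program runs normally.
int *makeCounter()
{
  int *p = static_cast<int *>(std::malloc(sizeof(int))); // never freed: LSAN flags it
  *p = 0;
  return p;
}
```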
---
 configure.ac | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/configure.ac b/configure.ac
index de25bac..5992bc4 100644
--- a/configure.ac
+++ b/configure.ac
@@ -205,6 +205,15 @@ AC_ARG_ENABLE([asan],
 )
 AC_MSG_RESULT([$enable_asan])
 
+# Enable LSAN in stand-alone mode for the builds
+AC_MSG_CHECKING([whether to enable lsan])
+AC_ARG_ENABLE([lsan],
+  [AS_HELP_STRING([--enable-lsan],[enable stand-alone Leak Sanitizer])],
+  [],
+  [enable_lsan=no]
+)
+AC_MSG_RESULT([$enable_lsan])
+
 # Enable TSAN for the builds
 AC_MSG_CHECKING([whether to enable tsan])
 AC_ARG_ENABLE([tsan],
@@ -982,6 +991,15 @@ if test "x${enable_asan}" = "xyes"; then
   TS_ADDTO(AM_CXXFLAGS, [-fno-omit-frame-pointer -fsanitize=address])
 fi
 
+# Flags for LSAN stand-alone mode
+if test "x${enable_lsan}" = "xyes"; then
+  if test "x${enable_asan}" = "xyes"; then
+AC_ERROR([ASAN already specified, --enable-lsan is meant only for lsan 
stand-alone mode])
+  fi
+  TS_ADDTO(AM_CFLAGS, [-fno-omit-frame-pointer -fsanitize=leak])
+  TS_ADDTO(AM_CXXFLAGS, [-fno-omit-frame-pointer -fsanitize=leak])
+fi
+
 # Flags for TSAN
 if test "x${enable_tsan}" = "xyes"; then
   TS_ADDTO(AM_CFLAGS, [-fsanitize=thread])



[trafficserver] branch master updated: cachekey: ability to base the key on pristine URI.

2017-10-20 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 3ad5bf5  cachekey: ability to base the key on pristine URI.
3ad5bf5 is described below

commit 3ad5bf5fcc5e99e7493befff59dd8a6e18df107c
Author: Gancho Tenev <gan...@apache.com>
AuthorDate: Thu Oct 19 13:31:39 2017 -0700

cachekey: ability to base the key on pristine URI.

Added ability to base the cache key on the pristine URI
rather than on the URI set during remap.

New option added (see plugin doc):
--uri-type=[remap|pristine] (default:remap)

In addition changed the CacheKey constructor to use in-class member 
initialization.
---
 doc/admin-guide/plugins/cachekey.en.rst  |  7 +++
 plugins/experimental/cachekey/configs.cc | 28 
 plugins/experimental/cachekey/configs.h  | 24 
 plugins/experimental/cachekey/plugin.cc  | 26 +-
 4 files changed, 80 insertions(+), 5 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst 
b/doc/admin-guide/plugins/cachekey.en.rst
index b36e6ee..bc126ca 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -39,6 +39,13 @@ This plugin allows some common cache key manipulations based 
on various HTTP req
 * capture and replace strings from the URI and include them in the cache key
 * do more - please find more examples below.
 
+URI type
+
+
+The plugin manipulates the ``remap`` URI (value set during URI remap) by 
default. If manipulation needs to be based on the ``pristine`` URI (the value 
before URI remapping takes place) one could use the following option:
+
+* ``--uri-type=[remap|pristine]`` (default: ``remap``)
+
 Cache key structure and related plugin parameters
 =
 
diff --git a/plugins/experimental/cachekey/configs.cc 
b/plugins/experimental/cachekey/configs.cc
index 788b56d..80f2e54 100644
--- a/plugins/experimental/cachekey/configs.cc
+++ b/plugins/experimental/cachekey/configs.cc
@@ -344,6 +344,7 @@ Configs::init(int argc, char *argv[])
 {const_cast("remove-prefix"), optional_argument, nullptr, 'q'},
 {const_cast("remove-path"), optional_argument, nullptr, 'r'},
 {const_cast("separator"), optional_argument, nullptr, 's'},
+{const_cast("uri-type"), optional_argument, nullptr, 't'},
 {nullptr, 0, nullptr, 0},
   };
 
@@ -442,6 +443,9 @@ Configs::init(int argc, char *argv[])
 case 's': /* separator */
   setSeparator(optarg);
   break;
+case 't': /* uri-type */
+  setUriType(optarg);
+  break;
 }
   }
 
@@ -486,3 +490,27 @@ Configs::getSeparator()
 {
   return _separator;
 }
+
+void
+Configs::setUriType(const char *arg)
+{
+  if (nullptr != arg) {
+if (5 == strlen(arg) && 0 == strncasecmp(arg, "remap", 5)) {
+  _uriType = CacheKeyUriType::REMAP;
+  CacheKeyDebug("using remap URI type");
+} else if (8 == strlen(arg) && 0 == strncasecmp(arg, "pristine", 8)) {
+  _uriType = CacheKeyUriType::PRISTINE;
+  CacheKeyDebug("using pristine URI type");
+} else {
+  CacheKeyError("unrecognized URI type '%s', using default 'remap'", arg);
+}
+  } else {
+CacheKeyError("found an empty URI type, using default 'remap'");
+  }
+}
+
+CacheKeyUriType
+Configs::getUriType()
+{
+  return _uriType;
+}
diff --git a/plugins/experimental/cachekey/configs.h 
b/plugins/experimental/cachekey/configs.h
index 9bf71cb..9ae29f9 100644
--- a/plugins/experimental/cachekey/configs.h
+++ b/plugins/experimental/cachekey/configs.h
@@ -27,6 +27,11 @@
 #include "pattern.h"
 #include "common.h"
 
+enum CacheKeyUriType {
+  REMAP,
+  PRISTINE,
+};
+
 /**
  * @brief Plug-in configuration elements (query / headers / cookies).
  *
@@ -122,7 +127,7 @@ private:
 class Configs
 {
 public:
-  Configs() : _prefixToBeRemoved(false), _pathToBeRemoved(false), 
_separator("/") {}
+  Configs() {}
   /**
* @brief initializes plugin configuration.
* @param argc number of plugin parameters
@@ -157,6 +162,16 @@ public:
*/
   const String ();
 
+  /**
+   * @brief sets the URI Type.
+   */
+  void setUriType(const char *arg);
+
+  /**
+   * @brief get URI type.
+   */
+  CacheKeyUriType getUriType();
+
   /* Make the following members public to avoid unnecessary accessors */
   ConfigQuery _query;/**< @brief query parameter related configuration 
*/
   ConfigHeaders _headers;/**< @brief headers related configuration */
@@ -178,9 +193,10 @@ private:
*/
   bool loadClassifiers(const String , bool blacklist = true);
 
-  bool _prefixToBeRemoved; /**< @brief instructs the

[trafficserver] branch master updated: Reset water_mark in new_empty_MIOBuffer_internal

2018-01-16 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new c7b2c64  Reset water_mark in new_empty_MIOBuffer_internal
c7b2c64 is described below

commit c7b2c6459e3f63043787bf4fc3328dcad9771d47
Author: Gancho Tenev <gan...@apache.org>
AuthorDate: Sat Jan 13 14:34:49 2018 -0800

Reset water_mark in new_empty_MIOBuffer_internal

Moving water_mark reset one level up from MIOBuffer::alloc()
to new_MIOBuffer_internal() and adding it "symmetrically" to
new_empty_MIOBuffer_internal().
This will allow new_empty_MIOBuffer() to benefit from the
fix in commit db81103d0 as well.
---
 iocore/eventsystem/P_IOBuffer.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/iocore/eventsystem/P_IOBuffer.h b/iocore/eventsystem/P_IOBuffer.h
index 748811c..5e063ff 100644
--- a/iocore/eventsystem/P_IOBuffer.h
+++ b/iocore/eventsystem/P_IOBuffer.h
@@ -789,6 +789,7 @@ new_MIOBuffer_internal(
   b->_location = location;
 #endif
   b->alloc(size_index);
+  b->water_mark = 0;
   return b;
 }
 
@@ -809,6 +810,7 @@ new_empty_MIOBuffer_internal(
 {
   MIOBuffer *b  = THREAD_ALLOC(ioAllocator, this_thread());
   b->size_index = size_index;
+  b->water_mark = 0;
 #ifdef TRACK_BUFFER_USER
   b->_location = location;
 #endif
@@ -1123,7 +1125,6 @@ MIOBuffer::alloc(int64_t i)
 #endif
   _writer->alloc(i);
   size_index = i;
-  water_mark = 0;
   init_readers();
 }
 



[trafficserver] branch master updated: cachekey: handle empty regex group captures

2018-08-13 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 4d5790f  cachekey: handle empty regex group captures
4d5790f is described below

commit 4d5790f796c90db9b8ee245ff179a5a5f66c2608
Author: Gancho Tenev 
AuthorDate: Sat Aug 11 14:31:12 2018 -0700

cachekey: handle empty regex group captures
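The situation being guarded against, sketched with ``std::regex`` instead of the plugin's PCRE (PCRE reports a non-participating group as a negative ``ovector`` offset, the ``start < 0`` case in the diff below; ``std::regex`` reports it as an unmatched ``sub_match``). Either way, the correct substitution for such a group is the empty string:

```cpp
#include <regex>
#include <string>

// Illustrative sketch: group 1 is optional, so it may not participate in a
// successful match. Substituting an empty string for the unset group keeps
// the replacement well-defined instead of reading a bogus offset.
std::string captureOrEmpty(const std::string &subject)
{
  std::smatch m;
  static const std::regex re("/path(?:/(\\w+))?");
  if (std::regex_match(subject, m, re)) {
    return m[1].matched ? m[1].str() : std::string(); // empty, not garbage
  }
  return {};
}
```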
---
 plugins/cachekey/pattern.cc | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/plugins/cachekey/pattern.cc b/plugins/cachekey/pattern.cc
index a63fc97..319a4a3 100644
--- a/plugins/cachekey/pattern.cc
+++ b/plugins/cachekey/pattern.cc
@@ -303,6 +303,12 @@ Pattern::replace(const String , String )
 int start = ovector[2 * replIndex];
 int length= ovector[2 * replIndex + 1] - ovector[2 * replIndex];
 
+/* Handle the case when no match / a group capture result in an empty 
string */
+if (start < 0) {
+  start  = 0;
+  length = 0;
+}
+
 String src(_replacement, _tokenOffset[i], 2);
 String dst(subject, start, length);
 



[trafficserver] branch master updated: Prefetch plugin

2018-08-13 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 18e67bd  Prefetch plugin
18e67bd is described below

commit 18e67bd9e9790543e2872741d48b61571ef5408c
Author: Gancho Tenev 
AuthorDate: Sat Jun 30 00:48:24 2018 -0700

Prefetch plugin

The purpose of the plugin is to increase the cache-hit ratio
for a sequence of objects whose URL paths follow a common pattern.
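The prediction idea can be sketched independently of the plugin. Everything below is an assumption for illustration only (the path shape and the ``seg<N>.ts`` naming); the plugin's real behavior is driven by its configured URL pattern and fetch policy:

```cpp
#include <regex>
#include <string>

// Hypothetical sketch of next-object prediction for segment-numbered URLs
// (common for HLS): capture the numeric segment, increment it, and rebuild
// the URL that a prefetch would request.
std::string predictNextSegment(const std::string &url)
{
  static const std::regex seg("(.*seg)(\\d+)(\\.ts)");
  std::smatch m;
  if (!std::regex_match(url, m, seg)) {
    return {}; // no recognizable pattern: nothing to prefetch
  }
  long n = std::stol(m[2].str());
  return m[1].str() + std::to_string(n + 1) + m[3].str();
}
```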
---
 doc/admin-guide/plugins/index.en.rst   |   1 +
 doc/admin-guide/plugins/prefetch.en.rst| 278 
 .../images/admin/prefetch_plugin_deployment.png| Bin 0 -> 222757 bytes
 plugins/Makefile.am|   1 +
 plugins/experimental/prefetch/Makefile.inc |  28 +
 plugins/experimental/prefetch/README.md|   8 +
 plugins/experimental/prefetch/common.cc|  59 ++
 plugins/experimental/prefetch/common.h |  68 ++
 plugins/experimental/prefetch/configs.cc   | 172 +
 plugins/experimental/prefetch/configs.h| 202 ++
 plugins/experimental/prefetch/fetch.cc | 739 
 plugins/experimental/prefetch/fetch.h  | 202 ++
 plugins/experimental/prefetch/fetch_policy.cc  |  57 ++
 plugins/experimental/prefetch/fetch_policy.h   |  66 ++
 plugins/experimental/prefetch/fetch_policy_lru.cc  | 141 
 plugins/experimental/prefetch/fetch_policy_lru.h   | 105 +++
 .../experimental/prefetch/fetch_policy_simple.cc   |  80 +++
 .../experimental/prefetch/fetch_policy_simple.h|  46 ++
 plugins/experimental/prefetch/headers.cc   | 213 ++
 plugins/experimental/prefetch/headers.h|  31 +
 plugins/experimental/prefetch/pattern.cc   | 463 +
 plugins/experimental/prefetch/pattern.h|  92 +++
 plugins/experimental/prefetch/plugin.cc| 751 +
 23 files changed, 3803 insertions(+)

diff --git a/doc/admin-guide/plugins/index.en.rst 
b/doc/admin-guide/plugins/index.en.rst
index 566eecf..2a37ea6 100644
--- a/doc/admin-guide/plugins/index.en.rst
+++ b/doc/admin-guide/plugins/index.en.rst
@@ -161,6 +161,7 @@ directory of the |TS| source tree. Experimental plugins can 
be compiled by passi
Stale While Revalidate 
System Statistics 
WebP Transform 
+   Prefetch 
 
 :doc:`Access Control `
Access control plugin that handles various access control use-cases.
diff --git a/doc/admin-guide/plugins/prefetch.en.rst 
b/doc/admin-guide/plugins/prefetch.en.rst
new file mode 100644
index 000..93a7e3c
--- /dev/null
+++ b/doc/admin-guide/plugins/prefetch.en.rst
@@ -0,0 +1,278 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+
+.. include:: ../../common.defs
+
+.. _admin-plugins-prefetch:
+
+
+Prefetch Plugin
+***
+
+Description
+===
+
+The purpose of the plugin is to increase the cache-hit ratio for a sequence of
+objects whose URL paths follow a common pattern.
+
+On every **incoming** URL request, the plugin can decide to pre-fetch the
+**next object** or more objects based on the common URL path pattern and a
+pre-defined pre-fetch policy.
+
+Currently, most HLS video URLs follow a predictable pattern, with most URLs
+containing a segment number. Since the segments are ~10s of content, the normal
+usage pattern is to fetch the incremental segment every few seconds. The CDN
+has its best chance of delivering a good user experience if the requests are
+served from cache. Since we can predict the **next object** fetched, we should 
be
+able to dramatically increase the chance of it being a cache hit.
+
+This is primarily useful for:
+
+* less popular content. Popular movies' segments are constantly being refreshed
+  in cache by user requests. Less popular content is less likely to be in 
cache.
+* device failures. There can be a significant time gap between a seeding 
request
+  and the user request. During this time, devices can fail, which cause cache
+  misses. The time gap between the plugin's request and

[trafficserver] branch master updated: cachekey: capture cache key elements from headers

2018-08-14 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new d49271b  cachekey: capture cache key elements from headers
d49271b is described below

commit d49271b167bca3ea9e7a1b0ac0c65c72512a85dc
Author: Gancho Tenev 
AuthorDate: Fri Jul 6 17:25:33 2018 -0700

cachekey: capture cache key elements from headers

--capture-header=<header-name>:<capture-definition>
captures elements from header <header-name> using <capture-definition>
and adds them to the cache key.
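What the option computes, reduced to a standalone sketch with ``std::regex`` (the plugin itself uses PCRE): apply a capture/replace to a header value and use the result as a cache key element. The regex and the ``clientID:`` label mirror the documentation example in the diff below and are illustrative only:

```cpp
#include <regex>
#include <string>

// Illustrative sketch: extract the client ID from an AWS-style
// Authorization header value and format it as a cache key element.
std::string captureKeyElement(const std::string &authHeader)
{
  static const std::regex re("AWS\\s([^:]+).*");
  std::smatch m;
  if (std::regex_match(authHeader, m, re)) {
    return "clientID:" + m[1].str(); // spliced into the cache key
  }
  return {}; // header does not match: contribute nothing
}
```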
---
 doc/admin-guide/plugins/cachekey.en.rst |  29 ++--
 plugins/cachekey/cachekey.cc| 119 +---
 plugins/cachekey/cachekey.h |   4 ++
 plugins/cachekey/common.h   |   2 +
 plugins/cachekey/configs.cc |  52 ++
 plugins/cachekey/configs.h  |  14 +++-
 plugins/cachekey/pattern.cc |  12 
 plugins/cachekey/pattern.h  |   2 +
 8 files changed, 187 insertions(+), 47 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst 
b/doc/admin-guide/plugins/cachekey.en.rst
index 958cdbe..0b631d5 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -110,14 +110,16 @@ Cache key structure and related plugin parameters
 
 ::
 
-  Optional components  | ┌───────────────────┐
-                       | │ --include-headers │
-                       | ├───────────────────┤
-  Default values if no | │ (empty)           │
-  optional components  | └───────────────────┘
+  Optional components  | ┌───────────────────┬────────────────────┐
+                       | │ --include-headers │ --capture-headers  │
+                       | ├───────────────────┼────────────────────┤
+  Default values if no | │ (empty)           │ (empty)            │
+  optional components  | └───────────────────┴────────────────────┘
   configured           |
 
-* ``--include-headers`` (default: empty list) - comma separated list of 
headers to be added to the cache key. The list of headers defined by 
``--include-headers`` are always sorted before adding them to the cache  key.
+* ``--include-headers`` (default: empty list) - comma separated list of 
headers to be added to the cache key. The list of headers defined by 
``--include-headers`` are always sorted before adding them to the cache key.
+
+* ``--capture-header=<header-name>:<capture-definition>`` (default: empty) - captures elements from header <header-name> using <capture-definition> and adds them to the cache key.
 
 "Cookies" section
 ^
@@ -400,6 +402,21 @@ The following headers ``HeaderA`` and ``HeaderB`` will be 
used when constructing
 
   @plugin=cachekey.so @pparam=--include-headers=HeaderA,HeaderB
 
+The following would capture from the ``Authorization`` header and will add the 
captured element to the cache key ::
+
+  @plugin=cachekey.so \
+  @pparam=--capture-header=Authorization:/AWS\s(?<clientID>[^:]+).*/clientID:$1/
+
+If the request looks like the following::
+
+  http://example-cdn.com/path/file
+  Authorization: AWS MKIARYMOG51PT0DLD:DLiWQ2lyS49H4Zyx34kW0URtg6s=
+
+Cache key would be set to::
+
+  /example-cdn.com/80/clientID:MKIARYMOG51PTCKQ0DLD/path/file
+
+
 HTTP Cookies
 
 
diff --git a/plugins/cachekey/cachekey.cc b/plugins/cachekey/cachekey.cc
index a31d628..c89b657 100644
--- a/plugins/cachekey/cachekey.cc
+++ b/plugins/cachekey/cachekey.cc
@@ -437,6 +437,61 @@ CacheKey::appendPath(Pattern , Pattern 
)
   }
 }
 
+template <typename T>
+void
+CacheKey::processHeader(const String &name, const ConfigHeaders &config, T &dst,
+                        void (*fun)(const ConfigHeaders &config, const String &name_s, const String &value_s, T &dst))
+{
+  TSMLoc field;
+
+  for (field = TSMimeHdrFieldFind(_buf, _hdrs, name.c_str(), name.size()); 
field != TS_NULL_MLOC;
+   field = ::nextDuplicate(_buf, _hdrs, field)) {
+const char *value;
+int vlen;
+int count = TSMimeHdrFieldValuesCount(_buf, _hdrs, field);
+
+for (int i = 0; i < count; ++i) {
+  value = TSMimeHdrFieldValueStringGet(_buf, _hdrs, field, i, &vlen);
+  if (value == nullptr || vlen == 0) {
+CacheKeyDebug("missing value %d for header %s", i, name.c_str());
+continue;
+  }
+
+  String value_s(value, vlen);
+  fun(config, name, value_s, dst);
+}
+  }
+}
+
+template <typename T>
+void
+captureWholeHeaders(const ConfigHeaders &config, const String &name, const String &value, T &captures)
+{
+  CacheKeyDebug("processing header %s", name.c_str());
+  if (config.toBeAdded(name)) {
+String header;
+header.append(name).append(":").append(value);
+captures.insert(header);
+CacheKeyDebug("adding header '%s: %s'", name.c_str(), value.c_str());
+  } else {
+CacheKeyDebug("failed to find header '%s'", name.c_str());
+  }
+}
+
+template <typename T>
+void
+captureFromHeaders(const ConfigHeaders 

[trafficserver] branch master updated: logstats conditionally disable format check

2018-08-09 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 63cc2b0  logstats conditionally disable format check
63cc2b0 is described below

commit 63cc2b0db8c40409e21a26dc9d55462828657099
Author: Gancho Tenev 
AuthorDate: Thu Aug 9 10:46:52 2018 -0700

logstats conditionally disable format check

Don’t validate the log format field names according to the squid log format.
This would allow squid format fields to be replaced, i.e. the username of
the authenticated client caun with a random header value by using cqh,
or to remove the client’s host IP address from the log for privacy reasons.

Added command line option --no_format_check (default false) and some 
documentation.

Related to https://issues.apache.org/jira/browse/TS-5069
---
 .../command-line/traffic_logstats.en.rst   | 46 ++
 src/traffic_logstats/logstats.cc   | 23 ++-
 2 files changed, 60 insertions(+), 9 deletions(-)

diff --git a/doc/appendices/command-line/traffic_logstats.en.rst 
b/doc/appendices/command-line/traffic_logstats.en.rst
index dfd4153..481c3d2 100644
--- a/doc/appendices/command-line/traffic_logstats.en.rst
+++ b/doc/appendices/command-line/traffic_logstats.en.rst
@@ -52,40 +52,86 @@ Options
 
 .. option:: -f FILE, --log_file FILE
 
+   Specific logfile to parse
+
 .. option:: -o LIST, --origin_list LIST
 
+   Only show stats for listed Origins
+
 .. option:: -O FILE, --origin_file FILE
 
+   File listing Origins to show
+
 .. option:: -M COUNT, --max_origins COUNT
 
+   Max number of Origins to show
+
 .. option:: -u COUNT, --urls COUNT
 
+   Produce JSON stats for URLs, argument is LRU size
+
 .. option:: -U COUNT, --show_urls COUNT
 
+   Only show max this number of URLs
+
 .. option:: -A, --as_object
 
+   Produce URL stats as a JSON object instead of array
+
 .. option:: -C, --concise
 
+   Eliminate metrics that can be inferred from other values
+
 .. option:: -i, --incremental
 
+   Incremental log parsing
+
 .. option:: -S FILE, --statetag FILE
 
+   Name of the state file to use
+
 .. option:: -t, --tail
 
+   Parse the last <sec> seconds of log
+
 .. option:: -s, --summary
 
+   Only produce the summary
+
 .. option:: -j, --json
 
+   Produce JSON formatted output
+
 .. option:: -c, --cgi
 
+   Produce HTTP headers suitable as a CGI
+
 .. option:: -m, --min_hits
 
+   Minimum total hits for an Origin
+
 .. option:: -a, --max_age
 
+   Max age for log entries to be considered
+
 .. option:: -l COUNT, --line_len COUNT
 
+   Output line length
+
 .. option:: -T TAGS, --debug_tags TAGS
 
+   Colon-Separated Debug Tags
+
+.. option:: -r, --report_per_user
+
+   Report stats per username of the authenticated client ``caun`` instead of 
host, see `squid log format <../../admin-guide/logging/examples.en.html#squid>`_
+
+.. option:: -n, --no_format_check
+
+   Don't validate the log format field names against the `squid log 
format <../../admin-guide/logging/examples.en.html#squid>`_.
+   This allows squid format fields to be replaced (e.g. the username of 
the authenticated client ``caun`` with a header value by using ``cqh``),
+   or the client's host IP address to be removed from the log for privacy reasons.
+
 .. option:: -h, --help
 
Print usage information and exit.
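The ``--no_format_check`` switch simply bypasses the per-field validation. A rough Python model of that decision follows (the field symbols are those of the squid format referenced above; ``validate_format`` and ``cqh.User`` are illustrative, not logstats code):

```python
# Field symbols of the standard squid log format used by traffic_logstats.
SQUID_FIELDS = {"cqtq", "ttms", "chi", "crc", "pssc", "psql",
                "cqhm", "cquc", "caun", "phr", "shn", "psct"}

def validate_format(fields, no_format_check=False):
    # With --no_format_check any field name is accepted, which permits
    # e.g. replacing caun with a request-header field such as cqh.User;
    # otherwise every field must be a standard squid format field.
    if no_format_check:
        return True
    return all(f in SQUID_FIELDS for f in fields)

print(validate_format(["chi", "cquc", "caun"]))                    # True
print(validate_format(["chi", "cquc", "cqh.User"]))                # False
print(validate_format(["chi", "cqh.User"], no_format_check=True))  # True
```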
diff --git a/src/traffic_logstats/logstats.cc b/src/traffic_logstats/logstats.cc
index bae5ef1..44cbb1e 100644
--- a/src/traffic_logstats/logstats.cc
+++ b/src/traffic_logstats/logstats.cc
@@ -614,6 +614,7 @@ struct CommandLineArgs {
   int as_object;   // Show the URL stats as a single JSON object (not 
array)
   int concise; // Eliminate metrics that can be inferred by other 
values
   int report_per_user; // A flag to aggregate and report stats per user 
instead of per host if 'true' (default 'false')
+  int no_format_check; // A flag to skip the log format check if any of the 
fields is not a standard squid log format field.
 
   CommandLineArgs()
 : max_origins(0),
@@ -629,7 +630,8 @@ struct CommandLineArgs {
   show_urls(0),
   as_object(0),
   concise(0),
-  report_per_user(0)
+  report_per_user(0),
+  no_format_check(0)
   {
 log_file[0]= '\0';
 origin_file[0] = '\0';
@@ -662,6 +664,7 @@ static ArgumentDescription argument_descriptions[] = {
   {"line_len", 'l', "Output line length", "I", &line_len, nullptr, nullptr},
   {"debug_tags", 'T', "Colon-Separated Debug Tags", "S1023", &debug_tags, 
nullptr, nullptr},
+  {"report_per_user", 'r', "Report stats per user instead of host", "T", 
&report_per_user, nullptr, nullptr},
+  {"no_format_check", 'n', "Don't validate the log format 

[trafficserver] branch master updated: Fixed broken sphinx version check on MacOS

2018-07-13 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 441f5f6  Fixed broken sphinx version check on MacOS
441f5f6 is described below

commit 441f5f6203079a457dea1fef40c775118eaef2c2
Author: Gancho Tenev 
AuthorDate: Tue Jul 10 21:20:04 2018 -0700

Fixed broken sphinx version check on MacOS
---
 configure.ac | 57 +---
 doc/checkvers.sh | 40 ---
 2 files changed, 30 insertions(+), 67 deletions(-)

diff --git a/configure.ac b/configure.ac
index c08bba3..e85747e 100644
--- a/configure.ac
+++ b/configure.ac
@@ -303,14 +303,26 @@ AC_ARG_ENABLE([docs],
   [AS_HELP_STRING([--enable-docs],[enable documentation building])],
   [
 enable_doc_build=yes
+AM_PATH_PYTHON([2.4], [
+  dnl action-if-found
+  TS_MAN1_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=1 | 
$AWK '{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
+  TS_MAN3_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=3 | 
$AWK '{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
+  TS_MAN5_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=5 | 
$AWK '{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
+  TS_MAN8_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=8 | 
$AWK '{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
+], [
+  dnl action-if-not-found
+  :
+])
+
 AS_IF([test -z "$JAVA"],
   [
 enable_doc_build=no
 AC_ERROR([Doc building disabled, java required but not found])
   ])
 AC_ARG_VAR(SPHINXBUILD, [the sphinx-build documentation generator])
+AC_ARG_VAR(SPHINXOPTS, [additional sphinx-build options])
 AC_PATH_PROG([SPHINXBUILD], [sphinx-build], [])
-AS_IF(["$srcdir/doc/checkvers.sh" "$SPHINXBUILD" 
"$srcdir/doc/checkvers.py"],
+AS_IF(["$PYTHON" "$srcdir/doc/checkvers.py" --check-version],
   [
 sphinx_version_check=yes
   ],[
@@ -318,11 +330,28 @@ AC_ARG_ENABLE([docs],
 enable_doc_build=no
 AC_ERROR([Doc building disabled, check sphinx installation])
   ])
+
+
+AC_SUBST(TS_MAN1_MANPAGES)
+AC_SUBST(TS_MAN3_MANPAGES)
+AC_SUBST(TS_MAN5_MANPAGES)
+AC_SUBST(TS_MAN8_MANPAGES)
+
+AC_MSG_CHECKING([whether to build man pages])
+AS_IF([test "x$sphinx_version_check" = "xyes" -a "x$SPHINXBUILD" != 
"xfalse"], [
+  build_manpages=true
+  AC_MSG_RESULT([yes])
+], [
+  build_manpages=false
+  AC_MSG_RESULT([no])
+])
+
   ],
   [enable_doc_build=no]
 )
 AC_MSG_RESULT([$enable_doc_build])
 AM_CONDITIONAL([BUILD_DOCS], [test "xyes" = "x$enable_doc_build"])
+AM_CONDITIONAL([BUILD_MANPAGES], [test "xtrue" = "x$build_manpages"])
 
 #
 # Remote Coverity Prevent commit
@@ -708,39 +737,13 @@ AC_CHECK_PROG(ASCPP, cpp, cpp)
 AC_CHECK_TOOL(AR, ar, ar)
 AC_ISC_POSIX
 
-AM_PATH_PYTHON([2.4], [
-  dnl action-if-found
-  TS_MAN1_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=1 | $AWK 
'{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
-  TS_MAN3_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=3 | $AWK 
'{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
-  TS_MAN5_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=5 | $AWK 
'{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
-  TS_MAN8_MANPAGES=`cd $srcdir/doc && $PYTHON manpages.py --section=8 | $AWK 
'{print "$(BUILDDIR)/man/" $0 }' | tr '\n' ' '`
-], [
-  dnl action-if-not-found
-  :
-])
-
 AC_ARG_VAR(RPATH, [path to be added to rpath])
-AC_ARG_VAR(SPHINXOPTS, [additional sphinx-build options])
 
 AC_ARG_VAR([CLANG_TIDY], [clang-tidy command])
 
 # Default CLANG_TIDY to "clang-tidy", or "false" if it is not present.
 AC_PATH_PROG([CLANG_TIDY], [clang-tidy],[false])
 
-AC_SUBST(TS_MAN1_MANPAGES)
-AC_SUBST(TS_MAN3_MANPAGES)
-AC_SUBST(TS_MAN5_MANPAGES)
-AC_SUBST(TS_MAN8_MANPAGES)
-
-AC_MSG_CHECKING([whether to build man pages])
-AS_IF([test "x$sphinx_version_check" = "xyes" -a "x$SPHINXBUILD" != "xfalse"], 
[
-  AM_CONDITIONAL([BUILD_MANPAGES], [true])
-  AC_MSG_RESULT([yes])
-], [
-  AM_CONDITIONAL([BUILD_MANPAGES], [false])
-  AC_MSG_RESULT([no])
-])
-
 # Do bison check by hand because we must do a version check.
 # Use YACC because it makes autotools shut up.
 BISON_MAJOR=2
diff --git a/doc/checkvers.sh b/doc/checkvers.sh
deleted file mode 100755
index 2e95771..000
--- a/doc/checkvers.sh
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation

[trafficserver] branch master updated: cachekey: running as global plugin

2018-07-06 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new f7e3cf7  cachekey: running as global plugin
f7e3cf7 is described below

commit f7e3cf7a92b7deb1bfafd6bf16a3c0a9dac3d37c
Author: Gancho Tenev 
AuthorDate: Thu Jul 5 08:21:35 2018 -0700

cachekey: running as global plugin

The plugin can now run as a global plugin (a single global instance
configured using `plugin.config`) or as per-remap plugin
(a separate instance configured per remap rule in `remap.config`).

If both global and per-remap instance are used the per-remap
configuration would take precedence (per-remap configuration
would be applied and the global configuration ignored).
---
 doc/admin-guide/plugins/cachekey.en.rst |  40 +
 plugins/cachekey/cachekey.cc| 131 ++-
 plugins/cachekey/cachekey.h |  20 +++--
 plugins/cachekey/configs.cc |  13 ++-
 plugins/cachekey/configs.h  |   3 +-
 plugins/cachekey/plugin.cc  | 153 +++-
 6 files changed, 264 insertions(+), 96 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst 
b/doc/admin-guide/plugins/cachekey.en.rst
index bc126ca..958cdbe 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -177,6 +177,46 @@ Cache key elements separator
 * ``--separator=<string>`` - the cache key is constructed by extracting 
elements from HTTP URI and headers or by using the UA classifiers, and they are 
appended during the key construction and separated by ``/`` (by default). This 
option allows overriding the default separator with any string (including an 
empty string).
 
 
+How to run the plugin
+=
+
+The plugin can run as a global plugin (a single global instance configured 
using `plugin.config`) or as per-remap plugin (a separate instance configured 
per remap rule in `remap.config`).
+
+Global instance
+^^^
+
+::
+
+  $ cat plugin.config
+  cachekey.so \
+  --include-params=a,b,c \
+  --sort-params=true
+
+
+Per-remap instance
+^^
+
+::
+
+  $ cat remap.config
+  map http://www.example.com http://www.origin.com \
+  @plugin=cachekey.so \
+  @pparam=--include-params=a,b,c \
+  @pparam=--sort-params=true
+
+
+If both global and per-remap instance are used the per-remap configuration 
would take precedence (per-remap configuration would be applied and the global 
configuration ignored).
+
+Because of the ATS core (remap) and the CacheKey plugin implementation there 
is a slight difference between the global and the per-remap functionality when 
``--uri-type=remap`` is used.
+
+* The global instance always uses the URI **after** remap (at 
``TS_HTTP_POST_REMAP_HOOK``).
+
+* The per-remap instance uses the URI **during** remap (after 
``TS_HTTP_PRE_REMAP_HOOK`` and  before ``TS_HTTP_POST_REMAP_HOOK``) which leads 
to a different URI to be used depending on plugin order in the remap rule.
+
+* If the CacheKey plugin is the first plugin in the remap rule, the URI used 
will be practically the same as the pristine URI.
+* If the CacheKey plugin is the last plugin in the remap rule (which is 
right before ``TS_HTTP_POST_REMAP_HOOK``), the behavior will be similar to the 
global instance.
+
+
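The precedence rule between the two instance types is small enough to state as code (an illustrative sketch, not plugin source):

```python
def effective_cachekey_config(global_cfg, remap_cfg):
    # A per-remap cachekey instance takes precedence; the global instance
    # applies only when the matched remap rule configured no instance.
    return remap_cfg if remap_cfg is not None else global_cfg

print(effective_cachekey_config("global", "per-remap"))  # per-remap
print(effective_cachekey_config("global", None))         # global
```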
 Detailed examples and troubleshooting
 =
 
diff --git a/plugins/cachekey/cachekey.cc b/plugins/cachekey/cachekey.cc
index 2a0b140..a31d628 100644
--- a/plugins/cachekey/cachekey.cc
+++ b/plugins/cachekey/cachekey.cc
@@ -172,17 +172,98 @@ classifyUserAgent(const Classifier , TSMBuffer buf, 
TSMLoc hdrs, String 
   return matched;
 }
 
+static String
+getUri(TSMBuffer buf, TSMLoc url)
+{
+  String uri;
+  int uriLen;
+  const char *uriPtr = TSUrlStringGet(buf, url, &uriLen);
+  if (nullptr != uriPtr && 0 != uriLen) {
+uri.assign(uriPtr, uriLen);
+TSfree((void *)uriPtr);
+  } else {
+CacheKeyError("failed to get URI");
+  }
+  return uri;
+}
+
 /**
  * @brief Constructor setting up the cache key prefix, initializing request 
info.
  * @param txn transaction handle.
- * @param buf marshal buffer
- * @param url URI handle
- * @param hdrs headers handle
+ * @param separator cache key elements separator
+ * @param uriType type of the URI used to create the cachekey ("remap" or 
"pristine")
+ * @param rri remap request info
  */
-CacheKey::CacheKey(TSHttpTxn txn, TSMBuffer buf, TSMLoc url, TSMLoc hdrs, 
String separator)
-  : _txn(txn), _buf(buf), _url(url), _hdrs(hdrs), _separator(separator)
+CacheKey::CacheKey(TSHttpTxn txn, String separator, CacheKeyUriType uriType, 
TSRemapRequestInfo *rri)
+  : _txn(txn), _separator(separator), _uriType(uriType)
 {
   _key.reser

[trafficserver] branch master updated: ASAN: stack-use-after-scope

2018-07-11 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 7e5e6ed  ASAN: stack-use-after-scope
7e5e6ed is described below

commit 7e5e6ede04fd74056636ab928ed9078aa19fe87a
Author: Gancho Tenev 
AuthorDate: Tue Jul 10 23:36:44 2018 -0700

ASAN: stack-use-after-scope

in YamlLogConfig::decodeLogObject(YAML::Node const&)
   const char *mode_str = node["mode"].as<std::string>().c_str();
results in dangling mode_str pointer.
---
 proxy/logging/YamlLogConfig.cc | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/proxy/logging/YamlLogConfig.cc b/proxy/logging/YamlLogConfig.cc
index e6b9e8c..dc93f1c 100644
--- a/proxy/logging/YamlLogConfig.cc
+++ b/proxy/logging/YamlLogConfig.cc
@@ -139,10 +139,10 @@ YamlLogConfig::decodeLogObject(const YAML::Node )
   // file format
   LogFileFormat file_type = LOG_FILE_ASCII; // default value
   if (node["mode"]) {
-const char *mode_str = node["mode"].as<std::string>().c_str();
-file_type= (strncasecmp(mode_str, "bin", 3) == 0 || 
(mode_str[0] == 'b' && mode_str[1] == 0) ?
+std::string mode = node["mode"].as<std::string>();
+file_type= (0 == strncasecmp(mode.c_str(), "bin", 3) || (1 == 
mode.size() && mode[0] == 'b') ?
LOG_FILE_BINARY :
-   (strcasecmp(mode_str, "ascii_pipe") == 0 ? LOG_FILE_PIPE : 
LOG_FILE_ASCII));
+   (0 == strcasecmp(mode.c_str(), "ascii_pipe") ? 
LOG_FILE_PIPE : LOG_FILE_ASCII));
   }
 
   int obj_rolling_enabled  = 0;
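The bug was that ``node["mode"].as<std::string>()`` returns a temporary, so taking ``c_str()`` of it left ``mode_str`` dangling once the statement ended; the fix keeps the ``std::string`` alive. The mode-to-file-type decision itself can be sketched in Python (enum names copied from the diff):

```python
def log_file_type(mode=None):
    # Default is ASCII when no "mode" key is present.
    if mode is None:
        return "LOG_FILE_ASCII"
    m = mode.lower()
    # strncasecmp(mode, "bin", 3) == 0, or a lone "b", selects binary
    if m[:3] == "bin" or m == "b":
        return "LOG_FILE_BINARY"
    if m == "ascii_pipe":
        return "LOG_FILE_PIPE"
    return "LOG_FILE_ASCII"

print(log_file_type("binary"))      # LOG_FILE_BINARY
print(log_file_type("ascii_pipe"))  # LOG_FILE_PIPE
print(log_file_type())              # LOG_FILE_ASCII
```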



[trafficserver] branch master updated: s3_auth: check if previous config (re)load failed

2018-03-07 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 20db74b  s3_auth: check if previous config (re)load failed
20db74b is described below

commit 20db74bc0d0bf8e2d83e2c92a83274bb713196c7
Author: Gancho Tenev <gan...@apache.org>
AuthorDate: Wed Mar 7 13:08:52 2018 -0800

s3_auth: check if previous config (re)load failed

before releasing the old configuration
---
 plugins/s3_auth/s3_auth.cc | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/plugins/s3_auth/s3_auth.cc b/plugins/s3_auth/s3_auth.cc
index bd007df..3b21bbd 100644
--- a/plugins/s3_auth/s3_auth.cc
+++ b/plugins/s3_auth/s3_auth.cc
@@ -495,7 +495,10 @@ ConfigCache::get(const char *fname)
 
   TSDebug(PLUGIN_NAME, "Configuration from %s is stale, reloading", 
config_fname.c_str());
   it->second.second = tv.tv_sec;
-  it->second.first->release();
+  if (nullptr != it->second.first) {
+// The previous config update / reload attempt did not fail, safe to 
call release.
+it->second.first->release();
+  }
   if (s3->parse_config(config_fname)) {
 it->second.first = s3;
   } else {
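The guard matters because a failed parse leaves a null config in the cache entry, and calling ``release()`` on it would crash on the next stale reload. A minimal Python model of the reload path (``ConfigCache``/``loader`` names and the 60-second staleness window are illustrative; refcounting is elided):

```python
import time

class ConfigCache:
    STALE_AFTER = 60  # seconds, illustrative

    def __init__(self, loader):
        self._loader = loader
        self._entries = {}  # fname -> [config_or_None, load_time]

    def get(self, fname, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(fname)
        if entry is not None and now - entry[1] < self.STALE_AFTER:
            return entry[0]
        if entry is not None and entry[0] is not None:
            # Previous load succeeded: only then is it safe to release
            # the old config (the plugin calls old->release() here).
            pass
        cfg = self._loader(fname)  # returns None on parse failure
        self._entries[fname] = [cfg, now]
        return cfg
```

A loader that fails once and then succeeds shows the failed attempt being cached and retried only after the entry goes stale.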

-- 
To stop receiving notification emails like this one, please contact
gan...@apache.org.


[trafficserver] branch master updated: access_control: changes after 1st integration

2018-11-05 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 965bd1d  access_control: changes after 1st integration
965bd1d is described below

commit 965bd1d1e0bff79e057fec88f21e6acabaf1772d
Author: Gancho Tenev 
AuthorDate: Mon Nov 5 11:35:56 2018 -0800

access_control: changes after 1st integration

- allow multiple set-cookie headers in the response
- associate the cookie with path=/ (needed by all known use-cases).
---
 plugins/experimental/access_control/headers.cc | 6 +++---
 plugins/experimental/access_control/headers.h  | 2 +-
 plugins/experimental/access_control/plugin.cc  | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/plugins/experimental/access_control/headers.cc 
b/plugins/experimental/access_control/headers.cc
index 2bb70bb..fda3a74 100644
--- a/plugins/experimental/access_control/headers.cc
+++ b/plugins/experimental/access_control/headers.cc
@@ -133,7 +133,7 @@ getHeader(TSMBuffer bufp, TSMLoc hdrLoc, const char 
*header, int headerlen, char
  * @return true - OK, false - failed
  */
 bool
-setHeader(TSMBuffer bufp, TSMLoc hdrLoc, const char *header, int headerlen, 
const char *value, int valuelen)
+setHeader(TSMBuffer bufp, TSMLoc hdrLoc, const char *header, int headerlen, 
const char *value, int valuelen, bool duplicateOk)
 {
   if (!bufp || !hdrLoc || !header || headerlen <= 0 || !value || valuelen <= 
0) {
 return false;
@@ -142,8 +142,8 @@ setHeader(TSMBuffer bufp, TSMLoc hdrLoc, const char 
*header, int headerlen, cons
   bool ret= false;
   TSMLoc fieldLoc = TSMimeHdrFieldFind(bufp, hdrLoc, header, headerlen);
 
-  if (!fieldLoc) {
-// No existing header, so create one
+  if (!fieldLoc || duplicateOk) {
+// No existing header or duplicates ok, so create one
 if (TS_SUCCESS == TSMimeHdrFieldCreateNamed(bufp, hdrLoc, header, 
headerlen, )) {
   if (TS_SUCCESS == TSMimeHdrFieldValueStringSet(bufp, hdrLoc, fieldLoc, 
-1, value, valuelen)) {
 TSMimeHdrFieldAppend(bufp, hdrLoc, fieldLoc);
diff --git a/plugins/experimental/access_control/headers.h 
b/plugins/experimental/access_control/headers.h
index d3ad443..bd45e30 100644
--- a/plugins/experimental/access_control/headers.h
+++ b/plugins/experimental/access_control/headers.h
@@ -28,5 +28,5 @@
 int removeHeader(TSMBuffer bufp, TSMLoc hdr_loc, const char *header, int len);
 bool headerExist(TSMBuffer bufp, TSMLoc hdr_loc, const char *header, int len);
 char *getHeader(TSMBuffer bufp, TSMLoc hdr_loc, const char *header, int 
headerlen, char *value, int *valuelen);
-bool setHeader(TSMBuffer bufp, TSMLoc hdr_loc, const char *header, int len, 
const char *val, int val_len);
+bool setHeader(TSMBuffer bufp, TSMLoc hdr_loc, const char *header, int len, 
const char *val, int val_len, bool duplicateOk = false);
 void dumpHeaders(TSMBuffer bufp, TSMLoc hdr_loc);
diff --git a/plugins/experimental/access_control/plugin.cc 
b/plugins/experimental/access_control/plugin.cc
index 10d3852..bcc68f9 100644
--- a/plugins/experimental/access_control/plugin.cc
+++ b/plugins/experimental/access_control/plugin.cc
@@ -385,11 +385,11 @@ contHandleAccessControl(const TSCont contp, TSEvent 
event, void *edata)
*a secure channel, typically HTTP over Transport 
Layer Security (TLS)
* HttpOnly - instructs the UA to omit the cookie when providing 
access to cookies via “non-HTTP” APIs such as a web
*browser API that exposes cookies to scripts */
-  cookieValue.append("Secure; HttpOnly");
+  cookieValue.append("path=/; Secure; HttpOnly");
 
   AccessControlDebug("%.*s: %s", TS_MIME_LEN_SET_COOKIE, 
TS_MIME_FIELD_SET_COOKIE, cookieValue.c_str());
   setHeader(clientRespBufp, clientRespHdrLoc, 
TS_MIME_FIELD_SET_COOKIE, TS_MIME_LEN_SET_COOKIE, cookieValue.c_str(),
-cookieValue.size());
+cookieValue.size(), /* duplicateOk = */ true);
 
   delete token;
 } else {
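The ``duplicateOk`` flag changes ``setHeader()`` from "create only if missing" to "always append", which is what multiple ``Set-Cookie`` response headers require. A small Python model (headers as an ordered list of name/value pairs; replacing the existing field in the non-duplicate case is an assumption about the unpatched behavior):

```python
def set_header(headers, name, value, duplicate_ok=False):
    # headers: list of (name, value) tuples, order preserved.
    idx = next((i for i, (n, _) in enumerate(headers)
                if n.lower() == name.lower()), None)
    if idx is None or duplicate_ok:
        headers.append((name, value))   # no existing header, or duplicates ok
    else:
        headers[idx] = (name, value)    # assumed: existing field is replaced

resp = []
set_header(resp, "Set-Cookie", "a=1; path=/; Secure; HttpOnly", duplicate_ok=True)
set_header(resp, "Set-Cookie", "b=2; path=/; Secure; HttpOnly", duplicate_ok=True)
print(len(resp))  # 2
```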



[trafficserver] branch master updated: access_control: reduced some log errs to debug

2018-11-09 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new b7b8785  access_control: reduced some log errs to debug
b7b8785 is described below

commit b7b87850d9e39572741487dcdd3963c54bed45a9
Author: Gancho Tenev 
AuthorDate: Wed Nov 7 14:14:54 2018 -0800

access_control: reduced some log errs to debug

Reduced some errors to debug if they can happen during normal operation
and added some extra debugging info for the unexpected ones.
---
 plugins/experimental/access_control/plugin.cc | 18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/plugins/experimental/access_control/plugin.cc 
b/plugins/experimental/access_control/plugin.cc
index bcc68f9..5f003e3 100644
--- a/plugins/experimental/access_control/plugin.cc
+++ b/plugins/experimental/access_control/plugin.cc
@@ -32,6 +32,8 @@
 #include "utils.h"  /* cryptoBase64Decode.* functions */
 #include "headers.h"/* getHeader, setHeader, removeHeader */
 
+static const std::string_view UNKNOWN{"unknown"};
+
 static const char *
 getEventName(TSEvent event)
 {
@@ -415,12 +417,18 @@ contHandleAccessControl(const TSCont contp, TSEvent 
event, void *edata)
 
   TSHandleMLocRelease(serverRespBufp, TS_NULL_MLOC, serverRespHdrLoc);
 } else {
-  AccessControlError("failed to retrieve server response header");
+  int len;
+  char *url = TSHttpTxnEffectiveUrlStringGet(txnp, );
+  AccessControlError("failed to retrieve server response header for 
request url:%.*s",
+ (len ? len : static_cast(UNKNOWN.size())), 
(url ? url : UNKNOWN.data()));
 }
 
 TSHandleMLocRelease(clientRespBufp, TS_NULL_MLOC, clientRespHdrLoc);
   } else {
-AccessControlError("failed to retrieve client response header");
+int len;
+char *url = TSHttpTxnEffectiveUrlStringGet(txnp, );
+AccessControlError("failed to retrieve client response header for 
request url:%.*s",
+   (len ? len : static_cast(UNKNOWN.size())), 
(url ? url : UNKNOWN.data()));
   }
 }
   } break;
@@ -583,7 +591,7 @@ TSRemapDoRemap(void *instance, TSHttpTxn txnp, 
TSRemapRequestInfo *rri)
 String pattern;
 if (config->_uriPathScope.empty()) {
   /* Scope match enforce access control */
-  AccessControlError("no plugin scope specified, enforcing access 
control");
+  AccessControlDebug("no plugin scope specified, enforcing access 
control");
   remapStatus = enforceAccessControl(txnp, rri, config);
 } else {
   if (true == config->_uriPathScope.matchAll(reqPath, filename, 
pattern)) {
@@ -592,13 +600,13 @@ TSRemapDoRemap(void *instance, TSHttpTxn txnp, 
TSRemapRequestInfo *rri)
 /* Scope match enforce access control */
 remapStatus = enforceAccessControl(txnp, rri, config);
   } else {
-AccessControlError("not matching plugin scope (file: %s, pattern 
%s), skipping access control for path '%s'",
+AccessControlDebug("not matching plugin scope (file: %s, pattern 
%s), skipping access control for path '%s'",
filename.c_str(), pattern.c_str(), 
reqPath.c_str());
   }
 }
   } else {
 TSHttpTxnStatusSet(txnp, config->_invalidRequest);
-AccessControlError("https is the only allowed scheme (plugin should be 
used only with TLS)");
+AccessControlDebug("https is the only allowed scheme (plugin should be 
used only with TLS)");
 remapStatus = TSREMAP_DID_REMAP;
   }
 } else {



[trafficserver] branch master updated: s3_auth_v4: update default region map

2018-10-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 13c827e  s3_auth_v4: update default region map
13c827e is described below

commit 13c827e57fbe042d3113c24861e60bfb1d13fc11
Author: Gancho Tenev 
AuthorDate: Mon Oct 8 13:31:04 2018 -0700

s3_auth_v4: update default region map

Update the default region mapping based on the latest changes in AWS
Regions and Endpoints spec:
https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
Maybe if one day AWS naming/mapping becomes 100% consistent we could
just extract / calculate the right region from hostname.

The default region mapping can be overridden by using --v4-region-map
parameter (more info in the s3 plugin docs).
---
 plugins/s3_auth/aws_auth_v4.cc | 34 --
 1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/plugins/s3_auth/aws_auth_v4.cc b/plugins/s3_auth/aws_auth_v4.cc
index da5bbe9..61f595d 100644
--- a/plugins/s3_auth/aws_auth_v4.cc
+++ b/plugins/s3_auth/aws_auth_v4.cc
@@ -401,6 +401,8 @@ getCanonicalRequestSha256Hash(TsInterface , bool 
signPayload, const StringSe
  * @see http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
  * it is used to get the region programmatically  w/o configuration
  * parameters and can (meant to) be overwritten if necessary.
+ * @todo maybe if one day AWS naming/mapping becomes 100% consistent
+ * we could just extract (calculate) the right region from hostname.
  */
 const StringMap
 createDefaultRegionMap()
@@ -412,47 +414,67 @@ createDefaultRegionMap()
   m["s3.dualstack.us-east-2.amazonaws.com"] = "us-east-2";
   /* "us-east-1" */
   m["s3.amazonaws.com"] = "us-east-1";
+  m["s3.us-east-1.amazonaws.com"]   = "us-east-1";
   m["s3-external-1.amazonaws.com"]  = "us-east-1";
   m["s3.dualstack.us-east-1.amazonaws.com"] = "us-east-1";
   /* us-west-1 */
+  m["s3.us-west-1.amazonaws.com"]   = "us-west-1";
   m["s3-us-west-1.amazonaws.com"]   = "us-west-1";
   m["s3.dualstack.us-west-1.amazonaws.com"] = "us-west-1";
   /* us-west-2 */
+  m["s3.us-west-2.amazonaws.com"]   = "us-west-2";
   m["s3-us-west-2.amazonaws.com"]   = "us-west-2";
   m["s3.dualstack.us-west-2.amazonaws.com"] = "us-west-2";
-  /* ca-central-1 */
-  m["s3.ca-central-1.amazonaws.com"]   = "ca-central-1";
-  m["s3-ca-central-1.amazonaws.com"]   = "ca-central-1";
-  m["s3.dualstack.ca-central-1.amazonaws.com"] = "ca-central-1";
   /* ap-south-1 */
   m["s3.ap-south-1.amazonaws.com"]   = "ap-south-1";
   m["s3-ap-south-1.amazonaws.com"]   = "ap-south-1";
   m["s3.dualstack.ap-south-1.amazonaws.com"] = "ap-south-1";
+  /* ap-northeast-3 */
+  m["s3.ap-northeast-3.amazonaws.com"]   = "ap-northeast-3";
+  m["s3-ap-northeast-3.amazonaws.com"]   = "ap-northeast-3";
+  m["s3.dualstack.ap-northeast-3.amazonaws.com"] = "ap-northeast-3";
   /* ap-northeast-2 */
   m["s3.ap-northeast-2.amazonaws.com"]   = "ap-northeast-2";
   m["s3-ap-northeast-2.amazonaws.com"]   = "ap-northeast-2";
   m["s3.dualstack.ap-northeast-2.amazonaws.com"] = "ap-northeast-2";
   /* ap-southeast-1 */
+  m["s3.ap-southeast-1.amazonaws.com"]   = "ap-southeast-1";
   m["s3-ap-southeast-1.amazonaws.com"]   = "ap-southeast-1";
   m["s3.dualstack.ap-southeast-1.amazonaws.com"] = "ap-southeast-1";
   /* ap-southeast-2 */
+  m["s3.ap-southeast-2.amazonaws.com"]   = "ap-southeast-2";
   m["s3-ap-southeast-2.amazonaws.com"]   = "ap-southeast-2";
   m["s3.dualstack.ap-southeast-2.amazonaws.com"] = "ap-southeast-2";
   /* ap-northeast-1 */
+  m["s3.ap-northeast-1.amazonaws.com"]   = "ap-northeast-1";
   m["s3-ap-northeast-1.amazonaws.com"]   = "ap-northeast-1";
   m["s3.dualstack.ap-northeast-1.amazonaws.com"] = "ap-northeast-1";
+  /* ca-central-1 */
+  m["s3.ca-central-1.amazonaws.com"]   = "ca-central-1";
+  m["s3-ca-central-1.amazonaws.com"]   = "ca-central-1";
+  m["s3.dualstack.ca-central-1.amazonaws.com&

[trafficserver] branch master updated: url_sig: fixed unit-test for remapped url.

2019-04-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 233aa44  url_sig: fixed unit-test for remapped url.
233aa44 is described below

commit 233aa44353c90a94830455f68bb7ad18635e80ac
Author: Gancho Tenev 
AuthorDate: Mon Feb 25 14:57:50 2019 -0800

url_sig: fixed unit-test for remapped url.

When the plugin does not use @pparam=pristineurl it should work
with the remapped url. The code already does this but the unit-test
still tests the signing of the pristine url in those specific use
cases. This is related to a behavior change with v9.0 where the
1st plugin gets the remapped url as a remap API requestUrl
regardless of its position in the plugin chain (PR #4964).
---
 .../gold_tests/pluginTest/url_sig/url_sig.test.py  | 37 +++---
 1 file changed, 25 insertions(+), 12 deletions(-)

diff --git a/tests/gold_tests/pluginTest/url_sig/url_sig.test.py 
b/tests/gold_tests/pluginTest/url_sig/url_sig.test.py
index e4a5819..3b83837 100644
--- a/tests/gold_tests/pluginTest/url_sig/url_sig.test.py
+++ b/tests/gold_tests/pluginTest/url_sig/url_sig.test.py
@@ -16,6 +16,8 @@
 #  See the License for the specific language governing permissions and
 #  limitations under the License.
 
+import hashlib
+import hmac
 import os
 import subprocess
 Test.Summary = '''
@@ -194,16 +196,6 @@ tr.Processes.Default.Command = (
 
 # Success tests.
 
-# No client / SHA1 / P=1 / URL not pristine / URL not altered.
-#
-tr = Test.AddTestRun()
-tr.Processes.Default.ReturnCode = 0
-tr.Processes.Default.Command = (
-"curl --verbose --proxy http://127.0.0.1:{} 
'http://one.two.three/".format(ts.Variables.port) +
-
"foo/abcde/qrstuvwxyz?E=33046618506&A=1&K=7&P=1&S=acae22b0e1ba6ea6fbb5d26018dbf152558e98cb'"
 +
-LogTee
-)
-
 # With client / SHA1 / P=1 / URL pristine / URL not altered.
 #
 tr = Test.AddTestRun()
@@ -244,13 +236,34 @@ tr.Processes.Default.Command = (
 LogTee
 )
 
+def sign(payload, key):
+  secret=bytes(key,'utf-8')
+  data=bytes(payload, 'utf-8')
+  md=bytes(hmac.new(secret, data, digestmod=hashlib.sha1).digest().hex(), 'utf-8')
+  return md.decode("utf-8")
+
+# No client / SHA1 / P=1 / URL not pristine / URL not altered.
+#
+path="foo/abcde/qrstuvwxyz?E=33046618506=1=7=1="
+to_sign="127.0.0.1:{}/".format(server.Variables.Port) + path
+url="http://one.two.three/" + path + sign(to_sign, "dqsgopTSM_doT6iAysasQVUKaPykyb6e")
+
+tr = Test.AddTestRun()
+tr.Processes.Default.ReturnCode = 0
+tr.Processes.Default.Command = (
+"curl --verbose --proxy http://127.0.0.1:{} '{}'".format(ts.Variables.port, url) + LogTee
+)
+
 # No client / SHA1 / P=1 / URL not pristine / URL not altered -- HTTPS.
 #
+path="foo/abcde/qrstuvwxyz?E=33046618506=1=7=1="
+to_sign="127.0.0.1:{}/".format(server.Variables.Port) + path
+url="https://127.0.0.1:{}/".format(ts.Variables.ssl_port) + path + sign(to_sign, "dqsgopTSM_doT6iAysasQVUKaPykyb6e")
+
 tr = Test.AddTestRun()
 tr.Processes.Default.ReturnCode = 0
 tr.Processes.Default.Command = (
-"curl --verbose --http1.1 --insecure --header 'Host: one.two.three' 
'https://127.0.0.1:{}/".format(ts.Variables.ssl_port) +
-
"foo/abcde/qrstuvwxyz?E=33046618506=1=7=1=acae22b0e1ba6ea6fbb5d26018dbf152558e98cb'"
 +
+"curl --verbose --http1.1 --insecure --header 'Host: one.two.three' '{}'".format(url) +
 LogTee + " ; grep -F -e '< HTTP' -e Authorization {0}/url_sig_long.log > 
{0}/url_sig_short.log ".format(ts.RunDirectory)
 )
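The sign() helper added by this test computes a hex HMAC-SHA1 token over the URL to be signed. A minimal standalone sketch of the same computation (the key and payload here are arbitrary illustrative values, not real signing secrets):

```python
import hashlib
import hmac

def sign(payload: str, key: str) -> str:
    # Hex HMAC-SHA1 digest of the payload keyed with the shared secret,
    # equivalent to the sign() helper in the test above.
    return hmac.new(key.encode("utf-8"), payload.encode("utf-8"),
                    digestmod=hashlib.sha1).hexdigest()

token = sign("127.0.0.1:8080/foo/abcde?E=1", "example-secret")
print(len(token))  # 40 hex characters
```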
 



[trafficserver] branch master updated: s3_auth:fixed uncaught exception on invalid input

2019-02-18 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 953cde6  s3_auth:fixed uncaught exception on invalid input
953cde6 is described below

commit 953cde65b015e0f914fabdc79153fb9d694b18aa
Author: Gancho Tenev 
AuthorDate: Sat Feb 16 09:32:07 2019 -0800

s3_auth:fixed uncaught exception on invalid input

Instead of fixing the URI decoding functionality to handle the exception,
(re)implemented a check for percent encoding, which was what was needed.

During signature calculation AWS avoids URI encoding of already encoded
query parameters (rfc3986#section-2.4 says "implementations must not
percent-encode or decode the same string more than once ...")
---
 plugins/s3_auth/aws_auth_v4.cc | 63 
 plugins/s3_auth/unit_tests/test_aws_auth_v4.cc | 79 +++---
 plugins/s3_auth/unit_tests/test_aws_auth_v4.h  |  2 +-
 3 files changed, 101 insertions(+), 43 deletions(-)
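The percent-encoding check introduced by this commit (every '%' must be followed by two hex digits, and characters the encoder would have escaped must not appear literally) can be sketched in Python. This is an illustrative model of the C++ isUriEncoded() below, not the plugin code itself:

```python
import string

def is_uri_encoded(s: str, is_object_name: bool = False) -> bool:
    """Return True only if s looks already AWS-sigv4 percent-encoded."""
    unreserved = set(string.ascii_letters + string.digits + "-_.~")
    for i, c in enumerate(s):
        if c in unreserved:
            continue                      # never encoded; keep scanning
        if c == " ":
            return False                  # would have been %20 if encoded
        if c == "/" and not is_object_name:
            return False                  # '/' is escaped outside object names
        if c == "%":
            if i + 2 < len(s) and s[i + 1] in string.hexdigits and s[i + 2] in string.hexdigits:
                return True               # valid %XX escape found
            return False                  # lone '%' should have been %25
    return False

print(is_uri_encoded("a%20b"), is_uri_encoded("a b"))  # True False
```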

diff --git a/plugins/s3_auth/aws_auth_v4.cc b/plugins/s3_auth/aws_auth_v4.cc
index 61f595d..1df1a61 100644
--- a/plugins/s3_auth/aws_auth_v4.cc
+++ b/plugins/s3_auth/aws_auth_v4.cc
@@ -103,32 +103,52 @@ uriEncode(const String &in, bool isObjectName)
 }
 
 /**
- * @brief URI-decode a character string (AWS specific version, see spec)
+ * @brief checks if the string is URI-encoded (AWS specific encoding version, 
see spec)
  *
  * @see AWS spec: 
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
  *
- * @todo Consider reusing / converting to TSStringPercentDecode()
- *   Currently we don't build a library/archive so we could link with the 
unit-test binary. Also using
- *   different sets of encode/decode functions during runtime and 
unit-testing did not seem as a good idea.
- * @param in string to be URI decoded
- * @return encoded string.
+ * @note According to the following RFC if the string is encoded and contains 
'%' it should
+ *   be followed by 2 hexadecimal symbols otherwise '%' should be encoded 
with %25:
+ *  https://tools.ietf.org/html/rfc3986#section-2.1
+ *
+ * @param in string to be URI checked
+ * @param isObjectName if true, the encoding did not encode '/' but kept it as is.
+ * @return true if encoded, false if not encoded.
  */
-String
-uriDecode(const String &in)
+bool
+isUriEncoded(const String &in, bool isObjectName)
 {
-  std::string result;
-  result.reserve(in.length());
-  size_t i = 0;
-  while (i < in.length()) {
-if (in[i] == '%') {
-  result += static_cast<char>(std::stoi(in.substr(i + 1, 2), nullptr, 16));
-  i += 3;
-} else {
-  result += in[i];
-  i++;
+  for (size_t pos = 0; pos < in.length(); pos++) {
+char c = in[pos];
+
+if (isalnum(c) || c == '-' || c == '_' || c == '.' || c == '~') {
+  /* found an unreserved character which should not have been encoded 
regardless
+   * 'A'-'Z', 'a'-'z', '0'-'9', '-', '.', '_', and '~'.  */
+  continue;
+}
+
+if (' ' == c) {
+  /* space should have been encoded with %20 if the string was encoded */
+  return false;
+}
+
+if ('/' == c && !isObjectName) {
+  /* if this is not an object name '/' should have been encoded */
+  return false;
+}
+
+if ('%' == c) {
+  if (pos + 2 < in.length() && std::isxdigit(in[pos + 1]) && 
std::isxdigit(in[pos + 2])) {
+/* if string was encoded we should have exactly 2 hexadecimal chars 
following it */
+return true;
+  } else {
+/* lonely '%' should have been encoded with %25 according to the RFC 
so likely not encoded */
+return false;
+  }
 }
   }
-  return result;
+
+  return false;
 }
 
 /**
@@ -290,10 +310,7 @@ getCanonicalRequestSha256Hash(TsInterface &api, bool 
signPayload, const StringSe
 
 paramNames.insert(encodedParam);
 
-/* Look for '%' first trying to avoid as many uri-decode calls as possible.
- * it is hard to estimate which is more likely use-case - (1) URIs with 
uri-encoded query parameter
- * values or (2) with unencoded which defines the success of this 
optimization */
-if (nullptr == memchr(value.c_str(), '%', value.length()) || 0 == 
uriDecode(value).compare(value)) {
+if (!isUriEncoded(value, /* isObjectName */ false)) {
   /* Not URI-encoded */
   paramsMap[encodedParam] = uriEncode(value, /* isObjectName */ false);
 } else {
diff --git a/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc 
b/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
index 7dd4f60..025cc2b 100644
--- a/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
+++ b/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
@@ -69,37 +69,78 @@ TEST_CASE("uriEncode(): encode reserved chars in an object 
name", "[AWS][auth][u
   
CHECK_FALSE(encoded.com

[trafficserver] branch master updated: HdrHeap default size unit test fix.

2019-02-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 6565396  HdrHeap default size unit test fix.
6565396 is described below

commit 65653961072d9e6d56570e5a744b7499ee6b1b22
Author: Gancho Tenev 
AuthorDate: Mon Feb 25 15:30:46 2019 -0800

HdrHeap default size unit test fix.

After refactoring "HdrHeap refresh" in PR #4953 the unit test needs
to be updated as well. Changing HDR_HEAP_DEFAULT_SIZE to
HdrHeap::DEFAULT_SIZE
---
 doc/developer-guide/core-architecture/heap.en.rst | 4 ++--
 proxy/hdrs/unit_tests/test_Hdrs.cc| 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/developer-guide/core-architecture/heap.en.rst 
b/doc/developer-guide/core-architecture/heap.en.rst
index e03b5ad..31bee10 100644
--- a/doc/developer-guide/core-architecture/heap.en.rst
+++ b/doc/developer-guide/core-architecture/heap.en.rst
@@ -132,10 +132,10 @@ Classes
 
 .. function:: HdrHeap * new_HdrHeap(int n)
 
-   Create and return a new instance of :class:`HdrHeap`. If :arg:`n` is less 
than ``HDR_HEAP_DEFAULT_SIZE``
+   Create and return a new instance of :class:`HdrHeap`. If :arg:`n` is less 
than ``HdrHeap::DEFAULT_SIZE``
it is increased to that value.
 
-   If the allocated size is ``HDR_HEAP_DEFAULT_SIZE`` (or smaller and upsized 
to that value) then
+   If the allocated size is ``HdrHeap::DEFAULT_SIZE`` (or smaller and upsized 
to that value) then
the instance is allocated from a thread local pool via 
:code:`hdrHeapAllocator`. If larger it
is allocated from global memory via :code:`ats_malloc`.
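The sizing rule documented above can be sketched as a small model (DEFAULT_SIZE here is an assumed placeholder value for illustration, not the real HdrHeap constant):

```python
DEFAULT_SIZE = 2048  # assumed value for illustration only

def new_hdr_heap(n):
    # Mirror of the documented rule: undersized requests are bumped up
    # to DEFAULT_SIZE and served from the thread-local pool; anything
    # larger comes from the global allocator.
    size = max(n, DEFAULT_SIZE)
    source = "thread-local pool" if size == DEFAULT_SIZE else "global malloc"
    return size, source

print(new_hdr_heap(100))   # (2048, 'thread-local pool')
print(new_hdr_heap(4096))  # (4096, 'global malloc')
```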
 
diff --git a/proxy/hdrs/unit_tests/test_Hdrs.cc 
b/proxy/hdrs/unit_tests/test_Hdrs.cc
index a43503a..790418c 100644
--- a/proxy/hdrs/unit_tests/test_Hdrs.cc
+++ b/proxy/hdrs/unit_tests/test_Hdrs.cc
@@ -68,7 +68,7 @@ TEST_CASE("HdrTest", "[proxy][hdrtest]")
 
   for (auto const  : tests) {
 HTTPHdr req_hdr;
-HdrHeap *heap = new_HdrHeap(HDR_HEAP_DEFAULT_SIZE + 64); // extra to 
prevent proxy allocation.
+HdrHeap *heap = new_HdrHeap(HdrHeap::DEFAULT_SIZE + 64); // extra to 
prevent proxy allocation.
 
 req_hdr.create(HTTP_TYPE_REQUEST, heap);
 



[trafficserver] branch master updated: Rewrite URL before all remap plugins run

2019-02-26 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new fdf7688  Rewrite URL before all remap plugins run
fdf7688 is described below

commit fdf76885b5cf5403cfc008c307d6a2034b248422
Author: Gancho Tenev 
AuthorDate: Fri Feb 8 16:17:19 2019 -0800

Rewrite URL before all remap plugins run

Rewriting the url *before* running all plugins instead of *after*,
which guarantees that:
- all plugins would get the same TSRemapRequestInfo::requestUrl
  (first plugin in the chain would not be special)
- all plugins would treat TSRemapRequestInfo::requestUrl the same
  way consistently as a *remapped* URL which makes the first plugin
  really not different from the rest
- there would be a remapped URL default in case the remap rule had
  no plugins OR none of the plugins modified the mapped URL

Also turning off url_sig and cookie_remap plugin unit-tests impacted
by this backwards-incompatible change.
---
 proxy/http/remap/RemapPlugins.cc  | 11 +--
 .../pluginTest/cookie_remap/collapseslashes.test.py   |  1 +
 tests/gold_tests/pluginTest/cookie_remap/connector.test.py|  1 +
 tests/gold_tests/pluginTest/cookie_remap/matrixparams.test.py |  1 +
 tests/gold_tests/pluginTest/cookie_remap/subcookie.test.py|  1 +
 tests/gold_tests/pluginTest/cookie_remap/substitute.test.py   |  1 +
 tests/gold_tests/pluginTest/url_sig/url_sig.test.py   |  2 ++
 7 files changed, 12 insertions(+), 6 deletions(-)
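The ordering change can be illustrated with a toy model (hypothetical names; a sketch of the control flow, not the ATS code): the rule's rewrite now runs before the plugin chain, so every plugin starts from the remapped URL.

```python
def run_remap(rule_target, plugins):
    # Apply the mapping rule first, so every plugin in the chain sees
    # the same *remapped* URL (the first plugin is no longer special),
    # and a default remapped URL exists even if no plugin modifies it.
    url = rule_target
    for plugin in plugins:
        url = plugin(url)
    return url

plugins = [lambda u: u + "/a", lambda u: u + "/b"]
print(run_remap("http://origin.example/path", plugins))
# http://origin.example/path/a/b
```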

diff --git a/proxy/http/remap/RemapPlugins.cc b/proxy/http/remap/RemapPlugins.cc
index 8900e54..7a3293b 100644
--- a/proxy/http/remap/RemapPlugins.cc
+++ b/proxy/http/remap/RemapPlugins.cc
@@ -88,6 +88,11 @@ RemapPlugins::run_single_remap()
   Debug("url_rewrite", "running single remap rule id %d for the %d%s time", 
map->map_id, _cur,
 _cur == 1 ? "st" : _cur == 2 ? "nd" : _cur == 3 ? "rd" : "th");
 
+  if (0 == _cur) {
+Debug("url_rewrite", "setting the remapped url by copying from mapping 
rule");
+url_rewrite_remap_request(_s->url_map, _request_url, 
_s->hdr_info.client_request.method_get_wksidx());
+  }
+
   // There might not be a plugin if we are a regular non-plugin map rule. In 
that case, we will fall through
   // and do the default mapping and then stop.
   if (plugin) {
@@ -110,12 +115,6 @@ RemapPlugins::run_single_remap()
   Debug("url_rewrite", "completed single remap, attempting another via 
immediate callback");
   zret = false; // not done yet.
 }
-
-// If the chain is finished, and the URL hasn't been rewritten, do the 
rule remap.
-if (zret && 0 == _rewritten) {
-  Debug("url_rewrite", "plugins did not change host, port or path, copying 
from mapping rule");
-  url_rewrite_remap_request(_s->url_map, _request_url, 
_s->hdr_info.client_request.method_get_wksidx());
-}
   }
   return zret;
 }
diff --git a/tests/gold_tests/pluginTest/cookie_remap/collapseslashes.test.py 
b/tests/gold_tests/pluginTest/cookie_remap/collapseslashes.test.py
index 7f6db2a..77d823a 100644
--- a/tests/gold_tests/pluginTest/cookie_remap/collapseslashes.test.py
+++ b/tests/gold_tests/pluginTest/cookie_remap/collapseslashes.test.py
@@ -27,6 +27,7 @@ Test.SkipUnless(
 )
 Test.ContinueOnFail = True
 Test.testName = "cookie_remap: plugin collapses consecutive slashes"
+Test.SkipIf(Condition.true("Test is temporarily turned off, to be fixed 
according to an incompatible plugin API change (PR #4964)"))
 
 # Define default ATS
 ts = Test.MakeATSProcess("ts")
diff --git a/tests/gold_tests/pluginTest/cookie_remap/connector.test.py 
b/tests/gold_tests/pluginTest/cookie_remap/connector.test.py
index 2f7c18e..24edae3 100644
--- a/tests/gold_tests/pluginTest/cookie_remap/connector.test.py
+++ b/tests/gold_tests/pluginTest/cookie_remap/connector.test.py
@@ -27,6 +27,7 @@ Test.SkipUnless(
 )
 Test.ContinueOnFail = True
 Test.testName = "cookie_remap: test connector"
+Test.SkipIf(Condition.true("Test is temporarily turned off, to be fixed 
according to an incompatible plugin API change (PR #4964)"))
 
 # Define default ATS
 ts = Test.MakeATSProcess("ts")
diff --git a/tests/gold_tests/pluginTest/cookie_remap/matrixparams.test.py 
b/tests/gold_tests/pluginTest/cookie_remap/matrixparams.test.py
index 68529ba..45ecb7b 100644
--- a/tests/gold_tests/pluginTest/cookie_remap/matrixparams.test.py
+++ b/tests/gold_tests/pluginTest/cookie_remap/matrixparams.test.py
@@ -27,6 +27,7 @@ Test.SkipUnless(
 )
 Test.ContinueOnFail = True
 Test.testName = "cookie_remap: Tests when matrix parameters are pr

[trafficserver] branch max-life-logs updated (c597de5 -> 4b620b5)

2019-07-11 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch max-life-logs
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


 discard c597de5  options to roll empty logs and log trimming
 add 4b620b5  Options to roll empty logs and log trimming

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c597de5)
\
 N -- N -- N   refs/heads/max-life-logs (4b620b5)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 doc/admin-guide/files/records.config.en.rst | 2 +-
 doc/admin-guide/logging/rotation.en.rst | 2 +-
 proxy/logging/LogFile.cc| 2 +-
 proxy/logging/LogObject.cc  | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)



[trafficserver] 01/01: options to roll empty logs and log trimming

2019-07-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch max-life-logs
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit 74840f9ecbcfcac659cef0729faf1766f39a3652
Author: Gancho Tenev 
AuthorDate: Fri Jul 5 17:09:15 2019 -0700

options to roll empty logs and log trimming

Added 2 options:
- proxy.config.log.rolling_allow_empty - ability to roll empty logs
(i.e. rolling logs without traffic)
- proxy.config.log.rolling_max_count - trimming logs to a certain
number of rolled files on each rolling
More info in records.config.en.rst and rotation.en.rst.
---
 doc/admin-guide/files/records.config.en.rst | 25 
 doc/admin-guide/logging/rotation.en.rst | 23 
 mgmt/RecordsConfig.cc   |  4 ++
 proxy/logging/LogConfig.cc  | 14 -
 proxy/logging/LogConfig.h   |  2 +
 proxy/logging/LogFile.cc| 89 -
 proxy/logging/LogFile.h |  3 +-
 proxy/logging/LogObject.cc  | 27 +++--
 proxy/logging/LogObject.h   | 16 +++---
 proxy/logging/YamlLogConfig.cc  | 19 --
 src/traffic_server/InkAPI.cc|  3 +-
 11 files changed, 202 insertions(+), 23 deletions(-)

diff --git a/doc/admin-guide/files/records.config.en.rst 
b/doc/admin-guide/files/records.config.en.rst
index 11107d1..833802b 100644
--- a/doc/admin-guide/files/records.config.en.rst
+++ b/doc/admin-guide/files/records.config.en.rst
@@ -273,6 +273,31 @@ System Variables
order of auto-deletion (if enabled). A default value of 0 means 
auto-deletion will try to keep
output logs as much as possible. See :doc:`../logging/rotation.en` for 
guidance.
 
+.. ts:cv:: CONFIG proxy.config.output.logfile.rolling_max_count INT 0
+   :reloadable:
+
+   Specifies the maximum count of rolled output logs to keep. This value will 
be used by the
+   auto-deletion (if enabled) to trim the number of rolled log files every 
time the log is rolled.
+   A default value of 0 means auto-deletion will not try to limit the number 
of output logs.
+   See :doc:`../logging/rotation.en` for a use-case for this option.
+
+.. ts:cv:: CONFIG proxy.config.output.logfile.rolling_allow_empty INT 0
+   :reloadable:
+
+   While rolling, the default behavior is to rename, close and re-open the log 
file *only* when/if there is
+   something to log to the log file. This option opens a new log file right 
after rolling even if there
+   is nothing to log (i.e. nothing to be logged due to lack of requests to the 
server)
+   which may lead to 0-sized log files while rolling. See 
:doc:`../logging/rotation.en` for a use-case
+   for this option.
+
+   = ==
+   Value Description
+   = ==
+   ``0`` No empty log files created and rolled if there was nothing to log
+   ``1`` Allow empty log files to be created and rolled even if there was 
nothing to log
+   = ==
+
+
 Thread Variables
 
 
diff --git a/doc/admin-guide/logging/rotation.en.rst 
b/doc/admin-guide/logging/rotation.en.rst
index eaed409..a4861d5 100644
--- a/doc/admin-guide/logging/rotation.en.rst
+++ b/doc/admin-guide/logging/rotation.en.rst
@@ -245,3 +245,26 @@ To set log management options, follow the steps below:
 #. Run the command :option:`traffic_ctl config reload` to apply the 
configuration
changes.
 
+
+Retaining Logs For No More Than a Specified Period
+--
+
+If for security reasons logs need to be purged to make sure no log entry 
remains on the box
+for more than a specified period of time, we could achieve this by setting the 
following variables.
+
+Let us say we want the oldest log entry kept on the box to be no older 
than 2 hours.
+
+Set :ts:cv:`proxy.config.output.logfile.rolling_interval_sec` (yaml: 
`rolling_interval_sec`) to 3600 (1h)
+which will lead to rolling every 1h.
+
+Set :ts:cv:`proxy.config.output.logfile.rolling_max_count` (yaml: 
`rolling_max_count`) to 1
+which will lead to keeping only one rolled log file at any moment (rolled will 
be trimmed on every roll).
+
+Set :ts:cv:`proxy.config.output.logfile.rolling_allow_empty` (yaml: 
`rolling_allow_empty`)  to 1 (default: 0)
+which will allow logs to be open and rolled even if there was nothing to be 
logged during the previous period 
+(i.e. no requests to the traffic server).
+
+The above will ensure logs are rolled every 1 hour, only 1 rolled log file to 
be kept
+(rest will be trimmed/removed) and logs will be rolling ("moving") even if 
nothing is logged
+(i.e. no traffic to the traffic server).
+
diff --git a/mgmt/RecordsConfig.cc b/mgmt/RecordsConfig.cc
index 9d4a2a
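The retention recipe above bounds the age of the oldest log entry. A quick sanity-check of the arithmetic (a sketch, not Traffic Server code): an entry lives at most one interval in the active log, plus one interval per retained rolled file before trimming removes it.

```python
def max_entry_age_sec(rolling_interval_sec, rolling_max_count):
    # Worst case: an entry written right after a roll stays up to one
    # interval in the active log, then one more interval per retained
    # rolled file before rolling_max_count trimming removes it.
    return (rolling_max_count + 1) * rolling_interval_sec

print(max_entry_age_sec(3600, 1))  # 7200 -> no entry older than 2 hours
```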

[trafficserver] branch max-life-logs created (now 74840f9)

2019-07-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch max-life-logs
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


  at 74840f9  options to roll empty logs and log trimming

This branch includes the following new commits:

 new 74840f9  options to roll empty logs and log trimming

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[trafficserver] branch master updated: Options to roll empty logs and log trimming

2019-07-16 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new b81422a  Options to roll empty logs and log trimming
b81422a is described below

commit b81422ac723daca585fbe24247832b33309b21a9
Author: Gancho Tenev 
AuthorDate: Fri Jul 5 17:09:15 2019 -0700

Options to roll empty logs and log trimming

Added 2 options:
- proxy.config.log.rolling_allow_empty - ability to roll empty logs
(i.e. rolling logs without traffic)
- proxy.config.log.rolling_max_count - trimming logs to a certain
number of rolled files on each rolling
More info in records.config.en.rst and rotation.en.rst.
---
 doc/admin-guide/files/records.config.en.rst | 25 
 doc/admin-guide/logging/rotation.en.rst | 24 
 mgmt/RecordsConfig.cc   |  4 ++
 proxy/logging/LogConfig.cc  | 14 -
 proxy/logging/LogConfig.h   |  2 +
 proxy/logging/LogFile.cc| 89 -
 proxy/logging/LogFile.h |  3 +-
 proxy/logging/LogObject.cc  | 25 ++--
 proxy/logging/LogObject.h   | 16 +++---
 proxy/logging/YamlLogConfig.cc  | 19 --
 src/traffic_server/InkAPI.cc|  3 +-
 11 files changed, 202 insertions(+), 22 deletions(-)

diff --git a/doc/admin-guide/files/records.config.en.rst 
b/doc/admin-guide/files/records.config.en.rst
index 11107d1..fd965b0 100644
--- a/doc/admin-guide/files/records.config.en.rst
+++ b/doc/admin-guide/files/records.config.en.rst
@@ -273,6 +273,31 @@ System Variables
order of auto-deletion (if enabled). A default value of 0 means 
auto-deletion will try to keep
output logs as much as possible. See :doc:`../logging/rotation.en` for 
guidance.
 
+.. ts:cv:: CONFIG proxy.config.output.logfile.rolling_max_count INT 0
+   :reloadable:
+
+   Specifies the maximum count of rolled output logs to keep. This value will 
be used by the
+   auto-deletion (if enabled) to trim the number of rolled log files every 
time the log is rolled.
+   A default value of 0 means auto-deletion will not try to limit the number 
of output logs.
+   See :doc:`../logging/rotation.en` for a use-case for this option.
+
+.. ts:cv:: CONFIG proxy.config.output.logfile.rolling_allow_empty INT 0
+   :reloadable:
+
+   While rolling, the default behavior is to rename, close and re-open the log file 
*only* when/if there is
+   something to log to the log file. This option opens a new log file right 
after rolling even if there
+   is nothing to log (i.e. nothing to be logged due to lack of requests to the 
server)
+   which may lead to 0-sized log files while rolling. See 
:doc:`../logging/rotation.en` for a use-case
+   for this option.
+
+   = ==
+   Value Description
+   = ==
+   ``0`` No empty log files created and rolled if there was nothing to log
+   ``1`` Allow empty log files to be created and rolled even if there was 
nothing to log
+   = ==
+
+
 Thread Variables
 
 
diff --git a/doc/admin-guide/logging/rotation.en.rst 
b/doc/admin-guide/logging/rotation.en.rst
index eaed409..64ed337 100644
--- a/doc/admin-guide/logging/rotation.en.rst
+++ b/doc/admin-guide/logging/rotation.en.rst
@@ -245,3 +245,27 @@ To set log management options, follow the steps below:
 #. Run the command :option:`traffic_ctl config reload` to apply the 
configuration
changes.
 
+
+Retaining Logs For No More Than a Specified Period
+--
+
+If for security reasons logs need to be purged to make sure no log entry 
remains on the box
+for more than a specified period of time, we could achieve this by setting the 
rolling interval,
+the maximum number of rolled log files, and forcing |TS| to roll even when 
there is no traffic.
+
+Let us say we want the oldest log entry kept on the box to be no older 
than 2 hours.
+
+Set :ts:cv:`proxy.config.output.logfile.rolling_interval_sec` (yaml: 
`rolling_interval_sec`) to 3600 (1h)
+which will lead to rolling every 1h.
+
+Set :ts:cv:`proxy.config.output.logfile.rolling_max_count` (yaml: 
`rolling_max_count`) to 1
+which will lead to keeping only one rolled log file at any moment (rolled will 
be trimmed on every roll).
+
+Set :ts:cv:`proxy.config.output.logfile.rolling_allow_empty` (yaml: 
`rolling_allow_empty`) to 1 (default: 0)
+which will allow logs to be open and rolled even if there was nothing to be 
logged during the previous period
+(i.e. no requests to |TS|).
+
+The above will ensure logs are rolled every 1 hour, only 1 rolled log file

[trafficserver] branch master updated: Avoid AWS auth v4 path/query param double encoding

2019-08-16 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 30b75a3  Avoid AWS auth v4 path/query param double encoding
30b75a3 is described below

commit 30b75a3d5cae94bcd105464122a253eb9bbf4a1c
Author: Gancho Tenev 
AuthorDate: Tue Aug 13 11:35:39 2019 -0700

Avoid AWS auth v4 path/query param double encoding

AWS signature validation differs from the spec published here (or the
spec is not detailed enough):
  
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
("Task 1: Create a Canonical Request”)

During signature calculation (CanonicalURI and CanonicalQueryString
inside the CanonicalRequest) AWS avoids URI encoding of already
encoded path or query parameters which is nowhere mentioned in the
specification but it is likely done according to rfc3986#section-2.4
which says "implementations must not percent-encode or decode the
same string more than once ..."

We already had a fix for query parameter values. Added missing
checks to be consistent with AWS behavior while still waiting for
response/confirmation from AWS.
---
 plugins/s3_auth/aws_auth_v4.cc | 31 --
 plugins/s3_auth/unit_tests/test_aws_auth_v4.cc | 11 -
 2 files changed, 25 insertions(+), 17 deletions(-)
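The encode-only-if-not-already-encoded rule that this commit factors into canonicalEncode() can be sketched in Python. Note the assumptions: the %XX-detection regex below is a simplified heuristic compared to the plugin's isUriEncoded(), and quote()'s escaping only approximates the plugin's uriEncode():

```python
import re
from urllib.parse import quote

_ENCODED = re.compile(r"%[0-9A-Fa-f]{2}")

def canonical_encode(value: str, is_object_name: bool = False) -> str:
    # Treat a component containing a valid %XX escape as already
    # encoded and leave it alone, per rfc3986#section-2.4 ("must not
    # percent-encode or decode the same string more than once");
    # otherwise percent-encode it. '/' stays literal in object names.
    if _ENCODED.search(value):
        return value          # already encoded: do not encode twice
    safe = "-_.~" + ("/" if is_object_name else "")
    return quote(value, safe=safe)

print(canonical_encode("a b"))    # a%20b
print(canonical_encode("a%20b"))  # a%20b (unchanged)
```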

diff --git a/plugins/s3_auth/aws_auth_v4.cc b/plugins/s3_auth/aws_auth_v4.cc
index 1df1a61..22e90ad 100644
--- a/plugins/s3_auth/aws_auth_v4.cc
+++ b/plugins/s3_auth/aws_auth_v4.cc
@@ -151,6 +151,22 @@ isUriEncoded(const String &in, bool isObjectName)
   return false;
 }
 
+String
+canonicalEncode(const String &in, bool isObjectName)
+{
+  String canonical;
+  if (!isUriEncoded(in, isObjectName)) {
+/* Not URI-encoded */
+canonical = uriEncode(in, isObjectName);
+  } else {
+/* URI-encoded, then don't encode since AWS does not encode which is not 
mentioned in the spec,
+ * asked AWS, still waiting for confirmation */
+canonical = in;
+  }
+
+  return canonical;
+}
+
 /**
  * @brief trim the white-space character from the beginning and the end of the 
string ("in-place", just moving pointers around)
  *
@@ -287,7 +303,7 @@ getCanonicalRequestSha256Hash(TsInterface &api, bool 
signPayload, const StringSe
   str = api.getPath();
   String path("/");
   path.append(str, length);
-  String canonicalUri = uriEncode(path, /* isObjectName */ true);
+  String canonicalUri = canonicalEncode(path, /* isObjectName */ true);
   sha256Update(, canonicalUri);
   sha256Update(, "\n");
 
@@ -306,18 +322,9 @@ getCanonicalRequestSha256Hash(TsInterface &api, bool 
signPayload, const StringSe
 String param(token.substr(0, pos == String::npos ? token.size() : pos));
 String value(pos == String::npos ? "" : token.substr(pos + 1, 
token.size()));
 
-String encodedParam = uriEncode(param, /* isObjectName */ false);
-
+String encodedParam = canonicalEncode(param, /* isObjectName */ false);
 paramNames.insert(encodedParam);
-
-if (!isUriEncoded(value, /* isObjectName */ false)) {
-  /* Not URI-encoded */
-  paramsMap[encodedParam] = uriEncode(value, /* isObjectName */ false);
-} else {
-  /* URI-encoded, then don't encode since AWS does not encode which is not 
mentioned in the spec,
-   * asked AWS, still waiting for confirmation */
-  paramsMap[encodedParam] = value;
-}
+paramsMap[encodedParam] = canonicalEncode(value, /* isObjectName */ false);
   }
 
   String queryStr;
diff --git a/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc 
b/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
index 408ce21..4bf58af 100644
--- a/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
+++ b/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
@@ -631,7 +631,7 @@ TEST_CASE("AWSAuthSpecByExample: GET Bucket List Objects, 
query param value alre
   MockTsInterface api;
   api._method.assign("GET");
   api._host.assign("examplebucket.s3.amazonaws.com");
-  api._path.assign("");
+  api._path.assign("PATH==");
   api._query.assign("key=TEST==");
   api._headers["Host"] = "examplebucket.s3.amazonaws.com";
   api._headers["x-amz-content-sha256"] = "UNSIGNED-PAYLOAD";
@@ -642,18 +642,18 @@ TEST_CASE("AWSAuthSpecByExample: GET Bucket List Objects, 
query param value alre
 "AWS4-HMAC-SHA256 "
 "Credential=AKIAIOSFODNN7EXAMPLE/20130524/us-east-1/s3/aws4_request,"
 "SignedHeaders=host;x-amz-content-sha256;x-amz-date,"
-
"Signature=60b410f6a0ffe09b91c2aef1f179945916b45ea215278e6b8f6cfb8d461e3706",
+
"Signature=3b195c74eaa89790c596114a9fcb8515d6a3cc78577f8941c46b09ee7f501194"

[trafficserver] branch master updated: make check race condition fix

2019-08-15 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new f0cbc11  make check race condition fix
f0cbc11 is described below

commit f0cbc11e5286ffc18e565c2516da77b37e504d9b
Author: Gancho Tenev 
AuthorDate: Mon Aug 12 17:28:11 2019 -0700

make check race condition fix

Avoid race conditions when creating a unique temporary
unit-test sandbox directory by using mkdtemp().
---
 proxy/http/remap/unit-tests/plugin_testing_common.cc | 13 +
 proxy/http/remap/unit-tests/plugin_testing_common.h  |  3 +++
 proxy/http/remap/unit-tests/test_PluginDso.cc|  9 +
 proxy/http/remap/unit-tests/test_PluginFactory.cc| 13 +
 proxy/http/remap/unit-tests/test_RemapPlugin.cc  | 20 +++-
 5 files changed, 41 insertions(+), 17 deletions(-)
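The same fix expressed in Python (a sketch of the idea, not the C++ change itself): mkdtemp() creates a uniquely named directory atomically, so concurrent unit-test runs can never race on a shared, predictable sandbox path.

```python
import os
import tempfile

# mkdtemp() picks a unique name and creates the directory in one
# atomic step, unlike "build a name, then mkdir it" which can collide.
sandbox = tempfile.mkdtemp(prefix="sandbox_")
print(os.path.isdir(sandbox))  # True

os.rmdir(sandbox)  # clean up the example directory
```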

diff --git a/proxy/http/remap/unit-tests/plugin_testing_common.cc 
b/proxy/http/remap/unit-tests/plugin_testing_common.cc
index d5d08d3..a54422c 100644
--- a/proxy/http/remap/unit-tests/plugin_testing_common.cc
+++ b/proxy/http/remap/unit-tests/plugin_testing_common.cc
@@ -37,3 +37,16 @@ PrintToStdErr(const char *fmt, ...)
   vfprintf(stderr, fmt, args);
   va_end(args);
 }
+
+fs::path
+getTemporaryDir()
+{
+  std::error_code ec;
+  fs::path tmpDir = fs::canonical(fs::temp_directory_path(), ec);
+  tmpDir /= "sandbox_XXXXXX";
+
+  char dirNameTemplate[tmpDir.string().length() + 1];
+  sprintf(dirNameTemplate, "%s", tmpDir.c_str());
+
+  return fs::path(mkdtemp(dirNameTemplate));
+}
diff --git a/proxy/http/remap/unit-tests/plugin_testing_common.h 
b/proxy/http/remap/unit-tests/plugin_testing_common.h
index 12346ea..66a8dd6 100644
--- a/proxy/http/remap/unit-tests/plugin_testing_common.h
+++ b/proxy/http/remap/unit-tests/plugin_testing_common.h
@@ -40,6 +40,9 @@
 
 extern thread_local PluginThreadContext *pluginThreadContext;
 
+/* A temp sandbox to play with our toys used for all fun with this test-bench 
*/
+fs::path getTemporaryDir();
+
 class PluginDebugObject
 {
 public:
diff --git a/proxy/http/remap/unit-tests/test_PluginDso.cc 
b/proxy/http/remap/unit-tests/test_PluginDso.cc
index c31e1d6..092261b 100644
--- a/proxy/http/remap/unit-tests/test_PluginDso.cc
+++ b/proxy/http/remap/unit-tests/test_PluginDso.cc
@@ -39,11 +39,8 @@ thread_local PluginThreadContext *pluginThreadContext;
 
 std::error_code ec;
 
-/* A temp sandbox to play with our toys used for all fun with this test-bench 
*/
-static fs::path tmpDir = fs::canonical(fs::temp_directory_path(), ec);
-
 /* The following are dirs that are used commonly in the unit-tests */
-static fs::path sandboxDir = tmpDir / fs::path("sandbox");
+static fs::path sandboxDir = getTemporaryDir();
 static fs::path runtimeDir = sandboxDir / fs::path("runtime");
 static fs::path searchDir  = sandboxDir / fs::path("search");
 static fs::path pluginBuildDir = fs::current_path() / fs::path("unit-tests/.libs");
@@ -91,6 +88,8 @@ public:
  */
 SCENARIO("loading plugins", "[plugin][core]")
 {
+  REQUIRE_FALSE(sandboxDir.empty());
+
   clean();
   std::string error;
 
@@ -322,6 +321,8 @@ SCENARIO("loading plugins", "[plugin][core]")
  */
 SCENARIO("looking for symbols inside a plugin DSO", "[plugin][core]")
 {
+  REQUIRE_FALSE(sandboxDir.empty());
+
   clean();
   std::string error;
 
diff --git a/proxy/http/remap/unit-tests/test_PluginFactory.cc b/proxy/http/remap/unit-tests/test_PluginFactory.cc
index c75040e..9c2d819 100644
--- a/proxy/http/remap/unit-tests/test_PluginFactory.cc
+++ b/proxy/http/remap/unit-tests/test_PluginFactory.cc
@@ -77,11 +77,8 @@ getDebugObject(const PluginDso )
   }
 }
 
-/* A temp sandbox to play with our toys used for all fun with this test-bench */
-static fs::path tmpDir = fs::canonical(fs::temp_directory_path(), ec);
-
 /* The following are paths that are used commonly in the unit-tests */
-static fs::path sandboxDir = tmpDir / "sandbox";
+static fs::path sandboxDir = getTemporaryDir();
 static fs::path runtimeRootDir = sandboxDir / "runtime";
 static fs::path runtimeDir = runtimeRootDir / tempComponent;
 static fs::path searchDir  = sandboxDir / "search";
@@ -149,6 +146,8 @@ validateSuccessfulConfigPathTest(const RemapPluginInst *pluginInst, const std::s
 
 SCENARIO("loading plugins", "[plugin][core]")
 {
+  REQUIRE_FALSE(sandboxDir.empty());
+
   fs::path effectivePath;
   fs::path runtimePath;
   std::string error;
@@ -249,6 +248,8 @@ SCENARIO("loading plugins", "[plugin][core]")
 
SCENARIO("multiple search dirs + multiple or no plugins installed", "[plugin][core]")
 {
+  REQUIRE_FALSE(sandboxDir.empty());

[trafficserver] branch master updated (a53821e -> af7299d)

2019-08-30 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from a53821e  Bumped the version of master to 10.0.0
 add af7299d  cachekey: added --key-type (for parent selection)

No new revisions were added by this update.

Summary of changes:
 doc/admin-guide/plugins/cachekey.en.rst | 167 +++-
 doc/admin-guide/plugins/index.en.rst|   4 +-
 plugins/cachekey/cachekey.cc|  69 ++---
 plugins/cachekey/cachekey.h |  10 +-
 plugins/cachekey/configs.cc |  62 +++-
 plugins/cachekey/configs.h  |  29 +-
 plugins/cachekey/plugin.cc  |   2 +-
 7 files changed, 248 insertions(+), 95 deletions(-)



[trafficserver] branch master updated (211d6c9 -> 0f86efc)

2019-08-28 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from 211d6c9  static linking asan, tsan, lsan
 add 0f86efc  cachekey: added --canonical-prefix parameter

No new revisions were added by this update.

Summary of changes:
 doc/admin-guide/plugins/cachekey.en.rst | 21 +---
 plugins/cachekey/cachekey.cc| 90 -
 plugins/cachekey/cachekey.h |  3 +-
 plugins/cachekey/configs.cc | 11 
 plugins/cachekey/configs.h  |  6 +++
 plugins/cachekey/plugin.cc  |  2 +-
 6 files changed, 102 insertions(+), 31 deletions(-)



[trafficserver] branch max-life-logs deleted (was 4b620b5)

2019-07-17 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch max-life-logs
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


 was 4b620b5  Options to roll empty logs and log trimming

The revisions that were on this branch are still contained in
other references; therefore, this change does not discard any commits
from the repository.



[trafficserver] branch master updated (c4ce635 -> 70de21d)

2019-09-26 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from c4ce635  Removed hardcoded sni.yaml configuration filename in logs
 add 70de21d  Rearrange config reload notifications

No new revisions were added by this update.

Summary of changes:
 build/plugins.mk   |   2 +-
 doc/developer-guide/api/functions/TSRemap.en.rst   |  14 ++-
 doc/developer-guide/plugins/remap-plugins.en.rst   |  79 --
 include/ts/remap.h |  22 +++-
 proxy/ReverseProxy.cc  |   2 +-
 proxy/http/remap/Makefile.am   |   2 +-
 proxy/http/remap/PluginDso.h   |  10 +-
 proxy/http/remap/PluginFactory.cc  |  75 -
 proxy/http/remap/PluginFactory.h   |   6 +-
 proxy/http/remap/RemapConfig.cc|  15 ++-
 proxy/http/remap/RemapPluginInfo.cc|  37 +--
 proxy/http/remap/RemapPluginInfo.h |  27 +++--
 .../http/remap/unit-tests/plugin_testing_calls.cc  |  11 +-
 .../http/remap/unit-tests/plugin_testing_common.h  |  28 ++---
 proxy/http/remap/unit-tests/test_PluginDso.cc  |   6 +-
 proxy/http/remap/unit-tests/test_PluginFactory.cc  | 119 -
 proxy/http/remap/unit-tests/test_RemapPlugin.cc|   9 +-
 17 files changed, 320 insertions(+), 144 deletions(-)



[trafficserver] branch max-life-logs updated (74840f9 -> c597de5)

2019-07-09 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch max-life-logs
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


 discard 74840f9  options to roll empty logs and log trimming
 add c597de5  options to roll empty logs and log trimming

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (74840f9)
\
 N -- N -- N   refs/heads/max-life-logs (c597de5)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 doc/admin-guide/files/records.config.en.rst | 2 +-
 doc/admin-guide/logging/rotation.en.rst | 9 +
 2 files changed, 6 insertions(+), 5 deletions(-)



[trafficserver] branch master updated (c179620 -> 0d88806)

2019-10-31 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from c179620  cleanup the eventloop
 add 0d88806  Docs: clarify remap plugin inst init params

No new revisions were added by this update.

Summary of changes:
 doc/developer-guide/plugins/remap-plugins.en.rst | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)



[trafficserver] branch master updated (0d88806 -> 1e6760e)

2019-10-31 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from 0d88806  Docs: clarify remap plugin inst init params
 add 1e6760e  Docs: cachekey: fixed non-ascii table characters

No new revisions were added by this update.

Summary of changes:
 doc/admin-guide/plugins/cachekey.en.rst | 34 -
 1 file changed, 17 insertions(+), 17 deletions(-)



[trafficserver] branch master updated (60e6ace -> 63e048a)

2019-10-30 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from 60e6ace  Updating the autest version pin to 1.7.4.
 add 63e048a  doc + unittest TSRemap(Init|NewInstance) failures

No new revisions were added by this update.

Summary of changes:
 doc/developer-guide/plugins/remap-plugins.en.rst   |  6 +++
 proxy/http/remap/Makefile.am   | 13 +-
 proxy/http/remap/PluginFactory.cc  |  6 ++-
 ..._missing_newinstance.cc => plugin_init_fail.cc} | 14 +++---
 ...g_deleteinstance.cc => plugin_instinit_fail.cc} | 20 +++--
 proxy/http/remap/unit-tests/test_PluginFactory.cc  | 52 --
 6 files changed, 95 insertions(+), 16 deletions(-)
 copy proxy/http/remap/unit-tests/{plugin_missing_newinstance.cc => plugin_init_fail.cc} (95%)
 copy proxy/http/remap/unit-tests/{plugin_missing_deleteinstance.cc => plugin_instinit_fail.cc} (92%)



[trafficserver] branch master updated (4dc8dc7 -> c269a5a)

2019-10-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from 4dc8dc7  support for listening on all the net threads uses epollexclusive flag and soreuseport
 add c269a5a  Premature DSO unload with "suicidal" continuations

No new revisions were added by this update.

Summary of changes:
 proxy/http/remap/Makefile.am   | 26 --
 proxy/http/remap/PluginDso.cc  | 96 ++
 proxy/http/remap/PluginDso.h   | 29 ++-
 proxy/http/remap/PluginFactory.cc  | 47 ---
 proxy/http/remap/PluginFactory.h   |  3 +-
 proxy/http/remap/RemapConfig.cc|  1 +
 proxy/http/remap/RemapPluginInfo.cc| 20 +++--
 .../http/remap/unit-tests/plugin_testing_common.h  |  5 +-
 proxy/http/remap/unit-tests/test_PluginFactory.cc  | 24 ++
 9 files changed, 183 insertions(+), 68 deletions(-)



[trafficserver] branch master updated (5426d91 -> e6cf1bf)

2019-12-20 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from 5426d91  Remove remnants of obsolete remap thread.
 add e6cf1bf  Assure no SM survives plugin factory deactivation.

No new revisions were added by this update.

Summary of changes:
 proxy/ReverseProxy.cc | 13 ++---
 proxy/http/remap/PluginFactory.cc |  9 -
 proxy/http/remap/UrlRewrite.cc|  3 +++
 3 files changed, 17 insertions(+), 8 deletions(-)



[trafficserver] branch master updated: Fixed logging docs typos

2020-03-03 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 3d81f66  Fixed logging docs typos
3d81f66 is described below

commit 3d81f660da9a5f8aca0f01e8ccb79db35aa72d3d
Author: Gancho Tenev 
AuthorDate: Tue Mar 3 14:50:26 2020 -0800

Fixed logging docs typos

Renamed the following references:
-`proxy.config.output.logfile.rolling_max_count`
+`proxy.config.log.rolling_max_count`
-`proxy.config.output.logfile.rolling_allow_empty`
+`proxy.config.log.rolling_allow_empty`
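With the corrected names, the options are set under the ``proxy.config.log`` prefix, e.g. (a records.config fragment; the values shown are illustrative only):

```
CONFIG proxy.config.log.rolling_max_count INT 10
CONFIG proxy.config.log.rolling_allow_empty INT 1
```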
---
 doc/admin-guide/files/records.config.en.rst | 48 ++---
 doc/admin-guide/logging/rotation.en.rst |  6 ++--
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/doc/admin-guide/files/records.config.en.rst b/doc/admin-guide/files/records.config.en.rst
index 14a9438..fd30913 100644
--- a/doc/admin-guide/files/records.config.en.rst
+++ b/doc/admin-guide/files/records.config.en.rst
@@ -273,30 +273,6 @@ System Variables
    order of auto-deletion (if enabled). A default value of 0 means auto-deletion will try to keep
    output logs as much as possible. See :doc:`../logging/rotation.en` for guidance.
 
-.. ts:cv:: CONFIG proxy.config.output.logfile.rolling_max_count INT 0
-   :reloadable:
-
-   Specifies the maximum count of rolled output logs to keep. This value will be used by the
-   auto-deletion (if enabled) to trim the number of rolled log files every time the log is rolled.
-   A default value of 0 means auto-deletion will not try to limit the number of output logs.
-   See :doc:`../logging/rotation.en` for a use-case for this option.
-
-.. ts:cv:: CONFIG proxy.config.output.logfile.rolling_allow_empty INT 0
-   :reloadable:
-
-   While rolling default behavior is to rename, close and re-open the log file *only* when/if there is
-   something to log to the log file. This option opens a new log file right after rolling even if there
-   is nothing to log (i.e. nothing to be logged due to lack of requests to the server)
-   which may lead to 0-sized log files while rolling. See :doc:`../logging/rotation.en` for a use-case
-   for this option.
-
-   ===== ======================================================================
-   Value Description
-   ===== ======================================================================
-   ``0`` No empty log files created and rolled if there was nothing to log
-   ``1`` Allow empty log files to be created and rolled even if there was nothing to log
-   ===== ======================================================================
-
 
 Thread Variables
 
@@ -2872,6 +2848,30 @@ Logging Configuration
    order of auto-deletion (if enabled). A default value of 0 means auto-deletion will try to keep
    logs as much as possible. This value can be and should be overridden in logging.yaml. See :doc:`../logging/rotation.en` for guidance.
 
+.. ts:cv:: CONFIG proxy.config.log.rolling_max_count INT 0
+   :reloadable:
+
+   Specifies the maximum count of rolled output logs to keep. This value will be used by the
+   auto-deletion (if enabled) to trim the number of rolled log files every time the log is rolled.
+   A default value of 0 means auto-deletion will not try to limit the number of output logs.
+   See :doc:`../logging/rotation.en` for a use-case for this option.
+
+.. ts:cv:: CONFIG proxy.config.log.rolling_allow_empty INT 0
+   :reloadable:
+
+   While rolling default behavior is to rename, close and re-open the log file *only* when/if there is
+   something to log to the log file. This option opens a new log file right after rolling even if there
+   is nothing to log (i.e. nothing to be logged due to lack of requests to the server)
+   which may lead to 0-sized log files while rolling. See :doc:`../logging/rotation.en` for a use-case
+   for this option.
+
+   ===== ======================================================================
+   Value Description
+   ===== ======================================================================
+   ``0`` No empty log files created and rolled if there was nothing to log
+   ``1`` Allow empty log files to be created and rolled even if there was nothing to log
+   ===== ======================================================================
+
 .. ts:cv:: CONFIG proxy.config.log.auto_delete_rolled_files INT 1
:reloadable:
 
diff --git a/doc/admin-guide/logging/rotation.en.rst b/doc/admin-guide/logging/rotation.en.rst
index 6dad554..e0b93c2 100644
--- a/doc/admin-guide/logging/rotation.en.rst
+++ b/doc/admin-guide/logging/rotation.en.rst
@@ -255,13 +255,13 @@ the maximum number of rolled log files, and forcing |TS| to roll even when there
 
 Let us say we wanted the oldest log entry kept on the box to be no older than 2 hours old.
 
-Set :ts:cv

[trafficserver] branch master updated: s3_auth_v4: multiple same name fields signing fix

2020-04-08 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new e892afc  s3_auth_v4: multiple same name fields signing fix
e892afc is described below

commit e892afc9b67381648017bb74be2201a856fcd19e
Author: Gancho Tenev 
AuthorDate: Mon Apr 6 23:50:01 2020 -0700

s3_auth_v4: multiple same name fields signing fix

When signing multiple header fields with the same name AWS auth v4
seems to combine them into a single field by white-space trimming the
fields' values from the beginning and end and concatenating them by
using a comma in the order they are received. If same name header
fields already contain multiple values they are considered as single
value headers during the trimming/concatenation.

This is undocumented behavior, discovered by reverse engineering
experiments, AWS signature calculation spec is not quite clear
about this case (unspecified, see CanonicalHeaders section):

https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
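The trimming/concatenation behavior described in the commit message above can be sketched stand-alone (`foldHeaders` is a hypothetical helper for illustration, not the plugin's actual API): names are lowercased, values whitespace-trimmed, and repeated fields comma-joined in arrival order.

```cpp
#include <cassert>
#include <cctype>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Model of AWS sig-v4 canonical-header folding: same-named fields are
// combined into one entry; a field whose value already contains commas is
// still treated as a single value during trimming/joining.
std::map<std::string, std::string>
foldHeaders(const std::vector<std::pair<std::string, std::string>> &headers)
{
  auto trim = [](const std::string &s) {
    const char *ws = " \t";
    auto begin     = s.find_first_not_of(ws);
    if (begin == std::string::npos) {
      return std::string();
    }
    auto end = s.find_last_not_of(ws);
    return s.substr(begin, end - begin + 1);
  };

  std::map<std::string, std::string> folded;
  for (const auto &field : headers) {
    std::string name;
    for (unsigned char c : field.first) {
      name += static_cast<char>(std::tolower(c)); // header names compare case-insensitively
    }
    auto it = folded.find(name);
    if (it == folded.end()) {
      folded[name] = trim(field.second); // first occurrence: store trimmed value
    } else {
      it->second.append(",").append(trim(field.second)); // repeat: comma-join in order
    }
  }
  return folded;
}
```

This is why the test fixture below switches `_headers` from a `std::map` (which silently overwrites duplicates) to a multimap-style `insert`.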
---
 plugins/s3_auth/aws_auth_v4.cc |   6 +-
 plugins/s3_auth/aws_auth_v4.h  |   1 +
 plugins/s3_auth/unit_tests/test_aws_auth_v4.cc | 247 +++--
 plugins/s3_auth/unit_tests/test_aws_auth_v4.h  |   6 +-
 4 files changed, 156 insertions(+), 104 deletions(-)

diff --git a/plugins/s3_auth/aws_auth_v4.cc b/plugins/s3_auth/aws_auth_v4.cc
index 072a900..3f9aea0 100644
--- a/plugins/s3_auth/aws_auth_v4.cc
+++ b/plugins/s3_auth/aws_auth_v4.cc
@@ -385,7 +385,11 @@ getCanonicalRequestSha256Hash(TsInterface , bool signPayload, const StringSe
     const char *trimValue = trimWhiteSpaces(value, valueLen, trimValueLen);
 
     signedHeadersSet.insert(lowercaseName);
-    headersMap[lowercaseName] = String(trimValue, trimValueLen);
+    if (headersMap.find(lowercaseName) == headersMap.end()) {
+      headersMap[lowercaseName] = String(trimValue, trimValueLen);
+    } else {
+      headersMap[lowercaseName].append(",").append(String(trimValue, trimValueLen));
+    }
   }
 
   for (const auto  : signedHeadersSet) {
diff --git a/plugins/s3_auth/aws_auth_v4.h b/plugins/s3_auth/aws_auth_v4.h
index aa929de..865a199 100644
--- a/plugins/s3_auth/aws_auth_v4.h
+++ b/plugins/s3_auth/aws_auth_v4.h
@@ -36,6 +36,7 @@
 typedef std::string String;
 typedef std::set<String> StringSet;
 typedef std::map<String, String> StringMap;
+typedef std::multimap<String, String> HeaderMultiMap;
 
 class HeaderIterator;
 
diff --git a/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc b/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
index 4bf58af..595fe00 100644
--- a/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
+++ b/plugins/s3_auth/unit_tests/test_aws_auth_v4.cc
@@ -405,10 +405,10 @@ TEST_CASE("AWSAuthSpecByExample: GET Object", "[AWS][auth][SpecByExample]")
   api._host.assign("examplebucket.s3.amazonaws.com");
   api._path.assign("test.txt");
   api._query.assign("");
-  api._headers["Host"]                 = "examplebucket.s3.amazonaws.com";
-  api._headers["Range"]                = "bytes=0-9";
-  api._headers["x-amz-content-sha256"] = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
-  api._headers["x-amz-date"]           = "20130524T000000Z";
+  api._headers.insert(std::make_pair("Host", "examplebucket.s3.amazonaws.com"));
+  api._headers.insert(std::make_pair("Range", "bytes=0-9"));
+  api._headers.insert(std::make_pair("x-amz-content-sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"));
+  api._headers.insert(std::make_pair("x-amz-date", "20130524T000000Z"));
 
   const char *bench[] = {
 /* Authorization Header */
@@ -450,9 +450,9 @@ TEST_CASE("AWSAuthSpecByExample: GET Bucket Lifecycle", "[AWS][auth][SpecByExample]")
   api._host.assign("examplebucket.s3.amazonaws.com");
   api._path.assign("");
   api._query.assign("lifecycle");
-  api._headers["Host"]                 = "examplebucket.s3.amazonaws.com";
-  api._headers["x-amz-content-sha256"] = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
-  api._headers["x-amz-date"]           = "20130524T000000Z";
+  api._headers.insert(std::make_pair("Host", "examplebucket.s3.amazonaws.com"));
+  api._headers.insert(std::make_pair("x-amz-content-sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"));
+  api._headers.insert(std::make_pair("x-amz-date", "20130524T000000Z"));
 
   cons

[trafficserver] branch 8.1.x updated (c351e20 -> 2a70ee8)

2020-03-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a change to branch 8.1.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git.


from c351e20  free(map) -> delete map
 new 9141684  cachekey: added --canonical-prefix parameter
 new a2c7afc  cachekey: added --key-type (for parent selection)
 new 0cc117a  cachekey: allow multiple values for `--key-type`
 new 2a70ee8  cachekey plugin docs and clang-format/tidy changes

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 doc/admin-guide/plugins/cachekey.en.rst | 197 +---
 doc/admin-guide/plugins/index.en.rst|   4 +-
 plugins/cachekey/README.md  |   2 +-
 plugins/cachekey/cachekey.cc| 160 --
 plugins/cachekey/cachekey.h |  13 ++-
 plugins/cachekey/configs.cc |  84 +-
 plugins/cachekey/configs.h  |  37 +-
 plugins/cachekey/pattern.cc |  10 +-
 plugins/cachekey/pattern.h  |  14 +--
 plugins/cachekey/plugin.cc  |  52 +
 10 files changed, 415 insertions(+), 158 deletions(-)



[trafficserver] 04/04: cachekey plugin docs and clang-format/tidy changes

2020-03-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch 8.1.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit 2a70ee8e4b2d329da7721dac255d1220791ea1b1
Author: Gancho Tenev 
AuthorDate: Thu Mar 19 15:31:48 2020 -0700

cachekey plugin docs and clang-format/tidy changes

Cherry-picked some isolated cachekey related changes
from the following commits to branch 8.1.x:
  aa58b6eb8 Ran clang-format
  4cfd5a738 Ran make clang-tidy
  8e451b774 Fixes spelling in plugins
  7651e269d Ran clang-tidy with modernize-use-default-member-init
  d77cd7316 Ran clang-tidy
---
 plugins/cachekey/README.md   |  2 +-
 plugins/cachekey/cachekey.cc |  1 +
 plugins/cachekey/configs.cc  |  6 +++---
 plugins/cachekey/configs.h   | 10 +-
 plugins/cachekey/pattern.cc  | 10 +-
 plugins/cachekey/pattern.h   | 14 +++---
 plugins/cachekey/plugin.cc   |  4 ++--
 7 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/plugins/cachekey/README.md b/plugins/cachekey/README.md
index 91dc725..6cc5989 100644
--- a/plugins/cachekey/README.md
+++ b/plugins/cachekey/README.md
@@ -1,7 +1,7 @@
 # Description
 This plugin allows some common cache key manipulations based on various HTTP request elements.  It can
 
-* sort query parameters to prevent query parameters reordereding from being a cache miss
+* sort query parameters to prevent query parameters reordering from being a cache miss
 * ignore specific query parameters from the cache key by name or regular expression
 * ignore all query parameters from the cache key
 * only use specific query parameters in the cache key by name or regular expression
diff --git a/plugins/cachekey/cachekey.cc b/plugins/cachekey/cachekey.cc
index 5eedc76..5f12889 100644
--- a/plugins/cachekey/cachekey.cc
+++ b/plugins/cachekey/cachekey.cc
@@ -23,6 +23,7 @@
 
 #include  /* strlen() */
 #include  /* istringstream */
+#include 
 #include "cachekey.h"
 
 static void
diff --git a/plugins/cachekey/configs.cc b/plugins/cachekey/configs.cc
index 05d93ca..938ae1d 100644
--- a/plugins/cachekey/configs.cc
+++ b/plugins/cachekey/configs.cc
@@ -183,8 +183,8 @@ ConfigElements::noIncludeExcludeRules() const
 
 ConfigElements::~ConfigElements()
 {
-  for (auto it = _captures.begin(); it != _captures.end(); it++) {
-delete it->second;
+  for (auto &_capture : _captures) {
+delete _capture.second;
   }
 }
 
@@ -414,7 +414,7 @@ Configs::init(int argc, const char *argv[], bool 
perRemapConfig)
 
   for (;;) {
 int opt;
-    opt = getopt_long(argc, (char *const *)argv, "", longopt, nullptr);
+    opt = getopt_long(argc, const_cast<char *const *>(argv), "", longopt, nullptr);
 
 if (opt == -1) {
   break;
diff --git a/plugins/cachekey/configs.h b/plugins/cachekey/configs.h
index 454fb36..e8712f1 100644
--- a/plugins/cachekey/configs.h
+++ b/plugins/cachekey/configs.h
@@ -51,7 +51,7 @@ typedef std::set CacheKeyKeyTypeSet;
 class ConfigElements
 {
 public:
-  ConfigElements() : _sort(false), _remove(false), _skip(false) {}
+  ConfigElements() {}
   virtual ~ConfigElements();
   void setExclude(const char *arg);
   void setInclude(const char *arg);
@@ -92,9 +92,9 @@ protected:
   MultiPattern _includePatterns;
   MultiPattern _excludePatterns;
 
-  bool _sort;
-  bool _remove;
-  bool _skip;
+  bool _sort   = false;
+  bool _remove = false;
+  bool _skip   = false;
 
   std::map _captures;
 };
@@ -158,7 +158,7 @@ public:
   /**
    * @brief provides means for post-processing of the plugin parameters to finalize the configuration or to "cache" some of the
* decisions for later use.
-   * @return true if succesful, false if failure.
+   * @return true if successful, false if failure.
*/
   bool finalize();
 
diff --git a/plugins/cachekey/pattern.cc b/plugins/cachekey/pattern.cc
index 27eb94e..c934902 100644
--- a/plugins/cachekey/pattern.cc
+++ b/plugins/cachekey/pattern.cc
@@ -38,7 +38,7 @@ replaceString(String , const String , const String 
)
   }
 }
 
-Pattern::Pattern() : _re(nullptr), _extra(nullptr), _pattern(""), _replacement(""), _replace(false), _tokenCount(0) {}
+Pattern::Pattern() : _pattern(""), _replacement("") {}
 
 /**
  * @brief Initializes PCRE pattern by providing the subject and replacement strings.
@@ -47,18 +47,18 @@ Pattern::Pattern() : _re(nullptr), _extra(nullptr), _pattern(""), _replacement("
  * @return true if successful, false if failure
  */
 bool
-Pattern::init(const String &pattern, const String &replacenemt, bool replace)
+Pattern::init(const String &pattern, const String &replacement, bool replace)
 {
   pcreFree();
 
   _pattern.assign(pattern);
-  _replacement.assign(replacenemt);
+  _replacement.assign(replacement);
   _replace = replace;
 
   _tokenCount = 0;
 
   if (!compile()) {
-CacheKeyDebug("failed to initialize pattern:'%s', replacement:'%s'", 
pattern.c_str()

[trafficserver] 02/04: cachekey: added --key-type (for parent selection)

2020-03-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch 8.1.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit a2c7afcb2d018c541122abf9dcc0e6731b59a55a
Author: Gancho Tenev 
AuthorDate: Fri Aug 23 14:26:26 2019 -0700

cachekey: added --key-type (for parent selection)

Added ability to apply all transformations, available for modifying
the cache key, to parent selection URL:
* --key-type=cache_key - apply transformations to cache key
* --key-type=parent_selection_url - apply transformations to
  parent selection URL

TODO/TBD: After this change all transformations can be applied not
only to the cache key but to parent selection URL as well. It would
make sense to give the cachekey plugin a new, more suitable name.

(cherry picked from commit af7299dd06a2a7b5e862edf5aea0693dd1bde2c1)
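Since one plugin instance handles only one key type, both keys can be manipulated by loading the plugin twice. For illustration only (a hypothetical remap.config fragment; the parameter values are examples, not from this commit):

```
map http://example.com/ http://origin.example.com/ \
    @plugin=cachekey.so @pparam=--key-type=cache_key @pparam=--sort-params=true \
    @plugin=cachekey.so @pparam=--key-type=parent_selection_url @pparam=--static-prefix=parent
```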
---
 doc/admin-guide/plugins/cachekey.en.rst | 169 +++-
 doc/admin-guide/plugins/index.en.rst|   4 +-
 plugins/cachekey/cachekey.cc|  69 ++---
 plugins/cachekey/cachekey.h |  10 +-
 plugins/cachekey/configs.cc |  62 +++-
 plugins/cachekey/configs.h  |  19 
 plugins/cachekey/plugin.cc  |   2 +-
 7 files changed, 244 insertions(+), 91 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst b/doc/admin-guide/plugins/cachekey.en.rst
index c82e8e4..35a4968 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -21,22 +21,24 @@
 .. _admin-plugins-cachekey:
 
 
-Cache Key Manipulation Plugin
-*****************************
+Cache Key and Parent Selection URL Manipulation Plugin
+******************************************************
 
 Description
 ===========
 
-This plugin allows some common cache key manipulations based on various HTTP request components.  It can
+This plugin allows some common `cache key` or `parent selection URL` manipulations based on various HTTP request components.
+Although `cache key` is used everywhere in this document, the same manipulations can be applied to `parent selection URL`
+by switching `key type`_. The plugin can
 
 * sort query parameters to prevent query parameter reordering being a cache miss
-* ignore specific query parameters from the cache key by name or regular expression
-* ignore all query parameters from the cache key
-* only use specific query parameters in the cache key by name or regular expression
+* ignore specific query parameters from the `cache key` by name or regular expression
+* ignore all query parameters from the `cache key`
+* only use specific query parameters in the `cache key` by name or regular expression
 * include headers or cookies by name
 * capture values from the ``User-Agent`` header.
 * classify request using ``User-Agent`` and a list of regular expressions
-* capture and replace strings from the URI and include them in the cache key
+* capture and replace strings from the URI and include them in the `cache key`
 * do more - please find more examples below.
 
 URI type
@@ -46,6 +48,17 @@ The plugin manipulates the ``remap`` URI (value set during URI remap) by default
 
 * ``--uri-type=[remap|pristine]`` (default: ``remap``)
 
+Key type
+========
+
+The plugin manipulates the `cache key` by default. If `parent selection URL` manipulation is needed the following option can be used:
+
+* ``--key-type=[cache_key|parent_selection_url]`` (default: ``cache_key``)
+
+One instance of this plugin can be used either for `cache key` or `parent selection URL` manipulation but never both.
+If `simultaneous cache key and parent selection URL manipulation`_ is needed two separate instances of the plugin
+have to be loaded for each key type.
+
 Cache key structure and related plugin parameters
 =================================================
 
@@ -59,11 +72,11 @@ Cache key structure and related plugin parameters
   │  (default)  |  (optional)  │  (optional)  │  (optional)  │  (default)  │  (default)  │
   └─────────────┴──────────────┴──────────────┴──────────────┴─────────────┴─────────────┘
 
-* The cache key set by the cachekey plugin can be considered as devided into several sections.
+* The `cache key` set by the cachekey plugin can be considered as divided into several sections.
 * Every section is manipulated separately by the related plugin parameters (more info in each section description below).
-* "User-Agent", "Headers" and "Cookies" sections are optional and will be missing from the cache key if no related plugin parameters are used.
+* "User-Agent", "Headers" and "Cookies" sections are optional and will be missing from the `cache key` if no related plugin parameters are used.
 * "Prefix", "Path" and "Query" sections always have default values even if no related plugin parameters are used.

[trafficserver] 01/04: cachekey: added --canonical-prefix parameter

2020-03-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch 8.1.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit 914168440684b315eaa52cd029b13448400633d9
Author: Gancho Tenev 
AuthorDate: Wed Aug 21 16:11:48 2019 -0700

cachekey: added --canonical-prefix parameter

In certain use-cases when calculating the prefix (the initial value
of the new cache key) we need to have the scheme, host and port in
their original form from the request URI, i.e. when hosting.config
is used the cache key is expected to contain a valid URI authority
element used for volume selection.

More details about the new parameter and its functionality can be
found in doc/admin-guide/plugins/cachekey.en.rst

(cherry picked from commit 0f86efc7678248c09f87ec8b99bc026456edc3f7)
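The two default-prefix shapes the commit message describes can be modeled stand-alone (an illustrative sketch, not the plugin's actual internals): without ``--canonical-prefix`` the key starts with ``/host/port``, with it the original authority form ``scheme://host:port`` is preserved, which is what volume selection via hosting.config expects.

```cpp
#include <cassert>
#include <string>

// Illustrative model of the default cache-key prefix choice:
// canonicalPrefix == false -> "/host/port"
// canonicalPrefix == true  -> "scheme://host:port"
std::string
defaultPrefix(const std::string &scheme, const std::string &host, int port, bool canonicalPrefix)
{
  if (canonicalPrefix) {
    return scheme + "://" + host + ":" + std::to_string(port);
  }
  return "/" + host + "/" + std::to_string(port);
}
```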
---
 doc/admin-guide/plugins/cachekey.en.rst | 21 +---
 plugins/cachekey/cachekey.cc| 90 -
 plugins/cachekey/cachekey.h |  3 +-
 plugins/cachekey/configs.cc | 11 
 plugins/cachekey/configs.h  |  6 +++
 plugins/cachekey/plugin.cc  |  2 +-
 6 files changed, 102 insertions(+), 31 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst b/doc/admin-guide/plugins/cachekey.en.rst
index 5d8b6a0..c82e8e4 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -70,20 +70,29 @@ Cache key structure and related plugin parameters
 
 ::
 
-  Optional components  | 
┌─┬──┬──┐
+  Optional components  | ┌─┬── 
───┬──┐
   (included in this order) | │ --static-prefix | --capture-prefix │ 
--capture-prefix-uri │
| 
├─┴──┴──┤
-  Default values if no | │ /host/port  
  |
+  Default values if no | │ /host/port or scheme://host:port (see the table 
below)|
   optional components  | 
└───┘
   configured   |
 
+  ┌────────────────────┬──────────────────────────┬────────────────────┐
+  │ --canonical-prefix │ default value if no      │ input used for     │
+  │                    │ prefix parameters used   │ --capture-prefix   │
+  ├────────────────────┼──────────────────────────┼────────────────────┤
+  │ false              │ /host/port               │ host:port          │
+  ├────────────────────┼──────────────────────────┼────────────────────┤
+  │ true               │ scheme://host:port       │ scheme://host:port │
+  └────────────────────┴──────────────────────────┴────────────────────┘
+
+
 * ``--static-prefix=<prefix>`` (default: empty string) - if specified and not an empty string the ``<prefix>`` will be added to the cache key.
-* ``--capture-prefix=<capture_definition>`` (default: empty string) - if specified and not empty then strings are captured from ``host:port`` based on the ``<capture_definition>`` and are added to the cache key.
+* ``--capture-prefix=<capture_definition>`` (default: empty string) - if specified and not empty then strings are captured based on the value of the ``--canonical-prefix`` parameter (see the table above) and the ``<capture_definition>`` and are added to the cache key.
 * ``--capture-prefix-uri=<capture_definition_uri>`` (default: empty string) - if specified and not empty then strings are captured from the entire URI based on the ``<capture_definition_uri>`` and are added to the cache key.
 * If any of the "Prefix" related plugin parameters are used together in the plugin configuration they are added to the cache key in the order shown in the diagram.
-* ``--remove-prefix= default prefix */
+    append(getCanonicalUrl(_buf, _url, canonicalPrefix, /* provideDefaultKey */ true), /* useSeparator */ false);
 CacheKeyDebug("added default prefix, key: '%s'", _key.c_str());
   }
 }
diff --git a/plugins/cachekey/cachekey.h b/plugins/cachekey/cachekey.h
index 7ea058c..0922ed1 100644
--- a/plugins/cachekey/cachekey.h
+++ b/plugins/cachekey/cachekey.h
@@ -55,9 +55,10 @@ public:
 
   void append(unsigned number);
   void append(const String &);
+  void append(const String &, bool useSeparator);
   void append(const char *s);
   void append(const char *n, unsigned s);
-  void appendPrefix(const String &, Pattern &, Pattern &);
+  void appendPrefix(const String &, Pattern &, Pattern &, bool canonicalPrefix);
   void appendPath(Pattern &, Pattern &);
   void appendHeaders(const ConfigHeaders &);
   void appendQuery(const ConfigQuery &);
diff --git a/plugins/cachekey/configs.cc b/plugins/cachekey/configs.cc
index 2a13d76..8b018c8 100644
--- a/plugins/cachekey/configs.cc
+++ b/plugins/cachekey/configs.cc
@@ -397,6 +397,8 @@ Configs::init(int argc, const char *argv[], bool perRemapConfig)
     {const_cast<char *>("separator"), optional_argument, nullptr, 's'},
     {const_cast<char *>("uri

[trafficserver] 03/04: cachekey: allow multiple values for `--key-type`

2020-03-25 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch 8.1.x
in repository https://gitbox.apache.org/repos/asf/trafficserver.git

commit 0cc117aa4869cef92d4d0b2e06329489d55d58a1
Author: Gancho Tenev 
AuthorDate: Tue Sep 17 10:06:50 2019 -0700

cachekey: allow multiple values for `--key-type`

Allow multiple target types to be specified for `--key-type` so the
operator can apply the same modifications to both cache key and parent
selection url at the same time without chaining cachekey plugin
instances.

Instead of:

@plugin=cachekey.so \
@pparam=--key-type=parent_selection_url \
@pparam=--remove-all-params=true
@plugin=cachekey.so \
@pparam=--key-type=cache_key \
@pparam=--remove-all-params=true

to write:

@plugin=cachekey.so \
@pparam=--key-type=parent_selection_url,cache_key \
@pparam=--remove-all-params=true

(cherry picked from commit db8cd14acede7460a5996864c52e1b206695e405)
---
 doc/admin-guide/plugins/cachekey.en.rst | 25 ++---
 plugins/cachekey/configs.cc | 29 +---
 plugins/cachekey/configs.h  |  6 +++--
 plugins/cachekey/plugin.cc  | 48 ++---
 4 files changed, 70 insertions(+), 38 deletions(-)

diff --git a/doc/admin-guide/plugins/cachekey.en.rst b/doc/admin-guide/plugins/cachekey.en.rst
index 35a4968..6199c93 100644
--- a/doc/admin-guide/plugins/cachekey.en.rst
+++ b/doc/admin-guide/plugins/cachekey.en.rst
@@ -53,11 +53,11 @@ Key type
 
 The plugin manipulates the `cache key` by default. If `parent selection URL` 
manipulation is needed the following option can be used:
 
-* ``--key-type=[cache_key|parent_selection_url]`` (default: ``cache_key``)
+* ``--key-type=`` (default: ``cache_key``) - list of ``cache_key`` or ``parent_selection_url`` values; if multiple ``--key-type`` options are specified then all values are combined together.
+
+An instance of this plugin can be used for applying manipulations to `cache key`, `parent selection URL` or both depending on the need. See `simultaneous cache key and parent selection URL manipulation`_
+for examples of how to apply the **same** set of manipulations to both targets with a single plugin instance or how to apply **different** sets of manipulations to each target using separate plugin instances.
 
-One instance of this plugin can used either for `cache key` or `parent 
selection URL` manupulation but never both.
-If `simultaneous cache key and parent selection URL manipulation`_ is needed 
two separate instances of the plugin
-have to be loaded for each key type.
 
 Cache key structure and related plugin parameters
 =================================================
@@ -664,3 +664,20 @@ For this purpose two separate instances are loaded for that remap rule:
 
 In the example above the first instance of the plugin sets the prefix to the 
parent selection URI and
 the second instance of the plugin sets the prefix to the cache key.
+
+The **same** string manipulations can be applied to both cache key and parent selection URL more concisely, without chaining cachekey plugin instances, by specifying multiple target types via `--key-type`.
+
+Instead of::
+
+@plugin=cachekey.so \
+@pparam=--key-type=parent_selection_url \
+@pparam=--remove-all-params=true
+@plugin=cachekey.so \
+@pparam=--key-type=cache_key \
+@pparam=--remove-all-params=true
+
+one could write::
+
+@plugin=cachekey.so \
+@pparam=--key-type=parent_selection_url,cache_key \
+@pparam=--remove-all-params=true
diff --git a/plugins/cachekey/configs.cc b/plugins/cachekey/configs.cc
index 9321232..05d93ca 100644
--- a/plugins/cachekey/configs.cc
+++ b/plugins/cachekey/configs.cc
@@ -529,6 +529,10 @@ Configs::init(int argc, const char *argv[], bool perRemapConfig)
 bool
 Configs::finalize()
 {
+  if (_keyTypes.empty()) {
+    CacheKeyDebug("setting cache key");
+    _keyTypes = {CACHE_KEY};
+  }
   return _query.finalize() && _headers.finalize() && _cookies.finalize();
 }
 
@@ -586,14 +590,19 @@ void
 Configs::setKeyType(const char *arg)
 {
   if (nullptr != arg) {
-    if (9 == strlen(arg) && 0 == strncasecmp(arg, "cache_key", 9)) {
-      _keyType = CacheKeyKeyType::CACHE_KEY;
-      CacheKeyDebug("setting cache key");
-    } else if (20 == strlen(arg) && 0 == strncasecmp(arg, "parent_selection_url", 20)) {
-      _keyType = CacheKeyKeyType::PARENT_SELECTION_URL;
-      CacheKeyDebug("setting parent selection URL");
-    } else {
-      CacheKeyError("unrecognized key type '%s', using default 'cache_key'", arg);
+    StringVector types;
+    ::commaSeparateString(types, arg);
+
+    for (auto type : types) {
+      if (9 == type.length() && 0 =

[trafficserver] branch master updated: Assert non-zero HdrHeap object size (#6954)

2020-07-07 Thread gancho
This is an automated email from the ASF dual-hosted git repository.

gancho pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/trafficserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 986d355  Assert non-zero HdrHeap object size (#6954)
986d355 is described below

commit 986d355daf0fc97131e9d6ed22988563e59547c7
Author: Gancho Tenev <10522628+gte...@users.noreply.github.com>
AuthorDate: Tue Jul 7 11:03:03 2020 -0700

Assert non-zero HdrHeap object size (#6954)

HdrHeap object length cannot be 0 by design; otherwise something is
wrong, e.g. possible memory corruption. In such cases iterating over
HdrHeap objects would lead to an infinite loop, e.g. during unmarshaling.
---
 proxy/hdrs/HdrHeap.cc | 12 
 1 file changed, 12 insertions(+)

diff --git a/proxy/hdrs/HdrHeap.cc b/proxy/hdrs/HdrHeap.cc
index 1452a8e..dfa387a 100644
--- a/proxy/hdrs/HdrHeap.cc
+++ b/proxy/hdrs/HdrHeap.cc
@@ -400,6 +400,9 @@ HdrHeap::evacuate_from_str_heaps(HdrStrHeap *new_heap)
 while (data < h->m_free_start) {
      HdrHeapObjImpl *obj = reinterpret_cast<HdrHeapObjImpl *>(data);
 
+      // Object length cannot be 0 by design, otherwise something is wrong + infinite loop here!
+      ink_release_assert(0 != obj->m_length);
+
   switch (obj->m_type) {
   case HDR_HEAP_OBJ_URL:
 ((URLImpl *)obj)->move_strings(new_heap);
@@ -440,6 +443,9 @@ HdrHeap::required_space_for_evacuation()
 while (data < h->m_free_start) {
      HdrHeapObjImpl *obj = reinterpret_cast<HdrHeapObjImpl *>(data);
 
+      // Object length cannot be 0 by design, otherwise something is wrong + infinite loop here!
+      ink_release_assert(0 != obj->m_length);
+
   switch (obj->m_type) {
   case HDR_HEAP_OBJ_URL:
 ret += ((URLImpl *)obj)->strings_length();
@@ -514,6 +520,9 @@ HdrHeap::sanity_check_strs()
 while (data < h->m_free_start) {
      HdrHeapObjImpl *obj = reinterpret_cast<HdrHeapObjImpl *>(data);
 
+      // Object length cannot be 0 by design, otherwise something is wrong + infinite loop here!
+      ink_release_assert(0 != obj->m_length);
+
   switch (obj->m_type) {
   case HDR_HEAP_OBJ_URL:
 ((URLImpl *)obj)->check_strings(heaps, num_heaps);
@@ -937,6 +946,9 @@ HdrHeap::unmarshal(int buf_length, int obj_type, HdrHeapObjImpl **found_obj, Ref
     HdrHeapObjImpl *obj = reinterpret_cast<HdrHeapObjImpl *>(obj_data);
 ink_assert(obj_is_aligned(obj));
 
+    // Object length cannot be 0 by design, otherwise something is wrong + infinite loop here!
+    ink_release_assert(0 != obj->m_length);
+
     if (obj->m_type == static_cast<uint32_t>(obj_type) && *found_obj == nullptr) {
   *found_obj = obj;
 }