Re: [ANNOUNCE] haproxy-2.0.0

2019-06-17 Thread Willy Tarreau
On Tue, Jun 18, 2019 at 12:57:58AM +0200, Cyril Bonté wrote:
> Le 16/06/2019 à 21:56, Willy Tarreau a écrit :
> > Hi,
> > 
> > HAProxy 2.0.0 was released on 2019/06/16. It added 63 new commits
> > after version 2.0-dev7.
> > [...]
> > 
> > Please find the usual URLs below :
> > Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> And more than 24h later, I've deployed the HTML documentation for 2.0.0
> and 2.1-dev.

Looks good, thank you Cyril!

> I really need to clean up/repair/rewrite my scripts ;)

You know how it is: anything that takes more time to rewrite than to
work around by hand tends to last a very long time. After all, it took
something like 15 years before I forced myself to write the release scripts ;-)

Willy



Re: [ANNOUNCE] haproxy-2.0.0

2019-06-17 Thread Cyril Bonté

Le 16/06/2019 à 21:56, Willy Tarreau a écrit :

Hi,

HAProxy 2.0.0 was released on 2019/06/16. It added 63 new commits
after version 2.0-dev7.
[...]

Please find the usual URLs below :
Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/


And more than 24h later, I've deployed the HTML documentation for 
2.0.0 and 2.1-dev. I really need to clean up/repair/rewrite my scripts ;)



--
Cyril Bonté



Re: [PATCH v3] BUG/MEDIUM: compression: Set Vary: Accept-Encoding for compressed responses

2019-06-17 Thread Willy Tarreau
On Mon, Jun 17, 2019 at 04:10:07PM +0200, Tim Duesterhus wrote:
> Make HAProxy set the `Vary: Accept-Encoding` response header if it compressed
> the server response.
(...)

Perfect, now merged, thank you Tim!
Willy



[PATCH v3] BUG/MEDIUM: compression: Set Vary: Accept-Encoding for compressed responses

2019-06-17 Thread Tim Duesterhus
Make HAProxy set the `Vary: Accept-Encoding` response header if it compressed
the server response.

Technically the `Vary` header SHOULD also be set for responses that would
normally be compressed based off the current configuration, but are not due
to a missing or invalid `Accept-Encoding` request header or due to the
maximum compression rate being exceeded.

Not setting the header in these cases does no real harm, though: An
uncompressed response might be returned by a Cache, even if a compressed
one could be retrieved from HAProxy. This increases the traffic to the end
user if the cache is unable to compress itself, but it saves another
roundtrip to HAProxy.
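The cache interaction described above can be sketched with a toy model (illustrative Python only, not HAProxy or any real cache implementation; the `cache_key` helper is hypothetical):

```python
# Toy cache model: a cache's secondary key is built from the request header
# values named in the response's Vary header.

def cache_key(url, vary, request_headers):
    """Key = URL plus the request header values listed in Vary."""
    parts = [url]
    for name in (h.strip().lower() for h in vary.split(",") if h.strip()):
        parts.append("%s=%s" % (name, request_headers.get(name, "")))
    return "|".join(parts)

cache = {}

# Without Vary, a gzipped response is stored under the bare URL...
cache[cache_key("/page", "", {"accept-encoding": "gzip"})] = "gzipped-body"

# ...so a later client that sent no Accept-Encoding gets the compressed
# body it cannot decode: this is the miscaching the patch prevents.
assert cache.get(cache_key("/page", "", {})) == "gzipped-body"

# With "Vary: Accept-Encoding" the two requests map to different keys,
# so the second client misses and fetches a fresh, uncompressed response.
cache[cache_key("/page", "Accept-Encoding", {"accept-encoding": "gzip"})] = "gzipped-body"
assert cache.get(cache_key("/page", "Accept-Encoding", {})) is None
```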

see the discussion on the mailing list: 
https://www.mail-archive.com/haproxy@formilux.org/msg34221.html
Message-ID: 20190617121708.ga2...@1wt.eu

A small issue remains: The User-Agent is not added to the `Vary` header,
despite being relevant to the response. Adding the User-Agent header would
make responses effectively uncacheable and it's unlikely to see a Mozilla/4
in the wild in 2019.

Add a reg-test to ensure the behaviour as described in this commit message.

see issue #121
Should be backported to all branches with compression (i.e. 1.6+).
---
 reg-tests/compression/vary.vtc | 187 +
 src/flt_http_comp.c|   6 ++
 2 files changed, 193 insertions(+)
 create mode 100644 reg-tests/compression/vary.vtc

diff --git a/reg-tests/compression/vary.vtc b/reg-tests/compression/vary.vtc
new file mode 100644
index 0..0a060e4bc
--- /dev/null
+++ b/reg-tests/compression/vary.vtc
@@ -0,0 +1,187 @@
+varnishtest "Compression sets Vary header"
+
+#REQUIRE_VERSION=1.9
+#REQUIRE_OPTION=ZLIB|SLZ
+
+feature ignore_unknown_macro
+
+server s1 {
+rxreq
+expect req.url == "/plain/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/plain/accept-encoding-invalid"
+expect req.http.accept-encoding == "invalid"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/plain/accept-encoding-null"
+expect req.http.accept-encoding == ""
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/html/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/html/accept-encoding-invalid"
+expect req.http.accept-encoding == "invalid"
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+
+rxreq
+expect req.url == "/html/accept-encoding-null"
+expect req.http.accept-encoding == ""
+txresp \
+  -hdr "Content-Type: text/html" \
+  -bodylen 100
+
+rxreq
+expect req.url == "/dup-etag/accept-encoding-gzip"
+expect req.http.accept-encoding == "gzip"
+txresp \
+  -hdr "Content-Type: text/plain" \
+  -hdr "ETag: \"123\"" \
+  -hdr "ETag: \"123\"" \
+  -bodylen 100
+} -repeat 2 -start
+
+
+haproxy h1 -conf {
+defaults
+mode http
+${no-htx} option http-use-htx
+timeout connect 1s
+timeout client  1s
+timeout server  1s
+
+frontend fe-gzip
+bind "fd@${fe_gzip}"
+default_backend be-gzip
+
+backend be-gzip
+compression algo gzip
+compression type text/plain
+server www ${s1_addr}:${s1_port}
+
+frontend fe-nothing
+bind "fd@${fe_nothing}"
+default_backend be-nothing
+
+backend be-nothing
+server www ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_gzip_sock} {
+txreq -url "/plain/accept-encoding-gzip" \
+  -hdr "Accept-Encoding: gzip"
+rxresp
+expect resp.status == 200
+expect resp.http.content-encoding == "gzip"
+expect resp.http.vary == "Accept-Encoding"
+gunzip
+expect resp.bodylen == 100
+
+txreq -url "/plain/accept-encoding-invalid" \
+  -hdr "Accept-Encoding: invalid"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == ""
+expect resp.bodylen == 100
+
+txreq -url "/plain/accept-encoding-null"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == ""
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-gzip" \
+  -hdr "Accept-Encoding: gzip"
+rxresp
+expect resp.status == 200
+expect resp.http.vary == ""
+expect resp.bodylen == 100
+
+txreq -url "/html/accept-encoding-invalid" \
+  -hdr "Accept-Encoding: invalid"
+rxresp
+

VTest output change

2019-06-17 Thread Poul-Henning Kamp
Hi,

For reason now obscured by history, VTest has a (relative) timestamp
on every line of its output:

***  v1   27.4 CLI RX  300
 v1   27.4 CLI RX|Cannot set the active VCL cold.
**   v1   27.4 CLI 300 
**   top  27.4 === varnish v1 -cliok "vcl.state vcl1 auto"
 v1   27.4 CLI TX|vcl.state vcl1 auto
***  v1   27.5 CLI RX  200
**   v1   27.5 CLI 200 

That makes it patently hard to run diff(1) on VTest outputs.

I want to change this to instead emit separate timestamp lines
whenever the time has changed since the last output to the log:

 dT   3.864
***  v3   CLI RX  200
 v3   CLI RX|Child in state running
 v3   CLI TX|debug.listen_address
 dT   3.970
***  v3   CLI RX  200
 v3   CLI RX|/tmp/vtc.49670.11c55d2e/v3.sock -
 v3   CLI TX|debug.xid 999
 dT   4.076
 v3   vsl|  0 CLI - Rd debug.listen_address 

Even with the increased precision to milliseconds, the output from
running all the Varnish Cache tests in verbose mode is still 7% smaller
than before.
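A rough sketch of that rewrite (my own illustration of the proposed output format, not VTest's actual code; the "**** dT" marker level is an assumption):

```python
# Strip the per-line timestamp and emit a separate "dT" line only when the
# time changed since the previous log line.
import re

LINE = re.compile(r"^(\S+\s+\S+)\s+([0-9]+\.[0-9]+)\s+(.*)$")

def compact(lines):
    out, last = [], None
    for line in lines:
        m = LINE.match(line)
        if not m:               # lines without a timestamp pass through
            out.append(line)
            continue
        prefix, ts, rest = m.groups()
        if ts != last:          # time changed: emit one shared dT line
            out.append("**** dT   " + ts)
            last = ts
        out.append(prefix + "   " + rest)
    return out
```

Consecutive lines sharing a timestamp then cost one extra line total instead of one timestamp column per line, which is where the size saving comes from.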

The Varnish Cache project has OK'ed this; any objections from HAProxy?

Poul-Henning

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



Re: [PATCH 0/1] compression: Set Vary: Accept-Encoding

2019-06-17 Thread Willy Tarreau
On Mon, Jun 17, 2019 at 02:33:05PM +0200, Tim Düsterhus wrote:
> > So in my opinion we should only emit "Vary: accept-encoding" when
> > adding Content-Encoding. Am I missing something ?
> 
> I believe you are correct that only sending the 'Vary' header when
> actually compressing is fine according to the RFC. Sending a compressed
> response when the client did not request one is incorrect.

Absolutely, and that's the behaviour we need to fix (and to backport
this fix).

> Sending an uncompressed one, even if theoretically a compressed one
> would be supported, is valid. If the proxy/cache wants to, it can
> compress the response itself.

Agreed. And Varnish does compress, for example, so it's better that we
only place a Vary header when needed, to let it do the best thing in the
other cases.

Thanks,
willy



Re: [PATCH 0/1] compression: Set Vary: Accept-Encoding

2019-06-17 Thread Tim Düsterhus
Willy,

Am 17.06.19 um 14:17 schrieb Willy Tarreau:
> Hi Tim,
> 
> I'm back to this one :
> 
> On Wed, Jun 12, 2019 at 10:36:51PM +0200, Tim Duesterhus wrote:
>> This thread contains two "competing" patches to fix the BUG that HAProxy
>> does not set the `Vary` response header when the compression filter is
>> applied. When not setting the `Vary` header the response may be miscached
>> by intermediate caching proxies.
>>
>> Please select the one you like better, because I wasn't sure. I'll explain
>> the differences below:
>>
>> PATCH 1 (the one *without* v2):
>> ---
>>
>> This one attempts to only set the `Vary` response header when it's
>> *required* to not pollute responses that are never going to be compressed
>> based on the current configuration (e.g. because the Content-Type is not
>> listed in `compression type`).
>>
>> To do so the patch adds a new `would_compress` flag and requires careful
>> checking in `htx_set_comp_reshdr`:
>>
>> 1. All the response conditions must go first.
>> 2. Then the `would_compress` flag must be set.
>> 3. Then the other conditions (e.g. compression rate) must be checked.
>>
>> Otherwise the `would_compress` flag might be missing due to a temporary
>> condition, leading to a missing `Vary` header, leading to bugs.
> 
> So I'm a bit confused by what it does because it *seems* to set the Vary
> header even when the client mentions no compression is supported by not
> specifying an accept-encoding, asking the cache to revalidate for whatever
> accept-encoding request it sees.

That is correct.

> I think that the real bug is in fact that we can return compressed
> contents that do not advertise vary and that this one alone needs to
> be addressed. The remaining cases are just cache optimizations and
> will only serve to encourage caches to try to find a better
> representation even when an uncompressed one is present. But I'm really
> not convinced it's welcome, because if compression was enabled on haproxy
> in the first place, it's to save bandwidth or download time. If a cache
> is present between the client and haproxy, it will always be faster to
> deliver the uncompressed object than it would be to fetch the same again
> from haproxy hoping to get a different representation. Also, returning
> Vary for all non-matching algos may result in cache pollution : if
> someone fetches through a cache a large number of same objects with
> random accept-encoding, all responses will be uncompressed with a Vary
> header and will result in a different copy in the cache. Without the
> Vary header for uncompressed objects, all non-matching algos may use
> the same single uncompressed representation.
> 
> So in my opinion we should only emit "Vary: accept-encoding" when
> adding Content-Encoding. Am I missing something ?

I believe you are correct that only sending the 'Vary' header when actually
compressing is fine according to the RFC. Sending a compressed response
when the client did not request one is incorrect. Sending an uncompressed
one, even if theoretically a compressed one would be supported, is valid.
If the proxy/cache wants to, it can compress the response itself.

Best regards
Tim Düsterhus



Re: [PATCH 0/1] compression: Set Vary: Accept-Encoding

2019-06-17 Thread Willy Tarreau
Hi Tim,

I'm back to this one :

On Wed, Jun 12, 2019 at 10:36:51PM +0200, Tim Duesterhus wrote:
> This thread contains two "competing" patches to fix the BUG that HAProxy
> does not set the `Vary` response header when the compression filter is
> applied. When not setting the `Vary` header the response may be miscached
> by intermediate caching proxies.
> 
> Please select the one you like better, because I wasn't sure. I'll explain
> the differences below:
> 
> PATCH 1 (the one *without* v2):
> ---
> 
> This one attempts to only set the `Vary` response header when it's
> *required* to not pollute responses that are never going to be compressed
> based on the current configuration (e.g. because the Content-Type is not
> listed in `compression type`).
> 
> To do so the patch adds a new `would_compress` flag and requires careful
> checking in `htx_set_comp_reshdr`:
> 
> 1. All the response conditions must go first.
> 2. Then the `would_compress` flag must be set.
> 3. Then the other conditions (e.g. compression rate) must be checked.
> 
> Otherwise the `would_compress` flag might be missing due to a temporary
> condition, leading to a missing `Vary` header, leading to bugs.

So I'm a bit confused by what it does because it *seems* to set the Vary
header even when the client mentions no compression is supported by not
specifying an accept-encoding, asking the cache to revalidate for whatever
accept-encoding request it sees.

I think that the real bug is in fact that we can return compressed
contents that do not advertise vary and that this one alone needs to
be addressed. The remaining cases are just cache optimizations and
will only serve to encourage caches to try to find a better
representation even when an uncompressed one is present. But I'm really
not convinced it's welcome, because if compression was enabled on haproxy
in the first place, it's to save bandwidth or download time. If a cache
is present between the client and haproxy, it will always be faster to
deliver the uncompressed object than it would be to fetch the same again
from haproxy hoping to get a different representation. Also, returning
Vary for all non-matching algos may result in cache pollution : if
someone fetches through a cache a large number of same objects with
random accept-encoding, all responses will be uncompressed with a Vary
header and will result in a different copy in the cache. Without the
Vary header for uncompressed objects, all non-matching algos may use
the same single uncompressed representation.
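Willy's pollution scenario is easy to quantify with a toy model (illustrative Python only; the `entries` helper is hypothetical):

```python
# If uncompressed responses also carry "Vary: Accept-Encoding", each distinct
# Accept-Encoding value a cache sees yields a separate copy of the same
# uncompressed object; without Vary they can all share one entry.

def entries(accept_encoding_values, vary_on_uncompressed):
    cache = set()
    for ae in accept_encoding_values:
        key = ("/obj", ae) if vary_on_uncompressed else ("/obj",)
        cache.add(key)
    return len(cache)

# 100 clients with slightly different, non-matching Accept-Encoding values:
seen = ["gzip;q=0.%02d" % i for i in range(100)]
assert entries(seen, vary_on_uncompressed=True) == 100  # one copy per value
assert entries(seen, vary_on_uncompressed=False) == 1   # one shared copy
```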

So in my opinion we should only emit "Vary: accept-encoding" when
adding Content-Encoding. Am I missing something ?

Thanks,
Willy



Re: [PATCH] server state: cleanup and load global file in a tree

2019-06-17 Thread Willy Tarreau
On Fri, Jun 14, 2019 at 09:59:23PM +0200, Baptiste wrote:
> Let's sync after the release.

Second patch now merged!
Willy



Re: [PATCH] MINOR: sample: Add sha2([<bits>]) converter

2019-06-17 Thread Willy Tarreau
On Mon, Jun 17, 2019 at 12:41:44PM +0200, Tim Duesterhus wrote:
> This adds a converter for the SHA-2 family, supporting SHA-224, SHA-256,
> SHA-384 and SHA-512.
(...)

Merged, thanks Tim!
Willy



Re: [ANNOUNCE] haproxy-2.0.0

2019-06-17 Thread Aleksandar Lazic
Am 16.06.2019 um 21:56 schrieb Willy Tarreau:
> Hi,
> 
> HAProxy 2.0.0 was released on 2019/06/16. It added 63 new commits
> after version 2.0-dev7.

[snipp]

> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/2.0/src/
>Git repository   : http://git.haproxy.org/git/haproxy-2.0.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-2.0.git
>Changelog: http://www.haproxy.org/download/2.0/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Image with TLS 1.3 (OpenSSL): https://hub.docker.com/r/me2digital/haproxy20-centos
Image with TLS 1.3 (BoringSSL):
https://hub.docker.com/r/me2digital/haproxy20-boringssl

```
$ docker run --rm --entrypoint /usr/local/sbin/haproxy [MASKED]/haproxy20-centos -vv
HA-Proxy version 2.0.0 2019/06/16 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_PCRE=1 USE_PCRE_JIT=1 USE_PTHREAD_PSHARED=1 USE_REGPARM=1
USE_OPENSSL=1 USE_LUA=1 USE_SLZ=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE +PCRE_JIT
-PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD +PTHREAD_PSHARED +REGPARM
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT
+CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 -ZLIB
+SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL -SYSTEMD
-OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=1).
Built with OpenSSL version : OpenSSL 1.1.1c  28 May 2019
Running on OpenSSL version : OpenSSL 1.1.1c  28 May 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.5
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"),
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with the Prometheus exporter as a service

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
              h2 : mode=HTX        side=FE|BE     mux=H2
              h2 : mode=HTTP       side=FE        mux=H2
       <default> : mode=HTX        side=FE|BE     mux=H1
       <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services :
prometheus-exporter

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace
```

> Willy

Best regards
Aleks

> ---
> Complete changelog from 2.0-dev7 :
[snipp]




[PATCH] MINOR: sample: Add sha2([<bits>]) converter

2019-06-17 Thread Tim Duesterhus
This adds a converter for the SHA-2 family, supporting SHA-224, SHA-256,
SHA-384 and SHA-512.

The converter relies on the OpenSSL implementation, thus only being available
when HAProxy is compiled with USE_OPENSSL.

See GitHub issue #123. The hypothetical `ssl_?_sha256` fetch can then be
simulated using `ssl_?_der,sha2(256)`:

  http-response set-header Server-Cert-FP %[ssl_f_der,sha2(256),hex]
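For reference, the digests this converter produces can be cross-checked with Python's hashlib, which wraps the same SHA-2 family (illustration only; this `sha2()` is a hypothetical stand-in for the HAProxy converter, not its implementation):

```python
import hashlib

def sha2(data: bytes, bits: int = 256) -> bytes:
    """Mimic the sha2 converter: pick the SHA-2 variant by bit width."""
    algo = {224: hashlib.sha224, 256: hashlib.sha256,
            384: hashlib.sha384, 512: hashlib.sha512}[bits]
    return algo(data).digest()

# The digest length is bits/8 bytes, as the added documentation states.
for bits in (224, 256, 384, 512):
    assert len(sha2(b"x", bits)) == bits // 8

# SHA-256 of "1", matching the expectation in the patch's reg-test:
assert sha2(b"1").hex() == \
    "6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b"
```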
---
 doc/configuration.txt| 12 ++-
 reg-tests/converter/sha2.vtc | 60 
 src/sample.c | 67 
 3 files changed, 138 insertions(+), 1 deletion(-)
 create mode 100644 reg-tests/converter/sha2.vtc

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 3e402fb92..2a09bab61 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -14086,9 +14086,19 @@ set-var()
   contain characters 'a-z', 'A-Z', '0-9', '.' and '_'.
 
 sha1
-  Converts a binary input sample to a SHA1 digest. The result is a binary
+  Converts a binary input sample to a SHA-1 digest. The result is a binary
   sample with length of 20 bytes.
 
+sha2([<bits>])
+  Converts a binary input sample to a digest in the SHA-2 family. The result
+  is a binary sample with length of <bits>/8 bytes.
+
+  Valid values for <bits> are 224, 256, 384, 512, each corresponding to
+  SHA-<bits>. The default value is 256.
+
+  Please note that this converter is only available when haproxy has been
+  compiled with USE_OPENSSL.
+
 strcmp()
   Compares the contents of  with the input value of type string. Returns
   the result as a signed integer compatible with strcmp(3): 0 if both strings
diff --git a/reg-tests/converter/sha2.vtc b/reg-tests/converter/sha2.vtc
new file mode 100644
index 0..0354b0a20
--- /dev/null
+++ b/reg-tests/converter/sha2.vtc
@@ -0,0 +1,60 @@
+varnishtest "sha2 converter Test"
+
+#REQUIRE_VERSION=2.1
+#REQUIRE_OPTION=OPENSSL
+
+feature ignore_unknown_macro
+
+server s1 {
+   rxreq
+   txresp
+} -repeat 3 -start
+
+haproxy h1 -conf {
+defaults
+   mode http
+   timeout connect 1s
+   timeout client  1s
+   timeout server  1s
+
+frontend fe
+   bind "fd@${fe}"
+
+    requests
+   http-request  set-var(txn.hash) req.hdr(hash)
+
+   http-response set-header SHA2   "%[var(txn.hash),sha2,hex,lower]"
+   http-response set-header SHA2-224   "%[var(txn.hash),sha2(224),hex,lower]"
+   http-response set-header SHA2-256   "%[var(txn.hash),sha2(256),hex,lower]"
+   http-response set-header SHA2-384   "%[var(txn.hash),sha2(384),hex,lower]"
+   http-response set-header SHA2-512   "%[var(txn.hash),sha2(512),hex,lower]"
+   http-response set-header SHA2-invalid   "%[var(txn.hash),sha2(1),hex,lower]"
+
+   default_backend be
+
+backend be
+   server s1 ${s1_addr}:${s1_port}
+} -start
+
+client c1 -connect ${h1_fe_sock} {
+   txreq -url "/" \
+ -hdr "Hash: 1"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.sha2 == "6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b"
+   expect resp.http.sha2-224 == "e25388fde8290dc286a6164fa2d97e551b53498dcbf7bc378eb1f178"
+   expect resp.http.sha2-256 == "6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b"
+   expect resp.http.sha2-384 == "47f05d367b0c32e438fb63e6cf4a5f35c2aa2f90dc7543f8a41a0f95ce8a40a313ab5cf36134a2068c4c969cb50db776"
+   expect resp.http.sha2-512 == "4dff4ea340f0a823f15d3f4f01ab62eae0e5da579ccb851f8db9dfe84c58b2b37b89903a740e1ee172da793a6e79d560e5f7f9bd058a12a280433ed6fa46510a"
+   expect resp.http.sha2-invalid == ""
+   txreq -url "/" \
+ -hdr "Hash: 2"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.sha2 == "d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35"
+   expect resp.http.sha2-224 == "58b2aaa0bfae7acc021b3260e941117b529b2e69de878fd7d45c61a9"
+   expect resp.http.sha2-256 == "d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35"
+   expect resp.http.sha2-384 == "d063457705d66d6f016e4cdd747db3af8d70ebfd36badd63de6c8ca4a9d8bfb5d874e7fbd750aa804dcaddae7eeef51e"
+   expect resp.http.sha2-512 == "40b244112641dd78dd4f93b6c9190dd46e0099194d5a44257b7efad6ef9ff4683da1eda028cb343aa688f5d3efd7314dafe580ac0bcbf115aeca9e8dc114"
+   expect resp.http.sha2-invalid == ""
+} -run
diff --git a/src/sample.c b/src/sample.c
index 67f59e844..96102504b 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -1537,6 +1537,70 @@ static int sample_conv_sha1(const struct arg *arg_p, struct sample *smp, void *private)
return 1;
 }
 
+#ifdef USE_OPENSSL
+static int sample_conv_sha2(const struct arg *arg_p, struct sample *smp, void *private)
+{
+   struct buffer *trash = get_trash_chunk();
+   int bits = 256;
+   if (arg_p && arg_p->data.sint)
+   bits = arg_p->data.sint;
+
+   switch (bits) {
+   case 224: {
+   SHA256_CTX ctx;
+
+ 
