Re: dequeue on first byte instead of connection close?

2016-12-14 Thread Dylan Jay

> On 15 Dec. 2016, at 1:25 pm, Willy Tarreau  wrote:
> 
> Hi Dylan,
> 
> On Thu, Dec 15, 2016 at 12:59:32PM +0700, Dylan Jay wrote:
>> Still no one else has a use case for a maxconn based on when the end of the
>> headers is received rather than when the connection is closed?
>> This is still an issue for us and we haven't found any load balancer that
>> can handle it. It leaves our servers under-utilised. 
>> 
>> Basically we want to set maxconn = 20 (or some large number) and
>> maxconn_processing = 1 (where maxconn_processing is defined as not having
>> finished returning headers yet).
> 
> I understand how this could be useful to you, but I also think that you
> have a software architecture issue that makes this use case quite unique.
> But at the same time I think it could make sense to have such a feature,
> for example for those using fd-passing between their dynamic frontend and
> the static server, as well as those using sendfile().
> 
> What I think is that we could implement a new per-server and per-backend
> counter of active requests still being processed, and have a max for these
> ones so that we can also decide to queue/dequeue on such events. But there
> are different steps and some people will want to be even finer by only
> counting as active the requests whose POST body has been completely sent.
> People using WAFs may find this useful for example : you send a large
> request, you know the request will not be processed until completely sent,
> and you know it's not completed until you receive the headers.
> 
> We would also need to implement a new LB algorithm, eg: "least-active" as
> a complement to leastconn to pick the most suitable server.
> 
> I'm seeing a similar use case when using haproxy with distcc, the data
> transfer time is not negligible compared to the compilation time and if
> I could parse the protocol, I'd need to take into account the real
> processing time as well.
> 
> Now the obvious question is : who is interested in working on implementing
> this ? Are you willing to take a look at it and implement it yourself,
> possibly with some help from the rest of us ?
> 

I think the distcc example does sound similar. That is basically what's 
happening here. The time to read from disk + send back is not insignificant, 
but the CPU part is done and the server is able to handle another request. Not 
sure if I mentioned this before, but this architecture is not mine specifically 
but that of an open source python based application server (http://zope.org).
I also perhaps should mention that we have partially solved this by using 
x-sendfile headers (also called accelerated redirects): for some requests the 
backend returns just the header, and the response then re-enters haproxy via a 
different queue with a high maxconn and gets sent to nginx instead. However 
this makes for a more complex setup and doesn't cover all the use cases where 
the backend server is streaming rather than processing.
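As a sketch, that workaround looks roughly like this (all names, addresses and the URL pattern here are invented for illustration):

```
# Sketch of the two-queue x-sendfile workaround (hypothetical names,
# addresses and URL pattern). Blob-serving requests go to nginx with a
# generous maxconn; dynamic requests keep the strict per-server limit.
frontend fe
    bind :80
    acl is_blob path_beg /blobs/
    use_backend static if is_blob
    default_backend dynamic

backend dynamic
    server zope1 10.0.0.1:8080 maxconn 1

backend static
    server nginx1 10.0.0.2:8081 maxconn 1000
```

The limitation described above is visible here: the split only works when the blob URLs can be recognised up front by an ACL.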

You might be right that there could be different definitions of “active”, but 
at least for your POST example it is the same definition as for us. Nothing 
gets sent from zope until processing is over. Actually, that's only partially 
true: there is a way to stream the result as you are processing it, in which 
case the signal of “any byte received” is not enough to differentiate a 
response that is still being processed from one that is not. I guess there 
could be an optional response header to indicate processing or not processing 
(assuming haproxy parses response headers)? This is an edge case however, and 
in such cases it might be possible to know their URLs in advance and handle 
them via a different queue.

In terms of implementing, unfortunately my colleagues and I are all Python 
developers. I haven't touched C for 20 years :(

Re: dequeue on first byte instead of connection close?

2016-12-14 Thread Willy Tarreau
Hi Dylan,

On Thu, Dec 15, 2016 at 12:59:32PM +0700, Dylan Jay wrote:
> Still no one else has a use case for a maxconn based on when the end of the
> headers is received rather than when the connection is closed?
> This is still an issue for us and we haven't found any load balancer that
> can handle it. It leaves our servers under-utilised. 
> 
> Basically we want to set maxconn = 20 (or some large number) and
> maxconn_processing = 1 (where maxconn_processing is defined as not having
> finished returning headers yet).

I understand how this could be useful to you, but I also think that you
have a software architecture issue that makes this use case quite unique.
But at the same time I think it could make sense to have such a feature,
for example for those using fd-passing between their dynamic frontend and
the static server, as well as those using sendfile().

What I think is that we could implement a new per-server and per-backend
counter of active requests still being processed, and have a max for these
ones so that we can also decide to queue/dequeue on such events. But there
are different steps and some people will want to be even finer by only
counting as active the requests whose POST body has been completely sent.
People using WAFs may find this useful for example : you send a large
request, you know the request will not be processed until completely sent,
and you know it's not completed until you receive the headers.

We would also need to implement a new LB algorithm, eg: "least-active" as
a complement to leastconn to pick the most suitable server.
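For context, the connection-based queueing that such a "least-active" algorithm would complement looks like this today (a sketch with invented server names and addresses):

```
backend app
    balance leastconn    # today: pick the server with the fewest connections
    # each slot stays occupied until the connection closes, even while the
    # response is merely streaming back to the client
    server s1 10.0.0.1:8080 maxconn 1
    server s2 10.0.0.2:8080 maxconn 1
```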

I'm seeing a similar use case when using haproxy with distcc, the data
transfer time is not negligible compared to the compilation time and if
I could parse the protocol, I'd need to take into account the real
processing time as well.

Now the obvious question is : who is interested in working on implementing
this ? Are you willing to take a look at it and implement it yourself,
possibly with some help from the rest of us ?

Best regards,
Willy



Re: dequeue on first byte instead of connection close?

2016-12-14 Thread Dylan Jay
Still no one else has a use case for a maxconn based on when the end of the 
headers is received rather than when the connection is closed?
This is still an issue for us and we haven’t found any load balancer that can 
handle it. It leaves our servers under-utilised. 

Basically we want to set maxconn = 20 (or some large number) and 
maxconn_processing = 1 (where maxconn_processing is defined as not having 
finished returning headers yet).
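In configuration terms the request would look something like this (note that maxconn_processing is hypothetical and does not exist in haproxy; the server name and address are invented):

```
backend zope
    # hypothetical keyword: allow 20 concurrent connections per server,
    # but only 1 request that has not yet returned its response headers
    server app1 10.0.0.1:8080 maxconn 20 maxconn_processing 1
```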



> On 19 Mar. 2014, at 11:29 am, Dylan Jay  wrote:
> 
> Hi,
> 
> I was wondering if you'd given any more thought to this feature?
> 
> To summarise:
> - we are using a backend service that has both synchronous and async threads.
> - it has its own internal queuing
> - as soon as haproxy gets the first byte the body we know the backend service 
> is able to accept another connection as it's handed off the streaming to an 
> asynchronous thread
> - the way haproxy works now we can't take advantage of this. The streamed 
> response back to the user could be taking a long time to complete, yet our 
> backend service is sitting unoccupied as haproxy believes it's already reached 
> its maxconn of 1.
> - setting maxconn higher doesn't solve the problem
> - we can't differentiate the urls such that we have two queues with different 
> maxconn
> 
> What we'd like is something like maxstreamingconn which can be set higher 
> than maxconn. 
> 
> Dylan Jay
> 
> 
> 
> On 19 Nov 2012, at 10:56 pm, Dylan Jay  wrote:
> 
>> On 20/11/2012, at 12:29 AM, Willy Tarreau  wrote:
>> 
>>> On Tue, Nov 20, 2012 at 12:08:12AM +1300, Dylan Jay wrote:
 No not all responses. Zope has an object database with blob support. It 
 only
 applies to images, document, videos etc stored in the database, which is 
 sort
 of like static files. Once it's decide to send the file, the transaction 
 can
 end and the actual sending of the file is handled off to an async thread. 
 It
 doesn't apply when a html template page is being generated since the
 transaction doesn't end till the last bit of the page has been generated.
>>> 
>>> OK, that's becoming a bit tricky then.
>> 
>> I think I'm making it sound more complicated than it is. Basically in
>> most web apps once the status header has been sent then most
>> processing has already happened since otherwise you can't indicate to
>> the browser an error in processing occurred. So status header means
>> generally the server will be ready to receive another request very
>> soon.
>> 
>>> 
 The problem I'm trying to solve is delivery very large blobs like long 
 videos
 which could take a long time to stream and aren't amenable to cache in
 something like varnish. Currently while thats happening the cpu is being
 underutilised.
>>> 
>>> Do you know if there is something in the request that can tell you whether 
>>> the
>>> request is for a large blob or a generated page ? If so, I have a solution 
>>> :-)
>> 
>> :)
>> Only the extension in the URL or perhaps a range header with some
>> video players. But both are hacks and won't apply in all cases which
>> why I'm looking to avoid it.
>> 
>>> 
 I'm trying out setting maxconn to 2 even when dynamic requests will be
 handled synchronously. At least some of the time the request will get
 processed earlier increasing the cpu utilisation.
>>> 
>>> I agree and this is generally what people do when running with such low 
>>> limits.
>>> 
 The risk is a request could
 get stuck behind a slow request and since haproxy has already handed off to
 the backend it can't redistribute it (or it could but then it would get 
 done
 twice).
>>> 
>>> In general the risk is low because if a request gets too slow and times out,
>>> there are big chances that the second request will experience the same fate.
>> 
>> Not really. Some requests are just slow transactions like complex
>> saves or long generated HTML pages. If a request is waiting behind a 2
>> second request when it could have been sent to a server that was just
>> serving video then that's inefficient.
>> 
>> 
>>> 
 but if there was a setting like max-active-requests=1 then that would 
 result
 in better balancing. or perhaps if there was a way to use acls with 
 response
 headers to up the maxconn while serving a video?
>>> 
>>> The maxconn cannot be adjusted that way, it would be a bit dangerous. But 
>>> maybe
>>> we could have a per-server active-request count and use this to offset the
>>> maxconn vs curr_conn computations when deciding whether to dequeue or not.
>>> 
>>> However I still think that playing with maxconn is a bit dangerous because 
>>> I'm
>>> fairly sure that your server has a hard limit you don't want to cross. And
>>> that's the goal of the maxconn setting.
>>> 
>> 
>> But if there was a maxconn and a max processing or max action request
>> limit I'd set maxconn to say 4 and maxprocessing to 1. There 

IT Security Customer DB

2016-12-14 Thread Karlie redd
Good Day,



Hope you are doing well,



Would you be interested in acquiring the list of Companies or Client's using IT 
Security?



We provide the Database across North America, EMEA, APAC and Latin America.



You can also acquire companies using: Lookout, Corel, Ashampoo, Canon Inc, 
Fortinet, Acronis, NVidia, Microsoft Corporation, Adobe Systems, AVG 
Technologies, Symantec, BullGuard, Trend Micro, Comodo Group, F-Secure, Sophos, 
G Data, Dr. Web, Avira, Webroot, Intel Security, Avast Software, and many 
more...



Please let me know your target criteria!



Regards,

Karlie Laine



If you don't want to be included in our mailing list, please reply with 
"Leave Out" in the subject line



Re: [ANNOUNCE] haproxy-1.7.1

2016-12-14 Thread Igor Pav
Hi Lukas, in fact openssl already has early TLS 1.3 support in its
development branch (to be released in 1.1.1), and BoringSSL supports TLSv1.3 already.

On Thu, Dec 15, 2016 at 1:48 AM, Lukas Tribus  wrote:
> Hi Igor,
>
>
> Am 14.12.2016 um 14:37 schrieb Igor Pav:
>>
>> That's great!
>>
>> Will HAProxy adopt TLS 1.3 soon?
>
>
> This actually depends way more on openssl than it depends on haproxy (which
> most likely only needs a few tweaks).
>
> TLS 1.3 is the primary focus of the next openssl release [1], which I assume
> is gonna be 1.2.0, but I doubt there is an ETA for this.
>
>
>
> Regards,
> Lukas
>
>
> [1] https://www.openssl.org/policies/roadmap.html



Re: [ANNOUNCE] haproxy-1.7.1

2016-12-14 Thread Lukas Tribus

Hi Igor,


Am 14.12.2016 um 14:37 schrieb Igor Pav:

That's great!

Will HAProxy adopt TLS 1.3 soon?


This actually depends way more on openssl than it depends on haproxy 
(which most likely only needs a few tweaks).


TLS 1.3 is the primary focus of the next openssl release [1], which I 
assume is gonna be 1.2.0, but I doubt there is an ETA for this.




Regards,
Lukas


[1] https://www.openssl.org/policies/roadmap.html



Re: lua support does not build on FreeBSD

2016-12-14 Thread thierry . fournier
Hi, thanks for the patch.

Maybe it is more efficient to simply add a "#define _KERNEL", or the
following code:

#if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
#define _KERNEL
#endif

I'm not sure that src/hlua_fcn.c is the right file for adding these
kinds of defines or includes. Even if these defines are used only in
this file, I suppose they could be needed in other files (in the
future).

Unfortunately, pattern matching uses only the length in bits of the mask,
so I can't compare.

Maybe Willy has an opinion about this?

Thierry


On Wed, 14 Dec 2016 13:42:26 +
David CARLIER  wrote:

> On Linux it's also an "alias" if I'm not mistaken, someone might
> confirm it or not though. Kind regards.
> 
> On 14 December 2016 at 13:32, Dmitry Sivachenko  wrote:
> >
> >> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
> >>
> >> Hi,
> >>
> >> I've made a small patch against the 1.8 branch though. Does it suit? (i.e. I
> >> made all the fields available, not sure if they would be useful one day.)
> >>
> >
> > Well, I was not sure what this s6_addr32 is used for and if it is possible 
> > to avoid its usage (since it is Linux-specific).
> > If not, then this is probably the correct solution.
> >
> 



Re: lua support does not build on FreeBSD

2016-12-14 Thread David CARLIER
Hi,

On 14 December 2016 at 14:48,   wrote:
> Hi, thanks for the patch.
>
> Maybe it is more efficient to simply add a "#define _KERNEL", or the
> following code:
>
> #if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
> #define _KERNEL
> #endif
>

It is easy (and tempting) but not a good practice ... in my opinion :-)

> I'm not sure that src/hlua_fcn.c is the right file for adding these
> kinds of defines or includes. Even if these defines are used only in
> this file, I suppose they could be needed in other files (in the
> future).
>

I was not sure myself, I hesitated to put it in one of the headers ...
let me know what you all think.

Cheers.

> Unfortunately, pattern matching uses only the length in bits of the mask,
> so I can't compare.
>
> Maybe Willy has an opinion about this?
>
> Thierry
>
>
> On Wed, 14 Dec 2016 13:42:26 +
> David CARLIER  wrote:
>
>> On Linux it's also an "alias" if I'm not mistaken, someone might
>> confirm it or not though. Kind regards.
>>
>> On 14 December 2016 at 13:32, Dmitry Sivachenko  wrote:
>> >
>> >> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
>> >>
>> >> Hi,
>> >>
>> >> I've made a small patch against the 1.8 branch though. Does it suit? (i.e. I
>> >> made all the fields available, not sure if they would be useful one day.)
>> >>
>> >
>> > Well, I was not sure what this s6_addr32 is used for and if it is possible 
>> > to avoid its usage (since it is Linux-specific).
>> > If not, then this is probably the correct solution.
>> >
>>



Preparing 1.8

2016-12-14 Thread Willy Tarreau
Hi all,

I wanted to send this earlier but we've had two quite busy weeks, and
now things are cooling down.

I want to propose a plan for the 1.8 development cycle. Everyone has
noticed that 1.7's cycle was a terrible mess, with nothing happening at the
beginning and everything being merged in a hurry in the last few months.

Since it's difficult to enforce merge windows on a year-long project, we
have two options :

  - release very often and maintain only a small set of versions along
time (more or less the Linux model). We're quite not ready for this,
it may only make sense if we reach a level of contributions which
require to release every 3-4 months but that's not the case now and
that would be counter-productive ;

  - have two distinct merge windows, one for unplanned stuff and another
one for planned stuff that people advertise in advance so that
others can watch their progress and know that some merge/rebase may
ultimately be needed.

For 1.8 I'll go via the second approach. What I intend to do is the
following :

  - phase 1 : pending queue and unplanned changes : every contribution
is welcome (well, based on technical quality and usefulness) until
Friday 2017-03-31. We do already have some stuff pending that I
still didn't have the time to merge (don't stress, if I told you
I'll pick it, I'll do it or I'll have to invent a good excuse for
not doing so).

  - phase 2 : completion of planned changes : some changes take more
than a few months to be completed and will have to be merged
later. Such changes *must* be announced in advance so that others
have some time to discuss the impact on their work (or even oppose
the inclusion). I haven't set a deadline for the announcements yet,
but let's say end of February. This phase will allow some of us to
only focus on planned work and not to have to review code arriving
at random time. This phase 2 will end on 2017-09-30. At this point,
all incomplete features will be postponed for 1.9.

  - phase 3 : tests, fixes, cleanups and reverts before the release : no
new feature is supposed to arrive here. It's very similar to a
maintenance branch except that we can revert stuff that doesn't work
if it's considered unfixable in a short time. These versions are
release candidates. This phase has no strict end date, though we'll
aim for October or November but not later, and we'll favour reverting
features over postponing fixes.

We'll need to emit numerous development versions if we want to have
testers. It's important to ask for them, as developers generally don't
see time flying.

With such a model I expect that we'll be able to implement some stuff
which requires more concentration than what we could have during 1.7-dev
(but 1.6 bugs have plagued us for a while).

We do already have the following features in queue for phase 1 (they're
subject to evolve, change or even to disappear) :

  - JSON stats
  - PCRE2 support
  - crt-bind-list
  - proxy-addr
  - asynchronous / pipelined SPOP

For phase 2, an option was already taken for such features (remember
that it only means we'll attempt to get them merged) :

  - HTTP/2
  - multi-threading
  - RAM-based "favicon" cache
  - DNS SRV records
  - openssl async API implementation

There are also a number of interesting features which are looking for an
adopter in the ROADMAP file and a few extra things like these ones :

  - stack-based expression evaluator to merge sample fetch functions and
converters, providing extended possibilities (string compare etc)

  - reintegration of the systemd wrapper into the main program to run in
a master-worker mode (see how that conflicts with multi-threading).
This work was already attempted by Simon a few years ago but the
internal architecture was quite not ready for this. I don't know how
it compares now.

  - one log-format per log server

  - multi-process support for peers

  - check result broadcasting over peers protocol


If you intend to work on something in these areas or any other one you
have in mind and want to reserve a slot for phase 2 (merge no later than
end of September), please announce it here. Otherwise you'll have to get
it in a mergeable state before end of March, or it will only be for 1.9.

I've explained this at a meetup two weeks ago, the slides are now online
here for those interested (not many more info than what is explained here
though) :

   http://www.slideshare.net/MichaelCarney6/whats-new-in-haproxy

Any questions, comments, objections ?

Thanks,
Willy




Re: lua support does not build on FreeBSD

2016-12-14 Thread David CARLIER
On Linux it's also an "alias" if I'm not mistaken, someone might
confirm it or not though. Kind regards.

On 14 December 2016 at 13:32, Dmitry Sivachenko  wrote:
>
>> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
>>
>> Hi,
>>
>> I've made a small patch against the 1.8 branch though. Does it suit? (i.e. I
>> made all the fields available, not sure if they would be useful one day.)
>>
>
> Well, I was not sure what this s6_addr32 is used for and if it is possible to 
> avoid its usage (since it is Linux-specific).
> If not, then this is probably the correct solution.
>



Re: [ANNOUNCE] haproxy-1.7.1

2016-12-14 Thread Igor Pav
That's great!

Will HAProxy adopt TLS 1.3 soon?

On Tue, Dec 13, 2016 at 7:39 AM, Willy Tarreau  wrote:
> Hi,
>
> HAProxy 1.7.1 was released on 2016/12/13. It added 28 new commits
> after version 1.7.0.
>
> It addresses a few issues related to how buffers are allocated under
> low memory condition consecutive to the applet scheduling changes
> introduced before 1.6 was released (Christopher found a nest of pre-1.6
> bugs in this area when trying to stress SPOE and each time he would fix
> one, another would pop up), and a few other issues specific to 1.7 :
>
>   - CONNECT method was broken since the introduction of filters in
> 1.7-dev2 or so. It seems like nobody deploys a development version
> in front of an outgoing proxy (which I can easily understand)
>
>   - "show stat resolvers" and "show tls-keys" were wrong after the move
>  out of cli.c (typo in return value)
>
>   - "show stat" on a proxy with no LB algo (transparent or redispatch)
> could crash by trying to dereference the algo name which was null.
> Now it will report "none" or "unknown".
>
>   - fixed LibreSSL support
>
> The rest is pretty minor and mostly doc cleanups and spelling fixes. Given
> that the two "major" bugs and half of the medium ones also affect 1.6,
> expect 1.6.11 in the next few weeks. It's important to note that while
> marked "major", they only manifest under strong memory pressure.
>
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Sources  : http://www.haproxy.org/download/1.7/src/
>Git repository   : http://git.haproxy.org/git/haproxy-1.7.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-1.7.git
>Changelog: http://www.haproxy.org/download/1.7/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
>
> Willy
> ---
> Complete changelog :
>
> Ben Shillito (1):
>   DOC: Added 51Degrees conv and fetch functions to documentation.
>
> Christopher Faulet (12):
>   BUG/MEDIUM: http: Fix tunnel mode when the CONNECT method is used
>   BUG/MINOR: http: Keep the same behavior between 1.6 and 1.7 for 
> tunneled txn
>   BUG/MINOR: filters: Protect args in macros HAS_DATA_FILTERS and 
> IS_DATA_FILTER
>   BUG/MINOR: filters: Invert evaluation order of HTTP_XFER_BODY and 
> XFER_DATA analyzers
>   BUG/MINOR: http: Call XFER_DATA analyzer when HTTP txn is switched in 
> tunnel mode
>   DOC: Add undocumented argument of the trace filter
>   DOC: Fix some typo in SPOE documentation
>   BUG/MINOR: cli: be sure to always warn the cli applet when input buffer 
> is full
>   MINOR: applet: Count number of (active) applets
>   MINOR: task: Rename run_queue and run_queue_cur counters
>   BUG/MEDIUM: stream: Save unprocessed events for a stream
>   BUG/MAJOR: Fix how the list of entities waiting for a buffer is handled
>
> Dragan Dosen (1):
>   BUG/MINOR: cli: allow the backslash to be escaped on the CLI
>
> Luca Pizzamiglio (1):
>   BUILD/MEDIUM: Fixing the build using LibreSSL
>
> Marcin Deranek (1):
>   MINOR: proxy: Add fe_name/be_name fetchers next to existing fe_id/be_id
>
> Matthieu Guegan (1):
>   BUG/MINOR: http: don't send an extra CRLF after a Set-Cookie in a 
> redirect
>
> Ruoshan Huang (1):
>   DOC: Fix map table's format
>
> Thierry FOURNIER / OZON.IO (3):
>   BUG/MEDIUM: variables: some variable name can hide another ones
>   DOC: lua: Documentation about some entry missing
>   MINOR: Do not forward the header "Expect: 100-continue" when the option 
> http-buffer-request is set
>
> Tim Düsterhus (1):
>   DOC: Spelling fixes
>
> Willy Tarreau (7):
>   BUG/MEDIUM: proxy: return "none" and "unknown" for unknown LB algos
>   BUG/MINOR: stats: make field_str() return an empty string on NULL
>   BUG/MAJOR: stream: fix session abort on resource shortage
>   BUG/MEDIUM: cli: fix "show stat resolvers" and "show tls-keys"
>   DOC: mention that req_tot is for both frontends and backends
>   BUG/MINOR: stats: fix be/sessions/max output in html stats
>   [RELEASE] Released version 1.7.1
>
>



Re: lua support does not build on FreeBSD

2016-12-14 Thread Dmitry Sivachenko

> On 14 Dec 2016, at 16:24, David CARLIER  wrote:
> 
> Hi,
> 
> I've made a small patch against the 1.8 branch though. Does it suit? (i.e. I
> made all the fields available, not sure if they would be useful one day.)
> 

Well, I was not sure what this s6_addr32 is used for and if it is possible to 
avoid its usage (since it is Linux-specific).
If not, then this is probably the correct solution. 




lua support does not build on FreeBSD

2016-12-14 Thread David CARLIER
Hi,

I've made a small patch against the 1.8 branch though. Does it suit? (i.e. I
made all the fields available, not sure if they would be useful one day.)

Kind regards.
From 7dff470cdbe0ea00ce78b504e95f8c639a11a365 Mon Sep 17 00:00:00 2001
From: David CARLIER 
Date: Wed, 14 Dec 2016 13:17:04 +
Subject: [PATCH] BUG/MINOR: hlua: *bsd fix

s6_addr* fields are not available in userland on
BSD systems in general. Needs backport to 1.7.x.
---
 src/hlua_fcn.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
index 5ac533a0..6892fdaf 100644
--- a/src/hlua_fcn.c
+++ b/src/hlua_fcn.c
@@ -39,6 +39,12 @@ static int class_listener_ref;
 
 #define STATS_LEN (MAX((int)ST_F_TOTAL_FIELDS, (int)INF_TOTAL_FIELDS))
 
+#if defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
+#define s6_addr8	__u6_addr.__u6_addr8
+#define s6_addr16	__u6_addr.__u6_addr16
+#define s6_addr32	__u6_addr.__u6_addr32
+#endif
+
 static struct field stats[STATS_LEN];
 
 int hlua_checkboolean(lua_State *L, int index)
-- 
2.11.0



Re: [PATCH] MINOR: dns: support advertising UDP message size.

2016-12-14 Thread Conrad Hoffmann
Hello Willy, Baptiste,

sorry to revive this very old thread, but I was wondering if there is still
interest in this, now that the DNS subsystem has been refactored?

I was looking at implementing SRV records and noticed that the default UDP
message size of 512 has even become a build-time constant now, so the patch
would need some additional work, but I'd be happy to do so if you'd still
be open to merging it.

As another point of interest, many of our production systems' DNS responses
don't fit into a UDP packet anymore, so we have to rely on TCP fallback a
lot. Would such a thing have any chance of getting merged? Or would there
be fundamental concerns with that approach? I mean, it could of course be
made opt-in only, for example...

Thanks a lot,
Conrad


On 07/03/2016 09:05 PM, Baptiste wrote:
>> It's very nice having support for EDNS0, but IMHO it shouldn't be
>> enabled by default if it doesn't fallback.
> 
> Hi Remi,
> 
> My intention was to not enable this feature by default.
> 
> Baptiste
> 

-- 
Conrad Hoffmann
Traffic Engineer

SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany

Managing Director: Alexander Ljung | Incorporated in England & Wales
with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
HRB 110657B