Re: Re: CI caching improvement

2022-03-22 Thread William Lallemand
On Mon, Mar 21, 2022 at 04:19:38PM +0500, Илья Шипицин wrote:
> 
> I think we can adjust build-ssl.sh script to download tagged quictls (and
> cache it in the way we do cache openssl itself)
> Tags · quictls/openssl (github.com)
> 
> 

Unfortunately that won't work: those tags are inherited from the OpenSSL
repository and do not contain any QUIC code. They haven't made any tags
for quictls, probably because they don't want to commit to active
maintenance.

However, it looks like they are maintaining a branch with their QUIC
patches for each OpenSSL subversion, and those branches are not
modified much once pushed.

Maybe we could just take the latest commit of the "openssl-3.0.2+quic"
branch for now, and update it from time to time.

The other solution I considered for VTest was to cache one build
per day. Maybe we could do that for quictls, because the project is not
that active and most of the time the people developing on QUIC test
everything on their computer. That could be the best compromise between
low maintenance and automatic update of the quictls code.
The only requirement would be to display the quictls commit ID
somewhere in the log so we can be certain of the version we are linking with.
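As a rough illustration of that daily-cache idea (the helper name, key format, and placeholder commit below are hypothetical, not actual HAProxy CI code; only the branch name comes from the mail above), a small shell snippet could derive a once-per-day cache key and print the commit ID in the log:

```shell
#!/bin/sh
# Sketch of the daily quictls cache idea. The branch name comes from the
# discussion above; the helper, key format and placeholder commit are
# hypothetical, not actual project code.

BRANCH="openssl-3.0.2+quic"
echo "branch: $BRANCH"

# Build a cache key from the quictls commit ID plus a UTC date stamp, so
# the cache is refreshed at most once per day.
quictls_cache_key() {
    commit="$1"
    day="$2"    # pass "$(date -u +%Y%m%d)" in a real run
    printf 'quictls-%s-%s\n' "$day" "$commit"
}

# In CI the commit would be resolved from the branch tip, e.g.:
#   commit=$(git ls-remote https://github.com/quictls/openssl \
#            "refs/heads/$BRANCH" | cut -f1)
commit="0123abc"    # placeholder, stands in for the real branch tip

# Always print the commit ID in the log, so the linked version is unambiguous.
echo "quictls commit: $commit"
quictls_cache_key "$commit" "20220322"
```

Because the date is part of the key, a cache miss (and hence a rebuild) happens at most once per day, while the logged commit ID always identifies the exact quictls version being linked.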


-- 
William Lallemand



Re: CI caching improvement

2022-03-21 Thread Tim Düsterhus

William,

On 3/18/22 11:31, William Lallemand wrote:

It looks like it is available on our repositories as well; I just tested
it and it works correctly.

Honestly I really don't like the dependency on another repository with a
GitHub-specific format.

I agree that a cleaner integration with GitHub using their specific tools
is nice, but I don't want us to be locked into GitHub; we are still
using Cirrus, Travis, sometimes GitLab, and also running some of the
scripts by hand.

We also try to avoid dependencies on other projects, and it's much
simpler to have a few shell scripts and a CI configuration in the
repository. TypeScript is also not a language we would want to have to
debug, for example.


Okay, that's fair.


Given that GitHub now offers the job restart feature, we could skip
the VTest caching, since it's a little bit ugly. Only the quictls cache
needs to be fixed.


Perfect, I agree here. QUICTLS caching is useful and VTest caching is 
obsolete with the single-job restart.


Best regards
Tim Düsterhus



Re: CI caching improvement

2022-03-21 Thread Илья Шипицин
Fri, 18 Mar 2022 at 15:32, William Lallemand :

> On Wed, Mar 16, 2022 at 09:31:56AM +0100, Tim Düsterhus wrote:
> > Willy,
> >
> > On 3/8/22 20:43, Tim Düsterhus wrote:
> > >> Yes my point was about VTest. However you made me think about a very
> good
> > >> reason for caching haproxy builds as well :-)  Very commonly, some
> VTest
> > >> randomly fails. Timing etc are involved. And at the moment, it's
> impossible
> > >> to restart the tests without rebuilding everything. And it happens to
> me to
> > >> click "restart all jobs" sometimes up to 2-3 times in a row in order
> to end
> > >
> > > I've looked up that roadmap entry I was thinking about: A "restart this
> > > job" button apparently is planned for Q1 2022.
> > >
> > > see https://github.com/github/roadmap/issues/271 "any individual job"
> > >
> > > Caching the HAProxy binary really is something I strongly advice
> against
> > > based on my experience with GitHub Actions and CI in general.
> > >
> > > I think the restart of the individual job sufficiently solves the issue
> > > of flaky builds (until they are fixed properly).
> > >
> >
> > In one of my repositories I noticed that this button is now there. One
> > can now re-run individual jobs and also all failed jobs. See screenshots
> > attached.
> >
>
> Hello Tim,
>
> It looks like it is available as well on our repositories, I just test
> it and it works correctly.
>
> Honestly I really don't like the dependency to another repository with a
> format specific to github.
>
> I agree that a cleaner integration with github with their specific tools
> is nice, but I don't want us to be locked with github, we are still
> using cirrus, travis, sometimes gitlab, and also running some of the
> scripts by hand.
>
> We also try to avoid the dependencies to other projects and its much
> simplier to have few shell scripts and a CI configuration in the
> repository. And typescript is not a language we would want to depend on
> if we need to debug it for example.
>
> Giving that github is offering the job restart feature, we could skip
> the VTest caching, since it's a little bit ugly. Only the quictls cache
> need to be fixed.
>

I think we can adjust the build-ssl.sh script to download tagged quictls
(and cache it the way we cache openssl itself):
Tags · quictls/openssl (github.com)
<https://github.com/quictls/openssl/tags>


>
> Regards,
>
> --
> William Lallemand
>


Re: CI caching improvement

2022-03-18 Thread William Lallemand
On Wed, Mar 16, 2022 at 09:31:56AM +0100, Tim Düsterhus wrote:
> Willy,
> 
> On 3/8/22 20:43, Tim Düsterhus wrote:
> >> Yes my point was about VTest. However you made me think about a very good
> >> reason for caching haproxy builds as well :-)  Very commonly, some VTest
> >> randomly fails. Timing etc are involved. And at the moment, it's impossible
> >> to restart the tests without rebuilding everything. And it happens to me to
> >> click "restart all jobs" sometimes up to 2-3 times in a row in order to end
> > 
> > I've looked up that roadmap entry I was thinking about: A "restart this
> > job" button apparently is planned for Q1 2022.
> > 
> > see https://github.com/github/roadmap/issues/271 "any individual job"
> > 
> > Caching the HAProxy binary really is something I strongly advice against
> > based on my experience with GitHub Actions and CI in general.
> > 
> > I think the restart of the individual job sufficiently solves the issue
> > of flaky builds (until they are fixed properly).
> > 
> 
> In one of my repositories I noticed that this button is now there. One 
> can now re-run individual jobs and also all failed jobs. See screenshots 
> attached.
> 

Hello Tim,

It looks like it is available on our repositories as well; I just tested
it and it works correctly.

Honestly I really don't like the dependency on another repository with a
GitHub-specific format.

I agree that a cleaner integration with GitHub using their specific tools
is nice, but I don't want us to be locked into GitHub; we are still
using Cirrus, Travis, sometimes GitLab, and also running some of the
scripts by hand.

We also try to avoid dependencies on other projects, and it's much
simpler to have a few shell scripts and a CI configuration in the
repository. TypeScript is also not a language we would want to have to
debug, for example.

Given that GitHub now offers the job restart feature, we could skip
the VTest caching, since it's a little bit ugly. Only the quictls cache
needs to be fixed.

Regards,

-- 
William Lallemand



Re: CI caching improvement

2022-03-16 Thread Tim Düsterhus

Willy,

On 3/8/22 20:43, Tim Düsterhus wrote:

Yes my point was about VTest. However you made me think about a very good
reason for caching haproxy builds as well :-)  Very commonly, some VTest
randomly fails. Timing etc are involved. And at the moment, it's impossible
to restart the tests without rebuilding everything. And it happens to me to
click "restart all jobs" sometimes up to 2-3 times in a row in order to end


I've looked up that roadmap entry I was thinking about: A "restart this
job" button apparently is planned for Q1 2022.

see https://github.com/github/roadmap/issues/271 "any individual job"

Caching the HAProxy binary really is something I strongly advise against
based on my experience with GitHub Actions and CI in general.

I think the restart of the individual job sufficiently solves the issue
of flaky builds (until they are fixed properly).



In one of my repositories I noticed that this button is now there. One 
can now re-run individual jobs and also all failed jobs. See screenshots 
attached.


Best regards
Tim Düsterhus

Re: CI caching improvement

2022-03-08 Thread Илья Шипицин
scripts/build-vtest.sh was/is reused for Cirrus and Travis

On Wed, Mar 9, 2022, 12:05 AM Tim Düsterhus  wrote:

> William,
>
> On 3/8/22 16:06, William Lallemand wrote:
> > Let me know if we can improve the attached patch, otherwise I'll merge
> > it.
> >
>
> Let me make a competing proposal that:
>
> 1. Keeps the complexity out of the GitHub workflow configuration in
> haproxy/haproxy.
> 2. Still allows VTest caching.
>
> For my https://github.com/TimWolla/haproxy-auth-request repository I
> have created a reusable GitHub Action to perform the HAProxy
> installation similar to 'actions/checkout':
>
> https://github.com/TimWolla/action-install-haproxy/
>
> I just spent a bit of time to fork that action and to make it VTest
> specific:
>
> https://github.com/TimWolla/action-install-vtest/
>
> The action receives the VTest branch or commit as the input and will
> handle all the heavy lifting of downloading, compiling and caching VTest.
>
> The necessary changes to HAProxy then look like this:
>
>
> https://github.com/TimWolla/haproxy/commit/78af831402e354f22d67682be0f323dec9c26a52
>
> This basically replaces the use of 'scripts/build-vtest.sh' by
> 'timwolla/action-install-vtest@main', so the configuration in the
> haproxy/haproxy repository is not getting any more complicated, as all
> the heavy lifting is done in the action which can be independently
> tested and maintained.
>
> If this proposal sounds good to you, then I'd like to suggest the
> following:
>
> 1. Willy creates a new haproxy/action-install-vtest repository in the
> haproxy organization.
> 2. Willy creates a new GitHub team with direct push access to that
> repository.
> 3. Willy adds me to that team, so that I can maintain that repository
> going forward (e.g. to handle the Dependabot pull requests that keep the
> JavaScript dependencies up to date).
>
> If that repository is properly set up, I'll send a patch to switch over
> haproxy/haproxy to make use of that action.
>
> Best regards
> Tim Düsterhus
>


Re: [EXTERNAL] Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

William,

On 3/8/22 21:30, Tim Düsterhus wrote:

- The action is also easily reusable by other projects. For testing my


Adding to that: It's also easily reusable by the other workflows. We 
currently have the separate musl.yml workflow that does this:


https://github.com/haproxy/haproxy/blob/5ce1299c643543c9b17b4124b299feb3caf63d9e/.github/workflows/musl.yml#L19-L20

Your patch proposal doesn't adjust that, but with the dedicated action 
it could automatically benefit from caching or any other improvements we 
make to the VTest installation without needing to touch all the .yml 
files separately.


Here's an example run with an updated commit:

https://github.com/TimWolla/haproxy/runs/5470838975?check_suite_focus=true

https://github.com/TimWolla/haproxy/commit/cefe211b774f0393d4f78268a23036f32b74ee4b

Best regards
Tim Düsterhus



Re: [EXTERNAL] Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

William,

On 3/8/22 20:54, William Lallemand wrote:

Honestly I'm confused, it is overcomplicated in my opinion :(

I don't really see the benefits in creating a whole new repository
instead of the few lines in the yaml file.


I believe that having a non-trivial amount of logic in a YAML file will 
ultimately result in a hard-to-understand configuration file.


As an example: YAML doesn't support any kind of syntax highlighting or 
autocompletion.



We are talking about doing a new project for just the equivalent of a
5-line shell script... which really doesn't need to be tested and
maintained outside of the project.


With your suggested diff you needed to change 4 different locations 
within the vtest.yml, growing the file from 152 to 168 lines (+10%). And 
none of those lines are specific to HAProxy itself!


By separating out the VTest installation logic all that's needed in 
vtest.yml is the following:


- name: Install VTest
  uses: haproxy/action-install-vtest@main
  with:
    branch: master
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

I think it's pretty obvious what that would do: It installs VTest and 
when that step is finished you can simply use vtest.


How this action does this is left to the action; it just promises to do 
the right thing, and then the HAProxy repository only needs to worry 
about the HAProxy-specific parts.


It's really the same like any other programming library. Just like 
HAProxy uses libraries (e.g. mjson or SLZ) to perform some task, the CI 
can do the same.



I feel like I'm missing something with my simple implementation, we are
already downloading all the SSL libraries, should we stop doing it this
way? What could be the problems with this?


I'd like to simplify the installation of the various SSL libs as well, 
but I don't have a good proposal for that.



It seems like you want to do this in a strict GitHub way, which is
probably convenient for a lot of use cases, but it just looks really more
complicated than my first proposal.



It sure comes with a bit of initial set-up work, but I'm volunteering to 
do that, and I'm also volunteering for the maintenance. As I've said: 
this is nothing I just came up with; I've been using it for 
haproxy-auth-request since July 2020 and I'm pretty happy with it.


The benefits of this dedicated repository are:
- Fixes and improvements to the VTest installation no longer require a 
test commit to the HAProxy repository. They can be developed in the 
dedicated repository with a specialized CI for VTest installation.

- It uses a proper programming language instead of embedding bash in YAML.
- The action is also easily reusable by other projects. For testing my 
haproxy-auth-request repository I could remove the VTest installation 
logic from action-install-haproxy and simply use the existing action. 
This might also come in handy to test 
https://github.com/haproxytech/haproxy-lua-oauth and other official 
extensions.


Best regards
Tim Düsterhus



Re: [EXTERNAL] Re: CI caching improvement

2022-03-08 Thread William Lallemand
On Tue, Mar 08, 2022 at 08:05:31PM +0100, Tim Düsterhus wrote:
> 
> Let me make a competing proposal that:
> 
> 1. Keeps the complexity out of the GitHub workflow configuration in 
> haproxy/haproxy.
> 2. Still allows VTest caching.
> 
> For my https://github.com/TimWolla/haproxy-auth-request repository I 
> have created a reusable GitHub Action to perform the HAProxy 
> installation similar to 'actions/checkout':
> 
> https://github.com/TimWolla/action-install-haproxy/
> 
> I just spent a bit of time to fork that action and to make it VTest 
> specific:
> 
> https://github.com/TimWolla/action-install-vtest/
> 
> The action receives the VTest branch or commit as the input and will 
> handle all the heavy lifting of downloading, compiling and caching VTest.
> 
> The necessary changes to HAProxy then look like this:
> 
> https://github.com/TimWolla/haproxy/commit/78af831402e354f22d67682be0f323dec9c26a52
> 
> This basically replaces the use of 'scripts/build-vtest.sh' by 
> 'timwolla/action-install-vtest@main', so the configuration in the 
> haproxy/haproxy repository is not getting any more complicated, as all 
> the heavy lifting is done in the action which can be independently 
> tested and maintained.
> 
> If this proposal sounds good to you, then I'd like to suggest the following:
> 
> 1. Willy creates a new haproxy/action-install-vtest repository in the 
> haproxy organization.
> 2. Willy creates a new GitHub team with direct push access to that 
> repository.
> 3. Willy adds me to that team, so that I can maintain that repository 
> going forward (e.g. to handle the Dependabot pull requests that keep the 
> JavaScript dependencies up to date).
> 
> If that repository is properly set up, I'll send a patch to switch over 
> haproxy/haproxy to make use of that action.
> 
> Best regards
> Tim Düsterhus

Tim,

Honestly I'm confused, it is overcomplicated in my opinion :(

I don't really see the benefits in creating a whole new repository
instead of the few lines in the yaml file.

We are talking about doing a new project for just the equivalent of a
5-line shell script... which really doesn't need to be tested and
maintained outside of the project.

I feel like I'm missing something with my simple implementation, we are
already downloading all the SSL libraries, should we stop doing it this
way? What could be the problems with this?

It seems like you want to do this in a strict GitHub way, which is
probably convenient for a lot of use cases, but it just looks really more
complicated than my first proposal.

-- 
William Lallemand



Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

Willy,

On 3/8/22 20:11, Willy Tarreau wrote:

Yes my point was about VTest. However you made me think about a very good
reason for caching haproxy builds as well :-)  Very commonly, some VTest
randomly fails. Timing etc are involved. And at the moment, it's impossible
to restart the tests without rebuilding everything. And it happens to me to
click "restart all jobs" sometimes up to 2-3 times in a row in order to end


I've looked up that roadmap entry I was thinking about: A "restart this 
job" button apparently is planned for Q1 2022.


see https://github.com/github/roadmap/issues/271 "any individual job"

Caching the HAProxy binary really is something I strongly advise against 
based on my experience with GitHub Actions and CI in general.


I think the restart of the individual job sufficiently solves the issue 
of flaky builds (until they are fixed properly).



If those are gone,
then I expect the vast majority of the commits to be green so that it only
catches mistakes in strange configurations that one cannot easily test
locally.


This is already the case for the vast majority of them. I think that if we
eliminate the random failures on vtest, we're around 95-98% of successful
pushes. The remaining ones are caused by occasional download failures to
install dependencies and variables referenced outside an obscure ifdef
combination that only strikes on one platform.


Yeah, that's exactly what I wanted to say: if the CI is expected to be 
green, then there should not be a need to manually check it for *every* 
push, and thus it doesn't matter as much whether it takes 65 seconds or 
80 seconds.



Of course if you think that the 8 seconds will improve your workflow, then
by all means: Commit it. From my POV in this case the cost is larger than
the benefit. Caching is one of the hard things in computer science :-)


[…]
extremely useful. Translating this back to haproxy builds, for me it
means: "if at any point there is any doubt that something that might
affect the build result might have changed, better rebuild". And that's


Yes, that's why I explicitly recommend against caching the HAProxy 
binary. Cache invalidation really *is* hard to do for that without 
increasing the complexity of the vtest.yml, which is already longer than 
I'm comfortable with. At least with matrix.py it's manageable.



Now rest assured that I won't roll over the floor crying for this, but
I think that this is heading in the right direction. And with more CI
time available, we can even hope to test more combinations and squash
more bugs before they reach users. That's another aspect to think about.
I'm really happy to see that the duration between versions .0 and .1
increases from version to version, despite the fact that we add a lot
more code and more complex one. For now the CI scales well with the
code, I'm interested in continuing to help it scale even better.



40 minutes ago I already sent a proposal that should make both you as 
the developers and me as one of the CI experts happy :-) My concerns 
were primarily with the number of additional steps in William's 
proposal, not the caching of VTest per se.


Best regards
Tim Düsterhus



Re: CI caching improvement

2022-03-08 Thread Willy Tarreau
On Tue, Mar 08, 2022 at 04:40:40PM +0100, Tim Düsterhus wrote:
> > > Please don't. You always want to run a clean build, otherwise you're going
> > > to get super-hard-to-debug failures if some object file is accidentally
> > > taken from the cache instead of a clean build.
> > 
> > What risk are you having in mind for example ? I'm personally thinking
> > that vtest is sufficiently self-contained to represent almost zero risk.
> > We could possibly even pre-populate it and decide to rebuild if it's not
> > able to dump its help page.
> 
> This was a reply to "cache the build of HAProxy", so unrelated to VTest. As
> the HAProxy build is what we want to test, it's important to always perform
> a clean build.

Yes, my point was about VTest. However you made me think about a very good
reason for caching haproxy builds as well :-)  Very commonly, some VTest
randomly fails; timing etc. are involved. And at the moment, it's impossible
to restart the tests without rebuilding everything. It sometimes happens
that I click "restart all jobs" up to 2-3 times in a row in order to end
up on a valid one. It really takes ages. Just for this it would be really
useful to cache the haproxy builds so that re-running the jobs only runs
vtest.

The way I'm seeing an efficient process would be this:
  1) prepare OS images
  2) retrieve cached dependencies if any, and check for their freshness,
 otherwise build them
  3) retrieve cached haproxy if any and check for its freshness, otherwise
 build it (note: every single push will cause a rebuild). If we can
 include the branch name and/or tag name it's even better.
  4) retrieve cached VTest if any, and check for its freshness,
 otherwise build it
  5) run VTest
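Sketched in shell, steps 2-4 above could share one restore-or-build primitive. The directory layout, helper name, keys, and `echo` stand-ins for the real builds below are purely illustrative; an actual workflow would rely on GitHub's actions/cache instead:

```shell
#!/bin/sh
# Sketch of steps 2-4: restore each artifact from a cache directory when
# its key matches, rebuild otherwise. Layout, keys and build commands are
# illustrative only.
set -e
CACHE=./ci-cache
mkdir -p "$CACHE"

restore_or_build() {
    name="$1"; key="$2"; build="$3"
    if [ -f "$CACHE/$name-$key" ]; then
        echo "$name: cache hit ($key)"
    else
        echo "$name: cache miss ($key), building"
        sh -c "$build" > "$CACHE/$name-$key"
    fi
}

# Step 2: dependencies, keyed by their upstream commit.
restore_or_build quictls abc1234 'echo built-quictls'
# Step 3: haproxy, keyed by the pushed commit: a plain re-run reuses it,
# while every new push gets a new key and therefore a fresh build.
restore_or_build haproxy 5678def 'echo built-haproxy'
# Step 4: VTest, keyed by its own commit.
restore_or_build vtest 9abcde0 'echo built-vtest'
# Step 5 would then run VTest against the restored binaries.
```

The key property is in step 3: because the key embeds the pushed commit, restarting the exact same job is a cache hit and skips straight to VTest, while any new push misses and rebuilds.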

This way, a push to a branch will cause a rebuild, but restarting the exact
same test will not. This would finally allow us to double-check for unstable
VTest reports. Right now we re-discover them when stumbling upon an old tab
in the browser that was left there all day.

> Looking at this: https://github.com/haproxy/haproxy/actions/runs/1952139455,
> the fastest build is the gcc no features build with 1:33. Even if we
> subtract the 8 seconds for VTest, then that's more than I'd be willing to
> synchronously wait.

I understand, but that starts to count when you have to re-do all this
just for a single failed vtest timing out on an overloaded VM.

> The slowest are the QUICTLS builds with 6:21, because
> QUICTLS is not currently cached.

Which is an excellent reason for also caching QUICTLS as is already done
for libressl or openssl3, I don't remember (maybe both).

> FWIW: Even 6:21 is super fast compared to other projects. I've recently
> contributed to PHP and CI results appear after 20 minutes or so.

The fact that others spend even longer than us is not an excuse for
us to be as bad :-)

You & Ilya were among those insisting on a CI to improve our quality,
and it works! It works so well that developers are now impatient to see
the result to be sure they didn't break $RANDOM_OS. This morning I merged
David's build fix for kfreebsd and started to work on the backports. When
I finished, it looked like the build was OK, but apparently I was still
looking at a previous result, or maybe Cirrus is much more delayed, I
don't know; in the end I thought everything was good and I pushed the
backport to 2.5. Only later did I notice I was receiving more "failed"
mails than usual, looked at it, and there it was: the patch broke freebsd
after I had already pushed the backport to two branches. It's not dramatic,
but this way of working increases the context-switch rate, and I often find
myself making more mistakes, or at least doing fewer checks, when switching
all the time between branches and tests: once you see an error, you
switch back to the original branch, fix the issue, push again, switch back
to the other branch you were on, get notified about another issue (still
unrelated to what you're doing), etc. It's mentally difficult (at least
for me). Being able to shorten the time between a push and a result will
waste less of my concentration on another subject and destroy a bit less
of what remains of my brain.

Nothing there is critical at all, the quality is already improved, but it
seems to me that for a little cost we can significantly improve some of
the remaining rough edges.

> In any case I believe that our goal should be that one does not need to
> check the CI, because no bad commits make it into the repository.

That's utopian and contrary to all principles of development, because it
denies the very existence of bugs. What matters is that we limit the
number of issues, limit their duration, remain as transparent as
possible about the fixes for these issues, and limit the impact and
exposure of the undetected ones. The amount of mental effort needed
to 

Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

William,

On 3/8/22 16:06, William Lallemand wrote:

Let me know if we can improve the attached patch, otherwise I'll merge
it.



Let me make a competing proposal that:

1. Keeps the complexity out of the GitHub workflow configuration in 
haproxy/haproxy.

2. Still allows VTest caching.

For my https://github.com/TimWolla/haproxy-auth-request repository I 
have created a reusable GitHub Action to perform the HAProxy 
installation similar to 'actions/checkout':


https://github.com/TimWolla/action-install-haproxy/

I just spent a bit of time to fork that action and to make it VTest 
specific:


https://github.com/TimWolla/action-install-vtest/

The action receives the VTest branch or commit as the input and will 
handle all the heavy lifting of downloading, compiling and caching VTest.


The necessary changes to HAProxy then look like this:

https://github.com/TimWolla/haproxy/commit/78af831402e354f22d67682be0f323dec9c26a52

This basically replaces the use of 'scripts/build-vtest.sh' by 
'timwolla/action-install-vtest@main', so the configuration in the 
haproxy/haproxy repository is not getting any more complicated, as all 
the heavy lifting is done in the action which can be independently 
tested and maintained.


If this proposal sounds good to you, then I'd like to suggest the following:

1. Willy creates a new haproxy/action-install-vtest repository in the 
haproxy organization.
2. Willy creates a new GitHub team with direct push access to that 
repository.
3. Willy adds me to that team, so that I can maintain that repository 
going forward (e.g. to handle the Dependabot pull requests that keep the 
JavaScript dependencies up to date).


If that repository is properly set up, I'll send a patch to switch over 
haproxy/haproxy to make use of that action.


Best regards
Tim Düsterhus



Re: CI caching improvement

2022-03-08 Thread William Lallemand
On Tue, Mar 08, 2022 at 04:17:00PM +0100, Tim Düsterhus wrote:
> William
> 
> On 3/8/22 16:06, William Lallemand wrote:
> > Also, I'm wondering if we could also cache the build of HAProxy, you
> > could think that weird, but in fact it will help relaunch the tests when
> > one is failing, without rebuilding the whole thing.
> 
> Please don't. You always want to run a clean build, otherwise you're 
> going to get super-hard-to-debug failures if some object file is 
> accidentally taken from the cache instead of a clean build.
> 

The cache is supposed to be populated once the build is valid; if builds
are not reproducible I think we have a bigger problem here, as we are not
supposed to trigger random compiler behavior.

> I've heard that redoing a single failed build instead of the full matrix 
> is already on GitHub's roadmap, so the problem will solve itself.
> 
Maybe, but that will still build HAProxy for nothing. If you have an
example of an unreproducible build case with HAProxy I'm fine with this
statement, but honestly nothing comes to my mind. Even if one of the
distribution components is broken for a period of time, the problem
will reappear without the cache.

> > Let me know if we can improve the attached patch, otherwise I'll merge
> > it.
> > 
> 
> I don't like it. As you say: It's ugly and introduces complexity for a 
> mere 8 second gain. Sure, we should avoid burning the planet by wasting 
> CPU cycles in CI, but there's a point where the additional complexity 
> results in a maintenance nightmare.

I don't really see where there is complexity, it's just a curl to get an
ID, and I don't have the impression that the GitHub cache is not working
correctly, since we are already using it for the SSL libraries.

The build is conditioned by a single if statement and a simple sh
one-liner.

It's not really 8s, it's 8s per job, and sometimes not everything is run
in parallel, not to mention private repositories where the time is
billed. This approach could also be used for quictls, which still takes
a lot of time.

-- 
William Lallemand



Re: CI caching improvement

2022-03-08 Thread Илья Шипицин
Tue, 8 Mar 2022 at 21:13, William Lallemand :

> On Tue, Mar 08, 2022 at 08:38:00PM +0500, Илья Шипицин wrote:
> >
> > I'm fine with swapping "vtest" <--> "haproxy" order.
> >
>
> Ok, I can do that.
>
> > also, I do not think current patch is ugly, it is acceptable for me (if
> we
> > agree to save 8 sec).
>
> Honestly I don't see the value in building the same binary which never
> change again and again, and it does not add much complexity so I think
> that's fine.
>
>
> > I'm afraid that current patch require some fix,
> > because GitHub uses cache in exclusive way, i.e.
> > you need unique cache key per job, current cache key is not job dependent
> > (but the rest looks fine)
> >
>
> I don't think I get that, the key is a combination of the VTest commit +
> the hash per job.
>
> key: vtest-${{ steps.vtest-id.outputs.key }}-${{
> steps.generate-cache-key.outputs.key }}
>

Oops, as long as the key includes ${{ steps.generate-cache-key.outputs.key }}
it is unique; no other change is required.


>
>
> Thanks,
>
> --
> William Lallemand
>


Re: CI caching improvement

2022-03-08 Thread William Lallemand
On Tue, Mar 08, 2022 at 08:38:00PM +0500, Илья Шипицин wrote:
> 
> I'm fine with swapping "vtest" <--> "haproxy" order.
> 

Ok, I can do that.

> also, I do not think current patch is ugly, it is acceptable for me (if we
> agree to save 8 sec).

Honestly I don't see the value in building the same binary, which never
changes, again and again, and it does not add much complexity, so I think
that's fine.


> I'm afraid that current patch require some fix,
> because GitHub uses cache in exclusive way, i.e.
> you need unique cache key per job, current cache key is not job dependent
> (but the rest looks fine)
> 

I don't think I get that, the key is a combination of the VTest commit +
the hash per job.

key: vtest-${{ steps.vtest-id.outputs.key }}-${{ steps.generate-cache-key.outputs.key }}
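For illustration, the two components of that key could be derived in shell roughly as follows. The commit and the job-description string are placeholders (the real per-job hash comes from the workflow's generate-cache-key step via matrix.py), so this is a sketch rather than the actual vtest.yml logic:

```shell
#!/bin/sh
# Sketch of how the two parts of the cache key above could be derived.
# The commit and the job description are placeholders, not real CI values.

# VTest part: in CI this would be the tip of VTest's master branch, e.g.:
#   vtest_commit=$(git ls-remote https://github.com/vtest/VTest master | cut -f1)
vtest_commit="1a2b3c4"    # placeholder commit ID

# Per-job part: hashing the job's matrix parameters gives every job its
# own cache entry, which answers the uniqueness concern raised above.
job_desc="ubuntu-latest,gcc,QUICTLS"
job_hash=$(printf '%s' "$job_desc" | sha256sum | cut -c1-12)

echo "key=vtest-${vtest_commit}-${job_hash}"
```

Because both the VTest commit and the per-job hash are part of the key, two jobs never share a cache entry, and a VTest update automatically invalidates every job's cache.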


Thanks,

-- 
William Lallemand



Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

Willy,

On 3/8/22 16:24, Willy Tarreau wrote:

Hi Tim,

On Tue, Mar 08, 2022 at 04:17:00PM +0100, Tim Düsterhus wrote:

William

On 3/8/22 16:06, William Lallemand wrote:

Also, I'm wondering if we could also cache the build of HAProxy, you
could think that weird, but in fact it will help relaunch the tests when
one is failing, without rebuilding the whole thing.


Please don't. You always want to run a clean build, otherwise you're going
to get super-hard-to-debug failures if some object file is accidentally
taken from the cache instead of a clean build.


What risk do you have in mind, for example? I'm personally thinking
that vtest is sufficiently self-contained to represent almost zero risk.
We could possibly even pre-populate it and decide to rebuild it if it's
not able to dump its help page.


This was a reply to "cache the build of HAProxy", so unrelated to VTest. 
As the HAProxy build is what we want to test, it's important to always 
perform a clean build.



I don't like it. As you say: It's ugly and introduces complexity for a mere
8 second gain. Sure, we should avoid burning the planet by wasting CPU
cycles in CI, but there's a point where the additional complexity results in
a maintenance nightmare.


In fact it's not just the number of CPU cycles, it's also a matter of
interactivity. A long time ago, I remember, we used to push and wait
for the build to complete. That changed long ago: we push, switch to
something else, and completely forget that we'd submitted a
build. For sure it's not vtest alone that will radically change this,
but my impression is that it could help, at near-zero risk. But that's
my perception and I could be mistaken.


I can only comment from my side: for me, any patches I send are
inherently processed asynchronously by the CI, because I don't know when
you are taking them. So I test them as well as I can locally, and usually
the CI is green afterwards.


Looking at this: 
https://github.com/haproxy/haproxy/actions/runs/1952139455, the fastest 
build is the gcc no-features build at 1:33. Even if we subtract the 8 
seconds for VTest, that's still more than I'd be willing to wait for 
synchronously. The slowest are the QUICTLS builds at 6:21, because QUICTLS is 
not currently cached.


FWIW: Even 6:21 is super fast compared to other projects. I've recently 
contributed to PHP and CI results appear after 20 minutes or so.


In any case I believe that our goal should be that one does not need to 
check the CI, because no bad commits make it into the repository. 
Unfortunately we still have some flaky tests that needlessly take up 
attention, because one will need to check the red builds. If those are 
gone, then I expect the vast majority of the commits to be green so that 
it only catches mistakes in strange configurations that one cannot 
easily test locally.


Of course if you think that the 8 seconds will improve your workflow, 
then by all means: Commit it. From my POV in this case the cost is 
larger than the benefit. Caching is one of the hard things in computer 
science :-)


Best regards
Tim Düsterhus



Re: CI caching improvement

2022-03-08 Thread Илья Шипицин
I thought of building "vtest" just once and delivering it via artifacts to all
jobs. It would save some electricity; also, GitHub sometimes throws a 429 when
we download "vtest" in too many parallel jobs.
However, it would not speed things up, so I postponed that idea (something like
https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts
)


I'm fine with swapping "vtest" <--> "haproxy" order.

Also, I do not think the current patch is ugly; it is acceptable to me (if we
agree to save 8 sec). I'm afraid that the current patch requires a fix,
because GitHub uses the cache in an exclusive way, i.e.
you need a unique cache key per job, and the current cache key is not
job-dependent (but the rest looks fine)

вт, 8 мар. 2022 г. в 20:06, William Lallemand :

> Hello,
>
> The attached patch implements a somewhat ugly way to cache the VTest
> binary: basically it gets the commit ID by doing a curl of the
> master.patch on the GitHub URL.
>
> It allows us to save ~8s per matrix row, which is around 160s in total. I
> know there is a small window where the curl and the git clone won't have
> the same ID, but that will be rebuilt anyway by the next build, so
> that's fine in my opinion.
>
> We could probably use the same approach to cache quictls or anything
> that uses a git repository.
>
> Also, I'm wondering if we could cache the build of HAProxy as well; you
> could think that's weird, but in fact it would help relaunch the tests when
> one is failing, without rebuilding the whole thing.
>
> Let me know if we can improve the attached patch, otherwise I'll merge
> it.
>
> Regards,
>
> --
> William Lallemand
>


Re: CI caching improvement

2022-03-08 Thread Willy Tarreau
Hi Tim,

On Tue, Mar 08, 2022 at 04:17:00PM +0100, Tim Düsterhus wrote:
> William
> 
> On 3/8/22 16:06, William Lallemand wrote:
> > Also, I'm wondering if we could cache the build of HAProxy as well; you
> > could think that's weird, but in fact it would help relaunch the tests when
> > one is failing, without rebuilding the whole thing.
> 
> Please don't. You always want to run a clean build, otherwise you're going
> to get super-hard-to-debug failures if some object file is accidentally
> taken from the cache instead of a clean build.

What risk do you have in mind, for example? I'm personally thinking
that vtest is sufficiently self-contained to represent almost zero risk.
We could possibly even pre-populate it and decide to rebuild if it's not
able to dump its help page.

> I don't like it. As you say: It's ugly and introduces complexity for a mere
> 8 second gain. Sure, we should avoid burning the planet by wasting CPU
> cycles in CI, but there's a point where the additional complexity results in
> a maintenance nightmare.

In fact it's not just the number of CPU cycles, it's also a matter of
interactivity. A long time ago, I remember, we used to push and wait
for the build to complete. That changed long ago: we push, switch to
something else, and completely forget that we'd submitted a
build. For sure it's not vtest alone that will radically change this,
but my impression is that it could help, at near-zero risk. But that's
my perception and I could be mistaken.

Thanks,
Willy



Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

Willy,
William,

On 3/8/22 16:14, Willy Tarreau wrote:

Due to this I think we should move the build of vtest after the build
of haproxy (and generally, anything that's not needed for the build
ought to be moved after). This will at least save whatever can be
saved on failed builds.


That on the other hand makes sense to me. It just changes the order of 
the steps and thus brings a benefit without adding complexity.


Best regards
Tim Düsterhus



Re: CI caching improvement

2022-03-08 Thread Tim Düsterhus

William

On 3/8/22 16:06, William Lallemand wrote:

Also, I'm wondering if we could cache the build of HAProxy as well; you
could think that's weird, but in fact it would help relaunch the tests when
one is failing, without rebuilding the whole thing.


Please don't. You always want to run a clean build, otherwise you're 
going to get super-hard-to-debug failures if some object file is 
accidentally taken from the cache instead of a clean build.


I've heard that redoing a single failed build instead of the full matrix 
is already on GitHub's roadmap, so the problem will solve itself.



Let me know if we can improve the attached patch, otherwise I'll merge
it.



I don't like it. As you say: It's ugly and introduces complexity for a 
mere 8 second gain. Sure, we should avoid burning the planet by wasting 
CPU cycles in CI, but there's a point where the additional complexity 
results in a maintenance nightmare.


Best regards
Tim Düsterhus



Re: CI caching improvement

2022-03-08 Thread Willy Tarreau
Hi William,

On Tue, Mar 08, 2022 at 04:06:45PM +0100, William Lallemand wrote:
> Hello,
> 
> The attached patch implements a somewhat ugly way to cache the VTest
> binary: basically it gets the commit ID by doing a curl of the
> master.patch on the GitHub URL.
> 
> It allows us to save ~8s per matrix row, which is around 160s in total. I
> know there is a small window where the curl and the git clone won't have
> the same ID, but that will be rebuilt anyway by the next build, so
> that's fine in my opinion.
> 
> We could probably use the same approach to cache quictls or anything
> that uses a git repository.
> 
> Also, I'm wondering if we could cache the build of HAProxy as well; you
> could think that's weird, but in fact it would help relaunch the tests when
> one is failing, without rebuilding the whole thing.
> 
> Let me know if we can improve the attached patch, otherwise I'll merge
> it.

Thanks for this. I can't judge how this can be improved; however,
I noticed today that one of the important goals of the CI is to see
if stuff builds at all (and we often break one platform or another).
Due to this I think we should move the build of vtest after the build
of haproxy (and generally, anything that's not needed for the build
ought to be moved after). This will at least save whatever can be
saved on failed builds.

Willy



CI caching improvement

2022-03-08 Thread William Lallemand
Hello,

The attached patch implements a somewhat ugly way to cache the VTest
binary: basically it gets the commit ID by doing a curl of the
master.patch on the GitHub URL.

It allows us to save ~8s per matrix row, which is around 160s in total. I
know there is a small window where the curl and the git clone won't have
the same ID, but that will be rebuilt anyway by the next build, so
that's fine in my opinion.
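For the record, the trick works because a GitHub ".patch" export begins with an mbox-style "From <commit-id> <date>" line, which is what the head/awk pipeline extracts. The sketch below parses a sample header instead of doing a live curl, so the parsing step can be checked on its own:

```shell
# The first line of a GitHub master.patch carries the commit ID as its
# second field; this sample text stands in for the curl output.
sample_patch='From 34649ae5549a73d0f43530794f47861fb679510e Mon Sep 17 00:00:00 2001
From: Someone
Subject: [PATCH] example'

# Same pipeline as in the patch: take the first line, print field 2.
commit_id=$(printf '%s\n' "$sample_patch" | head -n1 | awk '{print $2}')
echo "$commit_id"
```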

We could probably use the same approach to cache quictls or anything
that uses a git repository.

Also, I'm wondering if we could cache the build of HAProxy as well; you
could think that's weird, but in fact it would help relaunch the tests when
one is failing, without rebuilding the whole thing.

Let me know if we can improve the attached patch, otherwise I'll merge
it.

Regards,

-- 
William Lallemand
From 34649ae5549a73d0f43530794f47861fb679510e Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Tue, 8 Mar 2022 14:49:25 +0100
Subject: [PATCH] CI: github: add VTest to the github actions cache

Get the latest master commit ID from VTest and use it as a key for the
cache. VTest takes around 8s to build per matrix row; we save around
160s of CI with this.
---
 .github/workflows/vtest.yml | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/vtest.yml b/.github/workflows/vtest.yml
index a9e86b6a22..38099093cd 100644
--- a/.github/workflows/vtest.yml
+++ b/.github/workflows/vtest.yml
@@ -55,6 +55,11 @@ jobs:
   run: |
 echo "::set-output name=key::$(echo ${{ matrix.name }} | sha256sum | awk '{print $1}')"
 
+- name: Get VTest master commit ID
+  id: vtest-id
+  run: |
+echo "::set-output name=key::$(curl -s https://github.com/vtest/VTest/commit/master.patch 2>/dev/null | head -n1 | awk '{print $2}')"
+
 - name: Cache SSL libs
   if: ${{ matrix.ssl && matrix.ssl != 'stock' && matrix.ssl != 'BORINGSSL=yes' && matrix.ssl != 'QUICTLS=yes' }}
   id: cache_ssl
@@ -70,6 +75,14 @@ jobs:
   with:
 path: '~/opt-ot/'
 key: ot-${{ matrix.CC }}-${{ env.OT_CPP_VERSION }}-${{ contains(matrix.name, 'ASAN') }}
+
+- name: Cache VTest binary
+  id: cache_vtest
+  uses: actions/cache@v2
+  with:
+path: '~/vtest/'
+key: vtest-${{ steps.vtest-id.outputs.key }}-${{ steps.generate-cache-key.outputs.key }}
+
 - name: Install apt dependencies
   if: ${{ startsWith(matrix.os, 'ubuntu-') }}
   run: |
@@ -86,8 +99,11 @@ jobs:
 brew install socat
 brew install lua
 - name: Install VTest
+  if: ${{ steps.cache_vtest.outputs.cache-hit != 'true' }}
   run: |
 scripts/build-vtest.sh
+mkdir ~/vtest/
+cp ../vtest/vtest ~/vtest/
 - name: Install SSL ${{ matrix.ssl }}
   if: ${{ matrix.ssl && matrix.ssl != 'stock' && steps.cache_ssl.outputs.cache-hit != 'true' }}
   run: env ${{ matrix.ssl }} scripts/build-ssl.sh
@@ -134,7 +150,7 @@ jobs:
 # This is required for macOS which does not actually allow to increase
 # the '-n' soft limit to the hard limit, thus failing to run.
 ulimit -n 5000
-make reg-tests VTEST_PROGRAM=../vtest/vtest REGTESTS_TYPES=default,bug,devel
+make reg-tests VTEST_PROGRAM=~/vtest/vtest REGTESTS_TYPES=default,bug,devel
 - name: Show VTest results
   if: ${{ failure() && steps.vtest.outcome == 'failure' }}
   run: |
-- 
2.32.0



Re: [PATCH] CI: introduce caching for ssl libs (except BoringSSL, QUICTLS)

2022-01-25 Thread Willy Tarreau
On Sat, Jan 22, 2022 at 12:07:37AM +0500, Илья Шипицин wrote:
> Hello,
> 
> this patch introduces a GitHub Actions cache for SSL libs.
> hope it will save a couple of minutes.

Merged, thanks Ilya!
Willy



[PATCH] CI: introduce caching for ssl libs (except BoringSSL, QUICTLS)

2022-01-21 Thread Илья Шипицин
Hello,

this patch introduces a GitHub Actions cache for SSL libs.
Hope it will save a couple of minutes.

cheers,
Ilya
From 5c62945e56f3bd36432483b01cba4e734dd44979 Mon Sep 17 00:00:00 2001
From: Ilya Shipitsin 
Date: Sat, 22 Jan 2022 00:00:44 +0500
Subject: [PATCH] CI: github actions: use cache for SSL libs

we have two kinds of SSL libs built: git-based and version-based.
This commit introduces caching for the version-based SSL libs.
---
 .github/workflows/vtest.yml | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/vtest.yml b/.github/workflows/vtest.yml
index d748ee28f..ac516054b 100644
--- a/.github/workflows/vtest.yml
+++ b/.github/workflows/vtest.yml
@@ -47,6 +47,22 @@ jobs:
 - uses: actions/checkout@v2
   with:
 fetch-depth: 100
+#
+# Github Action cache key cannot contain comma, so we calculate it based on job name
+#
+- name: Generate cache key
+  id: generate-cache-key
+  run: |
+echo "::set-output name=key::$(echo ${{ matrix.name }} | sha256sum | awk '{print $1}')"
+
+- name: Cache SSL libs
+  if: ${{ matrix.ssl && matrix.ssl != 'stock' && matrix.ssl != 'BORINGSSL=yes' && matrix.ssl != 'QUICTLS=yes' }}
+  id: cache_ssl
+  uses: actions/cache@v2
+  with:
+path: '~/opt/'
+key: ssl-${{ steps.generate-cache-key.outputs.key }}
+
 - name: Cache OpenTracing
   if: ${{ contains(matrix.FLAGS, 'USE_OT=1') }}
   id: cache_ot
@@ -73,7 +89,7 @@ jobs:
   run: |
 scripts/build-vtest.sh
 - name: Install SSL ${{ matrix.ssl }}
-  if: ${{ matrix.ssl && matrix.ssl != 'stock' }}
+  if: ${{ matrix.ssl && matrix.ssl != 'stock' && steps.cache_ssl.outputs.cache-hit != 'true' }}
   run: env ${{ matrix.ssl }} scripts/build-ssl.sh
 - name: Install OpenTracing libs
   if: ${{ contains(matrix.FLAGS, 'USE_OT=1') && steps.cache_ot.outputs.cache-hit != 'true'  }}
-- 
2.34.1



Re: [cache] allow caching of OPTIONS request

2019-08-20 Thread Baptiste
On Mon, Aug 12, 2019 at 10:19 PM Willy Tarreau  wrote:

> Hi Baptiste,
>
> On Mon, Aug 12, 2019 at 09:35:56PM +0200, Baptiste wrote:
> > The use case is to avoid too many requests hitting an application server
> > for "preflight requests".
>
> But does this *really* happen to the point of being a concern with OPTIONS
> requests? I mean, if OPTIONS represents a small percentage of the traffic,
> I'd rather not start to hack around the standards and regret it 2 versions
> later...
>
> > It seems it has its own header for caching:
> > https://www.w3.org/TR/cors/#access-control-max-age-response-header.
> > Some description here:
> https://www.w3.org/TR/cors/#preflight-result-cache-0
>
> But all this spec is explicitly for user-agents and not at all for
> intermediaries. And it doesn't make use of any single Cache-Control
> header field, it solely uses its own set of headers precisely to
> avoid mixing the two! And it doesn't suggest to violate the HTTP
> standards.
>
> > I do agree we should disable this by default and add an option
> > "enable-caching-cors-responses" to enable it on demand and clearly state
> in
> > the doc that this is not RFC compliant.
> > Let me know if that is ok for you.
>
> I still feel extremely uncomfortable with this because given that it
> requires to violate the basic standards to achieve something that is
> expected to be normal, that smells strongly like there is a wrong
> assumption somewhere in the chain, either regarding how it's being
> used or about some requirements.
>
> If you don't mind I'd rather bring the question on the HTTP working
> group to ask if we're missing something obvious or if user-agents
> suddenly decided to break the internet by purposely making
> non-cacheable requests, which is totally contrary to their tradition.
>
> As you know, there was a period many years ago when people used
> to say "I inserted haproxy and my application stopped working". Now
> those days are over (the badmouths will say haproxy stopped working),
> in main part because we took care of properly dealing with the
> standards. And clearly I'm extremely cautious not to revive those
> bad memories.
>
> Thanks,
> Willy
>

Hi Willy,

Yes I understand.
Would be great to have the feedback from the http working group.

In the mean time, if some people here would like to share with Willy and I
privately some numbers on what percentage of the traffic do OPTIONS
requests represent, this would be helpful.

Baptiste


Re: [cache] allow caching of OPTIONS request

2019-08-12 Thread Willy Tarreau
Hi Baptiste,

On Mon, Aug 12, 2019 at 09:35:56PM +0200, Baptiste wrote:
> The use case is to avoid too many requests hitting an application server
> for "preflight requests".

But does this *really* happen to the point of being a concern with OPTIONS
requests? I mean, if OPTIONS represents a small percentage of the traffic,
I'd rather not start to hack around the standards and regret it 2 versions
later...

> It seems it has its own header for caching:
> https://www.w3.org/TR/cors/#access-control-max-age-response-header.
> Some description here: https://www.w3.org/TR/cors/#preflight-result-cache-0

But all this spec is explicitly for user-agents and not at all for
intermediaries. And it doesn't make use of any single Cache-Control
header field, it solely uses its own set of headers precisely to
avoid mixing the two! And it doesn't suggest to violate the HTTP
standards.

> I do agree we should disable this by default and add an option
> "enable-caching-cors-responses" to enable it on demand and clearly state in
> the doc that this is not RFC compliant.
> Let me know if that is ok for you.

I still feel extremely uncomfortable with this: given that it
requires violating the basic standards to achieve something that is
expected to be normal, it smells strongly like there is a wrong
assumption somewhere in the chain, either regarding how it's being
used or about some requirements.

If you don't mind I'd rather bring the question on the HTTP working
group to ask if we're missing something obvious or if user-agents
suddenly decided to break the internet by purposely making
non-cacheable requests, which is totally contrary to their tradition.

As you know, there was a period many years ago when people used
to say "I inserted haproxy and my application stopped working". Now
those days are over (the badmouths will say haproxy stopped working),
in main part because we took care of properly dealing with the
standards. And clearly I'm extremely cautious not to revive those
bad memories.

Thanks,
Willy



Re: [cache] allow caching of OPTIONS request

2019-08-12 Thread Baptiste
On Mon, Aug 12, 2019 at 8:14 AM Willy Tarreau  wrote:

> Guys,
>
> On Wed, Aug 07, 2019 at 02:07:09PM +0200, Baptiste wrote:
> > Hi Vincent,
> >
> > HAProxy does not follow the max-age in the Cache-Control anyway.
>
> I know it's a bit late but I'm having an objection against this change.
> The reason is simple, OPTIONS is explicitly documented as being
> non-cacheable: https://tools.ietf.org/html/rfc7231#section-4.3.7
>
> So not only by implementing it we're going to badly break a number
> of properly running applications, but in addition we cannot expect
> any cache-control from the server in response to an OPTIONS request
> precisely because this is forbidden by the HTTP standard.
>
> When I search for OPTIONS and cache on the net, I only find AWS's
> Cloudfront which offers an option to enable it, and a number of
> feature requests responded to by "don't do that you're wrong". So
> at the very least we need to disable this by default, and possibly
> condition it with a well-visible option such as
> "yes-i-know-i-am-breaking-the-cache-and-promise-never-to-file-a-bug-report", but what
> would be better would be to understand the exact use case and why it
> is considered to be valid despite being a blatant violation of the
> HTTP standard! History tells us that purposely violating standards
> only happens for bad reasons and systematically results in security
> issues.
>
> Thanks,
> Willy
>

Hi Willy,

The use case is to avoid too many requests hitting an application server
for "preflight requests".
It seems it has its own header for caching:
https://www.w3.org/TR/cors/#access-control-max-age-response-header.
Some description here: https://www.w3.org/TR/cors/#preflight-result-cache-0

I do agree we should disable this by default and add an option
"enable-caching-cors-responses" to enable it on demand and clearly state in
the doc that this is not RFC compliant.
Let me know if that is ok for you.

Baptiste


Re: [cache] allow caching of OPTIONS request

2019-08-12 Thread Willy Tarreau
Guys,

On Wed, Aug 07, 2019 at 02:07:09PM +0200, Baptiste wrote:
> Hi Vincent,
> 
> HAProxy does not follow the max-age in the Cache-Control anyway.

I know it's a bit late, but I have an objection to this change.
The reason is simple, OPTIONS is explicitly documented as being
non-cacheable: https://tools.ietf.org/html/rfc7231#section-4.3.7

So not only by implementing it we're going to badly break a number
of properly running applications, but in addition we cannot expect
any cache-control from the server in response to an OPTIONS request
precisely because this is forbidden by the HTTP standard.

When I search for OPTIONS and cache on the net, I only find AWS's
CloudFront, which offers an option to enable it, and a number of
feature requests responded to by "don't do that you're wrong". So
at the very least we need to disable this by default, and possibly
condition it with a well-visible option such as
"yes-i-know-i-am-breaking-the-cache-and-promise-never-to-file-a-bug-report", but what
would be better would be to understand the exact use case and why it
is considered to be valid despite being a blatant violation of the
HTTP standard! History tells us that purposely violating standards
only happens for bad reasons and systematically results in security
issues. 

Thanks,
Willy



Re: [cache] allow caching of OPTIONS request

2019-08-08 Thread William Lallemand
On Wed, Aug 07, 2019 at 03:20:34PM +0200, Baptiste wrote:
> On Wed, Aug 7, 2019 at 3:18 PM William Lallemand 
> wrote:
> 
> > On Wed, Aug 07, 2019 at 12:38:05PM +0200, Baptiste wrote:
> > > Hi there,
> > >
> > > Please find in attachement a couple of patches to allow caching responses
> > > to OPTIONS requests, used in CORS pattern.
> > > In modern API where CORS is applied, there may be a bunch of OPTIONS
> > > requests coming in to the API servers, so caching these responses will
> > > improve API response time and lower the load on the servers.
> > > Given that HAProxy does not yet support the Vary header, this means this
> > > feature is useful in a single case, when the server send the following
> > > header "set access-control-allow-origin: *".
> > >
> > > William, can you check if my patches look correct, or if this is totally
> > > wrong and then I'll open an issue on github for tracking this one.
> > >
> >
> > Looks good to me, pushed in master.
> >
> > --
> > William Lallemand
> >
> 
> Great, thanks!
> 
> Baptiste

Don't forget to update the documentation,

Thanks

-- 
William Lallemand



Re: [cache] allow caching of OPTIONS request

2019-08-07 Thread Baptiste
On Wed, Aug 7, 2019 at 3:18 PM William Lallemand 
wrote:

> On Wed, Aug 07, 2019 at 12:38:05PM +0200, Baptiste wrote:
> > Hi there,
> >
> > Please find in attachement a couple of patches to allow caching responses
> > to OPTIONS requests, used in CORS pattern.
> > In modern API where CORS is applied, there may be a bunch of OPTIONS
> > requests coming in to the API servers, so caching these responses will
> > improve API response time and lower the load on the servers.
> > Given that HAProxy does not yet support the Vary header, this means this
> > feature is useful in a single case, when the server send the following
> > header "set access-control-allow-origin: *".
> >
> > William, can you check if my patches look correct, or if this is totally
> > wrong and then I'll open an issue on github for tracking this one.
> >
>
> Looks good to me, pushed in master.
>
> --
> William Lallemand
>

Great, thanks!

Baptiste


Re: [cache] allow caching of OPTIONS request

2019-08-07 Thread William Lallemand
On Wed, Aug 07, 2019 at 12:38:05PM +0200, Baptiste wrote:
> Hi there,
> 
> Please find in attachement a couple of patches to allow caching responses
> to OPTIONS requests, used in CORS pattern.
> In modern API where CORS is applied, there may be a bunch of OPTIONS
> requests coming in to the API servers, so caching these responses will
> improve API response time and lower the load on the servers.
> Given that HAProxy does not yet support the Vary header, this means this
> feature is useful in a single case, when the server send the following
> header "set access-control-allow-origin: *".
> 
> William, can you check if my patches look correct, or if this is totally
> wrong and then I'll open an issue on github for tracking this one.
> 

Looks good to me, pushed in master.

-- 
William Lallemand



Re: [cache] allow caching of OPTIONS request

2019-08-07 Thread Baptiste
Hi Vincent,

HAProxy does not follow the max-age in the Cache-Control anyway.
Here is what the configuration would look like:

backend X
  http-request cache-use cors if METH_OPTIONS
  http-response cache-store cors if METH_OPTIONS

cache cors
 total-max-size 64
 max-object-size 1024
 max-age 60

You see, the time the object will be cached by HAProxy is defined in your
cache storage bucket.

Baptiste




On Wed, Aug 7, 2019 at 1:47 PM GALLISSOT VINCENT 
wrote:

> Hi there,
>
>
> May I add that, in the CORS implementation, there is a specific header
> used for the caching duration: *Access-Control-Max-Age*
>
> This header is supported by most browsers and its specification is
> available: https://fetch.spec.whatwg.org/#http-access-control-max-age
>
> One would think of using this header value instead of the well-known
> Cache-Control header when dealing with CORS and OPTIONS requests.
>
> Cheers,
> Vincent
>
> --
> *From:* Baptiste 
> *Sent:* Wednesday, August 7, 2019 12:38
> *To:* HAProxy; William Lallemand
> *Subject:* [cache] allow caching of OPTIONS request
>
> Hi there,
>
> Please find in attachement a couple of patches to allow caching responses
> to OPTIONS requests, used in CORS pattern.
> In modern API where CORS is applied, there may be a bunch of OPTIONS
> requests coming in to the API servers, so caching these responses will
> improve API response time and lower the load on the servers.
> Given that HAProxy does not yet support the Vary header, this means this
> feature is useful in a single case, when the server send the following
> header "set access-control-allow-origin: *".
>
> William, can you check if my patches look correct, or if this is totally
> wrong and then I'll open an issue on github for tracking this one.
>
> Baptiste
>


RE: [cache] allow caching of OPTIONS request

2019-08-07 Thread GALLISSOT VINCENT
Hi there,


May I add that, in the CORS implementation, there is a specific header used for 
the caching duration: Access-Control-Max-Age

This header is supported by most browsers and its specification is
available: https://fetch.spec.whatwg.org/#http-access-control-max-age

One would think of using this header value instead of the well-known
Cache-Control header when dealing with CORS and OPTIONS requests.
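To make the distinction concrete, here is a sketch with hypothetical preflight response headers: the browser's preflight cache duration is taken from Access-Control-Max-Age, not from Cache-Control:

```shell
# Hypothetical CORS preflight response; the preflight cache lifetime is
# governed by Access-Control-Max-Age, independently of Cache-Control.
preflight_response='HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Max-Age: 600'

# Extract the max-age value (case-insensitive header-name match).
max_age=$(printf '%s\n' "$preflight_response" \
  | awk -F': ' 'tolower($1) == "access-control-max-age" { print $2 }')
echo "$max_age"
```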

Cheers,
Vincent


From: Baptiste 
Sent: Wednesday, August 7, 2019 12:38
To: HAProxy; William Lallemand
Subject: [cache] allow caching of OPTIONS request

Hi there,

Please find in attachment a couple of patches to allow caching responses to 
OPTIONS requests, used in the CORS pattern.
In modern APIs where CORS is applied, there may be a bunch of OPTIONS requests 
coming in to the API servers, so caching these responses will improve API 
response time and lower the load on the servers.
Given that HAProxy does not yet support the Vary header, this means this 
feature is useful in a single case: when the server sends the following header, 
"set access-control-allow-origin: *".

William, can you check if my patches look correct, or if this is totally wrong 
and then I'll open an issue on github for tracking this one.

Baptiste


[cache] allow caching of OPTIONS request

2019-08-07 Thread Baptiste
Hi there,

Please find in attachment a couple of patches to allow caching responses
to OPTIONS requests, used in the CORS pattern.
In modern APIs where CORS is applied, there may be a bunch of OPTIONS
requests coming in to the API servers, so caching these responses will
improve API response time and lower the load on the servers.
Given that HAProxy does not yet support the Vary header, this means this
feature is useful in a single case: when the server sends the following
header, "set access-control-allow-origin: *".

William, can you check if my patches look correct, or if this is totally
wrong and then I'll open an issue on github for tracking this one.

Baptiste
From b1ed59901522dc32fa112e77c93c9a723ecc2189 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Wed, 7 Aug 2019 12:24:36 +0200
Subject: [PATCH 2/2] MINOR: http: allow caching of OPTIONS request

Allow HAProxy to cache responses to OPTIONS HTTP requests.
This is useful in the "Cross-Origin Resource Sharing" (CORS) use case,
to cache CORS responses from API servers.

Since HAProxy does not support the Vary header for now, this is only
useful for the "access-control-allow-origin: *" use case.
---
 src/cache.c | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/src/cache.c b/src/cache.c
index 5b4062384..001532651 100644
--- a/src/cache.c
+++ b/src/cache.c
@@ -560,8 +560,8 @@ enum act_return http_action_store_cache(struct act_rule *rule, struct proxy *px,
 	if (!(txn->req.flags & HTTP_MSGF_VER_11))
 		goto out;
 
-	/* cache only GET method */
-	if (txn->meth != HTTP_METH_GET)
+	/* cache only GET or OPTIONS method */
+	if (txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_OPTIONS)
 		goto out;
 
 	/* cache key was not computed */
@@ -1058,6 +1058,9 @@ int sha1_hosturi(struct stream *s)
 	ctx.blk = NULL;
 
 	switch (txn->meth) {
+	case HTTP_METH_OPTIONS:
+		chunk_memcat(trash, "OPTIONS", 7);
+		break;
 	case HTTP_METH_HEAD:
 	case HTTP_METH_GET:
 		chunk_memcat(trash, "GET", 3);
@@ -1093,10 +1096,10 @@ enum act_return http_action_req_cache_use(struct act_rule *rule, struct proxy *p
 	struct cache_flt_conf *cconf = rule->arg.act.p[0];
 	struct cache *cache = cconf->c.cache;
 
-	/* Ignore cache for HTTP/1.0 requests and for requests other than GET
-	 * and HEAD */
+	/* Ignore cache for HTTP/1.0 requests and for requests other than GET,
+	 * HEAD and OPTIONS */
 	if (!(txn->req.flags & HTTP_MSGF_VER_11) ||
-	(txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_HEAD))
+	(txn->meth != HTTP_METH_GET && txn->meth != HTTP_METH_HEAD && txn->meth != HTTP_METH_OPTIONS))
 		txn->flags |= TX_CACHE_IGNORE;
 
	http_check_request_for_cacheability(s, &s->req);
-- 
2.17.1

From e3aee8fe302e108e2652842f537dc850978d2e59 Mon Sep 17 00:00:00 2001
From: Baptiste Assmann 
Date: Mon, 5 Aug 2019 16:55:32 +0200
Subject: [PATCH 1/2] MINOR: http: add method to cache hash

The current HTTP cache hash contains only the Host header and the URL path.
That said, the request method should also be added to the mix to support
caching other request methods on the same URL, e.g. GET and OPTIONS.
---
 src/cache.c | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/src/cache.c b/src/cache.c
index 9cef0cab6..5b4062384 100644
--- a/src/cache.c
+++ b/src/cache.c
@@ -1041,9 +1041,9 @@ enum act_parse_ret parse_cache_store(const char **args, int *orig_arg, struct pr
 	return ACT_RET_PRS_OK;
 }
 
-/* This produces a sha1 hash of the concatenation of the first
- * occurrence of the Host header followed by the path component if it
- * begins with a slash ('/'). */
+/* This produces a sha1 hash of the concatenation of the HTTP method,
+ * the first occurrence of the Host header followed by the path component
+ * if it begins with a slash ('/'). */
 int sha1_hosturi(struct stream *s)
 {
 	struct http_txn *txn = s->txn;
@@ -1056,6 +1056,16 @@ int sha1_hosturi(struct stream *s)
 
 	trash = get_trash_chunk();
 	ctx.blk = NULL;
+
+	switch (txn->meth) {
+	case HTTP_METH_HEAD:
+	case HTTP_METH_GET:
+		chunk_memcat(trash, "GET", 3);
+		break;
+	default:
+		return 0;
+	}
+
 	if (!http_find_header(htx, ist("Host"), &ctx, 0))
 		return 0;
 	chunk_memcat(trash, ctx.value.ptr, ctx.value.len);
-- 
2.17.1



Re: issue with small object caching

2019-07-05 Thread Christopher Faulet

On 04/07/2019 at 16:54, Senthil Naidu wrote:

Hi,

Is there any fix that can be applied for small-object caching to work with
Firefox and Chrome too? With Internet Explorer I am able to hit the cache
properly.

As mentioned in my earlier mail, I can see that Firefox is sending the headers
Cache-Control: no-cache and Pragma: no-cache, while IE is not sending these
headers.



Hi,

So you must find a way to enable the cache on Firefox and Chrome. I don't know 
how to do so, but you may take a look at the developer tools of each browser. 
Maybe the cache is explicitly disabled there.


On the HAProxy side, you can remove the "Cache-Control" header before using the 
cache. But, IMHO, it is a really bad idea. If the client sets this header, it is 
on purpose (or it should be).
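For reference, that (discouraged) workaround sketched as configuration — this assumes a cache named "test" as in the configurations later in the thread; stripping the directives before the lookup makes HAProxy ignore the client's revalidation request, which is exactly why it is a bad idea:

```
frontend TESTGRP
    bind 0.0.0.0:80
    mode http
    # Discouraged: drop the client's no-cache directives so the
    # cache lookup is no longer bypassed for Firefox/Chrome.
    http-request del-header Cache-Control
    http-request del-header Pragma
    http-request cache-use test
    http-response cache-store test
```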


--
Christopher Faulet



RE: issue with small object caching

2019-07-04 Thread Senthil Naidu
Hi,

Is there any fix that can be applied for small-object caching to work with
Firefox and Chrome too? With Internet Explorer I am able to hit the cache
properly.

As mentioned in my earlier mail, I can see that Firefox is sending the headers
Cache-Control: no-cache and Pragma: no-cache, while IE is not sending these
headers.

Regards
Senthil




Senthil Naidu
General Manager - IT Engineering
IT Engineering

Netmagic (An NTT Communications Company)

Direct: +91 22 40090100 | Cell: 7738784713
Email: sent...@netmagicsolutions.com

Linkedin | Twitter | Youtube | Blog | News | Whitepapers | Case Studies

-Original Message-
From: Senthil Naidu
Sent: 03 July 2019 17:00
To: 'Tim Düsterhus'; Christopher Faulet; haproxy@formilux.org
Subject: RE: issue with small object caching

Hi Tim,

Thanks. I am seeing the same headers on Chrome too; any idea how to disable
this?

Regards
Senthil

-Original Message-
From: Tim Düsterhus [mailto:t...@bastelstu.be]
Sent: 03 July 2019 16:57
To: Senthil Naidu; Christopher Faulet; haproxy@formilux.org
Subject: Re: issue with small object caching

Senthil,

On 03.07.19 at 13:20, Senthil Naidu wrote:
> From Firefox
> *snip*
> Cache-Control: no-cache

This is the issue. Firefox requests that an intermediate proxy does not cache 
(or rather: that it re-validates its cached copy).

Best regards
Tim Düsterhus


RE: issue with small object caching

2019-07-03 Thread Senthil Naidu
Hi Tim,

Thanks. I am seeing the same headers on Chrome too; any idea how to disable
this?

Regards
Senthil

-Original Message-
From: Tim Düsterhus [mailto:t...@bastelstu.be] 
Sent: 03 July 2019 16:57
To: Senthil Naidu; Christopher Faulet; haproxy@formilux.org
Subject: Re: issue with small object caching

Senthil,

On 03.07.19 at 13:20, Senthil Naidu wrote:
> From Firefox
> *snip*
> Cache-Control: no-cache

This is the issue. Firefox requests that an intermediate proxy does not cache 
(or rather: that it re-validates its cached copy).

Best regards
Tim Düsterhus


Re: issue with small object caching

2019-07-03 Thread Tim Düsterhus
Senthil,

On 03.07.19 at 13:20, Senthil Naidu wrote:
> From Firefox
> *snip*
> Cache-Control: no-cache

This is the issue. Firefox requests that an intermediate proxy does not
cache (or rather: that it re-validates its cached copy).
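That rule can be modelled in a few lines — a hypothetical Python sketch of how a shared cache decides to skip its lookup, not HAProxy's actual http_check_request_for_cacheability() code:

```python
def ignore_cache(headers: dict) -> bool:
    """Return True when a shared cache should skip its lookup,
    i.e. when the client asked for a fresh or revalidated copy."""
    cache_control = headers.get("Cache-Control", "").lower()
    pragma = headers.get("Pragma", "").lower()
    # "Cache-Control: no-cache" forces revalidation with the origin;
    # HTTP/1.0 clients signal the same thing with "Pragma: no-cache".
    return "no-cache" in cache_control or "no-cache" in pragma
```

With the Firefox headers quoted in this thread the function returns True, and with the IE headers it returns False — matching the behaviour reported.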

Best regards
Tim Düsterhus



RE: issue with small object caching

2019-07-03 Thread Senthil Naidu
Hi,

I have tried setting "no option http-use-htx" but the cache is still not
working from Firefox; it is working only from IE.

Below are the request headers taken from developer options of both IE and 
Firefox

From Firefox

Host: testingsite.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:67.0) Gecko/20100101 
Firefox/67.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Pragma: no-cache
Cache-Control: no-cache

From IE

Key Value
Request GET / HTTP/1.1
Accept  text/html, application/xhtml+xml, */*
Accept-Language en-IN
User-Agent  Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; 
Trident/5.0)
UA-CPU  AMD64
Accept-Encoding gzip, deflate
Host    testingsite.com
If-Modified-Since   Wed, 19 Jun 2019 07:56:46 GMT
If-None-Match   "119-58ba89226aecd"
Connection  Keep-Alive

Regards
Senthil

-Original Message-
From: Christopher Faulet [mailto:cfau...@haproxy.com] 
Sent: 21 June 2019 23:10
To: Senthil Naidu; haproxy@formilux.org
Subject: Re: issue with small object caching

On 21/06/2019 at 17:00, Senthil Naidu wrote:
> Hi,
> 
> I am using haproxy 2.0.0. When I use IE, I can see in the logs that the first
> request reaches the real server, and when I refresh, all subsequent requests
> hit the haproxy cache. When I use Firefox/Chrome, all requests are served by
> the real server only; the haproxy cache is never hit.
> 
> Configuration
> 
> 
> global
> log /dev/log local0 info
> stats socket /var/run/haproxy.stat
> 
> defaults
> option httplog
> cache test
> total-max-size 100
> max-object-size 100
> max-age 240
> 
> #TESTGRP STARTS#
> frontend  TESTGRP
> bind 0.0.0.0:80
> mode http
> http-request cache-use test
> http-response cache-store test
> log global
> option httplog
> option forwardfor
> maxconn 2000
> timeout client 180s
> bind-process 1-2
> default_backend  TESTGRPBACK
> 
> #TESTGRPBACK STARTS#
> backend TESTGRPBACK
> balance roundrobin
> mode http
> log global
> option httpchk HEAD /
> fullconn  2000
> timeout server 180s
> default-server inter 3s rise 2 fall 3 slowstart 0
> server vm-ayw87o 172.30.1.250:80 weight 12 maxconn 2000
> #TESTGRP ENDS#
> 
> ==
> 
> haproxy -vv
> HA-Proxy version 2.0.0 2019/06/16 - https://haproxy.org/ Build options 
> :
>TARGET  = linux-glibc
>CPU = x86_64
>CC  = gcc
>CFLAGS  = -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
> -Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
> -Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
> -Wno-missing-field-initializers -Wtype-limits
>OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1
> 
> Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE 
> -PCRE_JIT -PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD 
> -PTHREAD_PSHARED -REGPARM -STATIC_PCRE -STATIC_PCRE2 +TPROXY 
> +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT +CRYPT_H -VSYSCALL +GETADDRINFO 
> +OPENSSL -LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB -SLZ +CPU_AFFINITY 
> +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD 
> -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS
> 
> Default settings :
>bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Built with multi-threading support (MAX_THREADS=64, default=2).
> Built with OpenSSL version : OpenSSL 1.1.1c  28 May 2019 Running on 
> OpenSSL version : OpenSSL 1.1.1c  28 May 2019 OpenSSL library supports 
> TLS extensions : yes OpenSSL library supports SNI : yes OpenSSL 
> library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3 Built with network 
> namespace support.
> Built with transparent proxy support using: IP_TRANSPARENT 
> IPV6_TRANSPARENT IP_FREEBIND Built with zlib version : 1.2.7 Running 
> on zlib version : 1.2.7 Compression algorithms supported : 
> identity("identity"), deflate("deflate"), raw-deflate("deflate"), 
> gzip("gzip") Built with PCRE version : 8.32 2012-11-30 Running on PCRE 
> version : 8.32 2012-11-30 PCRE library supports JIT : no (USE_PCRE_JIT 
> not set) Encrypted password support via crypt(3): yes
> 
> Available polling systems :
>epoll : pref=300,  test result OK
> poll : pref=200,  test result OK
>   select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Available multiplexer protocols :
> (protocols marked as <default> cannot be specified using 'proto' keyword)
>h2 : mode=HTTP   side=FE     mux=H2
>   

Re: issue with small object caching

2019-06-21 Thread Christopher Faulet

On 21/06/2019 at 17:00, Senthil Naidu wrote:

Hi,

I am using haproxy 2.0.0. When I use IE, I can see in the logs that the first 
request reaches the real server, and when I refresh, all subsequent requests 
hit the haproxy cache. When I use Firefox/Chrome, all requests are served by 
the real server only; the haproxy cache is never hit.

Configuration


global
log /dev/log local0 info
stats socket /var/run/haproxy.stat

defaults
option httplog
cache test
total-max-size 100
max-object-size 100
max-age 240

#TESTGRP STARTS#
frontend  TESTGRP
bind 0.0.0.0:80
mode http
http-request cache-use test
http-response cache-store test
log global
option httplog
option forwardfor
maxconn 2000
timeout client 180s
bind-process 1-2
default_backend  TESTGRPBACK

#TESTGRPBACK STARTS#
backend TESTGRPBACK
balance roundrobin
mode http
log global
option httpchk HEAD /
fullconn  2000
timeout server 180s
default-server inter 3s rise 2 fall 3 slowstart 0
server vm-ayw87o 172.30.1.250:80 weight 12 maxconn 2000
#TESTGRP ENDS#

==

haproxy -vv
HA-Proxy version 2.0.0 2019/06/16 - https://haproxy.org/
Build options :
   TARGET  = linux-glibc
   CPU = x86_64
   CC  = gcc
   CFLAGS  = -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits
   OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE -PCRE_JIT 
-PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM 
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
+CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL -LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB 
-SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD 
-OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
   bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1c  28 May 2019
Running on OpenSSL version : OpenSSL 1.1.1c  28 May 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes

Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
   h2 : mode=HTTP       side=FE        mux=H2
   h2 : mode=HTX        side=FE|BE     mux=H2
   <default> : mode=HTX        side=FE|BE     mux=H1
   <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services : none

Available filters :
 [SPOE] spoe
 [COMP] compression
 [CACHE] cache
 [TRACE] trace

Regards


Thanks. Everything seems to be ok.

Just to be sure, is there any chance that your server sets the "Vary" header on 
responses to FF/Chrome but not on responses to IE?


Otherwise, you may try to disable HTX by setting the directive "no option 
http-use-htx" in your defaults section.


It could also be helpful to have the request headers as sent from IE and from FF 
and the response headers as sent from your server.


--
Christopher Faulet



RE: issue with small object caching

2019-06-21 Thread Senthil Naidu
Hi,

I am using haproxy 2.0.0. When I use IE, I can see in the logs that the first 
request reaches the real server, and when I refresh, all subsequent requests 
hit the haproxy cache. When I use Firefox/Chrome, all requests are served by 
the real server only; the haproxy cache is never hit.

Configuration


global
log /dev/log local0 info
stats socket /var/run/haproxy.stat

defaults
option httplog
cache test
   total-max-size 100
   max-object-size 100
   max-age 240

#TESTGRP STARTS#
frontend  TESTGRP
bind 0.0.0.0:80
mode http
http-request cache-use test
http-response cache-store test
log global
option httplog
option forwardfor
maxconn 2000
timeout client 180s
bind-process 1-2
default_backend  TESTGRPBACK

#TESTGRPBACK STARTS#
backend TESTGRPBACK
balance roundrobin
mode http
log global
option httpchk HEAD /
fullconn  2000
timeout server 180s
default-server inter 3s rise 2 fall 3 slowstart 0
server vm-ayw87o 172.30.1.250:80 weight 12 maxconn 2000
#TESTGRP ENDS#

==

haproxy -vv
HA-Proxy version 2.0.0 2019/06/16 - https://haproxy.org/
Build options :
  TARGET  = linux-glibc
  CPU = x86_64
  CC  = gcc
  CFLAGS  = -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter 
-Wno-old-style-declaration -Wno-ignored-qualifiers -Wno-clobbered 
-Wno-missing-field-initializers -Wtype-limits
  OPTIONS = USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1

Feature list : +EPOLL -KQUEUE -MY_EPOLL -MY_SPLICE +NETFILTER +PCRE -PCRE_JIT 
-PCRE2 -PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED -REGPARM 
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
+CRYPT_H -VSYSCALL +GETADDRINFO +OPENSSL -LUA +FUTEX +ACCEPT4 -MY_ACCEPT4 +ZLIB 
-SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL +SYSTEMD 
-OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=2).
Built with OpenSSL version : OpenSSL 1.1.1c  28 May 2019
Running on OpenSSL version : OpenSSL 1.1.1c  28 May 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with network namespace support.
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP       side=FE        mux=H2
  h2 : mode=HTX        side=FE|BE     mux=H2
  <default> : mode=HTX        side=FE|BE     mux=H1
  <default> : mode=TCP|HTTP   side=FE|BE     mux=PASS

Available services : none

Available filters :
[SPOE] spoe
[COMP] compression
[CACHE] cache
[TRACE] trace

Regards
Senthil

-Original Message-
From: Christopher Faulet [mailto:cfau...@haproxy.com] 
Sent: 21 June 2019 20:23
To: Senthil Naidu; haproxy@formilux.org
Subject: Re: issue with small object caching

On 21/06/2019 at 16:39, Senthil Naidu wrote:
> I am testing the small-object caching feature. When I browse the site 
> behind haproxy using IE the cache is working, but when I try the same 
> from Firefox or Chrome the caching functionality is not working.
> 
> Has anybody faced this issue.
> 

Hi,

Could you provide more information please? The HAProxy version and, if 
possible, a "minimal" configuration to reproduce your issue. Also, share the 
output of "haproxy -vv".

When you say the cache doesn't work, do you mean there is no caching at all or 
do you get an error? Do you have any logs that could help to understand what's 
happening?

--
Christopher Faulet


Re: issue with small object caching

2019-06-21 Thread Christopher Faulet

On 21/06/2019 at 16:39, Senthil Naidu wrote:
I am testing the small-object caching feature. When I browse the site behind 
haproxy using IE the cache is working, but when I try the same from Firefox or 
Chrome the caching functionality is not working.


Has anybody faced this issue.



Hi,

Could you provide more information please? The HAProxy version and, if 
possible, a "minimal" configuration to reproduce your issue. Also, share the 
output of "haproxy -vv".


When you say the cache doesn't work, do you mean there is no caching at all or 
do you get an error? Do you have any logs that could help to understand what's 
happening?


--
Christopher Faulet



issue with small object caching

2019-06-21 Thread Senthil Naidu
Hi,

I am testing the small-object caching feature. When I browse the site behind 
haproxy using IE the cache is working, but when I try the same from Firefox or 
Chrome the caching functionality is not working.
Has anybody faced this issue.

Regards
Senthil


Senthil Naidu
General Manager - IT Engineering
IT Engineering
Netmagic (An NTT Communications Company)
Direct: +91 +91 22 40090100
Cell: 7738784713
Email: sent...@netmagicsolutions.com


Re: Question on Caching.

2018-05-07 Thread Aaron West
Hi Willy,

I think what we are looking for is some kind of small cache to
accelerate the load times of a single page; this is particularly for
things such as WordPress where page load times can be slow. I imagine
it being set to cache the homepage only, fairly small(just a few K)
and I guess it would need to only cache the HTML body rather than
headers... Does that make any sense at all?

It may be that the small object cache would help? Or the idea itself
may be a waste of time... Currently, I've been looking at the Apache
module mod_cache.

I'd value your opinion either way.

Aaron West

Loadbalancer.org Ltd.

www.loadbalancer.org

+1 888 867 9504 / +44 (0)330 380 1064
aa...@loadbalancer.org



Re: Question on Caching.

2018-04-30 Thread Willy Tarreau
Hi Andrew,

On Mon, Apr 30, 2018 at 10:08:11AM +0100, Andrew Smalley wrote:
> Hi Willy
> 
> Thank you for your detailed reply explaining why you think only the
> favicon cache is sensible and that a full-blown cache within Haproxy
> is not the best of ideas, although interesting.
> 
> I will continue the search for a viable yet small cache.

What are you looking for exactly ? What makes you think the small object
cache would not be suited to your use case, or that it would be desirable
to have a more complete cache inside the load balancer ? We didn't get
much feedback on the cache, so your opinion on this is obviously interesting.

Thanks,
Willy



Re: Question on Caching.

2018-04-30 Thread Andrew Smalley
Hi Willy

Thank you for your detailed reply explaining why you think only the
favicon cache is sensible and that a full-blown cache within Haproxy
is not the best of ideas, although interesting.

I will continue the search for a viable yet small cache.



Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog


On 28 April 2018 at 06:48, Willy Tarreau <w...@1wt.eu> wrote:
> Hi Andrew,
>
> On Thu, Apr 26, 2018 at 10:06:00PM +0100, Andrew Smalley wrote:
>> Hello Haproxy mailing list
>>
>> I have been looking at caching technology and have found this
>>
>> https://github.com/jiangwenyuan/nuster/
>>
>> It claims to be a v1.7  / v1.8 branch fully compatible with haproxy
>> and indeed based on haproxy with the added capability of having a
>> really fast cache as described here
>> https://github.com/jiangwenyuan/nuster/wiki/Web-cache-server-performance-benchmark:-nuster-vs-nginx-vs-varnish-vs-squid
>>
>> It looks interesting but I would love some feedback please
>
> It's indeed interesting. By the way it's only for 1.7 as the 1.8 branch also
> contains 1.7. First, he found that nginx's primary job is not to be a cache
> (just like haproxy is not), and that in the end, only squid and varnish are
> real caches.
>
> Second, he focuses on performance. It's not new for many of us that haproxy
> rocks here, being 3 times faster than nginx in single core and 3 times faster
> than varnish using 12 cores is easily expected since haproxy never makes any
> single I/O access. He could even have compared with the small object cache
> in 1.8.
>
> But there's an important point which is missed there : manageability.
> Varnish is a real cache and made for being manageable and flexible. It
> probably has its own shortcomings, but it does the job perfectly for those
> who need a fully manageable cache. Putting a full-blown cache into haproxy
> is not a good idea in my opinion. A load balancer must be mostly stateless
> so that it can be killed, rebooted or tweaked. Implementing a full-blown
> cache into it seriously affects this capacity. It may even require some
> reloads just to flush the cache, while a load balancer should never have
> to be touched for no reason, especially when it's shared between multiple
> customers.
>
> The reason I was OK with the "favicon cache" in haproxy is that I noticed
> that when placing haproxy in front of varnish, we wasted more CPU and time
> processing the connection between haproxy and varnish than delivering a
> very small object from memory. And others had noticed that before, seeing
> certain configs use dummy backends with "errorfile 503" to deliver very
> small objects. So I thought that a short-lived, tiny objects cache saving
> us from having to connect to varnish would benefit both components without
> adding any requirement for cache maintenance. It's really where I draw the
> line between what is acceptable in haproxy and what is not. The day someone
> asks here if we can implement a cache flush on the CLI will indicate we've
> gone too far already, and we purposely refrained from implementing it.
>
> With this said, I can understand why some people would like to have more,
> especially when seeing the performance numbers on the site above. Possibly
> that we should think how to make it easier for these people to maintain
> their code without having to rebase too much (eg they may need some extra
> register functions or hooks to avoid patching the core).
>
> Regards,
> Willy



Re: Question on Caching.

2018-04-27 Thread Willy Tarreau
Hi Andrew,

On Thu, Apr 26, 2018 at 10:06:00PM +0100, Andrew Smalley wrote:
> Hello Haproxy mailing list
> 
> I have been looking at caching technology and have found this
> 
> https://github.com/jiangwenyuan/nuster/
> 
> It claims to be a v1.7  / v1.8 branch fully compatible with haproxy
> and indeed based on haproxy with the added capability of having a
> really fast cache as described here
> https://github.com/jiangwenyuan/nuster/wiki/Web-cache-server-performance-benchmark:-nuster-vs-nginx-vs-varnish-vs-squid
> 
> It looks interesting but I would love some feedback please

It's indeed interesting. By the way it's only for 1.7 as the 1.8 branch also
contains 1.7. First, he found that nginx's primary job is not to be a cache
(just like haproxy is not), and that in the end, only squid and varnish are
real caches.

Second, he focuses on performance. It's not new for many of us that haproxy
rocks here, being 3 times faster than nginx in single core and 3 times faster
than varnish using 12 cores is easily expected since haproxy never makes any
single I/O access. He could even have compared with the small object cache
in 1.8.

But there's an important point which is missed there : manageability.
Varnish is a real cache and made for being manageable and flexible. It
probably has its own shortcomings, but it does the job perfectly for those
who need a fully manageable cache. Putting a full-blown cache into haproxy
is not a good idea in my opinion. A load balancer must be mostly stateless
so that it can be killed, rebooted or tweaked. Implementing a full-blown
cache into it seriously affects this capacity. It may even require some
reloads just to flush the cache, while a load balancer should never have
to be touched for no reason, especially when it's shared between multiple
customers.

The reason I was OK with the "favicon cache" in haproxy is that I noticed
that when placing haproxy in front of varnish, we wasted more CPU and time
processing the connection between haproxy and varnish than delivering a
very small object from memory. And others had noticed that before, seeing
certain configs use dummy backends with "errorfile 503" to deliver very
small objects. So I thought that a short-lived, tiny objects cache saving
us from having to connect to varnish would benefit both components without
adding any requirement for cache maintenance. It's really where I draw the
line between what is acceptable in haproxy and what is not. The day someone
asks here if we can implement a cache flush on the CLI will indicate we've
gone too far already, and we purposely refrained from implementing it.

With this said, I can understand why some people would like to have more,
especially when seeing the performance numbers on the site above. Possibly
that we should think how to make it easier for these people to maintain
their code without having to rebase too much (eg they may need some extra
register functions or hooks to avoid patching the core).

Regards,
Willy



Question on Caching.

2018-04-26 Thread Andrew Smalley
Hello Haproxy mailing list

I have been looking at caching technology and have found this

https://github.com/jiangwenyuan/nuster/

It claims to be a v1.7  / v1.8 branch fully compatible with haproxy
and indeed based on haproxy with the added capability of having a
really fast cache as described here
https://github.com/jiangwenyuan/nuster/wiki/Web-cache-server-performance-benchmark:-nuster-vs-nginx-vs-varnish-vs-squid

It looks interesting but I would love some feedback please


Andruw Smalley

Loadbalancer.org Ltd.

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

Leave a Review | Deployment Guides | Blog



Re: HAProxy 1.8.3 SSL caching regression

2018-01-05 Thread Willy Tarreau
On Thu, Jan 04, 2018 at 02:14:41PM -0500, Jeffrey J. Persch wrote:
> Hi William,
> 
> Verified.
> 
> Thanks for the quick fix,

Great, patch now merged. Thanks!
Willy



Re: HAProxy 1.8.3 SSL caching regression

2018-01-04 Thread Jeffrey J. Persch
Hi William,

Verified.

Thanks for the quick fix,
Jeffrey J. Persch


On Wed, Jan 3, 2018 at 2:02 PM, Jeffrey J. Persch 
wrote:

> Hi William,
>
> The test case now works. I will do load testing with the patch today.
>
> Thanks,
> Jeffrey J. Persch
>
> On Wed, Jan 3, 2018 at 1:25 PM, William Lallemand 
> wrote:
>
>> On Wed, Jan 03, 2018 at 06:41:01PM +0100, William Lallemand wrote:
>> > I'm able to reproduce the problem thanks to your detailed example, it
>> looks
>> > like a regression in the code.
>> >
>> > I will check the code to see what's going on.
>>
>> I found the issue, would you mind trying the attached patch?
>>
>> Thanks.
>>
>> --
>> William Lallemand
>>
>
>


Re: HAProxy 1.8.3 SSL caching regression

2018-01-03 Thread Jeffrey J. Persch
Hi William,

The test case now works. I will do load testing with the patch today.

Thanks,
Jeffrey J. Persch

On Wed, Jan 3, 2018 at 1:25 PM, William Lallemand 
wrote:

> On Wed, Jan 03, 2018 at 06:41:01PM +0100, William Lallemand wrote:
> > I'm able to reproduce the problem thanks to your detailed example, it
> looks
> > like a regression in the code.
> >
> > I will check the code to see what's going on.
>
> I found the issue, would you mind trying the attached patch?
>
> Thanks.
>
> --
> William Lallemand
>


Re: HAProxy 1.8.3 SSL caching regression

2018-01-03 Thread William Lallemand
On Wed, Jan 03, 2018 at 06:41:01PM +0100, William Lallemand wrote:
> I'm able to reproduce the problem thanks to your detailed example, it looks
> like a regression in the code.
> 
> I will check the code to see what's going on.

I found the issue, would you mind trying the attached patch?

Thanks.

-- 
William Lallemand
From da786103ff39a0bed8efbde120808b2ee2ec Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Wed, 3 Jan 2018 19:15:51 +0100
Subject: [PATCH] BUG/MEDIUM: ssl: cache doesn't release shctx blocks

Since the rework of the shctx with the hot list system, the ssl cache
was putting sessions into the hot list without removing them.
Once all blocks were used, they were all locked in the hot list, which
prevented reusing them for new sessions.

Bug introduced by 4f45bb9 ("MEDIUM: shctx: separate ssl and shctx")

Thanks to Jeffrey J. Persch for reporting this bug.

Must be backported to 1.8.
---
 src/ssl_sock.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index f9d5f2567..322b05409 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -3849,8 +3849,12 @@ static int sh_ssl_sess_store(unsigned char *s_id, unsigned char *data, int data_
 		first->len = sizeof(struct sh_ssl_sess_hdr);
 	}
 
-	if (shctx_row_data_append(ssl_shctx, first, data, data_len) < 0)
+	if (shctx_row_data_append(ssl_shctx, first, data, data_len) < 0) {
+		shctx_row_dec_hot(ssl_shctx, first);
 		return 0;
+	}
+
+	shctx_row_dec_hot(ssl_shctx, first);
 
 	return 1;
 }
-- 
2.13.6
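The failure mode is easy to model — a toy Python sketch of the hot-list accounting, not the real shctx code: a block fetched for a store is pinned on the hot list, and unless the store path unpins it (the shctx_row_dec_hot() calls added above), the pool of reusable blocks shrinks to zero and caching stops.

```python
class ToyShctx:
    """Toy model of the shctx block pool with a hot list."""
    def __init__(self, nblocks):
        self.avail = nblocks   # blocks that can be used or recycled
        self.hot = 0           # blocks pinned by an in-progress store

    def store_session(self, release_after_store):
        if self.avail == 0:
            return False       # nothing reusable: session not cached
        self.avail -= 1
        self.hot += 1          # block is pinned while data is appended
        if release_after_store:
            self.hot -= 1      # the fix: unpin once the store is done
            self.avail += 1    # block is reclaimable again (LRU-style)
        return True

buggy, fixed = ToyShctx(3), ToyShctx(3)
buggy_results = [buggy.store_session(False) for _ in range(4)]
fixed_results = [fixed.store_session(True) for _ in range(4)]
```

With the bug, the fourth store fails for good, matching the report below: sessions cache up to tune.ssl.cachesize and then never again.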



Re: HAProxy 1.8.3 SSL caching regression

2018-01-03 Thread William Lallemand
On Wed, Jan 03, 2018 at 12:04:50PM -0500, Jeffrey J. Persch wrote:
> Greetings,
> 

Hi Jeffrey,

> We have been load testing 1.8.3 and noticed SSL caching was broken in 1.8
> during the shctx refactoring.
> 
> New SSL connections will cache up until tune.ssl.cachesize, then no
> connections will ever be cached again.
> 
> In haproxy 1.7 and before, the SSL cache works correctly as a LRU cache.
> 
> 
> [...] 
> 
> This appears to independent of target & openssl version, we have reproduced
> on linux2628 openssl 1.0.1k-fips and osx openssl 1.0.2n.
> 
> Any insights appreciated.
> 

I'm able to reproduce the problem thanks to your detailed example, it looks
like a regression in the code.

I will check the code to see what's going on.

-- 
William Lallemand



HAProxy 1.8.3 SSL caching regression

2018-01-03 Thread Jeffrey J. Persch
Greetings,

We have been load testing 1.8.3 and noticed SSL caching was broken in 1.8
during the shctx refactoring.

New SSL connections will cache up until tune.ssl.cachesize, then no
connections will ever be cached again.

In haproxy 1.7 and before, the SSL cache works correctly as a LRU cache.


Example configuration file, haproxy-ssl-cache.cfg, with cachesize set to 3
to easily reproduce:

global
ssl-default-bind-ciphers HIGH:!aNULL:!MD5
ssl-default-bind-options no-sslv3 no-tls-tickets
tune.ssl.default-dh-param 2048
tune.ssl.cachesize 3
tune.ssl.lifetime 60

defaults
stats enable
stats uri /haproxy/stats

frontend some-frontend
bind :8443 ssl crt self-signed.pem
mode http
timeout client 15s
timeout http-request 15s
use_backend some-backend

backend some-backend
mode http
timeout connect 1s
timeout queue 0s
timeout server 1s
server some-server 127.0.0.1:8091 check


Example script to build and test on macosx:

srcdir=haproxy-1.8

# Install openssl library
brew install openssl

# Build HAProxy with OpenSSL support
make -C $srcdir TARGET=osx USE_OPENSSL=1 \
    SSL_INC=/usr/local/opt/openssl/include \
    SSL_LIB=/usr/local/opt/openssl/lib USE_ZLIB=1

# Generate self signed cert
openssl req -newkey rsa:2048 -nodes -keyout self-signed.key -x509 \
    -days 365 -out self-signed.crt -subj \
    "/C=US/ST=Pennsylvania/L=Philadelphia/O=HAProxy/OU=QA/CN=localhost"
cat self-signed.crt self-signed.key >>self-signed.pem

# Run HAProxy
$srcdir/haproxy -f haproxy-ssl-cache.cfg &

# Demonstrate failure to cache new sessions after cache fills
openssl s_client -connect localhost:8443 -reconnect -no_ticket 2>verify.err | egrep 'New|Reused' # PASS: 1 New, 5 Reused
openssl s_client -connect localhost:8443 -reconnect -no_ticket 2>verify.err | egrep 'New|Reused' # PASS: 1 New, 5 Reused
openssl s_client -connect localhost:8443 -reconnect -no_ticket 2>verify.err | egrep 'New|Reused' # PASS: 1 New, 5 Reused
openssl s_client -connect localhost:8443 -reconnect -no_ticket 2>verify.err | egrep 'New|Reused' # FAIL: 6 New

# Demonstrate failure to evict old entries from cache
sleep 65
openssl s_client -connect localhost:8443 -reconnect -no_ticket 2>verify.err | egrep 'New|Reused' # FAIL: 6 New


This appears to be independent of target & openssl version; we have reproduced
on linux2628 openssl 1.0.1k-fips and osx openssl 1.0.2n.

Any insights appreciated.

Thanks,
Jeffrey J. Persch


Re: dns resolution and caching

2014-07-03 Thread Baptiste
On Wed, Jul 2, 2014 at 5:03 AM, Yumerefendi, Aydan
aydan.yumerefe...@inin.com wrote:
 We are using haproxy to route traffic to several AWS services that are
 behind an ELB and noticed the following behavior:
   - haproxy resolves the ELB address at startup and routes traffic just fine
 (not sure if haproxy uses the first IP or all resolved IPs and round-robins
 between them, though)
   - however,  Amazon uses short TTL for ELB DNS entries, 60s or so. If the
 ELB is modified, due to load, or internal reconfiguration, Amazon can modify
 the ELB DNS mapping
   - once the IP(s) mapped to the ELB are completely replaced, relative to
 the initially resolved ones at startup, haproxy fails to route traffic and
 returns status 503

 Is there a way to configure haproxy to respect DNS TTL when resolving dns
 names? If not, is there something you can recommend that would allow us to
 deal with this problem?

 Our current plan is to stop using DNS for the ELB and instead to use its ip
 addresses. We'll then periodically do DNS resolutions and once we detect a
 change, we'll rewrite the configuration and have haproxy reload it.

 Thanks for you help and for this great product!

 --aydan

Hi,

This is not yet available in HAProxy.
It's a common request and should be available some day, but no idea when!

Baptiste



Re: dns resolution and caching

2014-07-03 Thread Yumerefendi, Aydan
Thank you, Baptiste. I think it will be a very useful feature to add for any
service that uses dynamic DNS of some sort.

Thanks for your reply,

Best,
-aydan

On 7/3/14, 4:41 PM, Baptiste bed...@gmail.com wrote:

On Wed, Jul 2, 2014 at 5:03 AM, Yumerefendi, Aydan
aydan.yumerefe...@inin.com wrote:
 We are using haproxy to route traffic to several AWS services that are
 behind an ELB and noticed the following behavior:
   - haproxy resolves the ELB address at startup and routes traffic just
fine
 (not sure if haproxy uses the first IP or all resolved IPs and
round-robins
 between them, though)
   - however,  Amazon uses short TTL for ELB DNS entries, 60s or so. If
the
 ELB is modified, due to load, or internal reconfiguration, Amazon can
modify
 the ELB DNS mapping
   - once the IP(s) mapped to the ELB are completely replaced, relative
to
 the initially resolved ones at startup, haproxy fails to route traffic
and
 returns status 503

 Is there a way to configure haproxy to respect DNS TTL when resolving
dns
 names? If not, is there something you can recommend that would allow us
to
 deal with this problem?

 Our current plan is to stop using DNS for the ELB and instead to use
its ip
 addresses. We'll then periodically do DNS resolutions and once we
detect a
 change, we'll rewrite the configuration and have haproxy reload it.

 Thanks for you help and for this great product!

 --aydan

Hi,

This is not yet available in HAProxy.
It's a common request and should be available some day, but no idea when!

Baptiste




dns resolution and caching

2014-07-01 Thread Yumerefendi, Aydan
We are using haproxy to route traffic to several AWS services that are behind 
an ELB and noticed the following behavior:
  - haproxy resolves the ELB address at startup and routes traffic just fine 
(not sure if haproxy uses the first IP or all resolved IPs and round-robins 
between them, though)
  - however,  Amazon uses short TTL for ELB DNS entries, 60s or so. If the ELB 
is modified, due to load, or internal reconfiguration, Amazon can modify the 
ELB DNS mapping
  - once the IP(s) mapped to the ELB are completely replaced, relative to the 
initially resolved ones at startup, haproxy fails to route traffic and returns 
status 503

Is there a way to configure haproxy to respect DNS TTL when resolving dns 
names? If not, is there something you can recommend that would allow us to deal 
with this problem?

Our current plan is to stop using DNS for the ELB and instead to use its ip 
addresses. We'll then periodically do DNS resolutions and once we detect a 
change, we'll rewrite the configuration and have haproxy reload it.

Thanks for your help and for this great product!

-aydan


Re: problem with sort of caching of use_backend with socket.io and apache

2012-11-29 Thread david rene comba lareu
Hi,

Many thanks, your link was exactly what I needed! :D

Regards,
Shadow.

2012/11/29 Baptiste bed...@gmail.com:
 Hi David,

 For more information about HAProxy and websockets, please have a look at:
 http://blog.exceliance.fr/2012/11/07/websockets-load-balancing-with-haproxy/

 It may give you some hints and point you to the right direction.

 cheers


 On Wed, Nov 28, 2012 at 6:34 PM, david rene comba lareu
 shadow.of.sou...@gmail.com wrote:
 Thanks willy, i solved it as soon you answer me but i'm still dealing
 to the configuration to make it work as i need:

 my last question was this:
 http://serverfault.com/questions/451690/haproxy-is-caching-the-forwarding
 and i got it working, but for some reason, after the authentication is
 made and the some commands are sent, the connection is dropped and a
 new connection is made as you can see here:

   info  - handshake authorized 2ZqGgU2L5RNksXQRWuhi
   debug - setting request GET /socket.io/1/websocket/2ZqGgU2L5RNksXQRWuhi
   debug - set heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
   debug - client authorized for
   debug - websocket writing 1::
   debug - websocket received data packet
 5:3+::{name:ferret,args:[tobi]}
   debug - sending data ack packet
   debug - websocket writing 6:::3+[woot]
   info  - transport end (socket end)
   debug - set close timeout for client 2ZqGgU2L5RNksXQRWuhi
   debug - cleared close timeout for client 2ZqGgU2L5RNksXQRWuhi
   debug - cleared heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
   debug - discarding transport
   debug - client authorized
   info  - handshake authorized WkHV-B80ejP6MHQTWuhj
   debug - setting request GET /socket.io/1/websocket/WkHV-B80ejP6MHQTWuhj
   debug - set heartbeat interval for client WkHV-B80ejP6MHQTWuhj
   debug - client authorized for
   debug - websocket writing 1::
   debug - websocket received data packet
 5:4+::{name:ferret,args:[tobi]}
   debug - sending data ack packet
   debug - websocket writing 6:::4+[woot]
   info  - transport end (socket end)

 i tried several configurations, something like this:
 http://stackoverflow.com/questions/4360221/haproxy-websocket-disconnection/

 and also declaring 2 backends, and using ACL to forward to a backend
 that has the
   option http-pretend-keepalive
 when the request is a websocket request and to a backend that has
 http-server-close when the request is only for socket.io static files
 or is any other type of request that is not websocket.

 i would clarify that http-server-close is only on the nginx backend
 and in the static files backend, http-pretend-keepalive is on frontend
 all and in the websocket backend.

 anyone could point me to the right direction? i tried several
 combinations and none worked so far :(

 thanks in advance for your time and patience :)

 2012/11/24 Willy Tarreau w...@1wt.eu:
 Hi David,

 On Sat, Nov 24, 2012 at 09:26:56AM -0300, david rene comba lareu wrote:
 Hi everyone,

 i'm little disappointed with a problem i'm having trying to configure
 HAproxy in the way i need, so i need a little of help of you guys,
 that knows a lot more than me about this, as i reviewed all the
 documentation and tried several things but nothing worked :(.

 basically, my structure is:

 HAproxy as frontend, in 80 port - forwards by default to webserver
 (in this case is apache, in other machines could be nginx)
  - depending the domain
 and the request, forwards to an Node.js app

 so i have something like this:

 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn 4096
 user haproxy
 group haproxy
 daemon

   defaults
 log global
 modehttp
 maxconn 2000
 contimeout  5000
 clitimeout  5
 srvtimeout  5


 frontend all 0.0.0.0:80
 timeout client 5000
 default_backend www_backend

 acl is_soio url_dom(host) -i socket.io #if the request contains socket.io

 acl is_chat hdr_dom(host) -i chaturl #if the request comes from chaturl.com

 use_backend chat_backend if is_chat is_soio

 backend www_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout server 5000
 timeout connect 4000
 server server1 localhost:6060 weight 1 maxconn 1024 check #forwards to 
 apache2

 backend chat_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout queue 5
 timeout server 5
 timeout connect 5
 server server1 localhost:5558 weight 1 maxconn 1024 check #forward to
 node.js app

 my application uses socket.io, so anything that match the domain and
 has socket.io in the request, should forward to the chat_backend.

 The problem is that if i load directly from the browser, let say, the
 socket.io file (it will be something like
 http://www.chaturl.com/socket.io/socket.io.js) loads perfectly, but
 then when i try to load index.html (as
 http

Re: problem with sort of caching of use_backend with socket.io and apache

2012-11-28 Thread david rene comba lareu
Thanks Willy, I solved it as soon as you answered, but I'm still adjusting
the configuration to make it work as I need:

my last question was this:
http://serverfault.com/questions/451690/haproxy-is-caching-the-forwarding
and I got it working, but for some reason, after the authentication is
made and some commands are sent, the connection is dropped and a
new connection is made, as you can see here:

  info  - handshake authorized 2ZqGgU2L5RNksXQRWuhi
  debug - setting request GET /socket.io/1/websocket/2ZqGgU2L5RNksXQRWuhi
  debug - set heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
  debug - client authorized for
  debug - websocket writing 1::
  debug - websocket received data packet
5:3+::{name:ferret,args:[tobi]}
  debug - sending data ack packet
  debug - websocket writing 6:::3+[woot]
  info  - transport end (socket end)
  debug - set close timeout for client 2ZqGgU2L5RNksXQRWuhi
  debug - cleared close timeout for client 2ZqGgU2L5RNksXQRWuhi
  debug - cleared heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
  debug - discarding transport
  debug - client authorized
  info  - handshake authorized WkHV-B80ejP6MHQTWuhj
  debug - setting request GET /socket.io/1/websocket/WkHV-B80ejP6MHQTWuhj
  debug - set heartbeat interval for client WkHV-B80ejP6MHQTWuhj
  debug - client authorized for
  debug - websocket writing 1::
  debug - websocket received data packet
5:4+::{name:ferret,args:[tobi]}
  debug - sending data ack packet
  debug - websocket writing 6:::4+[woot]
  info  - transport end (socket end)

i tried several configurations, something like this:
http://stackoverflow.com/questions/4360221/haproxy-websocket-disconnection/

and also declaring 2 backends, and using ACL to forward to a backend
that has the
  option http-pretend-keepalive
when the request is a websocket request and to a backend that has
http-server-close when the request is only for socket.io static files
or is any other type of request that is not websocket.

I should clarify that http-server-close is only on the nginx backend
and the static-files backend, while http-pretend-keepalive is on the
"all" frontend and the websocket backend.

Could anyone point me in the right direction? I tried several
combinations and none worked so far :(

thanks in advance for your time and patience :)

2012/11/24 Willy Tarreau w...@1wt.eu:
 Hi David,

 On Sat, Nov 24, 2012 at 09:26:56AM -0300, david rene comba lareu wrote:
 Hi everyone,

 i'm little disappointed with a problem i'm having trying to configure
 HAproxy in the way i need, so i need a little of help of you guys,
 that knows a lot more than me about this, as i reviewed all the
 documentation and tried several things but nothing worked :(.

 basically, my structure is:

 HAproxy as frontend, in 80 port - forwards by default to webserver
 (in this case is apache, in other machines could be nginx)
  - depending the domain
 and the request, forwards to an Node.js app

 so i have something like this:

 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn 4096
 user haproxy
 group haproxy
 daemon

   defaults
 log global
 modehttp
 maxconn 2000
 contimeout  5000
 clitimeout  5
 srvtimeout  5


 frontend all 0.0.0.0:80
 timeout client 5000
 default_backend www_backend

 acl is_soio url_dom(host) -i socket.io #if the request contains socket.io

 acl is_chat hdr_dom(host) -i chaturl #if the request comes from chaturl.com

 use_backend chat_backend if is_chat is_soio

 backend www_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout server 5000
 timeout connect 4000
 server server1 localhost:6060 weight 1 maxconn 1024 check #forwards to 
 apache2

 backend chat_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout queue 5
 timeout server 5
 timeout connect 5
 server server1 localhost:5558 weight 1 maxconn 1024 check #forward to
 node.js app

 my application uses socket.io, so anything that match the domain and
 has socket.io in the request, should forward to the chat_backend.

 The problem is that if i load directly from the browser, let say, the
 socket.io file (it will be something like
 http://www.chaturl.com/socket.io/socket.io.js) loads perfectly, but
 then when i try to load index.html (as
 http://www.chaturl.com/index.html) most of the times, is still
 redirect to socket.io. after refreshing a few time, it finally loads
 index.html, but then, doesn't load the socket.io.js file inserted in
 the file (why it redirect to the apache server, and not the node.js
 app). so as i said, it sort of caching the request.

 i tried several ACL combinations, i disabled the domain check, only
 checking for socket.io but is still the same. Reading again the
 documentation i tried to use hdr_dir, hdr_dom

Re: problem with sort of caching of use_backend with socket.io and apache

2012-11-28 Thread Baptiste
Hi David,

For more information about HAProxy and websockets, please have a look at:
http://blog.exceliance.fr/2012/11/07/websockets-load-balancing-with-haproxy/

It may give you some hints and point you to the right direction.

cheers


On Wed, Nov 28, 2012 at 6:34 PM, david rene comba lareu
shadow.of.sou...@gmail.com wrote:
 Thanks willy, i solved it as soon you answer me but i'm still dealing
 to the configuration to make it work as i need:

 my last question was this:
 http://serverfault.com/questions/451690/haproxy-is-caching-the-forwarding
 and i got it working, but for some reason, after the authentication is
 made and the some commands are sent, the connection is dropped and a
 new connection is made as you can see here:

   info  - handshake authorized 2ZqGgU2L5RNksXQRWuhi
   debug - setting request GET /socket.io/1/websocket/2ZqGgU2L5RNksXQRWuhi
   debug - set heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
   debug - client authorized for
   debug - websocket writing 1::
   debug - websocket received data packet
 5:3+::{name:ferret,args:[tobi]}
   debug - sending data ack packet
   debug - websocket writing 6:::3+[woot]
   info  - transport end (socket end)
   debug - set close timeout for client 2ZqGgU2L5RNksXQRWuhi
   debug - cleared close timeout for client 2ZqGgU2L5RNksXQRWuhi
   debug - cleared heartbeat interval for client 2ZqGgU2L5RNksXQRWuhi
   debug - discarding transport
   debug - client authorized
   info  - handshake authorized WkHV-B80ejP6MHQTWuhj
   debug - setting request GET /socket.io/1/websocket/WkHV-B80ejP6MHQTWuhj
   debug - set heartbeat interval for client WkHV-B80ejP6MHQTWuhj
   debug - client authorized for
   debug - websocket writing 1::
   debug - websocket received data packet
 5:4+::{name:ferret,args:[tobi]}
   debug - sending data ack packet
   debug - websocket writing 6:::4+[woot]
   info  - transport end (socket end)

 i tried several configurations, something like this:
 http://stackoverflow.com/questions/4360221/haproxy-websocket-disconnection/

 and also declaring 2 backends, and using ACL to forward to a backend
 that has the
   option http-pretend-keepalive
 when the request is a websocket request and to a backend that has
 http-server-close when the request is only for socket.io static files
 or is any other type of request that is not websocket.

 i would clarify that http-server-close is only on the nginx backend
 and in the static files backend, http-pretend-keepalive is on frontend
 all and in the websocket backend.

 anyone could point me to the right direction? i tried several
 combinations and none worked so far :(

 thanks in advance for your time and patience :)

 2012/11/24 Willy Tarreau w...@1wt.eu:
 Hi David,

 On Sat, Nov 24, 2012 at 09:26:56AM -0300, david rene comba lareu wrote:
 Hi everyone,

 i'm little disappointed with a problem i'm having trying to configure
 HAproxy in the way i need, so i need a little of help of you guys,
 that knows a lot more than me about this, as i reviewed all the
 documentation and tried several things but nothing worked :(.

 basically, my structure is:

 HAproxy as frontend, in 80 port - forwards by default to webserver
 (in this case is apache, in other machines could be nginx)
  - depending the domain
 and the request, forwards to an Node.js app

 so i have something like this:

 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn 4096
 user haproxy
 group haproxy
 daemon

   defaults
 log global
 modehttp
 maxconn 2000
 contimeout  5000
 clitimeout  5
 srvtimeout  5


 frontend all 0.0.0.0:80
 timeout client 5000
 default_backend www_backend

 acl is_soio url_dom(host) -i socket.io #if the request contains socket.io

 acl is_chat hdr_dom(host) -i chaturl #if the request comes from chaturl.com

 use_backend chat_backend if is_chat is_soio

 backend www_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout server 5000
 timeout connect 4000
 server server1 localhost:6060 weight 1 maxconn 1024 check #forwards to 
 apache2

 backend chat_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout queue 5
 timeout server 5
 timeout connect 5
 server server1 localhost:5558 weight 1 maxconn 1024 check #forward to
 node.js app

 my application uses socket.io, so anything that match the domain and
 has socket.io in the request, should forward to the chat_backend.

 The problem is that if i load directly from the browser, let say, the
 socket.io file (it will be something like
 http://www.chaturl.com/socket.io/socket.io.js) loads perfectly, but
 then when i try to load index.html (as
 http://www.chaturl.com/index.html) most of the times, is still
 redirect to socket.io. after refreshing a few time, it finally loads

Re: problem with sort of caching of use_backend with socket.io and apache

2012-11-24 Thread Willy Tarreau
Hi David,

On Sat, Nov 24, 2012 at 09:26:56AM -0300, david rene comba lareu wrote:
 Hi everyone,
 
 i'm little disappointed with a problem i'm having trying to configure
 HAproxy in the way i need, so i need a little of help of you guys,
 that knows a lot more than me about this, as i reviewed all the
 documentation and tried several things but nothing worked :(.
 
 basically, my structure is:
 
 HAproxy as frontend, in 80 port - forwards by default to webserver
 (in this case is apache, in other machines could be nginx)
  - depending the domain
 and the request, forwards to an Node.js app
 
 so i have something like this:
 
 global
 log 127.0.0.1   local0
 log 127.0.0.1   local1 notice
 maxconn 4096
 user haproxy
 group haproxy
 daemon
 
   defaults
 log global
 modehttp
 maxconn 2000
 contimeout  5000
 clitimeout  5
 srvtimeout  5
 
 
 frontend all 0.0.0.0:80
 timeout client 5000
 default_backend www_backend
 
 acl is_soio url_dom(host) -i socket.io #if the request contains socket.io
 
 acl is_chat hdr_dom(host) -i chaturl #if the request comes from chaturl.com
 
 use_backend chat_backend if is_chat is_soio
 
 backend www_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout server 5000
 timeout connect 4000
 server server1 localhost:6060 weight 1 maxconn 1024 check #forwards to apache2
 
 backend chat_backend
 balance roundrobin
 option forwardfor # This sets X-Forwarded-For
 timeout queue 5
 timeout server 5
 timeout connect 5
 server server1 localhost:5558 weight 1 maxconn 1024 check #forward to
 node.js app
 
 my application uses socket.io, so anything that match the domain and
 has socket.io in the request, should forward to the chat_backend.
 
 The problem is that if i load directly from the browser, let say, the
 socket.io file (it will be something like
 http://www.chaturl.com/socket.io/socket.io.js) loads perfectly, but
 then when i try to load index.html (as
 http://www.chaturl.com/index.html) most of the times, is still
 redirect to socket.io. after refreshing a few time, it finally loads
 index.html, but then, doesn't load the socket.io.js file inserted in
 the file (why it redirect to the apache server, and not the node.js
 app). so as i said, it sort of caching the request.
 
 i tried several ACL combinations, i disabled the domain check, only
 checking for socket.io but is still the same. Reading again the
 documentation i tried to use hdr_dir, hdr_dom, with other headers as
 URI, url, Request (btw, where i can find a list of headers supported
 by the layer 7 ACL ?).
 
 so, nothing worked, if someone could help me, and point me to the
 right direction, i would be really grateful :D

You're missing option http-server-close in your config, so after
the first request is done, haproxy switches to tunnel mode and maintains
the client-server connection without inspecting anything in it.
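A minimal sketch of that change, assuming the rest of the config stays as
posted (only the added option matters here):

```
defaults
    log     global
    mode    http
    option  http-server-close   # process every request instead of tunnelling after the first
```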

Regards,
Willy




Re: Caching

2011-09-20 Thread Christophe Rahier
Hi,

What do you mean when you say "running -c"?

Here's my config file.

Thanks for your help.

Christophe

global
log 192.168.0.2 local0
log 127.0.0.1 local1 notice
maxconn 10240
defaults
logglobal
option dontlognull
retries2
timeout client 35s
timeout server 35s
timeout connect 5s
timeout http-keep-alive 10s

listen WebPlayer-Farm 192.168.0.2:80
mode http
option httplog
balance source
#balance leastconn
option forwardfor
stats enable
option http-server-close
server Player1 192.168.0.10:80 check
server Player2 192.168.0.11:80 check
server Player3 192.168.0.12:80 check
server Player4 192.168.0.13:80 check
server Player5 192.168.0.14:80 check
option httpchk HEAD /checkcf.cfm HTTP/1.0

listen WebPlayer-Farm-SSL 192.168.0.2:443
mode tcp
option ssl-hello-chk
balance source
server Player1 192.168.0.10:443 check
server Player2 192.168.0.11:443 check
server Player3 192.168.0.12:443 check
server Player4 192.168.0.13:443 check
server Player5 192.168.0.14:443 check

listen  Manager-Farm192.168.0.2:81
mode http
option httplog
balance source
option forwardfor
stats enable
option http-server-close
server  Manager1 192.168.0.60:80 check
server  Manager2 192.168.0.61:80 check
option httpchk HEAD /testcf/checkcf.cfm HTTP/1.0

listen Manager-Farm-SSL 192.168.0.2:444
mode tcp
option ssl-hello-chk
balance source
server Manager1 192.168.0.60:443 check
server Manager2 192.168.0.61:443 check

listen  info 192.168.0.2:90
mode http
balance source
stats uri /






On 20/09/11 01:27, "Hank A. Paulson" h...@spamproof.nospammail.net wrote:

You can get weird results like this sometimes if you don't use http-close
or 
any other http closing option on http backends. You should paste your
config.

Maybe there should be a warning, if there is not already, for that
situation - 
maybe just when running -c.

On 9/19/11 5:46 AM, Christophe Rahier wrote:
 I don't use Apache but IIS.

 I tried to disable caching on IIS but the problem is still there.

 There's no proxy, all requests are sent from pfSense.

 Christophe




 On 19/09/11 13:45, "Baptiste" bed...@gmail.com wrote:

 hi Christophe,

 HAProxy is *only* a reverse proxy.
 No caching functions in it.

 Have you tried to browse your backend servers directly?
 Can it be related to your browser's cache?

 cheers

 On Mon, Sep 19, 2011 at 1:39 PM, Christophe Rahier
 christo...@qualifio.com  wrote:
 Hi,
 Is there a caching system at HAProxy?

 In fact, we find that when we put online new files (CSS, for example)
 that
 they are not addressed directly, it usually takes about ten minutes.

 Thank you in advance for your help.

 Christophe










Caching

2011-09-19 Thread Christophe Rahier
Hi,

Is there a caching system in HAProxy?

In fact, we find that when we put new files online (CSS, for example), they
are not served immediately; it usually takes about ten minutes.

Thank you in advance for your help.

Christophe


Re: Caching

2011-09-19 Thread Christophe Rahier
Hi,

I thought the problem was in my browser, but when I empty the cache, I have
the same problem.

To be sure, I tried with another browser and the problem is the same.

When I call my page locally from the server, the result is OK.

Christophe


On 19/09/11 13:45, "Baptiste" bed...@gmail.com wrote:

hi Christophe,

HAProxy is *only* a reverse proxy.
No caching functions in it.

Have you tried to browse your backend servers directly?
Can it be related to your browser's cache?

cheers

On Mon, Sep 19, 2011 at 1:39 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,
 Is there a caching system at HAProxy?

 In fact, we find that when we put online new files (CSS, for example)
that
 they are not addressed directly, it usually takes about ten minutes.

 Thank you in advance for your help.

 Christophe







Re: Caching

2011-09-19 Thread Baptiste
In any case, HAProxy cannot be blamed for this problem.

Do you have a proxy on your LAN?
or Apache mod_cache enabled?

cheers



On Mon, Sep 19, 2011 at 2:30 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,

 I thought the problem was in my browser but when I empty the cache, I've
 the same problem.

 To be sure, I tried with an other browser and the problem is the same.

 When I call my page locally from the server, the result is OK.

 Christophe


 On 19/09/11 13:45, "Baptiste" bed...@gmail.com wrote:

hi Christophe,

HAProxy is *only* a reverse proxy.
No caching functions in it.

Have you tried to browse your backend servers directly?
Can it be related to your browser's cache?

cheers

On Mon, Sep 19, 2011 at 1:39 PM, Christophe Rahier
christo...@qualifio.com wrote:
 Hi,
 Is there a caching system at HAProxy?

 In fact, we find that when we put online new files (CSS, for example)
that
 they are not addressed directly, it usually takes about ten minutes.

 Thank you in advance for your help.

 Christophe








Re: Caching

2011-09-19 Thread Hank A. Paulson
You can get weird results like this sometimes if you don't use http-close or 
any other http closing option on http backends. You should paste your config.


Maybe there should be a warning, if there is not already, for that situation - 
maybe just when running -c.


On 9/19/11 5:46 AM, Christophe Rahier wrote:

I don't use Apache but IIS.

I tried to disable caching on IIS but the problem is still there.

There's no proxy, all requests are sent from pfSense.

Christophe




On 19/09/11 13:45, "Baptiste" bed...@gmail.com wrote:


hi Christophe,

HAProxy is *only* a reverse proxy.
No caching functions in it.

Have you tried to browse your backend servers directly?
Can it be related to your browser's cache?

cheers

On Mon, Sep 19, 2011 at 1:39 PM, Christophe Rahier
christo...@qualifio.com  wrote:

Hi,
Is there a caching system at HAProxy?

In fact, we find that when we put online new files (CSS, for example)
that
they are not addressed directly, it usually takes about ten minutes.

Thank you in advance for your help.

Christophe