Re: Changes in HAProxy 3.0's Makefile and build options

2024-04-11 Thread Dinko Korunic


> On 11.04.2024., at 21:32, William Lallemand  wrote:
> 
> If I remember correctly github actions VMs only had 2 vCPU in the past,
> I think they upgraded to 4 vCPU last year but I can't find anything in
> their documentation.

Hi William,

GitHub-hosted runner instance sizes for public repositories are now, as you 
said, 4 vCPU / 16 GB RAM:

https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: HAProxy CE Docker Debian and Ubuntu images with QUIC

2023-05-09 Thread Dinko Korunic
Dear community,

We have been asked quite a few times to also provide the haproxytech Docker 
images in GHCR (GitHub Container Registry), due to the sad fact that Docker Hub 
has been throttling image downloads 
(https://www.docker.com/increase-rate-limits/) for a while now. I am happy to 
announce that we now provide all of our images through GHCR as well, in all the 
same flavours/architectures/branches as before:

1. QUIC-enabled images (Alpine, Ubuntu, Debian):

https://github.com/haproxytech/haproxy-docker-alpine-quic/pkgs/container/haproxy-docker-alpine-quic
https://github.com/haproxytech/haproxy-docker-debian-quic/pkgs/container/haproxy-docker-debian-quic
https://github.com/haproxytech/haproxy-docker-ubuntu-quic/pkgs/container/haproxy-docker-ubuntu-quic

2. Regular distro-OpenSSL images (Alpine, Ubuntu, Debian):

https://github.com/haproxytech/haproxy-docker-alpine/pkgs/container/haproxy-docker-alpine
https://github.com/haproxytech/haproxy-docker-ubuntu/pkgs/container/haproxy-docker-ubuntu
https://github.com/haproxytech/haproxy-docker-debian/pkgs/container/haproxy-docker-debian
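
Pulling from GHCR uses the ghcr.io-prefixed package name shown on each package 
page above; for example (tag name assumed here, purely as a sketch):

docker pull ghcr.io/haproxytech/haproxy-docker-alpine-quic:latest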


Kind regards,
D.
 

> On 19.03.2023., at 19:54, Dinko Korunic  wrote:
> 
> Dear community,
> 
> As previously requested, we have also started building HAProxy CE for the 
> 2.6, 2.7 and 2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic Release 1), 
> built on top of Debian 11 Bullseye and Ubuntu 22.04 Jammy Jellyfish base 
> images.
> 
> Images are built for only the two architectures listed below, due to 
> build/stability issues (as opposed to the Alpine variant, which is also built 
> for linux/arm/v6 and linux/arm/v7):
> - linux/amd64
> - linux/arm64
> 
> Images are available at the usual Docker Hub repositories:
> - https://hub.docker.com/repository/docker/haproxytech/haproxy-debian-quic
> - https://hub.docker.com/repository/docker/haproxytech/haproxy-ubuntu-quic
> 
> The corresponding GitHub repositories with update scripts, Dockerfiles, 
> configurations and GA workflows are in the usual places:
> - https://github.com/haproxytech/haproxy-docker-debian-quic
> - https://github.com/haproxytech/haproxy-docker-ubuntu-quic
> 
> Let me know if you spot any issues and/or have any problems with these.
> 
> As with our other haproxytech Docker images, these will auto-rebuild on:
> - dataplaneapi releases
> - HAProxy CE releases
> 
> as well as on:
> - QUICTLS/OpenSSL releases
> 
> 
> Kind regards,
> D.
> 
> -- 
> Dinko Korunic   ** Standard disclaimer applies **
> Sent from OSF1 osf1v4b V4.0 564 alpha
> 

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: HAProxy CE Docker Debian and Ubuntu images with QUIC

2023-03-19 Thread Dinko Korunic
> On 19.03.2023., at 19:54, Dinko Korunic  wrote:
> Images are available at the usual Docker Hub repositories:
> - https://hub.docker.com/repository/docker/haproxytech/haproxy-debian-quic
> - https://hub.docker.com/repository/docker/haproxytech/haproxy-ubuntu-quic

Ah, my apologies, these seem to be non-public links. These should work:

- https://hub.docker.com/r/haproxytech/haproxy-debian-quic
- https://hub.docker.com/r/haproxytech/haproxy-ubuntu-quic

Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



HAProxy CE Docker Debian and Ubuntu images with QUIC

2023-03-19 Thread Dinko Korunic
Dear community,

As previously requested, we have also started building HAProxy CE for the 2.6, 
2.7 and 2.8 branches with QUIC (based on OpenSSL 1.1.1t-quic Release 1), built 
on top of Debian 11 Bullseye and Ubuntu 22.04 Jammy Jellyfish base images.

Images are built for only the two architectures listed below, due to 
build/stability issues (as opposed to the Alpine variant, which is also built 
for linux/arm/v6 and linux/arm/v7):
- linux/amd64
- linux/arm64

Images are available at the usual Docker Hub repositories:
- https://hub.docker.com/repository/docker/haproxytech/haproxy-debian-quic
- https://hub.docker.com/repository/docker/haproxytech/haproxy-ubuntu-quic
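
Pulling works as usual, and Docker selects the matching architecture 
automatically on multi-arch repositories; for example (tag name assumed, shown 
only as a sketch):

docker pull haproxytech/haproxy-debian-quic:latest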

The corresponding GitHub repositories with update scripts, Dockerfiles, 
configurations and GA workflows are in the usual places:
- https://github.com/haproxytech/haproxy-docker-debian-quic
- https://github.com/haproxytech/haproxy-docker-ubuntu-quic

Let me know if you spot any issues and/or have any problems with these.

As with our other haproxytech Docker images, these will auto-rebuild on:
- dataplaneapi releases
- HAProxy CE releases

as well as on:
- QUICTLS/OpenSSL releases


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: HAProxy CE Docker Alpine image with QUIC

2023-03-19 Thread Dinko Korunic

> On 18.03.2023., at 20:01, Aleksandar Lazic  wrote:
> 
> 

[…]

> ```
> My choice not to do TCP in musl's stub resolver was based on an 
> interpretation that truncated results are not just acceptable but better ux - 
> not only do you save major round-trip delays to DNS but you also get a 
> reasonable upper bound on # of addrs in result.
> 
>-Rich Felker (via twitter)
> ```
> 
> Any chance to also get a libc-based image with QUIC?
> 

[…]

Hi Alex,

Certainly, that’s not a problem at all. I will look into it early this week and 
decide whether to do Debian or Ubuntu first, depending on the existing Docker 
Hub popularity statistics for the haproxytech images.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: Docker image 2.5.8

2022-07-26 Thread Dinko Korunic
haproxytech (https://hub.docker.com/u/haproxytech) Docker images are usually 
built within 1-2 hours of a public release, since that is how long the 
multi-arch image builds take through GitHub Actions.


> On 25.07.2022., at 22:40, Stevenson, Bob (CHICO)  
> wrote:
> 
> Hello, I have not been able to find the docker image for haproxy 2.5.8 
> today. Will this be posted today?
>  
> Keep up the good work.
> Thanks all

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: docker hub not updated with most recent releases (2.5.4/2.4.14)

2022-02-28 Thread Dinko Korunic
Dear Olaf,

As Tim said, the official Docker images are not (directly) provided by the 
HAProxy team. We do, however, provide the following Docker images, which are 
usually up to date:

https://hub.docker.com/u/haproxytech

They come in several flavours, namely Alpine, Debian and Ubuntu, depending on 
which base image has been used.

> On 28.02.2022., at 15:24, Olaf Buitelaar  wrote:
> 
> Dear Maintainers,
> 
> It looks like the official docker images aren't updated to the most recent 
> release. Looking at the Jenkins jobs: 
> https://github.com/docker-library/repo-info/tree/master/repos/haproxy => 
> https://doi-janky.infosiftr.net/job/repo-info/job/remote/ it appears it 
> fails to check out from the remote repository.
> 
> Thanks Olaf

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: FYI: kubernetes api deprecation in 1.22

2021-07-16 Thread Dinko Korunic
Hi Илья,

It’s up to the Ingress Controller, and optionally a corresponding Helm chart, 
to generate the proper configuration depending on the detected K8s version. For 
instance:

https://github.com/haproxytech/helm-charts/blob/1d6a9f5b2137e0c11a634c4baafc602f209c6717/kubernetes-ingress/templates/controller-ingressclass.yaml#L17
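
The gist of such a template is a version/capability gate, roughly along these 
lines (a paraphrased sketch with assumed values, not the exact file contents):

{{- if semverCompare ">=1.18.0-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: haproxy
spec:
  controller: haproxy.org/ingress-controller
{{- end }}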


Kind regards,
D.

> On 16.07.2021., at 10:27, Илья Шипицин  wrote:
> 
> I wonder if Kubernetes has some sort of ingress compliance test, or if it is 
> up to the ingress itself.
> 
> On Fri, Jul 16, 2021, 1:21 PM Aleksandar Lazic wrote:
> Hi.
> 
> FYI, 1.22 has some changes which also impact Ingress and Endpoints.
> 
> https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22
> 
> Regards
> Alex
> 

-- 
Dinko Korunic, Senior System Engineer / R&D
https://hr.linkedin.com/in/dkorunic https://twitter.com/dkorunic



Re: [PATCH] CI: enable openssl-3.0.0 builds

2021-06-02 Thread Dinko Korunic

> On 02.06.2021., at 13:27, Tim Düsterhus  wrote:
> 
> Ilya,
> 
> On 6/2/21 12:58 PM, Илья Шипицин wrote:
>> as openssl-3.0.0 is getting close to release, let us add it to build matrix.
> 
> I dislike that this is going to report all commits in red until the build 
> failures with OpenSSL 3 are fixed. I did some quick research on whether a 
> job could be marked as "experimental" / "allowed failure" with GitHub 
> Actions, but could not find anything.
> 
> Do you know whether this is possible?
> 

[…]

Yes, GitHub Actions has support for flagging some of the jobs in a matrix as 
experimental (which permits that job to fail while the other jobs in the matrix 
continue to execute), for instance:

https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#example-including-new-combinations
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#example-preventing-a-specific-failing-matrix-job-from-failing-a-workflow-run
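
A minimal sketch of what that looks like (job and matrix entries below are 
hypothetical, not HAProxy's actual workflow):

jobs:
  build:
    runs-on: ubuntu-latest
    # Permit jobs flagged as experimental to fail without failing the run.
    continue-on-error: ${{ matrix.experimental }}
    strategy:
      matrix:
        ssl: [openssl-1.1.1]
        experimental: [false]
        include:
          - ssl: openssl-3.0.0
            experimental: true
    steps:
      - uses: actions/checkout@v2
      - run: make TARGET=linux-glibc USE_OPENSSL=1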


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Docker HAProxy (haproxytech) multi-arch repositories

2021-05-18 Thread Dinko Korunic
Dear HAProxy community,

I wanted to give you a heads-up that our haproxytech Docker repositories have 
been converted to multi-arch repositories.

This covers the following repositories:
- haproxytech/kubernetes-ingress: HAProxy Kubernetes Ingress Controller image
- haproxytech/haproxy-alpine: HAProxy CE 1.7-2.5 with Alpine (3.12) as base 
image
- haproxytech/haproxy-ubuntu: HAProxy CE 1.7-2.5 with Ubuntu (Focal Fossa) as 
base image
- haproxytech/haproxy-debian: HAProxy CE 1.7-2.5 with Debian (Buster slim) as 
base image

The above have been remade as multi-arch Docker repositories, supporting as 
many Linux architectures as we could get to build (problems were spotted mostly 
with Debian and Ubuntu images on non-major platforms).

For now we have decided to go with these platforms:
- linux/386
- linux/amd64
- linux/arm/v6
- linux/arm/v7
- linux/arm64
- linux/ppc64le
- linux/s390x

At this moment, dataplane binaries for non-amd64 platforms are yet to be 
completely built, but as of the next release each multi-arch image (for 
non-amd64) will contain the correct dataplane binary as well.

We have also stopped using native Docker Hub builds, mainly due to their 
unreliability and slowness, especially when doing large batches of builds.
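
For the curious, multi-arch builds of this kind are typically driven by docker 
buildx with QEMU emulation for the foreign architectures; a hypothetical 
invocation (not our exact CI command):

docker buildx build \
  --platform linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64,linux/ppc64le,linux/s390x \
  -t haproxytech/haproxy-alpine:latest --push .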


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: Proposal about libslz integration into haproxy

2021-04-21 Thread Dinko Korunic
> On 21.04.2021., at 08:04, Willy Tarreau  wrote:
> 
> Hi all,
> […]

> So after changing my mind, I would go with the following approach:
> 
>  - building with USE_SLZ=1  => always use the embedded one
>  - building with USE_ZLIB=1 => always build using the user-provided zlib
> 
> We'd enable USE_SLZ by default and it would be disabled when ZLIB is used
> (or when SLZ is forced off). This way we could have fast and memory-less
> compression available by default without the hassle of cloning an unknown
> repository.
> 
> Does anyone have any opinion, objection or suggestion on this (especially
> those in CC who participated to the first discussion or who could have
> packaging concerns) ? Barring any comment I think I'm going to do this
> tomorrow so that we have it in -dev17, leaving a bit of time for distro
> packagers to test and adjust their build process if needed.
> 

Willy,

I think this is great from the packagers’ point of view, as it will make build
scripts (for Docker, embedded devices and whatnot) easier to maintain, and it
is one important dependency less to worry about.
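
For packagers, the visible change boils down to the make invocation; two 
hypothetical examples based on the behaviour described above:

make TARGET=linux-glibc              # SLZ on by default, embedded, no extra dep
make TARGET=linux-glibc USE_ZLIB=1   # user-provided zlib; SLZ is disabled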

D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: zlib vs slz (performance)

2021-04-07 Thread Dinko Korunic



> […]
> 
> 
> Hi Lukas,
> 
> I am maintaining the haproxytech Docker images and I can easily make that 
> (slz being used) happen, if that’s what the community would like to see.
> 

Hi,

Given quite a few positive responses from the community about including SLZ by 
default, all new haproxytech/haproxy* Docker images now have SLZ enabled and 
statically linked into the haproxy binary.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: zlib vs slz (performance)

2021-03-30 Thread Dinko Korunic
> On 30.03.2021., at 08:17, Илья Шипицин  wrote:
> 
> I would really like to know whether zlib was chosen on purpose or by chance.
> 
> And yes, some marketing campaign makes sense
> 

Hi Илья,

People tend to spawn Docker images by the dozens or even hundreds, and we 
always have to consider that adding a single library on top of what’s already 
present in a minimal Docker base image (libz is pretty much always present in 
base images) results in additional size, which is generally frowned upon by 
Docker users.

But then again, if the community (aka you) thinks that the pros (performance) 
outweigh the cons (increased target size), I’ll take care of it for the 
haproxytech images, and these changes will also be propagated to the Ingress 
Controller image.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: zlib vs slz (performance)

2021-03-29 Thread Dinko Korunic


> On 29.03.2021., at 23:06, Lukas Tribus  wrote:
> 

[…]

> Like I said last year, this needs a marketing campaign:
> https://www.mail-archive.com/haproxy@formilux.org/msg38044.html
> 
> 
> What about the docker images from haproxytech? Are those zlib or slz
> based? Perhaps that would be a better starting point?
> 
> https://hub.docker.com/r/haproxytech/haproxy-alpine



Hi Lukas,

I am maintaining the haproxytech Docker images and I can easily make that (slz 
being used) happen, if that’s what the community would like to see.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: OpenSSL Security Advisory

2021-03-25 Thread Dinko Korunic
[…]

> On 25.03.2021., at 17:03, Tim Düsterhus  wrote:
> 

[…]

> 
> The 'haproxy' image for Docker is maintained by the Docker Official
> Images Team [1] [2]. They also handle the necessary rebuilds when the
> base image changes. I maintain 2 images as part of the Official Images
> program and also contribute to the HAProxy image via Pull Requests. I am
> not part of the DOI Team, though.
> 
> Independently from your email I already asked in their IRC whether the
> 'debian' base image is going to be rebuilt due to the OpenSSL update.
> This would then cause a rebuild of the 'haproxy' image.
> 
> For the images that contain a username (e.g. timwolla/haproxy) the
> authors are responsible to trigger a rebuild.
> 

Just to follow up on this: as Tim has already kindly summarised, the same thing
also applies to the haproxytech (https://hub.docker.com/u/haproxytech) HAProxy
CE images; they get rebuilt whenever the official base image (Debian, Ubuntu,
Alpine, etc.) is rebuilt.

This includes regular HAProxy CE images, Ingress Controller images, etc.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: [PATCH] enable coverity daily scan again

2020-12-28 Thread Dinko Korunic
Dear all,

I’ve just sent the Coverity project token to Willy; it’s the same as before, as 
we haven’t regenerated the project token yet.


Kind regards,
D.

> On 28.12.2020., at 12:02, Илья Шипицин  wrote:
> 
> 
> 
> On Mon, 28 Dec 2020 at 15:57, Tim Düsterhus wrote:
> Willy,
> 
> On 25.12.20 at 19:38, Илья Шипицин wrote:
> > final patch attached.
> 
> That one looks good to me. Can you take it? When the patch is taken you
> will need to add a secret called 'COVERITY_SCAN_TOKEN' here:
> 
> https://github.com/haproxy/haproxy/settings/secrets/actions
> 
> (In case I messed up the direct link: It's at "Settings" -> "Secrets" ->
> "New Repository Secret")
> 
> I don't have it. Ilya will need to send it to you unless you happen to
> have it somewhere in your email archives.
> 
> it is supposed to be the next step :)
> 
> I have access neither to the Coverity token nor to GitHub.
> Dinko can grab a token from Coverity and pass it to Willy.
> 
>  
> 
> Best regards
> Tim Düsterhus

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: DNS Load balancing needs feedback and advice.

2020-11-04 Thread Dinko Korunic
> On 3 Nov 2020, at 16:38, Emeric Brun  wrote:
> 
> […]
> 
> But the question is targeting also DNS servers found in cloud environments 
> such as kube-dns, coreDNS or consul.
> 
> They seem to support TCP, but I'm not sure about pipelined queries.

Hi Emeric,

I had CoreDNS 1.6.6 and Consul v1.8.5 around. To my great surprise, both seem 
to support pipelined TCP requests over a persistent TCP connection. I have 
tested with getdns_query (https://getdnsapi.net/) and a few hundred requests 
sent with:

getdns_query @IPADDR -s -a -A -l T -O -I < input

Sadly I haven’t had Kube-DNS anywhere, and I think CoreDNS is supposed to be 
the way forward from Kube-DNS. Hope this helps.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: DNS Load balancing needs feedback and advice.

2020-11-03 Thread Dinko Korunic
> On 3 Nov 2020, at 10:51, Emeric Brun  wrote:
> 
>> […]
>> 
>> We are requesting the community and experienced users of DNS servers to 
>> share their thoughts about this.
> 
> sub-questions are about modern DNS servers:
> - do they support DNS over TCP?
> - do they support persistent connections with pipelined requests?
> 

a) Yes, DNS over TCP is in fact pretty much mandatory nowadays, and every 
modern DNS server should support it. Some DNS servers also support DNS over 
TLS. In fact, some queries (AXFR/IXFR) always use TCP.

b) Yes, but that’s a recent addition as per RFC 7766, and AFAIK only BIND 9, 
PowerDNS and Unbound support it; I am honestly not sure if there are others 
supporting that feature. Historically there were also some security issues 
around concurrent TCP client limits, such as CVE-2019-6477, in early 
implementations.
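
For a quick manual check of either behaviour, dig can force TCP for a single 
query, and with a batch file it will keep and reuse one TCP connection across 
queries (the server address and file name below are placeholders):

dig +tcp @192.0.2.53 example.com A
dig +tcp +keepopen -f queries.txt @192.0.2.53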

My apologies if I have missed mentioning anything; I am not as up to date with 
current DNS changes as I used to be.


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: OSX builds in Travis

2020-07-09 Thread Dinko Korunic
Hi Илья,

I think that Travis’ Homebrew plugin is just fine, but I would definitely avoid 
updating/upgrading Homebrew as that’s certainly going to make builds much 
slower.

Do you have any sample logs of the situation where socat failed to install 
with a non-updated Homebrew? It doesn’t make sense that socat has been failing 
to install, as there are binary downloads available for almost every OSX 
version, and even more so, Formula/socat.rb changes really infrequently (once 
per year at most, according to the git history for socat.rb).


> On 9 Jul 2020, at 10:07, Илья Шипицин  wrote:
> 
> we have homebrew --> update --> true
> 
> https://github.com/haproxy/haproxy/blob/master/.travis.yml#L26
> 
> 
> if we remove it, brew will not get updated.
> 
> 
> most of the time it is not an issue and we can install socat, but under some 
> circumstances socat refuses to install without a brew update.
> 
> On Thu, 9 Jul 2020 at 12:42, Dinko Korunic wrote:
> I would suggest using HOMEBREW_NO_AUTO_UPDATE environment variable to avoid 
> Brew auto-updating where it’s not really needed, for instance as in:
> 
> HOMEBREW_NO_AUTO_UPDATE=1 brew install socat
> 
> If that doesn’t work (but I think it should), pinning will cause none of the 
> dependencies to be installed automatically, only through a manual install:
> 
> brew list | xargs brew pin
> brew install socat
> 
> 
>> On 9 Jul 2020, at 09:19, Илья Шипицин wrote:
>> 
>> We install socat because it is (or was?) needed for some tests. OSX 
>> requires updating the whole brew for that; otherwise it works unreliably.
>> 
>> On Thu, Jul 9, 2020, 9:16 AM Willy Tarreau wrote:
>> Hi Ilya,
>> 
>> is it normal that the OSX build procedure in travis pulls gigabytes of
>> ruby and python crap, including fonts, libgpg, gnutls, qt, postgresql
>> and whatnot for many minutes ?
>> 
>>  https://travis-ci.com/github/haproxy/haproxy/jobs/359124175
>> 
>> It's been doing this for more than 12 minutes now without even starting
>> to build haproxy. I'm suspecting that it's reinstalling a full-featured
>> desktop operating system, which seems awkward and counter productive at
>> best.
>> 
>> I don't know if that's automatically done based on broken dependencies
>> or if it is caused by a preliminary upgrade of the whole system, but I
>> think we need to address this as it's quite not efficient in this form.
>> 
>> Thanks!
>> Willy
> 
> -- 
> Dinko Korunic   ** Standard disclaimer applies **
> Sent from OSF1 osf1v4b V4.0 564 alpha
> 

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: OSX builds in Travis

2020-07-09 Thread Dinko Korunic
I would suggest using HOMEBREW_NO_AUTO_UPDATE environment variable to avoid 
Brew auto-updating where it’s not really needed, for instance as in:

HOMEBREW_NO_AUTO_UPDATE=1 brew install socat

If that doesn’t work (but I think it should), pinning will cause none of the 
dependencies to be installed automatically, only through a manual install:

brew list | xargs brew pin
brew install socat


> On 9 Jul 2020, at 09:19, Илья Шипицин  wrote:
> 
> We install socat because it is (or was?) needed for some tests. OSX requires 
> updating the whole brew for that; otherwise it works unreliably.
> 
> On Thu, Jul 9, 2020, 9:16 AM Willy Tarreau wrote:
> Hi Ilya,
> 
> is it normal that the OSX build procedure in travis pulls gigabytes of
> ruby and python crap, including fonts, libgpg, gnutls, qt, postgresql
> and whatnot for many minutes ?
> 
>  https://travis-ci.com/github/haproxy/haproxy/jobs/359124175
> 
> It's been doing this for more than 12 minutes now without even starting
> to build haproxy. I'm suspecting that it's reinstalling a full-featured
> desktop operating system, which seems awkward and counter productive at
> best.
> 
> I don't know if that's automatically done based on broken dependencies
> or if it is caused by a preliminary upgrade of the whole system, but I
> think we need to address this as it's quite not efficient in this form.
> 
> Thanks!
> Willy

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: kernel panics after updating to 2.0

2019-12-06 Thread Dinko Korunic


> On 6 Dec 2019, at 10:36, Sander Hoentjen  wrote:
> 
> 
> On 12/6/19 10:20 AM, Pavlos Parissis wrote:
>> On Friday, 6 December 2019 at 9:23:24 AM CET, Sander Hoentjen wrote:
>>> Hi list,
>>> 
>>> After updating from 1.8.13 to 2.0.5 (also with 2.0.10) we are seeing
>>> kernel panics on our production servers. I haven't been able to trigger
>>> them on a test server, and we rollbacked haproxy to 1.8 for now.
>>> 
>>> I am attaching a panic log, hope something useful is in there.
>>> 
>>> Anybody an idea what might be going on here?
>>> 
>> Have you noticed any high CPU utilization prior to the panic?
>> 
> Nothing out of the ordinary, but I only have per-minute data, so I don't 
> know for sure what happened in the seconds before the crash.
> 
> Also of interest might be that at all the crashes I checked there were 2 
> haproxy instances running. One of them has -sf , so the 
> other is in "finishing mode".
> 
> 


Sander, can you try with “nosplice”?

https://cbonte.github.io/haproxy-dconv/2.0/configuration.html#nosplice

It should be enough to put it in the “global” section.
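
A minimal sketch of the relevant bit of configuration (the rest of your global 
section stays as it is):

global
    nosplice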


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: Coverity scan findings

2019-09-10 Thread Dinko Korunic
Hi Dave,

Just browse to https://scan.coverity.com/projects/haproxy and make a request 
for access, and I’ll gladly add you to the project.

> On 10 Sep 2019, at 16:49, Dave Chiluk  wrote:
> 
> Are these scans publicly available (I'm looking for a link)? They look 
> interesting, but without line numbers they look a lot less useful.
> 
> Dave.
> 
> On Thu, Aug 8, 2019 at 2:49 AM Илья Шипицин wrote:
> Hello,
> 
> Coverity found tens of "null pointer dereference" findings.
> Also, there's a good correlation: after a "now fixed, good catch", Coverity 
> usually dismisses some bug.
> 
> Should we revisit those findings?
> 
> 
> 


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: patch: enable Coverity scan in travis-ci builds

2019-08-07 Thread Dinko Korunic
Hi,

This was disabled as soon as we started testing out the Travis-CI integration 
with Coverity, so we are all good.
Cheers!


Kind regards,
D.

> On 7 Aug 2019, at 09:50, Илья Шипицин  wrote:
> 
> Dinko, can you please disable your schedule (is it cron or whatever)?
> 
> On Wed, 7 Aug 2019 at 12:07, Willy Tarreau wrote:
> Hi Ilya,
> 
> > On Tue, Aug 06, 2019 at 10:55:47AM +0500, Илья Шипицин wrote:
> > 1) follow to https://travis-ci.com/haproxy/haproxy/settings
> > 
> > 
> > 2) setup COVERITY_SCAN_TOKEN = P6rHpv1618gwKWkVs7FfKQ
> > 
> > 3) setup daily schedule
> 
> All done now (with Dinko's new token). Thanks very much for the
> screenshot, it did help me.
> 
> Let's see how it goes now. It says it's scheduled in about a minute.
> 
> Willy

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: Coverity scans?

2019-08-01 Thread Dinko Korunic
Hi Илья,

The haproxy Coverity project token is: 9Zw8bB4a
Given that it’s a per-project token, it can be hardcoded in the Travis-CI 
configuration without any issues.
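
For reference, with the Travis-CI Coverity Scan addon the wiring looks roughly 
like this (a sketch with placeholder values, not our exact configuration):

addons:
  coverity_scan:
    project:
      name: "Haproxy"
      description: "HAProxy Coverity scan"
    notification_email: someone@example.com
    build_command: "make TARGET=linux-glibc"
    branch_pattern: master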

In regards to notification mails, for now I think we can use your and my 
addresses until we think of something better. Willy, any suggestions here? The 
mails which come from Coverity usually confirm a code submission for analysis, 
defect status changes, and the state of defects for the current build.

Илья, given that you have been doing Coverity for extended periods of time in 
the SoftEther projects, have you had any luck with custom Coverity Scan 
function models yet?


Kind regards,
D.


> On 1 Aug 2019, at 11:59, Илья Шипицин  wrote:
> 
> also, I've no idea what to specify in COVERITY_SCAN_NOTIFICATION_EMAIL (which 
> is mandatory)
> 
> On Thu, 1 Aug 2019 at 12:32, Dinko Korunic wrote:
> Hey Илья,
> 
> Looks fine and clean. I guess we would use the existing project name 
> (Haproxy), or would you like to continue with your own?
> 
> Lastly, I wonder whether we really need verbose (V=1) builds; do you think 
> they make sense for Coverity builds?
> 
> 
> Thanks,
> D.
> 
>> On 30 Jul 2019, at 10:35, Илья Шипицин wrote:
>> 
>> Dinko,
>> 
>> please have a look
>> 
>> https://github.com/chipitsine/haproxy/blob/coverity/.travis.yml#L37-L45
>> 
>> 
>> what do you think (if we move that to https://github.com/haproxy/haproxy)?
>> 
>> On Wed, 17 Jul 2019 at 16:36, Dinko Korunic wrote:
>> Dear Илья,
>> 
>> I’ve increased your access level to Contributor/Member. In terms of 
>> Travis-CI scans, there are some catch-22s with the current Coverity suite, 
>> as it is compiled against an ancient glibc and ancient kernel headers and 
>> requires the vsyscall=emulate kernel boot option to work properly; I am not 
>> sure if that will be possible on Travis VMs at all.
>> 
>> I have actual weekly builds that are auto-published to our Coverity Scan 
>> account, and they, well, require manual intervention, flagging and some 
>> day-to-day work to get to more usable levels; let me know if you need a 
>> hand with this. You should already have all the access required for doing so.
>> 
>> 
>> Kind regards,
>> D.
>> 
>>> On 17 Jul 2019, at 13:18, Илья Шипицин wrote:
>>> 
>>> Hello, yep, contributor/member would be nice. Also, I can set up automated 
>>> travis-ci scans.
>>> 
>>> On Wed, Jul 17, 2019, 3:27 PM Dinko Korunic wrote:
>>> Hey Илья,
>>> 
>>> Let me know if you would like Contributor/Member role for your account on 
>>> Haproxy Coverity account. I was initially more involved and I have started 
>>> configuring modules and parts of code blocks into coherent units, but 
>>> stopped at some point due to lack of time and interest.
>>> 
>>> There have been a lot of false positives, however; I dare say even in 
>>> excessive volumes.
>>> 
>>> > On 17 Jul 2019, at 07:48, Илья Шипицин wrote:
>>> > 
>>> > Hello, I played with Coverity. It definitely shows "issues resolved" 
>>> > after bugfixes are pushed to git. I know Willy does not like static 
>>> > analysis because of the noise. Anyway, it finds bugs, so why not use it?
>>> 
>>> 
>>> Kind regards,
>>> D.
>>> 
>>> -- 
>>> Dinko Korunic   ** Standard disclaimer applies **
>>> Sent from OSF1 osf1v4b V4.0 564 alpha
>>> 
>>> 
>> 
>> -- 
>> Dinko Korunic   ** Standard disclaimer applies **
>> Sent from OSF1 osf1v4b V4.0 564 alpha
>> 
>> 
> 
> -- 
> Dinko Korunic   ** Standard disclaimer applies **
> Sent from OSF1 osf1v4b V4.0 564 alpha
> 

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: Coverity scans?

2019-08-01 Thread Dinko Korunic
Hey Илья,

Looks fine and clean. I guess we would use the existing project name 
(Haproxy), or would you like to continue with your own?

Lastly, I wonder whether we really need verbose (V=1) builds; do you think 
they make sense for Coverity builds?


Thanks,
D.

> On 30 Jul 2019, at 10:35, Илья Шипицин  wrote:
> 
> Dinko,
> 
> please have a look
> 
> https://github.com/chipitsine/haproxy/blob/coverity/.travis.yml#L37-L45
> 
> 
> what do you think (if we move that to https://github.com/haproxy/haproxy)?
> 
> On Wed, 17 Jul 2019 at 16:36, Dinko Korunic wrote:
> Dear Илья,
> 
> I’ve increased your access level to Contributor/Member. In terms of Travis-CI 
> scans, there are some catch-22s with the current Coverity suite, as it is 
> compiled against an ancient glibc and ancient kernel headers and requires the 
> vsyscall=emulate kernel boot option to work properly; I am not sure if that 
> will be possible on Travis VMs at all.
> 
> I have actual weekly builds that are auto-published to our Coverity Scan 
> account, and they, well, require manual intervention, flagging and some 
> day-to-day work to get to more usable levels; let me know if you need a hand 
> with this. You should already have all the access required for doing so.
> 
> 
> Kind regards,
> D.
> 
>> On 17 Jul 2019, at 13:18, Илья Шипицин wrote:
>> 
>> Hello, yep, contributor/member would be nice. Also, I can set up automated 
>> travis-ci scans.
>> 
>> On Wed, Jul 17, 2019, 3:27 PM Dinko Korunic wrote:
>> Hey Илья,
>> 
>> Let me know if you would like Contributor/Member role for your account on 
>> Haproxy Coverity account. I was initially more involved and I have started 
>> configuring modules and parts of code blocks into coherent units, but 
>> stopped at some point due to lack of time and interest.
>> 
>> There have been a lot of false positives, however; I dare say even in 
>> excessive volumes.
>> 
>> > On 17 Jul 2019, at 07:48, Илья Шипицин wrote:
>> > 
>> > Hello, I played with Coverity. It definitely shows "issues resolved" 
>> > after bugfixes are pushed to git. I know Willy does not like static 
>> > analysis because of the noise. Anyway, it finds bugs, so why not use it?
>> 
>> 
>> Kind regards,
>> D.
>> 
>> -- 
>> Dinko Korunic   ** Standard disclaimer applies **
>> Sent from OSF1 osf1v4b V4.0 564 alpha
>> 
>> 
> 
> -- 
> Dinko Korunic   ** Standard disclaimer applies **
> Sent from OSF1 osf1v4b V4.0 564 alpha
> 
> 

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: Coverity scans?

2019-07-17 Thread Dinko Korunic
Dear Илья,

I’ve increased your access level to Contributor/Member. In terms of Travis-CI 
scans, there are some catch-22s with the current Coverity suite, as it is 
compiled against an ancient glibc and ancient kernel headers and requires the 
vsyscall=emulate kernel boot option to work properly; I am not sure if that 
will be possible on Travis VMs at all.

I have actual weekly builds that are auto-published to our Coverity Scan 
account, and they, well, require manual intervention, flagging and some 
day-to-day work to get to more usable levels; let me know if you need a hand 
with this. You should already have all the access required for doing so.


Kind regards,
D.

> On 17 Jul 2019, at 13:18, Илья Шипицин  wrote:
> 
> Hello, yep, contributor/member would be nice. Also, I can set up automated 
> travis-ci scans.
> 
> On Wed, Jul 17, 2019, 3:27 PM Dinko Korunic wrote:
> Hey Илья,
> 
> Let me know if you would like Contributor/Member role for your account on 
> Haproxy Coverity account. I was initially more involved and I have started 
> configuring modules and parts of code blocks into coherent units, but stopped 
> at some point due to lack of time and interest.
> 
> There have been a lot of false positives, however; I dare say even in 
> excessive volumes.
> 
> > On 17 Jul 2019, at 07:48, Илья Шипицин wrote:
> > 
> > Hello, I played with Coverity. It definitely shows "issues resolved" 
> > after bugfixes are pushed to git. I know Willy does not like static 
> > analysis because of the noise. Anyway, it finds bugs, so why not use it?
> 
> 
> Kind regards,
> D.
> 
> -- 
> Dinko Korunic   ** Standard disclaimer applies **
> Sent from OSF1 osf1v4b V4.0 564 alpha
> 
> 

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha



Re: Coverity scans?

2019-07-17 Thread Dinko Korunic
Hey Илья,

Let me know if you would like Contributor/Member role for your account on 
Haproxy Coverity account. I was initially more involved and I have started 
configuring modules and parts of code blocks into coherent units, but stopped 
at some point due to lack of time and interest.

There have been a lot of false positives, however; I dare say even in 
excessive volumes.

> On 17 Jul 2019, at 07:48, Илья Шипицин  wrote:
> 
> Hello, I played with Coverity. It definitely shows "issues resolved" after 
> bugfixes are pushed to git. I know Willy does not like static analysis 
> because of the noise. Anyway, it finds bugs, so why not use it?


Kind regards,
D.

-- 
Dinko Korunic   ** Standard disclaimer applies **
Sent from OSF1 osf1v4b V4.0 564 alpha




Re: [PATCH] Fix OSX compilation errors

2016-09-11 Thread Dinko Korunic
Hi Willy,

I can backport them. In 1.5 and 1.4 we’d also have to use IPPROTO_IP
instead of SOL_IP on OSX, as SOL_IP is simply not defined for
setsockopt(). I think that we could use something along these lines:

#ifndef SOL_IP
# define SOL_IP IPPROTO_IP
#endif

Let me know what you think.
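
For context, a minimal sketch of how the shim keeps a setsockopt() call 
identical across Linux and OSX (illustrative only, not the actual patch):

#include <netinet/in.h>
#include <sys/socket.h>

#ifndef SOL_IP
# define SOL_IP IPPROTO_IP  /* OSX has no SOL_IP; the two values are equivalent */
#endif

/* Set the IP TTL the same way on both platforms. */
static int set_ip_ttl(int fd, int ttl)
{
    return setsockopt(fd, SOL_IP, IP_TTL, &ttl, sizeof(ttl));
}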


Kind regards,
D.


-- Dinko Korunic ** Standard disclaimer applies **

On 11 September 2016 at 08:06:11, Willy Tarreau (w...@1wt.eu) wrote:

> Hi Dinko,
>
> On Fri, Sep 09, 2016 at 12:52:55AM -0700, Dinko Korunic wrote:
> > The following really trivial patch fixes compilation issues on OSX (El
> > Capitan at least).
>
> Applied, thanks. Do you want it backported to 1.6 and maybe even 1.5 ?
> They seem to be affected as well.
>
> Thanks,
> Willy



[PATCH] Fix OSX compilation errors

2016-09-09 Thread Dinko Korunic
Hi,

The following really trivial patch fixes compilation issues on OSX (El
Capitan at least).


Kind regards,
D.


0001-BUG-MINOR-fix-osx-compilation-errors.patch
Description: Binary data


Re: HAProxy 1.5-dev18 logs messages twice

2013-06-19 Thread Dinko Korunic
On 18.06.2013 17:36, Chris Fryer wrote:
[...]

 I notice that each request is logged once, then logged again immediately
 before the next request is logged.  If there is no next request, the
 request is logged a second time after a pause of between 60 and 70 seconds.
 
 If I comment out the log global line from the frontend configuration,
 only one request is logged.
 
 This did not used to happen with HAProxy 1.4

Hi,

This is due to 1.5 supporting several log targets, so your configuration
effectively has the same log target twice. I've reported this and the
explanation was that it's known and intended behavior.
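
A hypothetical configuration that reproduces it (two log lines in the frontend 
pointing at the same destination, so each request is logged twice):

global
    log 127.0.0.1:514 local0

frontend www
    bind :80
    log global                 # pulls in the global target...
    log 127.0.0.1:514 local0   # ...and adds the same target a second time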


Kind regards,
D.

-- 
Dinko Korunic   PGP:0xEA160D0B
R&D Department Manager at Reflected