After a quick search in HAProxy's documentation I've found a way to do what
you suggested:
http-request set-header X-City-Map %[src,map_ip(/etc/haproxy/geocity_map.lst)]
http-request set-header X-Country-Map %[src,map_ip(/etc/haproxy/geocountry_map.lst)]
http-request set-header X-City-Name %[hdr(X-City-Map),word(1,' ')]
http-request set-header X-Geoname-City-Id %[hdr(X-City-Map),word(2,' ')]
http-request set-header X-Latitude %[hdr(X-City-Map),word(3,' ')]
http-request set-header X-Longitude %[hdr(X-City-Map),word(4,' ')]
http-request set-header X-Accuracy-Radius %[hdr(X-City-Map),word(5,' ')]
http-request set-header X-Country-Code %[hdr(X-Country-Map),word(1,' ')]
http-request set-header X-Country-Code2 %[hdr(X-Country-Map),word(1,' ')]
http-request set-header X-Country-Id %[hdr(X-Country-Map),word(2,' ')]
http-request del-header X-City-Map
http-request del-header X-Country-Map
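For reference, a line in each aggregated mapfile looks roughly like this; the
networks and values below are made-up examples, not entries from my real files:

```
# geocity_map.lst: <network> <city> <geoname_id> <latitude> <longitude> <accuracy_radius>
203.0.113.0/24 Paris 2988507 48.8566 2.3522 10

# geocountry_map.lst: <network> <country_code> <geoname_id>
203.0.113.0/24 FR 3017382
```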
My city mapfile is around 800 MiB, and HAProxy uses about half the RAM and
CPU after this modification; the setup is now extensible at will.
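For anyone wanting to reproduce this, here is a minimal sketch of how several
per-field map files could be merged into one aggregated mapfile. The helper
names, filenames and field order are assumptions for illustration, not my
exact tooling:

```python
# Merge several per-field HAProxy map files (each line: "<network> <value>")
# into one aggregated map whose value carries all fields space-separated,
# so a single map_ip() lookup plus word() converters can extract each field.

def load_map(path):
    """Return {network: value} parsed from a 'network value' map file."""
    entries = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            network, _, value = line.partition(" ")
            entries[network] = value.strip()
    return entries

def merge_maps(paths, out_path, missing="-"):
    """Write one aggregated map; fields appear in the order of `paths`.

    Networks missing from a given per-field map get a "-" placeholder so
    the field positions stay aligned for HAProxy's word() converter.
    """
    maps = [load_map(p) for p in paths]
    networks = sorted(set().union(*maps))  # union of all networks seen
    with open(out_path, "w") as out:
        for net in networks:
            fields = [m.get(net, missing) for m in maps]
            out.write(f"{net} {' '.join(fields)}\n")
```

The placeholder matters: word() counts delimited fields, so an empty field
would shift every later column. Usage would be something like
merge_maps(["city_name.lst", "geoname_id.lst", "latitude.lst",
"longitude.lst", "accuracy.lst"], "geocity_map.lst").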
Thanks Willy for the heads-up, I hope it'll be useful to somebody else :-)
On 04/05/2017 at 07:30, Willy Tarreau wrote:
> Hi Arnaud,
>
> On Wed, May 03, 2017 at 01:54:13PM +0200, Arnaud B. wrote:
>> Hi there,
>>
>> I'm currently wondering, based on
>> https://www.haproxy.com/blog/use-geoip-database-within-haproxy/ and
>> related notes, is there a more convenient way now ? I've created
>> mapfiles for latitude, longitude, accuracy radius, country code, cities
>> names etc.
>>
>> Right now, my process is based on Maxmind's databases scraping and
>> generating mapfiles for each added header.
>>
>> Now my HAProxy is around 10GB ram and takes about 70secs to reload.
> Why does it take that long? That seems totally abnormal. Do you use
> your maps in many places, possibly causing them to be loaded
> multiple times? In theory even then they should be merged.
>
> Now regarding the size, do you think your maps could be aggregated
> so that you have all fields at once on a line ? If so you could set
> a variable based on a single lookup of the source address in the
> map, then use the "word" converter to split it depending on the
> field you're interested in. The risk is to end up with an even larger
> map but I think that most entries can be merged.
>
> Willy