I'm sorry for forgetting the mail subject, and thanks for your very detailed
description of the solution.

   Our application serves small images of about 10k-200k each. All cache
servers share nothing, and image data comes from a "backend image bridge
server". The bridge server reads images from storage. We designed one main
URL per image, such as "/img/photo.jpg"; to save storage disk space, all
scaled versions are generated dynamically by the backend image bridge
server. A scaled image is marked in the URL as "/img/photo.300x300.jpg" or
"/img/photo.200x200.jpg". So if an image cache server goes down at peak
time, the backend image bridge server takes a heavy load rescaling images.
As you said, the "buddy server" is very important for us; forwarding a down
server's requests to its dedicated backup server is a reliable design.
Consistent hashing is also important for us when we add an image cache
server, and we can deploy it late at night (this protects the "backend
image bridge server").

    Now we have more than 20 image cache nodes. With the setup you
described, we would need "listen pool1" through "listen pool30", which
means starting up 30 ports; will all that local loopback data transfer
reduce performance much? Our network currently needs 2Gb of bandwidth for
image access. If the local loopback reduces performance, can we configure
TCP splicing for the local loopback only?
   I am testing the configuration you provided now.
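
For example, could it be scoped like this? (Just a sketch: it assumes our
HAProxy build has splicing compiled in, e.g. with USE_LINUX_SPLICE=1, and
the ports follow your example; pool2 and pool3 would be analogous.)

```
# first level: splice the image bodies coming back across the loopback
backend image-cache-server-end
    mode http
    balance uri
    option splice-response
    server pool1 127.0.0.1:60001
    server pool2 127.0.0.1:60002

# second level: the loopback-facing pool listeners
listen pool1
    bind 127.0.0.1:60001
    mode http
    option splice-response
    server webA 127.0.0.1:8081 check inter 1000
```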

Regards,

Leon Liu


ps: I am a Java programmer, so it is difficult for me to provide such a patch.


2010/3/23 Willy Tarreau <[email protected]>

> Hi Leon,
>
> [ first, please use meaningful mail subject next time, I was about
>  to delete this one believing it was a spam ]
>
> On Mon, Mar 22, 2010 at 02:38:54PM +0800, Leon Liu wrote:
> > Hi all
> >     I have some questions about HAProxy, which we are testing. We want
> > to load-balance the disk caches of a huge image application. HAProxy
> > provides consistent hashing for backends.
> >    As we know, HAProxy's backup servers cannot be configured 1-to-1
> > against main servers.
>
> This is something which is planned ("buddy server") but it's not as
> trivial to implement as it first seems, so maybe we'll see this in
> 1.5, maybe not.
>
> >  If an image cache server goes down, consistent hashing still
> > causes that image cache server's cache to be rebuilt.
>
> But only that server, that's the purpose of consistent hashing.
>
> >   It is dangerous, because during such a massive cache rebuild the
> > storage server cannot handle the load.
>
> Well, then you don't even need consistent hashing. The very purpose
> of consistent hashing is to be able to add/remove servers in the pool
> with minimal redistribution, leading to minimal impact. If your servers
> don't even support a minimal redistribution, you should change the
> method and think about it differently.
>
> >   So could HAProxy provide a pool-based hash? Such as:
> > ===original===
> > backend image-cache-server-end
> >        mode           http
> >        balance uri
> >        hash-type  consistent
> >        option allbackups
> >        server webA 127.0.0.1:8081 check inter 1000
> >        server webB 127.0.0.2:8081 check inter 1000
> >        server webC 127.0.0.3:8081 check inter 1000 backup
> >        server webD 127.0.0.4:8081 check inter 1000 backup
> >
> > ====my suggestion====
> >
> > serversdefine  image-farm-pool1
> >           option allbackups   # every server in the pool provides service
> >           server 127.0.0.1:8081 check inter 1000
> >           server 127.0.0.2:8081 check inter 1000 backup
> >           server 127.0.0.3:8081 check inter 1000 backup
> >
> > serversdefine  image-farm-pool2
> >           option allbackups
> >           server 127.0.0.4:8081 check inter 1000
> >           server 127.0.0.5:8081 check inter 1000 backup
> >           server 127.0.0.6:8081 check inter 1000 backup
> >
> > serversdefine  image-farm-pool3
> >           option allbackups
> >           server 127.0.0.7:8081 check inter 1000
> >           server 127.0.0.8:8081 check inter 1000 backup
> >           server 127.0.0.9:8081 check inter 1000 backup
> >
> > backend image-cache-server-end
> >        mode           http
> >        balance uri
> >        hash-type  consistent    # hash only across the pools
> >        option allbackups
> >        servers image-farm-pool1
> >        servers image-farm-pool2
> >        servers image-farm-pool3
> >
> > If HAProxy provided such a configuration, we could build a reliable
> > disk cache for a huge image cache cluster.
>
> Yes, that's doable. Just proceed as we did in the old days when
> there was no frontend/backend distinction. You basically want
> to have two levels of load balancing, so use two frontend/backend
> levels:
>
> At the first level, the backend connects to one of the second-level
> frontends:
>
>    backend image-cache-server-end
>        balance uri
>        # not needed: hash-type consistent
>        server image-farm-pool1 127.0.0.1:60001
>        server image-farm-pool2 127.0.0.1:60002
>        server image-farm-pool3 127.0.0.1:60003
>
>    listen pool1
>        bind 127.0.0.1:60001
>        server webA 127.0.0.1:8081 check inter 1000
>        server webB 127.0.0.2:8081 check inter 1000 backup
>        server webC 127.0.0.3:8081 check inter 1000 backup
>
>    listen pool2
>        bind 127.0.0.1:60002
>        server webA 127.0.0.4:8081 check inter 1000
>        server webB 127.0.0.5:8081 check inter 1000 backup
>        server webC 127.0.0.6:8081 check inter 1000 backup
>
>    listen pool3
>        bind 127.0.0.1:60003
>        server webA 127.0.0.7:8081 check inter 1000
>        server webB 127.0.0.8:8081 check inter 1000 backup
>        server webC 127.0.0.9:8081 check inter 1000 backup
>
> That way you have 3 servers, each one with two backups. If
> the disk caches are shared between all servers in a pool,
> you can even use other algorithms such as leastconn or
> roundrobin and remove the "backup" keywords. One could even
> imagine performing balancing on a limited uri depth/length
> at the first level and using the whole length at the second
> level.
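
Such a two-level split, hashing on a URI prefix at the first level and on
the whole URI at the second, might look roughly like this (a sketch only;
the `len` value is illustrative and would need tuning, and second-level
hashing only makes sense once the "backup" keywords are removed):

```
# first level: hash on a prefix of the URI only
backend image-cache-server-end
    mode http
    balance uri len 12
    server pool1 127.0.0.1:60001
    server pool2 127.0.0.1:60002
    server pool3 127.0.0.1:60003

# second level: hash on the whole URI inside the pool
listen pool1
    bind 127.0.0.1:60001
    mode http
    balance uri
    server webA 127.0.0.1:8081 check inter 1000
    server webB 127.0.0.2:8081 check inter 1000
```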
>
> If you're afraid of losing a complete pool, it's easy to add
> checks using a monitor-uri in each "listen" for instance and
> enabling the checks on the first level backend.
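
A sketch of such a check (the "/alive" URI is an arbitrary choice, and
only pool1 is shown; the other pools would be analogous):

```
# second level: each pool answers a monitor URI itself and reports
# failure once it has no usable server left
listen pool1
    bind 127.0.0.1:60001
    mode http
    acl pool1_dead nbsrv lt 1
    monitor-uri /alive
    monitor fail if pool1_dead
    server webA 127.0.0.1:8081 check inter 1000
    server webB 127.0.0.2:8081 check inter 1000 backup
    server webC 127.0.0.3:8081 check inter 1000 backup

# first level: health-check the pools through that URI
backend image-cache-server-end
    mode http
    balance uri
    option httpchk GET /alive
    server pool1 127.0.0.1:60001 check inter 1000
```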
>
> You just need to be aware that proceeding like this will
> reduce the performance, but if the images are huge and the
> connection rate is low, you could use TCP splicing on Linux
> and then the performance drop should be very small.
>
> Regards,
> Willy
>
>
