Download your free ACT! white paper

2010-10-06 Thread SAGE ACT
Why is it necessary to manage your contacts effectively?

In an uncertain economic environment, it is essential to focus on what
matters most for the long-term health of your business: your core
activity and your customers.
ACT! -- The contact, customer and prospect management solution tailored
to the needs of small businesses.
Effective and easy to install, ACT! offers many possibilities in terms of:
•   RETENTION: improve the quality of your customer relationships by
optimizing how you know and manage your customers.
•   PROSPECTING: plan and carry out targeted prospecting campaigns.
•   COMMUNICATION: save time by easily sharing all customer information
across the company.
•   PRODUCTIVITY: analyze and organize your sales activity effectively.
More information:
http://track.effiliation.com/servlet/effi.redir?id_compteur=11366157&url=http://www.monact.fr/formulaire_livreblanc_act


In accordance with the French Data Protection Act (loi informatique &
libertés) of 6 January 1978, I have a right of access, rectification and
objection regarding the personal data concerning me.
This commercial message is sent to you by “Team Leaders”. You are
receiving it because you signed up on one of the partner sites of
“Team Leaders”. Your personal data has not been passed on to the
advertiser. If you no longer wish to receive our newsletter, fill in
this form:
http://87.255.69.213/unsubscribe/index.php?q=hapr...@formilux.org





Re: x-forwarded-for logging

2010-10-06 Thread Graeme Donaldson
Hi Joe

Yes, it is possible, but there's a little more work involved than just
applying the patch to stunnel.

Firstly, you need to specify in your stunnel.conf that you want stunnel to
add the X-Forwarded-For header:

[https]
accept  = 1.2.3.4:443
connect = 1.2.3.4:80
TIMEOUTclose = 0
xforwardedfor=yes

Next you need to have the following in your haproxy config; it can go in
defaults, frontend, listen or backend as appropriate for your setup:

option forwardfor

Finally, you need to configure your web server to use the X-Forwarded-For
header.

We're using Apache's mod_rpaf to do this (http://stderr.net/apache/rpaf/).
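
If you also want haproxy itself to log the address that stunnel puts in the
header, one way (just a sketch; adjust the length, and it assumes "option
httplog" is enabled on that frontend/listen section) is to capture the header
so it shows up in the HTTP log line:

capture request header X-Forwarded-For len 15

The captured value then appears between curly braces in haproxy's HTTP log
format.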

Regards,
Graeme.

On 7 October 2010 00:31, Joe Williams  wrote:

>
> I applied the x-forwarded-for patch to stunnel in hopes that haproxy would
> log the forwarded for address but it doesn't seem to. Is this possible?
>
> Thanks.
> -Joe
>
>
>
> Name: Joseph A. Williams
> Email: j...@joetify.com
> Blog: http://www.joeandmotorboat.com/
> Twitter: http://twitter.com/williamsjoe
>
>
>


Re: Interest in patch for web interface to enable/disable servers

2010-10-06 Thread Willy Tarreau
Hi Judd,

On Thu, Oct 07, 2010 at 12:10:02AM -0400, Judd Montgomery wrote:
> On 10/06/2010 05:37 PM, Willy Tarreau wrote:
> >On Wed, Oct 06, 2010 at 11:18:21PM +0200, Cyril Bonté wrote:
> >
> >I have no doubt people will want it, I was wondering if they'd possibly
> >wait for 1.5 or want it in 1.4 in fact.
> >
> >>For our needs, we can wait for 1.5, but I'm not against having it directly
> >>in 1.4, and I guess that Judd will likely appreciate having it in 1.4 too ;-)
> >
> >Yes, for sure. I'll let Judd decide then. Both of you have done great work
> >on this; it's the least I can do to let you have it in 1.4 if you think you
> >need it.
> >
> Yes, I would like to have it in 1.4.  I'd guess it would get accepted in 
> Debian and make its way down the distribution chains that way.  I 
> appreciate Willy and Cyril's help with this.

OK, so let's merge it too !

Thanks for your feedback,
Willy




Re: Performance Question

2010-10-06 Thread Hank A. Paulson

What did the haproxy stats web page show during the test?
How long was each test run? Many people seem to run ab for only a few seconds.
Was tomcat "doing" anything for the test URLs? I am a bit shocked you got 3700 
rps from tomcat. Most apps I have seen on it fail at much lower rps.


Raise the maxconn for each server and for the frontend and see if you get 
better results.
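
Something along these lines, for example; the numbers below are only
placeholders to show where each limit lives, not recommendations:

global
    maxconn 20000
defaults
    maxconn 10000
listen erp_cluster_https 0.0.0.0:81
    # ... rest of the section as in your config ...
    server tomcat01-instance1 192.168.60.156:8080 cookie A check maxconn 500

A per-server maxconn also makes haproxy queue excess requests instead of
piling them all onto tomcat at once.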


On 10/6/10 7:11 PM, Les Stroud wrote:

I did a little more digging and found several blogs that suggest that I will
take a performance hit on virtual platforms. In fact, this guy
(http://www.mail-archive.com/haproxy@formilux.org/msg03119.html) seems to have
the same problem. The part that is concerning me is not the overall
performance, but that I am getting worse performance with 4 servers than I am
with 1 server. I realize there are a lot of complications, but I have to be
doing something very wrong to get a decrease.

I have even tried putting haproxy on the same server with 2 tomcat servers and
used 127.0.0.1 to take as much of the network out as possible. I still get a
lower number of requests per second when going through haproxy to the 2
tomcats (as opposed to going directly to one of the tomcats). This test is
using ab locally on the same machine.

I have tried all of the sysctl settings that I have found listed on the board.
Is there anything I am missing??

I appreciate the help,
Les Stroud

On Oct 6, 2010, at 3:56 PM, Les Stroud wrote:


I figured I would find answers to this in the archive, but have been
unable to. So, I appreciate the time.

I am setting up an haproxy instance in front of some tomcat instances. As a
test, I ran ab against one of the tomcat instances directly with an
increasing number of concurrent connections. I then repeated the same test
with haproxy fronting 4 tomcat servers. I was hoping to see that the haproxy
setup would perform a higher number of requests per second and hold that
higher number with increasingly high traffic. Unfortunately, it did not.

Hitting the tomcat servers directly, I was able to get in excess of 3700
rqs/s. With haproxy in front of that tomcat instance and three others (using
roundrobin), I never surpassed 2500. I also did not find that I was able to
handle an increased amount of concurrency (both started giving errors around
2).

I have tuned the tcp params on the linux side per the suggestions I have
seen on here. Are there any other places I can start to figure out what I
have wrong in my configuration??

Thanx,
LES


———

haproxy.cfg

global
    #log loghost local0 info
    maxconn 500
    nbproc 4
    stats socket /tmp/haproxy.sock level admin
defaults
    log global
    clitimeout 6
    srvtimeout 3
    contimeout 4000
    retries 3
    option redispatch
    option httpclose
    option abortonclose

listen stats 192.168.60.158:8081
    mode http
    stats uri /stat #Comment this if you need to specify diff stat path for viewing stat page
    stats enable
listen erp_cluster_https 0.0.0.0:81
    mode http
    balance roundrobin
    option forwardfor except 0.0.0.0
    reqadd X-Forwarded-Proto:\ https
    cookie SERVERID insert indirect
    server tomcat01-instance1 192.168.60.156:8080 cookie A check
    server tomcat01-instance2 192.168.60.156:18080 cookie A check
    server tomcat02-instance1 192.168.60.157:8080 cookie A check
    server tomcat02-instance2 192.168.60.157:18080 cookie A check






Re: Interest in patch for web interface to enable/disable servers

2010-10-06 Thread Judd Montgomery

On 10/06/2010 05:37 PM, Willy Tarreau wrote:

On Wed, Oct 06, 2010 at 11:18:21PM +0200, Cyril Bonté wrote:

I have no doubt people will want it, I was wondering if they'd possibly
wait for 1.5 or want it in 1.4 in fact.


For our needs, we can wait for 1.5, but I'm not against having it directly
in 1.4, and I guess that Judd will likely appreciate having it in 1.4 too ;-)


Yes, for sure. I'll let Judd decide then. Both of you have done great work
on this; it's the least I can do to let you have it in 1.4 if you think you
need it.

Yes, I would like to have it in 1.4.  I'd guess it would get accepted in 
Debian and make its way down the distribution chains that way.  I 
appreciate Willy and Cyril's help with this.


Judd



Re: Performance Question

2010-10-06 Thread Les Stroud
I did a little more digging and found several blogs that suggest that I will 
take a performance hit on virtual platforms.  In fact, this guy 
(http://www.mail-archive.com/haproxy@formilux.org/msg03119.html) seems to have 
the same problem.  The part that is concerning me is not the overall 
performance, but that I am getting worse performance with 4 servers than I am 
with 1 server.  I realize there are a lot of complications, but I have to be 
doing something very wrong to get a decrease.  

I have even tried putting haproxy on the same server with 2 tomcat servers and 
used 127.0.0.1 to take as much of the network out as possible.  I still get a 
lower number of requests per second when going through haproxy to the 2 tomcats 
(as opposed to going directly to one of the tomcats).  This test is using ab 
locally on the same machine.

I have tried all of the sysctl settings that I have found listed on the board.  
Is there anything I am missing??

I appreciate the help,
Les Stroud

On Oct 6, 2010, at 3:56 PM, Les Stroud wrote:

> I figured I would find answers to this in the archive, but have been 
> unable to.  So, I appreciate the time.
> 
> I am setting up an haproxy instance in front of some tomcat instances.  As a 
> test, I ran ab against one of the tomcat instances directly with an 
> increasing number of concurrent connections.  I then repeated the same test 
> with haproxy fronting 4 tomcat servers.  I was hoping to see that the haproxy 
> setup would perform a higher number of requests per second and hold that 
> higher number with increasingly high traffic.  Unfortunately, it did not.  
> 
> Hitting the tomcat servers directly, I was able to get in excess of 3700 
> rqs/s.  With haproxy in front of that tomcat instance and three others (using 
> roundrobin), I never surpassed 2500.  I also did not find that I was able to 
> handle an increased amount of concurrency (both started giving errors around 
> 2).
> 
> I have tuned the tcp params on the linux side per the suggestions I have seen 
> on here. Are there any other places I can start to figure out what I have 
> wrong in my configuration??
> 
> Thanx,
> LES
> 
> 
> ———
> 
> haproxy.cfg
> 
> global
>    #log loghost local0 info
>    maxconn 500
>    nbproc 4
>    stats socket /tmp/haproxy.sock level admin
> defaults
>    log global
>    clitimeout 6
>    srvtimeout 3
>    contimeout 4000
>    retries 3
>    option redispatch
>    option httpclose
>    option abortonclose
> 
> listen stats 192.168.60.158:8081
>    mode http
>    stats uri /stat #Comment this if you need to specify diff stat path for viewing stat page
>    stats enable
> listen erp_cluster_https 0.0.0.0:81
>    mode http
>    balance roundrobin
>    option forwardfor except 0.0.0.0
>    reqadd X-Forwarded-Proto:\ https
>    cookie SERVERID insert indirect
>    server tomcat01-instance1 192.168.60.156:8080 cookie A check
>    server tomcat01-instance2 192.168.60.156:18080 cookie A check
>    server tomcat02-instance1 192.168.60.157:8080 cookie A check
>    server tomcat02-instance2 192.168.60.157:18080 cookie A check



Re: Consistent hashing question

2010-10-06 Thread David Birdsong
I'm pretty sure it's the id field on the server line.  If you don't specify
one, then one is assigned for you.

For any installation using consistent hashing, it is good to set the id
explicitly so as to have control of your hash buckets.
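
A rough sketch of what I mean; the backend name, addresses and ids below are
made up purely to illustrate pinning the ids (balance uri is just one example,
the same applies to source or hdr hashing):

backend app
    balance uri
    hash-type consistent
    server app-a 10.0.0.11:8080 id 1 check
    server app-b 10.0.0.12:8080 id 2 check

With the ids pinned, regenerating the config with new addresses or in a
different order should not reshuffle the buckets.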

On Oct 6, 2010 1:41 PM, "Dmitri Smirnov"  wrote:

Hi all,

While doing consistent hashing I observed (as expected) that the order of
backend servers in the configuration affects the distribution of the load.

Being in the cloud, I am forced to regenerate the configuration file and
restart, because both the public host names and their addresses change most
of the time as instances are replaced. To keep the distribution consistent I
sort the list of servers by host name.

However, I am not sure if this is exactly the right thing to do.

Question: what exactly is inserted into the consistent hashing tree?

1) the name of the server that I specify
2) the host name
3) the resolved ten-dot (10.x.x.x) address

I am not looking at the source code right now, in the hope that the community
will provide more insight faster,

thank you,

-- 
Dmitri Smirnov


Re: stunnel patch updates

2010-10-06 Thread Joe Williams

Here's an updated listen queue depth patch for stunnel 4.32

-Joe


On Oct 4, 2010, at 1:09 PM, Jim Riggs wrote:

> On Oct 4, 2010, at 2:42 PM, Joe Williams wrote:
> 
>> Anyone have updated patches for stunnel 4.34, specifically for the listen 
>> queue length and X-Forwarded-For? The patches on the haproxy site don't seem 
>> to work.
> 
> 
> Attached is an updated version of the xforwardedfor patch that I use for 
> 4.32.  I haven't tried it with 4.34 yet...
> 
> 


stunnel-4.32-listen-queue.diff
Description: Binary data



Name: Joseph A. Williams
Email: j...@joetify.com
Blog: http://www.joeandmotorboat.com/
Twitter: http://twitter.com/williamsjoe



x-forwarded-for logging

2010-10-06 Thread Joe Williams

I applied the x-forwarded-for patch to stunnel in hopes that haproxy would log 
the forwarded for address but it doesn't seem to. Is this possible?

Thanks.
-Joe



Name: Joseph A. Williams
Email: j...@joetify.com
Blog: http://www.joeandmotorboat.com/
Twitter: http://twitter.com/williamsjoe




Re: Interest in patch for web interface to enable/disable servers

2010-10-06 Thread Willy Tarreau
Hi Cyril,

On Wed, Oct 06, 2010 at 11:18:21PM +0200, Cyril Bonté wrote:
> Hi,
> 
> On Tuesday, 5 October 2010 at 17:01:24, Willy Tarreau wrote:
> > Hi guys,
> > 
> > sorry for the delay, I missed that thread. I'm currently collecting
> > missing patches for next 1.4.
> 
> That gave me time to apply the patches on a 1.5 snapshot and test it in a 
> small production environment. It works well without any modifications.

Thanks for the feedback, this will mean fewer worries for the porting work.

(...)
> > So I'd like to get your opinion on that. I'm not for adding features when
> > people don't need them, but if there is demand, we can have them.
> 
> I'm currently giving training sessions for my colleagues and, from their 
> feedback, such a feature would be much appreciated, as it will let them 
> control the servers similarly to what they are used to doing with 
> mod_proxy_balancer and mod_jk.

I have no doubt people will want it, I was wondering if they'd possibly
wait for 1.5 or want it in 1.4 in fact.

> For our needs, we can wait for 1.5, but I'm not against having it directly 
> in 1.4, and I guess that Judd will likely appreciate having it in 1.4 too ;-)

Yes, for sure. I'll let Judd decide then. Both of you have done great work
on this; it's the least I can do to let you have it in 1.4 if you think you
need it.

> I just don't want to provide a patch that breaks things in the stable branch 
> :-)

I know ;-)

> Do you plan to release version 1.4.9 soon? I ask because I've not yet 
> started writing the documentation for "stats admin"; if it gets merged in 
> 1.4, I'll try to work on it this week-end.

I've been spending some time these last days chasing the fixes from 1.4 to
port them into 1.5 and conversely, in order to ensure that nothing gets lost.
I spent about 2-3 days fixing and documenting the ECV patch that I've merged
into 1.4 too, because I know that many people already patch their code with
the buggy version and sometimes report issues. I've backported the cookie
parser fixes from 1.5 (eg: support spaces around cookies). I still have some
work to finish on the cookies (support for expirable persistence cookies)
that some people have requested for 1.4 (even 1.3 in fact, we'll see). I'd
like to hope that I can finish it by the week-end, but I can't be sure.
However, the work on cookies is the last thing I have pending, so once that's
done we can release fast. Maybe I'll emit a 1.4.9-rc1 first, in order to
spot possible build issues that I have not detected.

Cheers,
Willy




Re: Interest in patch for web interface to enable/disable servers

2010-10-06 Thread Cyril Bonté
Hi,

On Tuesday, 5 October 2010 at 17:01:24, Willy Tarreau wrote:
> Hi guys,
> 
> sorry for the delay, I missed that thread. I'm currently collecting
> missing patches for next 1.4.

That gave me time to apply the patches on a 1.5 snapshot and test it in a 
small production environment. It works well without any modifications.

> At first glance, both of your patches look clean and separated enough from
> the rest of the code to limit the risk of regression. So I'm not opposed to
> merge it into 1.4. However, I'd say that if the current users of the patch
> are currently migrating to 1.5, probably that we'd better put that into 1.5
> only.
> 
> So I'd like to get your opinion on that. I'm not for adding features when
> people don't need them, but if there is demand, we can have them.

I'm currently giving training sessions for my colleagues and, from their 
feedback, such a feature would be much appreciated, as it will let them 
control the servers similarly to what they are used to doing with 
mod_proxy_balancer and mod_jk.

For our needs, we can wait for 1.5, but I'm not against having it directly 
in 1.4, and I guess that Judd will likely appreciate having it in 1.4 too ;-)
I just don't want to provide a patch that breaks things in the stable branch 
:-)

Do you plan to release version 1.4.9 soon? I ask because I've not yet started 
writing the documentation for "stats admin"; if it gets merged in 1.4, I'll 
try to work on it this week-end.
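
For the curious, the admin mode is enabled with a "stats admin" rule inside a
stats-enabled section; a minimal sketch (the address is illustrative, and the
exact syntax is what the documentation still has to pin down):

listen stats 0.0.0.0:8081
    mode http
    stats enable
    stats uri /stat
    stats admin if TRUE

When the condition matches, the stats page shows a checkbox per server and an
action selector to enable or disable it.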

-- 
Cyril Bonté



Consistent hashing question

2010-10-06 Thread Dmitri Smirnov

Hi all,

While doing consistent hashing I observed (as expected) that the order 
of backend servers in the configuration affects the distribution of the load.


Being in the cloud, I am forced to regenerate the configuration file and 
restart, because both the public host names and their addresses change 
most of the time as instances are replaced. To keep the distribution 
consistent I sort the list of servers by host name.


However, I am not sure if this is exactly the right thing to do.

Question: what exactly is inserted into the consistent hashing tree?

1) the name of the server that I specify
2) the host name
3) the resolved ten-dot (10.x.x.x) address

I am not looking at the source code right now, in the hope that the community 
will provide more insight faster,


thank you,

--
Dmitri Smirnov





Performance Question

2010-10-06 Thread Les Stroud
I figured I would find answers to this in the archive, but have been unable 
to.  So, I appreciate the time.

I am setting up an haproxy instance in front of some tomcat instances.  As a 
test, I ran ab against one of the tomcat instances directly with an increasing 
number of concurrent connections.  I then repeated the same test with haproxy 
fronting 4 tomcat servers.  I was hoping to see that the haproxy setup would 
perform a higher number of requests per second and hold that higher number with 
increasingly high traffic.  Unfortunately, it did not.  

Hitting the tomcat servers directly, I was able to get in excess of 3700 rqs/s. 
 With haproxy in front of that tomcat instance and three others (using 
roundrobin), I never surpassed 2500.  I also did not find that I was able to 
handle an increased amount of concurrency (both started giving errors around 
2).

I have tuned the tcp params on the linux side per the suggestions I have seen 
on here. Are there any other places I can start to figure out what I have wrong 
in my configuration??

Thanx,
LES


———

haproxy.cfg

global
    #log loghost local0 info
    maxconn 500
    nbproc 4
    stats socket /tmp/haproxy.sock level admin
defaults
    log global
    clitimeout 6
    srvtimeout 3
    contimeout 4000
    retries 3
    option redispatch
    option httpclose
    option abortonclose

listen stats 192.168.60.158:8081
    mode http
    stats uri /stat #Comment this if you need to specify diff stat path for viewing stat page
    stats enable
listen erp_cluster_https 0.0.0.0:81
    mode http
    balance roundrobin
    option forwardfor except 0.0.0.0
    reqadd X-Forwarded-Proto:\ https
    cookie SERVERID insert indirect
    server tomcat01-instance1 192.168.60.156:8080 cookie A check
    server tomcat01-instance2 192.168.60.156:18080 cookie A check
    server tomcat02-instance1 192.168.60.157:8080 cookie A check
    server tomcat02-instance2 192.168.60.157:18080 cookie A check


Dynamic weights based on server load?

2010-10-06 Thread Pablo Escobar Lopez

 hi,

I have seen previous posts on the mailing list about this topic but 
couldn't find any docs about dynamic weights in the latest haproxy releases.


It would be great to be able to dynamically modify weights based on server 
load. Is there any plan in the roadmap to implement this feature? I have 
also read on the mailing list that someone had implemented a patch for 
haproxy 1.3.X to achieve this. Is this patch published?
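
For reference, the closest thing I have seen so far is the manual weight
command on the runtime stats socket; a rough sketch, assuming a recent 1.4
with a socket declared as "stats socket /tmp/haproxy.sock level admin", and
with placeholder backend/server names. What I am still missing is something
that drives it automatically from server load:

echo "set weight mybackend/srv1 50" | socat stdio unix-connect:/tmp/haproxy.sock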


many thanks in advance for any help.
pablo.

--
Pablo Escobar Lopez
Head of Infrastructure & IT Support
Bioinformatics Department
Centro de Investigación Príncipe Felipe (CIPF)
http://bioinfo.cipf.es