Re: Two questions about lua

2015-11-30 Thread joris dedieu
Thanks, Thierry, for your answers.


2015-11-30 16:53 GMT+01:00 Thierry FOURNIER :
> On Mon, 30 Nov 2015 08:37:00 +0100
> joris dedieu  wrote:
>
>> Hi all,
>>
>> I started to dive into HAProxy's Lua interface. I produced some code
>> that does DNSBL lookups, and it seems to work.
>>
>> First, I have a C wrapper around the libc resolver:
>>
>> #include <sys/types.h>
>> #include <sys/socket.h>
>> #include <netdb.h>
>> #include <netinet/in.h>
>> #include <arpa/inet.h>
>>
>> #include <lua.h>
>> #include <lauxlib.h>
>>
>> static int gethostbyname_wrapper(lua_State *L)
>> {
>>     const char *query = luaL_checkstring(L, 1);
>>     struct hostent *he;
>>     if ((he = gethostbyname(query)) != NULL) {
>>         const char *first_addr =
>>             inet_ntoa(*(struct in_addr *)he->h_addr_list[0]);
>>         lua_pushstring(L, first_addr);
>>         return 1;
>>     }
>>     return 0;
>> }
>>
>> static const luaL_Reg sysdb_methods[] = {
>>     {"gethostbyname", gethostbyname_wrapper},
>>     {NULL, NULL}
>> };
>>
>> LUALIB_API int luaopen_sysdb(lua_State *L) {
>>     luaL_newlib(L, sysdb_methods);
>>     return 1;
>> }
>>
>> I have some doubts about the asynchronicity of libc operations, but on
>> the other hand I don't want to reinvent the wheel. Should I prefer a
>> resolver implementation that uses Lua sockets? As far as I have tested,
>> libc seems to do the job.
>
> Hello,
>
> I confirm your doubts: gethostbyname is synchronous and it is a
> blocking call. If your hostname resolution comes from the /etc/hosts
> file, it blocks while reading the file. If it comes from a DNS server,
> it blocks waiting for the server's response (or worse: waiting for the
> timeout).
>
> So, this code seems to run, but your HAProxy will not be efficient,
> because the entire haproxy process is blocked during each resolution.
> For example: if your DNS server fails after a 3s timeout, HAProxy
> processes no data at all during those 3 seconds.
>
> Otherwise, your code is the right way to build fast Lua/C libraries.
>
> There is no way to perform blocking accesses outside of the HAProxy
> core: all the functions written for Lua must be non-blocking.

OK, I will look for a non-blocking solution (maybe Lua sockets plus
pack/unpack in C).
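For the record, here is a rough, untested sketch of what that could look
like, using HAProxy's non-blocking Socket class (core.tcp()) and Lua 5.3's
built-in string.pack/string.unpack (which may even spare the C part). The
function name, server argument and 3s timeout are illustrative, and note
that sockets can only be used from contexts that are allowed to yield
(tasks, actions), not from a sample fetch:

    -- Untested sketch: DNS A query over TCP via HAProxy's Socket class.
    local function dnsbl_listed(qname, server)
        -- DNS header: id=1, flags=RD, one question, no other records
        local msg = string.pack(">I2I2I2I2I2I2", 1, 0x0100, 1, 0, 0, 0)
        for label in string.gmatch(qname, "[^%.]+") do
            msg = msg .. string.pack("s1", label) -- length-prefixed label
        end
        msg = msg .. "\0" .. string.pack(">I2I2", 1, 1) -- QTYPE=A, QCLASS=IN
        local sock = core.tcp()
        sock:settimeout(3)
        if sock:connect(server, 53) then
            -- DNS over TCP prefixes each message with a 2-byte length
            sock:send(string.pack(">I2", #msg) .. msg)
            local hdr = sock:receive(2)
            if hdr then
                local rlen = string.unpack(">I2", hdr)
                local answer = sock:receive(rlen)
                sock:close()
                -- a non-zero ANCOUNT (bytes 7-8) means the name is listed
                return answer ~= nil and #answer >= 8
                       and string.unpack(">I2", answer, 7) > 0
            end
            sock:close()
        end
        return false
    end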

>
>
>> Then the lua code
>>
>> local sysdb = require("sysdb")
>>
>> core.register_fetches("rbl", function(txn, rbl, ip)
>> if (not ip) then
>> ip = txn.sf:src()
>> end
>> if (not rbl) then
>> rbl = "zen.spamhaus.org"
>> end
>> local query = rbl
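>> -- prepending each octet reverses the address: for a source of
>> -- 192.0.2.1, the query becomes 1.2.0.192.zen.spamhaus.org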
>> for x in string.gmatch(ip, "[^%.]+") do
>> query = x .. '.' .. query
>> end
>> if(sysdb.gethostbyname(query)) then
>> return 1
>> else
>> return 0
>> end
>> end)
>>
>> I want to use a stick table as a local cache, so my second question:
>> is there a way to set a gpt0 value from Lua?
>
>
> You can use the sample-fetch mapper with the sc_get_gpt0 fetch. The
> syntax is like this:
>
> For the read access:
>
>    txn.sf:sc_get_gpt0()
>    txn.sc:table_gpt0()
>
> For write access, I don't have a direct solution. You must use a Lua
> sample fetch and the following configuration directive:
>
>    http-request sc-set-gpt0 lua.my_sample_fetch

Yes, that's an option.
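For reference, here is a minimal, untested sketch of that approach
("rbl_verdict" is an illustrative name; the directive is the one Thierry
quoted above, and the lookup itself is still the blocking one from my
wrapper):

    -- Lua sample fetch whose result the configuration stores into gpt0
    -- via:  http-request sc-set-gpt0 lua.rbl_verdict
    core.register_fetches("rbl_verdict", function(txn)
        local query = "zen.spamhaus.org"
        for x in string.gmatch(txn.sf:src(), "[^%.]+") do
            query = x .. '.' .. query
        end
        if sysdb.gethostbyname(query) then
            return 1
        end
        return 0
    end)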

>
> Maybe it would be a good idea to implement stick table access in Lua.
>
> If you want another way to store shared data in HAProxy, you can use
> maps. Maps are shared by all the HAProxy processes, including Lua,
> through a special API (see the Map class).

I thought of Maps, but I didn't find write access in Lua, according to
http://www.arpalert.org/src/haproxy-lua-api/1.6/index.html#map-class
and some of my experiments.
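For what it's worth, read-only use would look something like the sketch
below (based on my reading of the 1.6 Map class; the file path and fetch
name are illustrative), with writes still having to come from outside Lua,
e.g. through the stats socket:

    -- Read-only lookup in a map used as a local cache.
    local cache = Map.new("/etc/haproxy/rbl-cache.map", Map.str)
    core.register_fetches("rbl_cached", function(txn)
        return cache:lookup(txn.sf:src()) -- nil when not cached
    end)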

Thanks
Joris
>
> Thierry



Multiproc balance

2015-11-30 Thread Stefan Johansson
Hello,

I've started switching to a multiproc setup for a high-traffic site, and I
was pondering a potentially stupid question: what actually balances the
balancers, so to speak? Is it Linux itself that distributes connections
between the instances?
I'm running in a vSphere/ESXi machine with 5 vCores, where I use core 0 for
interrupts, 1-3 for HTTP and 4 for HTTPS.
Since it's a VM, NIC queueing and IRQ coalescing seem to be out of the
question, so I'm just leaving core 0 for interrupts, and it seems to work
fine. I bind the haproxy processes to cores 1 through 4 and leave 0 out.
However, the three HAProxy processes serving HTTP requests take 10%, 30%
and 60% of the load respectively. It's always the same cores taking the
same share of the load; it never changes. It's somehow "decided" that one
process takes 10%, another 30% and the last 60%.
What decides this "balancing" between the haproxy processes? Could it be
the VM setup? I've never run a multiproc setup with HAProxy on a physical
machine, so I have no reference point for comparison.

Thank you.

Regards,
Stefan


Re: Two questions about lua

2015-11-30 Thread Thierry FOURNIER
On Mon, 30 Nov 2015 08:37:00 +0100
joris dedieu  wrote:

> Hi all,
> 
> I started to dive into HAProxy's Lua interface. I produced some code
> that does DNSBL lookups, and it seems to work.
> 
> First, I have a C wrapper around the libc resolver:
> 
> #include <sys/types.h>
> #include <sys/socket.h>
> #include <netdb.h>
> #include <netinet/in.h>
> #include <arpa/inet.h>
> 
> #include <lua.h>
> #include <lauxlib.h>
> 
> static int gethostbyname_wrapper(lua_State *L)
> {
>     const char *query = luaL_checkstring(L, 1);
>     struct hostent *he;
>     if ((he = gethostbyname(query)) != NULL) {
>         const char *first_addr =
>             inet_ntoa(*(struct in_addr *)he->h_addr_list[0]);
>         lua_pushstring(L, first_addr);
>         return 1;
>     }
>     return 0;
> }
> 
> static const luaL_Reg sysdb_methods[] = {
>     {"gethostbyname", gethostbyname_wrapper},
>     {NULL, NULL}
> };
> 
> LUALIB_API int luaopen_sysdb(lua_State *L) {
>     luaL_newlib(L, sysdb_methods);
>     return 1;
> }
> 
> I have some doubts about the asynchronicity of libc operations, but on
> the other hand I don't want to reinvent the wheel. Should I prefer a
> resolver implementation that uses Lua sockets? As far as I have tested,
> libc seems to do the job.

Hello,

I confirm your doubts: gethostbyname is synchronous and it is a
blocking call. If your hostname resolution comes from the /etc/hosts
file, it blocks while reading the file. If it comes from a DNS server,
it blocks waiting for the server's response (or worse: waiting for the
timeout).

So, this code seems to run, but your HAProxy will not be efficient,
because the entire haproxy process is blocked during each resolution.
For example: if your DNS server fails after a 3s timeout, HAProxy
processes no data at all during those 3 seconds.

Otherwise, your code is the right way to build fast Lua/C libraries.

There is no way to perform blocking accesses outside of the HAProxy
core: all the functions written for Lua must be non-blocking.


> Then the lua code
> 
> local sysdb = require("sysdb")
> 
> core.register_fetches("rbl", function(txn, rbl, ip)
> if (not ip) then
> ip = txn.sf:src()
> end
> if (not rbl) then
> rbl = "zen.spamhaus.org"
> end
> local query = rbl
> for x in string.gmatch(ip, "[^%.]+") do
> query = x .. '.' .. query
> end
> if(sysdb.gethostbyname(query)) then
> return 1
> else
> return 0
> end
> end)
> 
> I want to use a stick table as a local cache, so my second question:
> is there a way to set a gpt0 value from Lua?


You can use the sample-fetch mapper with the sc_get_gpt0 fetch. The
syntax is like this:

For the read access:

   txn.sf:sc_get_gpt0()
   txn.sc:table_gpt0()

For write access, I don't have a direct solution. You must use a Lua
sample fetch and the following configuration directive:

   http-request sc-set-gpt0 lua.my_sample_fetch

Maybe it would be a good idea to implement stick table access in Lua.

If you want another way to store shared data in HAProxy, you can use
maps. Maps are shared by all the HAProxy processes, including Lua,
through a special API (see the Map class).

Thierry



RE: Configuring Load Balance HAProxy

2015-11-30 Thread Mauricio Cacho Gutiérrez
You were right: there were more HAProxy processes running on the server;
that's why it kept sending connections to the OLD_NAME_FRONTEND. I killed
all the other processes and everything is working fine now.

 

Thanks a lot.

 

From: PiBa-NL [mailto:piba.nl@gmail.com]
Sent: Saturday, 28 November 2015 07:39
To: Mauricio Cacho Gutiérrez; haproxy@formilux.org
Subject: Re: Configuring Load Balance HAProxy

 

There is no cache to delete.

Can you check that there is only one active haproxy process running?

Depending on how you restart haproxy, it could be that old existing
connections are still served by the old process, which should shut down
once all its connections are closed. The old, stopping process should not
serve new connections if it properly received the shutdown signal from the
new process.

On 27-11-2015 at 20:24, Mauricio Cacho Gutiérrez wrote:

Hi, I've configured HAProxy to load balance between servers running
PostgreSQL. At first, I set up 3 servers, one of which was the master and
the other two slaves. It worked fine; now I want only the slaves to be
available to HAProxy, so I deleted the line with the master configuration
in the haproxy.cfg, stopped the service and started it again, but I've
noticed in the logs that HAProxy is still sending connections to the master
server, even though there's no line in the haproxy.cfg with that IP. So, to
make sure the changes were taking effect, I changed the name of the
frontend, stopped the haproxy service and started it again; the logs
display

“Proxy new_name_frontend started”

“Proxy new_name_backend started”

So nothing about the old frontend name or backend… But when I connect via
terminal to the server running HAProxy, it displays something like:

“[ip_client]:[port] [date] OLD_NAME_FRONTEND….”

And it's using the master server… Is there some cache or something like
that I should erase so HAProxy has no trace of the master server anymore?

 

PS: I’m using the same port.

 

Thanks

 



Re: [SPAM] Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-11-30 Thread Bryan Talbot
On Mon, Nov 30, 2015 at 3:32 PM, Olivier Doucet  wrote:

> Hello,
>
> I'm digging up this thread, because having multiple certificates for a
> single domain (SNI) but with different key types (RSA/ECDSA) would really
> be great functionality. Is there any progress? How can we help?
>


I'd love to see better support for multiple certificate key types for the
same SNI too.

That said, it is possible to serve both EC and RSA keyed certificates using
haproxy 1.6 now. See
http://blog.haproxy.com/2015/07/15/serving-ecc-and-rsa-certificates-on-same-ip-with-haproxy/
for details. It's not exactly pretty but it does seem to work.




>
> A side question: how widely are ECDSA certificates supported? If I use a
> single ECDSA certificate, how many people won't be able to see my
> content?
>
>
>
They're pretty well supported by modern clients. Exactly what that means is
a bit fuzzy though: see
https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_and_ECDHE_support for
additional details.

If your clients are all "modern" browsers and mobile devices, you're
probably good. If there are old clients, or other systems calling an API,
there can be issues, especially if they are using Java <= 7.

I've also discovered that Amazon CloudFront doesn't support EC certificates
at all: they can't be used in CloudFront distributions, and CloudFront won't
connect to an origin that uses them.

-Bryan


[SPAM] Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-11-30 Thread Olivier Doucet
Hello,

I'm digging up this thread, because having multiple certificates for a
single domain (SNI) but with different key types (RSA/ECDSA) would really
be great functionality. Is there any progress? How can we help?

A side question: how widely are ECDSA certificates supported? If I use a
single ECDSA certificate, how many people won't be able to see my content?


Olivier


2015-08-25 18:16 GMT+02:00 Willy Tarreau :

> Hi Dave,
>
> On Tue, Aug 25, 2015 at 03:50:23PM +, Dave Zhu (yanbzhu) wrote:
> > Hey Willy,
> >
> > On 8/25/15, 10:36 AM, "Willy Tarreau"  wrote:
> >
> > >This means that the RSA/DSA/ECDSA cert names must be derived from the
> > >original cert name.
> >
> > I've thought of a way to avoid this behavior, with the end result being
> > very similar to what you/Emeric proposed.
> >
> > What if we delayed the creation of the SSL_CTX until we have all the
> > certs specified by the config?
>
> In my opinion that only adds extra burden because this delay adds loss of
> knowledge or association between the certs that were initially loaded at
> the same time.
>
> > We would read in all the certificates first and
> > store them based on the CN/SAN inside the cert, or the SNIs specified by
> > the admin. We would also store the auxiliary information at this
> > point. Your tree would look like:
> >
> >   Names -> Certificates + aux info
> >
> >
> > We then iterate on all of the Names and create an SSL_CTX for each Name
> > based on the certificates available + any wildcard/negation filters we
> > have. This will fill out our FQDN tree. After creating the SSL_CTXs we
> > could free the original tree, as it would no longer be needed.
> >
> > In this scenario, each FQDN would have an SSL_CTX associated with it,
> > which is a departure from the current model. While this may seem like a
> > huge spike in memory footprint, consider that OpenSSL uses references for
> > keys and certificates.
>
> I'm not much concerned by this for now because when you have many FQDN,
> you already have as many SSL_CTX today. I tend to consider that large
> configs (the ones where memory footprint starts to matter) don't have
> many names for each of their certs. For example the config that led to
> crt-list being designed was working with 5 certificates. I really
> doubt that there were more than 1-2 names per cert on average, I'd even
> bet something around 1.01 or so on average.
>
> > Therefore, the additional impact is limited to the extra pointers in
> > SSL_CTX, instead of duplicating X509 or PKEY buffers. We could also add
> > additional logic to search through the current FQDN tree for "duplicate"
> > SSL_CTX that contain the same cert/keys, and just use the pointer
> > instead of creating a new SSL_CTX. Given enough metadata around the
> > SSL_CTX in the FQDN tree, this shouldn't be too hard.
>
> That's the part I tend to dislike. If we later add extra parameters in
> crt-list, we'll be happy to keep each line separate and *not* to merge
> them. The example of validity dates was one such case but there could
> be other ones.
>
> While this may seem a stupid or completely made-up example, imagine that
> we could specify on each line of the crt-list a filter on the source
> network to decide if the cert has to be presented or not. This way users could
> decide that certs signed with official CAs are delivered to the public
> while certs signed with internal CAs are delivered inside. Or even just
> to use different algos depending on the network, for example test ECDSA
> just on internal users first. As long as we keep all the elements of one
> crt-list entry tied together, all such fantasy remains possible. When we
> tear them apart and only focus on names to pack everything together, this
> is not possible anymore. You said yourself that the memory usage doesn't
> matter much here; let's not risk sacrificing extensibility for the sake
> of compressing just a few more bytes.
>
> > I feel that this would solve the problem of admins having to keep
> > track of the certificate names, and keep the current behavior of "Let
> > HAProxy figure out the certs, here's a bunch of them".
> >
> > It would also solve the issue of conflicting names. For example:
> >
> > Cert A(RSA):
> >
> > CN: www.blahDomain.com
> > SANs: 1.blahDomain.com
> >   2.blahDomain.com
> >   3.blahDomain.com
> >
> > Cert B(ECDSA)
> >
> > CN: www.blahDomain.com
> >
> > SANs: 2.blahDomain.com
> >   3.blahDomain.com
> >   4.blahDomain.com
> >
> >
> > If we optimize the insertion logic via metadata, we would have the
> > following in our FQDN tree:
> >
> > 1: Name=www.blahDomain.com; SSL_CTX#1={Cert A, Cert B}
> > 2: Name=1.blahDomain.com;   SSL_CTX#2={Cert A}
> >
> > 3: Name=2.blahDomain.com;   SSL_CTX#1={Cert A, Cert B}
> >
> > 4: Name=3.blahDomain.com;   SSL_CTX#1={Cert A, Cert B}
> >
> > 5: Name=4.blahDomain.com;   SSL_CTX#3={Cert B}
> >
> >
> > Like your 

RE: HAProxy: Max. throughput using HTTPs client authentication

2015-11-30 Thread Hemanth Abbina
Hi,

Sorry for the minimal details; I will try to elaborate on the situation.

We are developing a central log repository in the cloud, for which we are
using HAProxy as the load balancer with Flume as the backend for further
processing. We are expecting HTTPS traffic from multiple known clients, and
we also need to authenticate these clients using their client certificates.

When we ran in plain HTTP mode, we were able to receive and process around
80 sessions/second at HAProxy. Below is the configuration used.
global
    log     127.0.0.1 local2
    chroot  /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 1
    user    haproxy
    group   haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    tune.bufsize 16384
    tune.maxrewrite 1024

defaults
    mode    http
    log     global
    option  httplog
    option  dontlognull
    option  http-server-close
    option  forwardfor except 127.0.0.0/8
    option  redispatch
    retries 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          10m
    timeout server          10m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn 1

frontend http_request
    bind *:5001
    mode http
    default_backend handle_http_request

backend handle_http_request
    mode http
    balance roundrobin
    server Flume1 10.15.1.31:5005

listen logstats
    bind *:31337
    mode http
    option httpclose
    balance roundrobin
    stats uri /
    stats realm Haproxy\ Statistics
    stats refresh 10s
    stats auth svcloud:svcloud

Later we changed the configuration to accept HTTPS traffic, and with the
same client and the same backend server, the sessions/second dropped to 1.
Below is the configuration used.

global
    log     127.0.0.1 local1 notice
    chroot  /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 1
    user    haproxy
    group   haproxy
    daemon
    stats socket /var/lib/haproxy/stats
    tune.bufsize 16384
    tune.maxrewrite 1024
    tune.ssl.default-dh-param 2048

defaults
    mode    http
    log     global
    option  httplog
    option  dontlognull
    option  http-server-close
    option  forwardfor except 127.0.0.0/8
    option  redispatch
    retries 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          10m
    timeout server          10m
    timeout http-keep-alive 10s
    timeout check           10s

[SPAM] The World's Smallest WiFi Camera

2015-11-30 Thread sales1607jmc
Dear Friend,


Good day!


The smallest WiFi camera: small enough to carry or hide, convenient and
easy to carry, with battery support. Why not have a try now?




Features:

1. Beautiful design, like a piece of art.
2. Smallest size, for portability and concealment.
3. Mains and battery powered for easy moving, and in case of a power cut.
4. Magnet bracket to suit any iron surface.
5. Sends pictures to mobile with accurate motion detection.
6. AP supported. Watch it from anywhere, anytime.



Please reply for more details if you are interested.

Looking forward to your reply. Thanks!






Best Regards!

Merry  Lin

JMC Electron Co., Limited 

www.jmcsz.com 

Tel: +86-755-2376 4537 Ext:807

Fax: +86-755-2376 4537 

Cell: +86-134-1039-8758

Email: sale...@jmc-sz.com 

Skype: sales16jmc

Factory Address: 4th Floor, Building D of Baifuli, Yuan Ye  Industrial park, 
Baiyunshan New Village, Shang heng lang, Longhua Town,Shenzhen, P.R.C. (518109)

Re: [SPAM] Re: Contribution for HAProxy: Peer Cipher based SSL CTX switching

2015-11-30 Thread Willy Tarreau
On Mon, Nov 30, 2015 at 04:20:15PM -0800, Bryan Talbot wrote:
> On Mon, Nov 30, 2015 at 3:32 PM, Olivier Doucet  wrote:
> 
> > Hello,
> >
> > I'm digging up this thread, because having multiple certificates for a
> > single domain (SNI) but with different key types (RSA/ECDSA) would
> > really be great functionality. Is there any progress? How can we help?
> >
> 
> 
> I'd love to see better support for multiple certificate key types for the
> same SNI too.
> 
> That said, it is possible to serve both EC and RSA keyed certificates using
> haproxy 1.6 now. See
> http://blog.haproxy.com/2015/07/15/serving-ecc-and-rsa-certificates-on-same-ip-with-haproxy/
> for details. It's not exactly pretty but it does seem to work.

Sure, it was an efficient solution: simple to implement and reliable.
But now we clearly need to finish the work that was started a few months
ago on the subject.

> > A side question: how widely are ECDSA certificates supported? If I use
> > a single ECDSA certificate, how many people won't be able to see my
> > content?
> >
> >
> >
> They're pretty well supported by modern clients. Exactly what that means is
> a bit fuzzy though: see
> https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_and_ECDHE_support for
> additional details.
> 
> If your clients are all "modern" browsers and mobile devices, you're
> probably good. If there are old clients, or other systems calling an API,
> there can be issues, especially if they are using Java <= 7.

I recently stumbled on a site (which I forget) which reported that about 75%
of their visitors support ECDSA. So in short, if we can divide the CPU usage
by 20 for 75% of the visitors, the remaining cost is 0.25 + 0.75/20 = 0.2875
of the original, i.e. roughly a 3.5x performance improvement to be expected.
That would be nice!

Regards,
Willy




HAProxy: Max. throughput using HTTPs client authentication

2015-11-30 Thread Hemanth Abbina
Hi,
We are validating HAProxy for our environment as our primary load balancer;
it receives HTTPS traffic and also needs to verify clients. We are testing it
on an 8-core, 32 GB CentOS server.
In HTTP mode, we were able to send up to 80 sessions/sec with a single HTTP
client.
With the same setup used with HTTPS along with client authentication, we get
only 1 session/sec. Is this performance expected, or can we do anything to
improve it? Below is the SSL configuration used.
bind *:443 ssl crt ./certs/server.pem ca-file ./certs/ca.crt verify required

--regards
Hemanth


Re: HAProxy: Max. throughput using HTTPs client authentication

2015-11-30 Thread Baptiste
On Mon, Nov 30, 2015 at 1:20 PM, Hemanth Abbina
 wrote:
> Hi,
>
> We are validating HAProxy for our environment as our primary load balancer;
> it receives HTTPS traffic and also needs to verify clients. We are testing
> it on an 8-core, 32 GB CentOS server.
>
> In HTTP mode, we were able to send up to 80 sessions/sec with a single
> HTTP client.
>
> The same setup, when used with HTTPS along with client authentication,
> gets only 1 session/sec. Is this performance expected, or can we do
> anything to improve it? Below is the SSL configuration used.
>
> bind *:443 ssl crt ./certs/server.pem ca-file ./certs/ca.crt verify required
>
>
>
> --regards
>
> Hemanth


Hi,

Sorry, but the numbers you're reporting don't make any sense!
Please provide full information about your haproxy box, anything which
may help us understand what happens, such as your configuration,
sysctls, dmesg output, logs, etc.

Baptiste



Re: Email checks in defaults section

2015-11-30 Thread Sylvain Faivre

On 11/01/2015 06:34 PM, Tommy Atkinson wrote:

I want to enable email alerts for all my backends so I added the
"email-alert" options to the defaults section and a mailers section at
the top level. The documentation indicates this is supported but it
doesn't seem to work. HAProxy connects to the mail server but doesn't
actually send anything. Copy/pasting the options to a backend works.

1.6.1 on Linux




Hi,

This seems to be a bug; I had the same problem.
Copying & pasting the "email-alert" settings into all the backend sections
did the trick.


When email-alert options are set only in the defaults section, just like
Tommy said, HAProxy connects to the mail server but doesn't send anything.

Here are the postfix logs showing the problem:
Nov 30 09:31:01 host1 postfix/smtpd[2502]: connect from localhost[127.0.0.1]
Nov 30 09:31:03 host1 postfix/smtpd[2502]: lost connection after CONNECT from localhost[127.0.0.1]
Nov 30 09:31:03 host1 postfix/smtpd[2502]: disconnect from localhost[127.0.0.1]


With the fixed config:
Nov 30 10:03:58 host1 postfix/smtpd[8335]: connect from localhost[127.0.0.1]
Nov 30 10:03:58 host1 postfix/smtpd[8335]: 79A28402E7: client=localhost[127.0.0.1]
Nov 30 10:03:58 host1 postfix/cleanup[8338]: 79A28402E7: message-id=<20151130090358.79A28402E7@host1>
Nov 30 10:03:58 host1 postfix/smtpd[8335]: disconnect from localhost[127.0.0.1]
Nov 30 10:03:58 host1 postfix/qmgr[1346]: 79A28402E7: from=, size=600, nrcpt=1 (queue active)
Nov 30 10:03:58 host1 postfix/smtp[8339]: 79A28402E7: to=, relay=x:25, delay=0.14, delays=0.06/0.03/0.04/0, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 962E420D9C)

Nov 30 10:03:58 host1 postfix/qmgr[1346]: 79A28402E7: removed


(using 1.6.2 Ubuntu package)

Best regards,
Sylvain