Combining edismax Parser with Block Join Parent Query Parser

2021-01-11 Thread Ravi Lodhi
Hello Guys,

Does Solr support combining the edismax parser with the Block Join Parent
Query Parser? If yes, could you provide the syntax or point me to some
reference document? And how does it affect performance?

I am working on a search screen in an eCommerce application's backend. The
requirement is to design an order search screen. We were thinking of using
a nested document approach: the order information document as the parent
and all of its items as child documents. We need to perform a keyword search
on both parent and child documents. By using the Block Join Parent Query
Parser we can search only on child documents and retrieve the parents. The
sample document structure is given below. We need an "OR" condition between
the edismax query and the Block Join Parent Query Parser.

Is the nested document a good approach for the order and order items
related data, or should we denormalize the data at either the parent or the
child level? Which schema design is best suited to this scenario?

e.g. if I search for "WEB" and it is found in any of the child documents,
then the parent doc should be returned; or if it is found on any parent
document, then that parent should be returned.

Sample Parent doc:
{
"orderId": "ORD1",
"orderTypeId": "SALES",
"orderStatusId": "ORDER_APPROVED",
"orderStatusDescription": "Approved",
"orderDate": "2021-01-09T07:00:00Z",
"orderGrandTotal": "200",
"salesChannel": "WEB",
"salesRepNames": "Demo Supplier",
"originFacilityId": "FACILITY_01"
}

Sample Child doc:

{
"orderItemId": "ORD1",
"itemStatusId": "ORDER_APPROVED",
"itemStatusDescription": "Approved",
"productId": "P01",
"productName": "Demo Product",
"productInternalName": "Demo Product 01",
"productBrandName": "Demo Brand"
}
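
One way to get the OR you describe is to combine two nested query clauses
with Solr's `_query_:` hook: a plain edismax clause over the parent fields,
OR'd with a Block Join Parent clause wrapping an edismax query over the
child fields. The sketch below assumes a parent-marker field named
`docType` with the value `order` (the sample documents above don't show
one, so it would need to be added), and passes the user's keywords once via
a `qq` parameter:

```text
q=_query_:"{!edismax qf='salesChannel salesRepNames orderStatusDescription' v=$qq}"
  OR _query_:"{!parent which='docType:order'}{!edismax qf='productName productInternalName productBrandName' v=$qq}"
&qq=WEB
```

With qq=WEB, the first clause matches the parent directly on salesChannel,
and the second returns parents whose children match. The block join itself
is generally cheap at query time, but each clause is still a full edismax
query, so the usual edismax costs (number of qf fields, phrase options)
apply to both sides.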

Any Help on this will be much appreciated!

Thanks!
Ravi Lodhi


Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Kaushal Shriyan
On Tue, Jan 12, 2021 at 12:10 AM Dmitri Maziuk 
wrote:

> On 1/11/2021 12:30 PM, Walter Underwood wrote:
> > Use a load balancer. We’re in AWS, so we use an AWS ALB.
> >
> > If you don’t have a failure-tolerant load balancer implementation, the
> site has bigger problems than search.
>
> That is the point, you have amazon doing that for you, some of us do it
> ourselves, and it wasn't clear (to me anyway) if OP was asking about that.
>
> Dima
>

Hi,

Thanks for all the suggestions. I am hosting my Solr search service in GCP.
I have a follow-up question regarding Solr nodes. Do I need to have a
single master and multiple slaves? I am using the GCP Internal Load Balancer (
https://cloud.google.com/load-balancing/docs/l7-internal).

Internal LB -> Master Node1 and Master Node2. Master Node1 will have Slave1
and Master Node2 will have Slave2, as per the below diagram as an example.
Please suggest further and correct me if the approach is incorrect. I am
not sure how to replicate indices when I use the Google Cloud Platform
internal LB.
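
On standalone (non-SolrCloud) Solr, replication between the nodes is
configured in solrconfig.xml rather than in the load balancer; the LB only
routes queries. A minimal sketch of the classic master/slave
ReplicationHandler setup, with the host and core names as placeholder
assumptions:

```xml
<!-- on the master core -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- on each slave core -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://solr-master.internal:8983/solr/mycore/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

Alternatively, SolrCloud with ZooKeeper handles replication and failover
itself, and the LB then only needs to spread queries across the nodes.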

[image: image.png]


Thanks in Advance.

Best Regards,

Kaushal


Re: Solr using all available CPU and becoming unresponsive

2021-01-11 Thread Michael Gibney
Hi Jeremy,
Can you share your analysis chain configs? (SOLR-13336 can manifest in a
similar way, and would affect 7.3.1 with a susceptible config, given the
right (wrong?) input ...)
Michael

On Mon, Jan 11, 2021 at 5:27 PM Jeremy Smith  wrote:

> Hello all,
>  We have been struggling with an issue where solr will intermittently
> use all available CPU and become unresponsive.  It will remain in this
> state until we restart.  Solr will remain stable for some time, usually a
> few hours to a few days, before this happens again.  We've tried adjusting
> the caches and adding memory to both the VM and JVM, but we haven't been
> able to solve the issue yet.
>
> Here is some info about our server:
> Solr:
>   Solr 7.3.1, running on Java 1.8
>   Running in cloud mode, but there's only one core
>
> Host:
>   CentOS7
>   8 CPU, 56GB RAM
>   The only other processes running on this VM are two zookeepers, one for
> this Solr instance, one for another Solr instance
>
> Solr Config:
>  - One Core
>  - 36 Million documents (Max Doc), 28 million (Num Docs)
>  - ~15GB
>  - 10-20 Requests/second
>  - The schema is fairly large (~100 fields) and we allow faceting and
> searching on many, but not all, of the fields
>  - Data are imported once per minute through the DataImportHandler, with a
> hard commit at the end.  We usually index ~100-500 documents per minute,
> with many of these being updates to existing documents.
>
> Cache settings:
>   <filterCache size="256"
>                initialSize="256"
>                autowarmCount="8"
>                showItems="64"/>
>
>   <queryResultCache size="256"
>                     initialSize="256"
>                     autowarmCount="0"/>
>
>   <documentCache size="1024"
>                  initialSize="1024"
>                  autowarmCount="0"/>
>
> For the filterCache, we have tried sizes as low as 128, which caused our
> CPU usage to go up and didn't solve our issue.  autowarmCount used to be
> much higher, but we have reduced it to try to address this issue.
>
>
> The behavior we see:
>
> Solr is normally using ~3-6GB of heap and we usually have ~20GB of free
> memory.  Occasionally, though, solr is not able to free up memory and the
> heap usage climbs.  Analyzing the GC logs shows a sharp incline of usage
> with the GC (the default CMS) working hard to free memory, but not
> accomplishing much.  Eventually, it fills up the heap, maxes out the CPUs,
> and never recovers.  We have tried to analyze the logs to see if there are
> particular queries causing issues or if there are network issues to
> zookeeper, but we haven't been able to find any patterns.  After the issues
> start, we often see session timeouts to zookeeper, but it doesn't appear​
> that they are the cause.
>
>
>
> Does anyone have any recommendations on things to try or metrics to look
> into or configuration issues I may be overlooking?
>
> Thanks,
> Jeremy
>
>


RE: Query over migrating a solr database from 7.7.1 to 8.7.0

2021-01-11 Thread Dyer, Jim
When we upgraded from 7.x to 8.x, I ran into an issue similar to yours:  when 
updating an existing document in the index, the document would be duplicated 
instead of replaced as expected.  The solution was to add a "_root_" field to 
schema.xml like this:

<field name="_root_" type="string" indexed="true" stored="false"/>

It appeared that when a feature was added for nested documents, this field 
somehow became mandatory in order for updates to work properly, at least in 
some cases.

From: Flowerday, Matthew J 
Sent: Saturday, January 9, 2021 4:44 AM
To: solr-user@lucene.apache.org
Subject: RE: Query over migrating a solr database from 7.7.1 to 8.7.0

Hi There

As a test I stopped Solr and ran the IndexUpgrader tool on the database to see 
if this might fix the issue. It completed OK but unfortunately the issue still 
occurs - a new version of the record on solr is created rather than updating 
the original record.

It looks to me as if the record created under 7.7.1 is somehow not being 
'marked as deleted' in the way that records created under 8.7.0 are. Is there a 
way for these records to be marked as deleted when they are updated?

Many Thanks

Matthew


Matthew Flowerday | Consultant | ULEAF
Unisys | 01908 774830| 
matthew.flower...@unisys.com
Address Enigma | Wavendon Business Park | Wavendon | Milton Keynes | MK17 8LX


THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY 
MATERIAL and is for use only by the intended recipient. If you received this in 
error, please contact the sender and delete the e-mail and its attachments from 
all devices.

From: Flowerday, Matthew J 
mailto:matthew.flower...@gb.unisys.com>>
Sent: 07 January 2021 12:25
To: solr-user@lucene.apache.org
Subject: Query over migrating a solr database from 7.7.1 to 8.7.0

Hi There

I have recently upgraded a Solr database from 7.7.1 to 8.7.0 without wiping the 
database and re-indexing (as this would take too long to run on site).

On my local windows machine I have a single solr server 7.7.1 installation

I upgraded in the following manner


  *   Installed windows solr 8.7.0 on my machine in a different folder
  *   Copied the core related folder (holding conf, data, lib, core.properties) 
from 7.7.1 to the new 8.7.0 folder
  *   Brought up Solr
  *   Checked that queries work through the Solr Admin Tool and our application

This all worked fine until I tried to update a record which had been created 
under 7.7.1. Instead of marking the old record as deleted it effectively 
created a new copy of the record with the change in and left the old image as 
still visible. When I updated the record again it then correctly updated the 
new 8.7.0 version without leaving the old image behind. If I created a new 
record and then updated it the solr record would be updated correctly. The 
issue only seemed to affect the old 7.7.1 created records.

An example of the duplication as follows (the first record is 7.7.1 created 
version and the second record is the 8.7.0 version after carrying out an 
update):

{
  "responseHeader":{
"status":0,
"QTime":4,
"params":{
  "q":"id:9901020319M01-N26",
  "_":"1610016003669"}},
  "response":{"numFound":2,"start":0,"numFoundExact":true,"docs":[
  {
"id":"9901020319M01-N26",
"groupId":"9901020319M01",
"urn":"N26",
"specification":"nominal",
"owningGroupId":"9901020319M01",
"description":"N26, Yates, Mike, Alan, Richard, MALE",
"group_t":"9901020319M01",
"nominalUrn_t":"N26",
"dateTimeCreated_dtr":"2020-12-30T12:00:53Z",
"dateTimeCreated_dt":"2020-12-30T12:00:53Z",
"title_t":"Captain",
"surname_t":"Yates",
"qualifier_t":"Voyager",
"forename1_t":"Mike",
"forename2_t":"Alan",
"forename3_t":"Richard",
"sex_t":"MALE",
"orderedType_t":"Nominal",
"_version_":1687507566832123904},
  {
"id":"9901020319M01-N26",
"groupId":"9901020319M01",
"urn":"N26",
"specification":"nominal",
"owningGroupId":"9901020319M01",
"description":"N26, Yates, Mike, Alan, Richard, MALE",
"group_t":"9901020319M01",
"nominalUrn_t":"N26",
"dateTimeCreated_dtr":"2020-12-30T12:00:53Z",
"dateTimeCreated_dt":"2020-12-30T12:00:53Z",
"title_t":"Captain",
"surname_t":"Yates",
"qualifier_t":"Voyager enterprise defiant yorktown xx yy",
"forename1_t":"Mike",
"forename2_t":"Alan",
"forename3_t":"Richard",
"sex_t":"MALE",
"orderedType_t":"Nominal",
"_version_":1688224966566215680}]
  }}


Solr using all available CPU and becoming unresponsive

2021-01-11 Thread Jeremy Smith
Hello all,
 We have been struggling with an issue where solr will intermittently use 
all available CPU and become unresponsive.  It will remain in this state until 
we restart.  Solr will remain stable for some time, usually a few hours to a 
few days, before this happens again.  We've tried adjusting the caches and 
adding memory to both the VM and JVM, but we haven't been able to solve the 
issue yet.

Here is some info about our server:
Solr:
  Solr 7.3.1, running on Java 1.8
  Running in cloud mode, but there's only one core

Host:
  CentOS7
  8 CPU, 56GB RAM
  The only other processes running on this VM are two zookeepers, one for this 
Solr instance, one for another Solr instance

Solr Config:
 - One Core
 - 36 Million documents (Max Doc), 28 million (Num Docs)
 - ~15GB
 - 10-20 Requests/second
 - The schema is fairly large (~100 fields) and we allow faceting and searching 
on many, but not all, of the fields
 - Data are imported once per minute through the DataImportHandler, with a hard 
commit at the end.  We usually index ~100-500 documents per minute, with many 
of these being updates to existing documents.

Cache settings:
  <filterCache size="256"
               initialSize="256"
               autowarmCount="8"
               showItems="64"/>

  <queryResultCache size="256"
                    initialSize="256"
                    autowarmCount="0"/>

  <documentCache size="1024"
                 initialSize="1024"
                 autowarmCount="0"/>

For the filterCache, we have tried sizes as low as 128, which caused our CPU 
usage to go up and didn't solve our issue.  autowarmCount used to be much 
higher, but we have reduced it to try to address this issue.


The behavior we see:

Solr is normally using ~3-6GB of heap and we usually have ~20GB of free memory. 
 Occasionally, though, solr is not able to free up memory and the heap usage 
climbs.  Analyzing the GC logs shows a sharp incline of usage with the GC (the 
default CMS) working hard to free memory, but not accomplishing much.  
Eventually, it fills up the heap, maxes out the CPUs, and never recovers.  We 
have tried to analyze the logs to see if there are particular queries causing 
issues or if there are network issues to zookeeper, but we haven't been able to 
find any patterns.  After the issues start, we often see session timeouts to 
zookeeper, but it doesn't appear​ that they are the cause.



Does anyone have any recommendations on things to try or metrics to look into 
or configuration issues I may be overlooking?

Thanks,
Jeremy



Re: Highlighting large text fields

2021-01-11 Thread David Smiley
Hello!

I worked on the UnifiedHighlighter a lot and want to help you!

On Mon, Jan 11, 2021 at 9:58 AM Shaun Campbell 
wrote:

> I've been using highlighting for a while, using the original highlighter,
> and just come across a problem with fields that contain a large amount of
> text, approx 250k characters. I only have about 2,000 records but each one
> contains a journal publication to search through.
>
> What I noticed is that some records didn't return a highlight even though
> they matched on the content. I noticed the hl.maxAnalyzedChars parameter
> and increased that, but  it allowed some records to be highlighted, but not
> all, and then it caused memory problems on the server.  Performance is also
> very poor.
>

I've been thinking hl.maxAnalyzedChars should maybe default to no limit --
it's a performance threshold but perhaps better to opt-in to such a limit
than scratch your head for a long time wondering why a search result isn't
showing highlights.


> To try to fix this I've tried  to configure the unified highlighter in my
> solrconfig.xml instead.   It seems to be working but again I'm missing some
> highlighted records.
>

There is no configuration of that highlighter in solrconfig.xml; it's
entirely parameter driven (runtime).


> The other thing is I've tried to adjust my unified highlighting settings in
> solrconfig.xml and they don't  seem to be having any effect even after
> restarting Solr.  I was just wondering whether there is any highlighting
> information stored at index time. It's taking over 4 hours to index my
> records so it's not easy to keep reindexing my content.
>
> Any ideas on how to handle highlighting of large content  would be
> appreciated.
>
> Shaun
>

Please read the documentation here thoroughly:
https://lucene.apache.org/solr/guide/8_6/highlighting.html#the-unified-highlighter
(or earlier version as applicable)
Since you have large bodies of text to highlight, you would strongly
benefit from putting offsets into the search index (and re-index) --
storeOffsetsWithPositions.  That's an option on the field/fieldType in your
schema; it may not be obvious reading the docs.  You have to opt-in to
that; Solr doesn't normally store any info in the index for highlighting.
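
Concretely, that's a per-field attribute in schema.xml; a sketch with an
assumed field name and type:

```xml
<field name="publication_text" type="text_general" indexed="true" stored="true"
       storeOffsetsWithPositions="true"/>
```

After re-indexing, the unified highlighter reads offsets from the index
instead of re-analyzing ~250k characters per document at query time, which
is where most of the cost and the hl.maxAnalyzedChars truncation come from.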

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Dmitri Maziuk

On 1/11/2021 12:30 PM, Walter Underwood wrote:

Use a load balancer. We’re in AWS, so we use an AWS ALB.

If you don’t have a failure-tolerant load balancer implementation, the site has 
bigger problems than search.


That is the point, you have amazon doing that for you, some of us do it 
ourselves, and it wasn't clear (to me anyway) if OP was asking about that.


Dima


Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Walter Underwood
Use a load balancer. We’re in AWS, so we use an AWS ALB.

If you don’t have a failure-tolerant load balancer implementation, the site has 
bigger problems than search.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jan 11, 2021, at 10:15 AM, Dmitri Maziuk  wrote:
> 
> On 1/11/2021 11:25 AM, Walter Underwood wrote:
>> There are all sorts of problems with the primary/secondary approach. How do 
>> you know
>> the secondary is working? How do you deal with cold caches on the secondary 
>> when it
>> suddenly gets lots of load?
>> Instead, size the cluster with the number of hosts you need, then add one. 
>> Send traffic
>> to all of them. If any of them goes down, you have the capacity to handle 
>> the traffic.
>> This is called “N+1 provisioning”.
> 
> Where do you send your solr queries? If you have an http server at an ip 
> address that answers them, that's a single point of failure unless you put it 
> on a heartbeat'ed cluster ip. (I tend to prefer ucarp to pacemaker for that as 
> the latter is bloated and too cumbersome for simple active/passive setups, 
> but that's OT.)
> 
> Dima



Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Dmitri Maziuk

On 1/11/2021 11:25 AM, Walter Underwood wrote:

There are all sorts of problems with the primary/secondary approach. How do you 
know
the secondary is working? How do you deal with cold caches on the secondary 
when it
suddenly gets lots of load?

Instead, size the cluster with the number of hosts you need, then add one. Send 
traffic
to all of them. If any of them goes down, you have the capacity to handle the 
traffic.
This is called “N+1 provisioning”.


Where do you send your solr queries? If you have an http server at an ip 
address that answers them, that's a single point of failure unless you 
put it on a heartbeat'ed cluster ip. (I tend to prefer ucarp to pacemaker 
for that as the latter is bloated and too cumbersome for simple 
active/passive setups, but that's OT.)


Dima



Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Walter Underwood
There are all sorts of problems with the primary/secondary approach. How do you 
know
the secondary is working? How do you deal with cold caches on the secondary 
when it
suddenly gets lots of load?

Instead, size the cluster with the number of hosts you need, then add one. Send 
traffic
to all of them. If any of them goes down, you have the capacity to handle the 
traffic.
This is called “N+1 provisioning”.

This was our rule at Netflix a dozen years ago, running Solr 1.3. I do it the 
same way
today with large sharded clusters, one extra per shard. 

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jan 11, 2021, at 2:41 AM, DAVID MARTIN NIETO  wrote:
> 
> I believe Solr doesn't have this configuration; you need a load balancer 
> configured in that mode.
> 
> Kind regards.
> 
> 
> 
> From: Kaushal Shriyan 
> Sent: Monday, 11 January 2021 11:32
> To: solr-user@lucene.apache.org 
> Subject: Apache Solr in High Availability Primary and Secondary node.
> 
> Hi,
> 
> We are running Apache Solr 8.7.0 search service on CentOS Linux release
> 7.9.2009 (Core).
> 
> Is there a way to set up the Solr search service in High Availability Mode
> in the Primary and Secondary node? For example, if the primary node is down
> secondary node will take care of the service.
> 
> Best Regards,
> 
> Kaushal



Highlighting large text fields

2021-01-11 Thread Shaun Campbell
I've been using highlighting for a while, using the original highlighter,
and just come across a problem with fields that contain a large amount of
text, approx 250k characters. I only have about 2,000 records but each one
contains a journal publication to search through.

What I noticed is that some records didn't return a highlight even though
they matched on the content. I noticed the hl.maxAnalyzedChars parameter
and increased that, but  it allowed some records to be highlighted, but not
all, and then it caused memory problems on the server.  Performance is also
very poor.

To try to fix this I've tried  to configure the unified highlighter in my
solrconfig.xml instead.   It seems to be working but again I'm missing some
highlighted records.

The other thing is I've tried to adjust my unified highlighting settings in
solrconfig.xml and they don't  seem to be having any effect even after
restarting Solr.  I was just wondering whether there is any highlighting
information stored at index time. It's taking over 4 hours to index my
records so it's not easy to keep reindexing my content.

Any ideas on how to handle highlighting of large content  would be
appreciated.

Shaun


Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Shawn Heisey

On 1/11/2021 4:02 AM, Kaushal Shriyan wrote:

Thanks, David, for the quick response. Is there a use case for using HAProxy,
the Nginx web server, or any other application to load-balance both Solr
primary and secondary nodes?


I had a setup with haproxy and two copies of a Solr index.

Four of the nodes with Solr on them were running a pacemaker setup for 
high availability on the haproxy load balancer.  If any single system 
were to die, everything kept on working.


My homegrown indexing system kept both copies of the index up to date 
independently -- no replication.   I had to abandon replication because 
version 3.x and later cannot replicate from 1.x.  I kept that paradigm 
even after I was running a version with compatible replication because it 
was very flexible.


I really like haproxy, but going into further detail would be off topic 
for this list.
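
The shape of such a setup is small; a minimal haproxy sketch for two Solr
nodes (the addresses and the health-check path are assumptions):

```text
frontend solr_front
    bind *:8983
    default_backend solr_nodes

backend solr_nodes
    balance roundrobin
    option httpchk GET /solr/admin/info/system
    server solr1 10.0.0.11:8983 check
    server solr2 10.0.0.12:8983 check
```

A node that fails the health check is taken out of rotation automatically,
which gives the primary/secondary behavior the original question asked
about without needing a designated standby.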


Thanks,
Shawn


RE: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread DAVID MARTIN NIETO
Hi again,

I don't know about those products, but with Apache something like that can work:

https://stackoverflow.com/questions/6381749/apache-httpd-mod-proxy-balancer-with-active-passive-setup/11083458
https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html

Kind regards.



David Martín Nieto
Functional Analyst
Calle Cabeza Mesada 5
28031, Madrid
T: +34 667 414 432
T: +34 91 779 56 98| Ext. 3198
E-mail: dmart...@viewnext.com | Web: www.viewnext.com





From: Kaushal Shriyan 
Sent: Monday, 11 January 2021 12:02
To: solr-user@lucene.apache.org 
Subject: Re: Apache Solr in High Availability Primary and Secondary node.

On Mon, Jan 11, 2021 at 4:11 PM DAVID MARTIN NIETO 
wrote:

> I believe Solr doesn't have this configuration; you need a load balancer
> configured in that mode.
>
> Kind regards.
>
>
Thanks, David, for the quick response. Is there a use case for using HAProxy,
the Nginx web server, or any other application to load-balance both Solr
primary and secondary nodes?

Best Regards,

Kaushal


Re: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Kaushal Shriyan
On Mon, Jan 11, 2021 at 4:11 PM DAVID MARTIN NIETO 
wrote:

> I believe Solr doesn't have this configuration; you need a load balancer
> configured in that mode.
>
> Kind regards.
>
>
Thanks, David, for the quick response. Is there a use case for using HAProxy,
the Nginx web server, or any other application to load-balance both Solr
primary and secondary nodes?

Best Regards,

Kaushal


RE: Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread DAVID MARTIN NIETO
I believe Solr doesn't have this configuration; you need a load balancer 
configured in that mode.

Kind regards.



From: Kaushal Shriyan 
Sent: Monday, 11 January 2021 11:32
To: solr-user@lucene.apache.org 
Subject: Apache Solr in High Availability Primary and Secondary node.

Hi,

We are running Apache Solr 8.7.0 search service on CentOS Linux release
7.9.2009 (Core).

Is there a way to set up the Solr search service in High Availability Mode
in the Primary and Secondary node? For example, if the primary node is down
secondary node will take care of the service.

Best Regards,

Kaushal


Apache Solr in High Availability Primary and Secondary node.

2021-01-11 Thread Kaushal Shriyan
Hi,

We are running Apache Solr 8.7.0 search service on CentOS Linux release
7.9.2009 (Core).

Is there a way to set up the Solr search service in High Availability Mode
in the Primary and Secondary node? For example, if the primary node is down
secondary node will take care of the service.

Best Regards,

Kaushal


RE: [solr8.7] not relevant results for chinese query

2021-01-11 Thread Bruno Mannina
Hi,

With this article ( 
https://opensourceconnections.com/blog/2011/12/23/indexing-chinese-in-solr/ ), 
I am beginning to understand what happens.

Has anyone already tried the Paoding algorithm with a recent Solr?


Thanks,
Bruno

-----Original Message-----
From: Bruno Mannina [mailto:bmann...@free.fr]
Sent: Sunday, 10 January 2021 17:57
To: solr-user@lucene.apache.org
Subject: [solr8.7] not relevant results for chinese query

Hello,



I try to use chinese language with my index.



My definition is:


[fieldType definition stripped by the mailing-list archive]


But I get too many irrelevant results.



i.e. : With the query (phone case):

tizh:(手機殼)



my query is translated to:

tizh:(手 OR 機 OR 殼)



But:

tizh:(手 AND 機 AND 殼)

returns 0 result.



And:

tizh:"手機殼"

returns also 0 result.



Is it possible to improve my fieldType, or must I add something else?
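
For comparison, Solr ships a word-segmenting Chinese analyzer (smartcn, in
the analysis-extras contrib) that can emit word-level tokens such as 手機
instead of single-character unigrams, so queries don't degrade into an OR
of every character. A sketch, with the fieldType name and filter chain as
assumptions:

```xml
<fieldType name="text_zh" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- requires the analysis-extras contrib jars on the classpath -->
    <tokenizer class="solr.HMMChineseTokenizerFactory"/>
    <filter class="solr.CJKWidthFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

After reindexing with such an analyzer, both tizh:(手機殼) and the phrase
form should match on segmented words rather than on isolated characters.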



Thanks,

Bruno





--
L'absence de virus dans ce courrier electronique a ete verifiee par le logiciel 
antivirus Avast.
https://www.avast.com/antivirus

