Re: Return parent records which have more than a certain number of children

2016-08-29 Thread Mikhail Khludnev
Hello,
When searching for children you can assign a constant score (^=1) to every
child hit, then aggregate these hits with {!parent ... score=total}, and
finally select the parents exceeding the child-count limit with {!frange}.
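Spelled out as request parameters, that recipe might look like the following sketch (the doc_type field and its parent/child values are assumptions about the schema; "more than 100" becomes an inclusive lower bound of 101):

```python
# Hypothetical sketch: each child hit gets a constant score of 1 via ^=1,
# {!parent ... score=total} sums the child scores into the parent's score
# (i.e. the child count), and {!frange} keeps parents scoring >= 101.
params = {
    "q": "*:*",
    "fq": "{!frange l=101}query($childcount)",
    "childcount": "{!parent which='doc_type:parent' score=total v=$cq}",
    "cq": "doc_type:child^=1",
}
print(params["fq"])  # {!frange l=101}query($childcount)
```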


On Tue, Aug 30, 2016 at 5:38 AM, Zheng Lin Edwin Yeo 
wrote:

> Hi,
>
> Would like to check: in a parent/child nested document setup, is it possible
> to return only records with a certain number of children?
> For example, I want to return all the parent records which have more than
> 100 children.
> Is this possible?
>
> I'm using Solr 6.1.0
>
> Regards,
> Edwin
>



-- 
Sincerely yours
Mikhail Khludnev


Segments in Solr index get deleted after every restart

2016-08-29 Thread Nisha Menon
I have an issue with respect to increasing space on my Solr servers.

Whenever I see that the allocated space is almost full for a particular
server, I perform a service solr restart and this clears up some space and
things work normally from there for some time. Again this builds up and I
get a space utilization 100% warning.

While debugging this, I found that on every Solr restart, for a couple of
collections, some segments get deleted. That is, for a given segment, its
corresponding .nvm, .fdx, .tvx, .si, etc. files get deleted.

eg: This is one such segment that was deleted after the restart: _on0.nvm, _
on0.si, _on0_Lucene50_0.dvm,_on0.fnm,_on0.fdx,_on0.tvx,
_on0_Lucene50_0.tip, _on0.nvd, _on0_Lucene50_0.tim, _on0.fdt,
_on0_Lucene50_0.dvd, _on0_Lucene50_0.doc, _on0_Lucene50_0.pos, _on0.tvd

Can anyone please explain what might be happening behind the scenes to cause
such behavior? I am running Solr Cloud with Solr version 5.2.1 and Lucene
version 5.2.1.
Thanks in advance.


Re: Return parent records which have more than a certain number of children

2016-08-29 Thread Alexandre Rafalovitch
If you are indexing the parent/child documents as a block, then you know the
number of children when you are indexing the parent and can store that count
as a field. That would be most efficient.

The best place to calculate that information is probably a custom
Update Request Processor.
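A client-side sketch of the same idea (a custom UpdateRequestProcessor would compute this server-side instead; the field names childCount_i and doc_type are assumptions):

```python
# When building a parent/child block for indexing, store the number of
# children on the parent; fq=childCount_i:[101 TO *] can then filter
# parents at query time with no join work at all.
def with_child_count(parent, children):
    doc = dict(parent)
    doc["childCount_i"] = len(children)       # plain stored/indexed int field
    doc["_childDocuments_"] = list(children)  # Solr's JSON key for nested docs
    return doc

block = with_child_count(
    {"id": "p1", "doc_type": "parent"},
    [{"id": "c%d" % i, "doc_type": "child"} for i in range(3)],
)
print(block["childCount_i"])  # 3
```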

Regards,
   Alex.

Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/


On 30 August 2016 at 09:38, Zheng Lin Edwin Yeo  wrote:
> Hi,
>
> Would like to check: in a parent/child nested document setup, is it possible
> to return only records with a certain number of children?
> For example, I want to return all the parent records which have more than
> 100 children.
> Is this possible?
>
> I'm using Solr 6.1.0
>
> Regards,
> Edwin


Return parent records which have more than a certain number of children

2016-08-29 Thread Zheng Lin Edwin Yeo
Hi,

Would like to check: in a parent/child nested document setup, is it possible
to return only records with a certain number of children?
For example, I want to return all the parent records which have more than
100 children.
Is this possible?

I'm using Solr 6.1.0

Regards,
Edwin


Re: Default stop word list

2016-08-29 Thread Walter Underwood
Do not remove stop words. Want to search for “vitamin a”? That won’t work.

Stop word removal is a hack left over from when we were running search engines 
in 64 kbytes of memory.

Yes, common words are less important for search, but removing them is a brute 
force approach with severe side effects. Instead, we use a proportional 
approach with the tf.idf model. That puts a higher weight on rare words and a 
lower weight on common words.
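A toy illustration of that weighting (classic idf, not Solr's exact Similarity; the numbers are made up):

```python
import math

# With one million docs, a term appearing in 90% of them gets a tiny idf,
# while a term in only 500 docs gets a large one -- common words are
# down-weighted proportionally instead of being thrown away.
N = 1_000_000
df = {"a": 900_000, "vitamin": 500}          # document frequencies (assumed)
idf = {term: math.log(N / d) for term, d in df.items()}
print(idf["vitamin"] > 10 * idf["a"])        # True
```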

For some real-life examples of problems with stop words, you can read the list 
of movie titles that disappear with stemming and stop words. I discovered these 
when I was running search at Netflix.

• Being There (this is the first one I noticed)
• To Be and To Have (Être et Avoir)
• To Have and To Have Not
• Once and Again
• To Be or Not To Be (1942) (OK, it isn’t just a quote from Hamlet)
• To Be or Not To Be (1983)
• Now and Then, Here and There
• Be with Me
• I’ll Be There
• It Had to Be You
• You Should Not Be Here
• You Are Here

https://observer.wunderwood.org/2007/05/31/do-all-stopword-queries-matter/

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Aug 29, 2016, at 5:39 PM, Steven White  wrote:
> 
> Thanks Shawn.  This is the best answer I have seen, much appreciated.
> 
> A follow-up question: I want to remove stop words from the list, but if I
> do, then search quality will degrade (and index size will grow (less of
> an issue)).  For example, if I remove "a", then if someone searches for "For
> a Few Dollars More" (without quotes), chances are good that records containing
> "a" which are not relevant to the user's search will rank higher.  How can I
> address this?  Can I set up my schema so that records that get hits against a
> list of words, let's say from the stop word list, are ranked lower?
> 
> Steve
> 
> On Sat, Aug 27, 2016 at 2:53 PM, Shawn Heisey  wrote:
> 
>> On 8/27/2016 12:39 PM, Shawn Heisey wrote:
>>> I personally think that stopword removal is more of a problem than a
>>> solution.
>> 
>> There actually is one thing that a stopword filter can do that has little
>> to do with the purpose it was designed for.  You can make it impossible
>> to search for certain words.
>> 
>> Imagine that your original data contains the word "frisbee" but for some
>> reason you do not want anybody to be able to locate results using that
>> word.  You can create a stopword list containing just "frisbee" and any
>> other variations that you want to limit like "frisbees", then place it
>> as a filter on the index side of your analysis.  With this in place,
>> searching for those terms will retrieve zero results.
>> 
>> Thanks,
>> Shawn
>> 
>> 



Re: Default stop word list

2016-08-29 Thread Steven White
Thanks Shawn.  This is the best answer I have seen, much appreciated.

A follow-up question: I want to remove stop words from the list, but if I
do, then search quality will degrade (and index size will grow (less of
an issue)).  For example, if I remove "a", then if someone searches for "For
a Few Dollars More" (without quotes), chances are good that records containing
"a" which are not relevant to the user's search will rank higher.  How can I
address this?  Can I set up my schema so that records that get hits against a
list of words, let's say from the stop word list, are ranked lower?

Steve

On Sat, Aug 27, 2016 at 2:53 PM, Shawn Heisey  wrote:

> On 8/27/2016 12:39 PM, Shawn Heisey wrote:
> > I personally think that stopword removal is more of a problem than a
> > solution.
>
> There actually is one thing that a stopword filter can do that has little
> to do with the purpose it was designed for.  You can make it impossible
> to search for certain words.
>
> Imagine that your original data contains the word "frisbee" but for some
> reason you do not want anybody to be able to locate results using that
> word.  You can create a stopword list containing just "frisbee" and any
> other variations that you want to limit like "frisbees", then place it
> as a filter on the index side of your analysis.  With this in place,
> searching for those terms will retrieve zero results.
>
> Thanks,
> Shawn
>
>
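Shawn's index-side-only stop filter could be wired up along these lines (a sketch; the field type name and words file are assumptions):

```xml
<!-- Words listed in blockedwords.txt are dropped at index time only, so
     queries for them match nothing; query-side analysis is left untouched. -->
<fieldType name="text_blocked" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="blockedwords.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```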


Default Field Cache

2016-08-29 Thread Rallavagu

Solr 5.4.1




Wondering what is the default configuration for "fieldValueCache".


Re: Solr embedded jetty jstack

2016-08-29 Thread Rallavagu

Responding to my own query.

I got this fixed. Solr startup was managed by a systemd unit that was
configured with "PrivateTmp=true". I changed that to "PrivateTmp=false" so
that "/tmp/hsperfdata_/" is no longer removed after server startup, and
jstack then worked.
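For reference, the change described above might look like this systemd drop-in (the unit name solr.service is an assumption; run `systemctl daemon-reload` and restart Solr afterwards):

```ini
# /etc/systemd/system/solr.service.d/override.conf
[Service]
# Share the real /tmp so /tmp/hsperfdata_<user>/ survives for jstack/jps
PrivateTmp=false
```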


On 8/29/16 11:31 AM, Rallavagu wrote:

I have run into a strange issue where "jstack -l " does not work. I
have tried this as the user that Solr (5.4.1) is running as. I get the
following error.

$ jstack -l 24064
24064: Unable to open socket file: target process not responding or
HotSpot VM not loaded
The -F option can be used when the target process is not responding

I am running Solr 5.4.1, JDK 8 with latest updates.

I have also downloaded Jetty separately, installed and started the
server. However, jstack on directly downloaded jetty (not solr bundled)
works just fine. After some research, I have found that
/tmp/hsperfdata_/ file is not created by the bundled Solr
while a similar file is created by the standalone Jetty server. After some
more debugging, it appears that the solr startup process creates the
file (/tmp/hsperfdata_/) and then removes it. I have tried
with "-F" option but no use. I have also set "-XX:+UsePerfData"
explicitly to no avail. I have enabled JMX and connected via visualvm to
get thread dump as of now. But, for me jstack is more convenient to
trigger a series of thread dumps. Any ideas? Thanks.


Solr embedded jetty jstack

2016-08-29 Thread Rallavagu
I have run into a strange issue where "jstack -l " does not work. I
have tried this as the user that Solr (5.4.1) is running as. I get the
following error.


$ jstack -l 24064
24064: Unable to open socket file: target process not responding or 
HotSpot VM not loaded

The -F option can be used when the target process is not responding

I am running Solr 5.4.1, JDK 8 with latest updates.

I have also downloaded Jetty separately, installed and started the 
server. However, jstack on directly downloaded jetty (not solr bundled) 
works just fine. After some research, I have found that 
/tmp/hsperfdata_/ file is not created by the bundled Solr
while a similar file is created by the standalone Jetty server. After some
more debugging, it appears that the solr startup process creates the 
file (/tmp/hsperfdata_/) and then removes it. I have tried 
with "-F" option but no use. I have also set "-XX:+UsePerfData" 
explicitly to no avail. I have enabled JMX and connected via visualvm to 
get thread dump as of now. But, for me jstack is more convenient to 
trigger a series of thread dumps. Any ideas? Thanks.


Re: Migrate data from solr4.9 to solr6.1

2016-08-29 Thread Piyush Kunal
I will be using SolrCloud on Solr 6.1.0 and will have more shards than in
my previous setup.

On Mon, Aug 29, 2016 at 11:38 PM, Piyush Kunal 
wrote:

> Is there any way through which I can migrate my index which is currently
> on 4.9 to 6.1?
>
> Looking for something like backup and restore.
>


Migrate data from solr4.9 to solr6.1

2016-08-29 Thread Piyush Kunal
Is there any way through which I can migrate my index which is currently on
4.9 to 6.1?

Looking for something like backup and restore.


Re: changing the /solr path, additional steps needed for 6.1

2016-08-29 Thread John Bickerstaff
Bless you Chris!  And if you were local, I'd buy you a beer!

This was a big help - I was trying to figure this one out.

On Thu, Aug 25, 2016 at 1:27 PM, Chris Morley  wrote:

> This might help some people:
>
>  To change the URL to server:port/ourspecialpath from server:port/solr is a
> bit inconvenient.  You have to change several files where the solr part of
> the request path is hardcoded:
>
>  server/solr-webapp/webapp/WEB-INF/web.xml
>  server/solr/solr.xml
>  server/contexts/solr-jetty-context.xml
>
>  Now, with the release of the New UI defaulted to on in 6.1, you also have
> to change:
>  server/solr-webapp/webapp/js/angular/services.js
>  (in a bunch of places)
>
>  -Chris.
>
>
>
>
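A hedged sketch of scripting those edits (the file paths follow Chris's list for a standard Solr 6.x layout; the new path "ourspecialpath" and the backup suffix are assumptions, so review each .bak diff before restarting):

```shell
# Rewrite the hardcoded /solr context path in every file Chris listed,
# keeping a .bak backup of each file that is changed.
rewrite_context_path() {
  sed -i.bak "s|/solr|/$2|g" "$1"
}

for f in server/solr-webapp/webapp/WEB-INF/web.xml \
         server/solr/solr.xml \
         server/contexts/solr-jetty-context.xml \
         server/solr-webapp/webapp/js/angular/services.js; do
  if [ -f "$f" ]; then
    rewrite_context_path "$f" ourspecialpath
  fi
done
```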


Re: How to update from Solr Cloud 5.4.1 to 5.5.1

2016-08-29 Thread Tom Devel
Shawn,

Do you (or anybody else here) know of the upgrade steps from 6.1 to 6.2 in
this case? The release notes of 6.2 do not mention anything about
upgrading, but 6.2 has some good bugfixes.

If 6.2 made changes to the index format, is a drop-in replacement from 6.1
to 6.2 still possible?

Thanks,
Tom

On Sat, Aug 27, 2016 at 12:23 PM, Shawn Heisey  wrote:

> On 8/26/2016 10:22 AM, D'agostino Victor wrote:
> > Do you know in which version index format changes and if I should
> > update to a higher version ?
>
> In version 6.0, and again in the just-released 6.2, one aspect of the
> index format has been updated.  Version 6.1 didn't have any format
> changes from 6.0.  You won't see the new version reflected in any of the
> filenames in the index directory.
>
> Whether or not to upgrade depends on what features you need, and whether
> you need fixes included in the new version.  Not all of the fixed bugs
> in 6.x are applicable to 5.x -- some are fixes for problems introduced
> during 6.x development.
>
> > And about ZooKeeper ; the 3.4.8 is fine or should I update it too ?
>
> That's the newest stable version of zookeeper.  There are alpha releases
> of version 3.5.
>
> Solr includes zookeeper 3.4.6.  A 3.4.8 server will work, but no
> guarantees can be made about the 3.5 alpha versions.
>
> Thanks,
> Shawn
>
>


Unicode collation - Sorting text for multiple languages

2016-08-29 Thread Vasu Y
Hi,
I was looking at Unicode Collation @ Wiki (
http://wiki.apache.org/solr/UnicodeCollation#Sorting_text_for_multiple_languages
) and it seems to suggest that:
Use the Unicode "default" collator (to overcome/minimize increase in disk
and indexing costs) over defining collated fields for each language and
using copyField.

I didn't quite understand how using "default" collator would help
overcome/minimize increase in disk and indexing costs over defining
collated fields for each language.
I thought the only difference between the two is having to define
n-CollationField definitions (for each language) versus one
"CollationField" for the default/ROOT locale in schema.xml. We would still
have to use copyField to copy from the analyzed field to the collation field
for each language.
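For reference, the "default collator" option being compared might be declared along these lines (a sketch; field names are assumptions, and ICUCollationField ships in the analysis-extras contrib):

```xml
<!-- One ROOT-locale collated sort field fed from the analyzed field;
     locale="" selects the Unicode default (ROOT) collator -->
<fieldType name="collatedROOT" class="solr.ICUCollationField"
           locale="" strength="primary"/>
<field name="title_sort" type="collatedROOT" indexed="true" stored="false"/>
<copyField source="title" dest="title_sort"/>
```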

Would appreciate any insights into this.

Thanks,
Vasu


Atomic Update w/ Date Copy Field

2016-08-29 Thread Todd Long
We recently started using atomic updates in our application and have since
noticed that date fields copied to a text field have varying results between
full and partial updates. When the document is fully updated, the copied text
date appears as expected (i.e. yyyy-MM-dd'T'HH:mm:ss.SSSZ); however, when
the document is partially updated (while omitting the date field), the
original stored date value is copied in a different format (i.e. EEE MMM d
HH:mm:ss z yyyy). I've included an example below of what we are seeing with
the indexed value of our "createdDate_facet_t" field. Is there a way that we
can force the copy field to always use "yyyy-MM-dd'T'HH:mm:ss.SSSZ" as the
resulting text format without having to always include the field in the
update?

schema
------
<field name="createdDate_dt" type="tdate" indexed="true" stored="true"/>
<field name="createdDate_facet_t" type="text_general" indexed="true" stored="true"/>
<copyField source="createdDate_dt" dest="createdDate_facet_t"/>

/update (full)
-
{
  "id": "12345",
  "createdBy_t": "someone",
  "createdDate_dt": "2015-07-14T12:58:17.535Z"
}

createdDate_facet_t = "2015-07-14t12:58:17.535z"

/update (partial)

{
  "id": "12345",
  "createdBy_t": { "set": "another" }
}

createdDate_facet_t = "tue jul 14 12:58:17 utc 2015"



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Atomic-Update-w-Date-Copy-Field-tp4293779.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Use function in condition

2016-08-29 Thread Emir Arnautovic

Hi Nabil,

Can you try following:

fq={!frange l=1}and(query($sub1),or(query($sub2),query($sub3)))&sub1={!frange 
l=1000}sum(F1,F2)&sub2={!frange u=2000}sum(F3,F4)&sub3={!frange l=3000}sum(F5,F6)

Thanks,
Emir

On 29.08.2016 11:50, nabil Kouici wrote:

Hi Solr users,
I'm still not able to find a solution, even with function queries :(
My need is simple; I'd like to execute these combined filters:
(sum of F1 and F2 greater than 1000) AND ( (sum of F3 and F4 lower than 2000)
OR (sum of F5 and F6 greater than 3000) )
Could you please help me translate these conditions into Solr syntax.
Regards, Nabil.

   De : Emir Arnautovic 
  À : solr-user@lucene.apache.org
  Envoyé le : Jeudi 25 août 2016 16h51
  Objet : Re: Use function in condition

Hi Nabil,


You have a limited set of functions, but there are logical functions: or,
and, not and you have query function so can do more complex queries:

fq={!frange l=1}and(query($sub1),termfreq(field3, 300))sub1={!frange 
l=100}sum(field1,field2)

And will return 1 for doc matching both function terms.

It would be much simpler if Solr supported relational functions: gt, lt, eq.

Hope this gives you ideas how to proceed.

Emir

On 25.08.2016 12:06, nabil Kouici wrote:

Hi Emir, thank you for your reply. I've tested the function range query and
it solves 50% of my need. The problem is I'm not able to use it with other
conditions. For example:
fq={!frange l=100}sum(field1,field2)  and field3:200

or
fq=({!frange l=100}sum(field1,field2))  and (field3:200)

This gives me an exception: org.apache.solr.search.SyntaxError: Unexpected
text after function: AND Field3:200
I know that I can use multiple fq parameters, but the problem is I can have a
complex filter like (cond1 OR cond2 AND cond3).
Could you please help.
Regards, Nabil.

 De : Emir Arnautovic 
   À : solr-user@lucene.apache.org
   Envoyé le : Mercredi 17 août 2016 17h08
   Objet : Re: Use function in condition
 
Hi Nabil,


You can use frange queries, e.g. you can use fq={!frange
l=100}sum(field1,field2) to filter doc with sum greater than 100.

Regards,
Emir


On 17.08.2016 16:26, nabil Kouici wrote:

Hi,
Is it possible to use functions (function queries,
https://cwiki.apache.org/confluence/display/solr/Function+Queries) in the q or
fq parameters to build a complex search expression?
For example, take only documents where sum(field1,field2) > 100. Another
example: if(test,value1,value2):value3
Regards, Nabil.


--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/



How to swap two cores and then unload one of them

2016-08-29 Thread Fabrizio Fortino
I have a NON-Cloud Solr and I am trying to use the swap functionality to
push an updated core into production without downtime.

Here are the steps I am executing
1. Solr is up and running with a single core (name = 'livecore')
2. I create a new core with the latest version of my documents (name =
'newcore')
3. I swap the cores -> coreContainer.swap("newcore", "livecore")
4. I try to unload "newcore" (that points to the old one) and remove all
the related dirs -> coreContainer.unload("newcore", true, true, true)

The first three operations are OK. But when I try to execute the last one
the Solr log starts printing the following messages forever

61424 INFO (pool-1-thread-1) [ x:newcore] o.a.s.c.SolrCore Core newcore is
not yet closed, waiting 100 ms before checking again.

I have opened an issue on this problem (
https://issues.apache.org/jira/browse/SOLR-8757) but I have not received
any answer yet.

In the meantime I have found the following workaround: I try to manually
close all the core references before unloading it. Here is the code:

SolrCore core = coreContainer.create("newcore", coreProps);
coreContainer.swap("newcore", "livecore");
// the old livecore is now "newcore", so unload it and remove all the related dirs
SolrCore oldCore = coreContainer.getCore("newcore");
while (oldCore.getOpenCount() > 1) {
  oldCore.close();
}
coreContainer.unload("newcore", true, true, true);


This seemed to work, but there are some race conditions, and from time to
time I get a ConcurrentModificationException followed by abnormal CPU
consumption.

I filed a separate issue on this
https://issues.apache.org/jira/browse/SOLR-9208 but this is not considered
an issue by the Solr committers. The suggestion is to move and discuss it
here in the mailing list.

If this is not an issue, what are the steps to swap two cores and unload one
of them?

Thanks a lot,
Fabrizio


Re: Use function in condition

2016-08-29 Thread nabil Kouici
Hi Solr users,
I'm still not able to find a solution, even with function queries :(
My need is simple; I'd like to execute these combined filters:
(sum of F1 and F2 greater than 1000) AND ( (sum of F3 and F4 lower than 2000)
OR (sum of F5 and F6 greater than 3000) )
Could you please help me translate these conditions into Solr syntax.
Regards, Nabil.

  De : Emir Arnautovic 
 À : solr-user@lucene.apache.org 
 Envoyé le : Jeudi 25 août 2016 16h51
 Objet : Re: Use function in condition
   
Hi Nabil,

You have a limited set of functions, but there are logical functions: or,
and, not and you have query function so can do more complex queries:

fq={!frange l=1}and(query($sub1),termfreq(field3, 300))sub1={!frange 
l=100}sum(field1,field2)

And will return 1 for doc matching both function terms.

It would be much simpler if Solr supported relational functions: gt, lt, eq.

Hope this gives you ideas how to proceed.

Emir

On 25.08.2016 12:06, nabil Kouici wrote:
> Hi Emir, thank you for your reply. I've tested the function range query and
> it solves 50% of my need. The problem is I'm not able to use it with other
> conditions. For example:
> fq={!frange l=100}sum(field1,field2)  and field3:200
>
> or
> fq=({!frange l=100}sum(field1,field2))  and (field3:200)
>
> This gives me an exception: org.apache.solr.search.SyntaxError: Unexpected
> text after function: AND Field3:200
> I know that I can use multiple fq parameters, but the problem is I can have
> a complex filter like (cond1 OR cond2 AND cond3).
> Could you please help.
> Regards, Nabil.
>
>        De : Emir Arnautovic 
>  À : solr-user@lucene.apache.org
>  Envoyé le : Mercredi 17 août 2016 17h08
>  Objet : Re: Use function in condition
>    
> Hi Nabil,
>
> You can use frange queries, e.g. you can use fq={!frange
> l=100}sum(field1,field2) to filter doc with sum greater than 100.
>
> Regards,
> Emir
>
>
> On 17.08.2016 16:26, nabil Kouici wrote:
>> Hi,
>> Is it possible to use functions (function queries,
>> https://cwiki.apache.org/confluence/display/solr/Function+Queries) in the q
>> or fq parameters to build a complex search expression?
>> For example, take only documents where sum(field1,field2) > 100. Another
>> example: if(test,value1,value2):value3
>> Regards, Nabil.

-- 
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/



   

Re: Most popular fields under a list of documents

2016-08-29 Thread Mikhail Khludnev
facet.field=ints ??
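Spelling Mikhail's hint out (a sketch; the field names ints and freetext come from the example in the quoted message below):

```python
from collections import Counter

# The request field faceting would use: facet counts are computed only over
# the documents matching q, which is exactly "top-N ints among the results".
params = {
    "q": "freetext:little",
    "rows": 0,
    "facet": "true",
    "facet.field": "ints",
    "facet.limit": 10,
}

# What that computes, replayed on the thread's example docs:
docs = [
    {"ints": [1, 5, 7], "freetext": "Marry had a little lamb"},
    {"ints": [4, 3, 5], "freetext": "Marry had a little wolf"},
    {"ints": [5, 1, 8], "freetext": "Marry had a big goat"},
]
hits = [d for d in docs if "little" in d["freetext"]]
counts = Counter(i for d in hits for i in d["ints"])
print(counts.most_common(1))  # [(5, 2)]
```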

On Mon, Aug 29, 2016 at 11:57 AM, Algirdas Jokubauskas  wrote:

> What kind of facets would work best here for this specific scenario?
>
> I already use it in several places.
>
>
> Med venlig hilsen / Best regards
>
> *Algirdas Jokubauskas*
> Software Developer
>
>
>
>
>
>
> *www.accumolo.com www.mcb.dk
> kont...@accumolo.dk *
>
> +45 9610 2824
>
> Lægårdvej 86B
>
> DK-7500 Holstebro
>
> On Thu, Aug 25, 2016 at 3:43 PM, Mikhail Khludnev  wrote:
>
> > Did you consider field facet?
> >
> > On Thu, Aug 25, 2016 at 3:35 PM, Algirdas Jokubauskas 
> wrote:
> >
> > > Hi,
> > >
> > > So I've been trying to figure out how to accomplish this one, but
> > couldn't
> > > find anything that would not kill performance.
> > >
> > > I have a document type with a bunch of info that I use for various
> tasks,
> > > but I want to add a new field which is a list of ints.
> > >
> > > Then I want to do a free text search of that document and get a list of
> > top
> > > 10 most popular ints among the results.
> > >
> > > So if say I had these documents:
> > >
> > > DocA(ints(1,5,7), freetext: "Marry had a little lamb")
> > > DocB(ints(4,3,5), freetext: "Marry had a little wolf")
> > > DocC(ints(5,1,8), freetext: "Marry had a big goat")
> > >
> > > and if I search for "little", and ask for the most popular int I would
> > get
> > > 5
> > >
> > > In a normal case I would ask for 10 most common and there would be a
> few
> > > hundred thousand docs and a few hundred ints in each doc.
> > >
> > > I'm stumped. Any tips? Thanks.
> > >
> > > - AJ
> > >
> >
> >
> >
> > --
> > Sincerely yours
> > Mikhail Khludnev
> >
>



-- 
Sincerely yours
Mikhail Khludnev


Re: Most popular fields under a list of documents

2016-08-29 Thread Algirdas Jokubauskas
What kind of facets would work best here for this specific scenario?

I already use it in several places.


Med venlig hilsen / Best regards

*Algirdas Jokubauskas*
Software Developer






*www.accumolo.com www.mcb.dk
kont...@accumolo.dk *

+45 9610 2824

Lægårdvej 86B

DK-7500 Holstebro

On Thu, Aug 25, 2016 at 3:43 PM, Mikhail Khludnev  wrote:

> Did you consider field facet?
>
> On Thu, Aug 25, 2016 at 3:35 PM, Algirdas Jokubauskas  wrote:
>
> > Hi,
> >
> > So I've been trying to figure out how to accomplish this one, but
> couldn't
> > find anything that would not kill performance.
> >
> > I have a document type with a bunch of info that I use for various tasks,
> > but I want to add a new field which is a list of ints.
> >
> > Then I want to do a free text search of that document and get a list of
> top
> > 10 most popular ints among the results.
> >
> > So if say I had these documents:
> >
> > DocA(ints(1,5,7), freetext: "Marry had a little lamb")
> > DocB(ints(4,3,5), freetext: "Marry had a little wolf")
> > DocC(ints(5,1,8), freetext: "Marry had a big goat")
> >
> > and if I search for "little", and ask for the most popular int I would
> get
> > 5
> >
> > In a normal case I would ask for 10 most common and there would be a few
> > hundred thousand docs and a few hundred ints in each doc.
> >
> > I'm stumped. Any tips? Thanks.
> >
> > - AJ
> >
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
>