Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Charlie Hull

On 23/10/2018 02:57, Daniel Carrasco wrote:

Hello,

I have a Solr cluster of 7 machines on AWS instances. The Solr version is
7.2.1 (b2b6438b37073bee1fca40374e85bf91aa457c0b), all nodes are running in
NRT mode, and I have one replica per node (7 replicas). One node is used for
imports, and the rest just serve data.

My problem is that for about two weeks I've been having trouble with an
MsSQL import on my Solr cluster: when the process becomes slow or takes too
long, the entire cluster goes down.


How exactly are you importing from MsSQL to Solr? Are you using the Data 
Import Handler (DIH) and if so, how?  What evidence do you have that 
this is slow or takes too long?


Charlie


I'm confused, because the main reason to have a cluster is HA, and yet every
time the import node "fails" (it's not really failing, just taking more time
to finish), the entire cluster fails and I have to stop the webpage until
the nodes are green again.

I don't know if I have to change something in the configuration to allow
the cluster to keep working even when the import freezes or the import node
dies, but it's very annoying to wake up at 3 AM to fix the cluster.

Is there any way to avoid this? Maybe keeping the import node as NRT and
converting the rest to TLOG?
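
(For context: since Solr 7 the replica type can be chosen when adding
replicas through the Collections API; the host, collection, and shard names
below are placeholders, not taken from this cluster:

http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&type=TLOG

The type parameter accepts NRT, TLOG, or PULL.)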

I'm a bit of a noob with Solr, so I don't know what I should send to help
find the problem. The cluster was created by simply setting up a ZooKeeper
cluster, connecting the Solr nodes to that ZK cluster, importing the
collections, and adding replicas manually to every collection.
I've also upgraded that cluster from Solr 6 to Solr 7.1 and later to Solr
7.2.1.

Thanks and greetings!




--
Charlie Hull
Flax - Open Source Enterprise Search

tel/fax: +44 (0)8700 118334
mobile:  +44 (0)7767 825828
web: www.flax.co.uk


Re: AW: AW: 6.6 -> 7.5 SolrJ, seeing many "Connection evictor"-Threads

2018-10-23 Thread Shalin Shekhar Mangar
You can expect as many connection evictor threads as the number of http
client instances. This is true for both Solr 6.6 and 7.x.

I was intrigued as to why you were not seeing the same threads in both
versions. It turns out that I made a mistake in the patch I committed in
SOLR-9290 where instead of using Solr's DefaultSolrThreadFactory which
names threads with a proper prefix, I used Java's DefaultThreadFactory
which names threads like pool-123-thread-1282. So if you take a thread dump
from Solr 6.6, you should be able to find threads named like these which
are sleeping at a similar place in the stack.

On Tue, Oct 23, 2018 at 9:14 AM Clemens Wyss DEV 
wrote:

> On 10/22/2018 6:15 AM, Shawn Heisey wrote:
> > autoSoftCommit is pretty aggressive. If your commits are taking 1-2
> > seconds or less
> well, some take minutes (re-index)!
>
> > autoCommit is quite long.  I'd probably go with 60 seconds
> Which means every minute the "pending"/"soft" commits are effectively saved?
>
> One additional question: having auto(soft)commits in place, do I need to
> explicitly commit UpdateRequests from SolrJ at all?
>
> > added in 5.5.3 and 6.2.0 by this issue
> hmmm, I have never seen these threads before, not even in 6.6
>
> > Shalin worked on that issue, maybe they can shed some light on it and
> > indicate whether there should be many threads running that code
> I'd appreciate
>
> Yet again, many thanks.
> - Clemens
>
>

-- 
Regards,
Shalin Shekhar Mangar.


Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Daniel Carrasco
Hi,
On Tue, Oct 23, 2018 at 10:18, Charlie Hull () wrote:

> On 23/10/2018 02:57, Daniel Carrasco wrote:
> > Hello,
> >
> > I have a Solr cluster of 7 machines on AWS instances. The Solr version
> > is 7.2.1 (b2b6438b37073bee1fca40374e85bf91aa457c0b), all nodes are
> > running in NRT mode, and I have one replica per node (7 replicas). One
> > node is used for imports, and the rest just serve data.
> >
> > My problem is that for about two weeks I've been having trouble with an
> > MsSQL import on my Solr cluster: when the process becomes slow or takes
> > too long, the entire cluster goes down.
>
> How exactly are you importing from MsSQL to Solr? Are you using the Data
> Import Handler (DIH) and if so, how?


Yeah, we're using the Data Import Handler with the JDBC connector:


<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://..." user="..." password="..."/>
  <document>
    <entity name="..." query="A_Long_Query">
      ... A lot of fields configuration ...
    </entity>
    ... some entities similar to above ...
  </document>
</dataConfig>

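(For reference, an import with a handler configured like this is normally
kicked off over HTTP; the handler path below is the usual default and the
host/collection are placeholders, so the exact request may differ in this
setup:

http://localhost:8983/solr/mycollection/dataimport?command=full-import&clean=true&commit=true

Progress can be polled with command=status, which is handy for spotting
imports that run longer than usual.)
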
> What evidence do you have that  this is slow or takes too long?
>

Well, the process normally takes less than 20 minutes and doesn't affect
the cluster at all (normally it's near 15 minutes). I have a monit check
that alerts when the process takes more than 25 minutes, and just a bit
after that alert, the entire collection goes into recovery mode and we're
unable to keep serving the requests made by the webpage. We have to stop all
requests until the collection is OK again. The rest of the time the cluster
works perfectly without downtime, but lately the problem is happening more
often (I had to recover the cluster twice in less than an hour last night,
and it only didn't fail again because we stopped the import cron).
That's the mild version of the problem, because sometimes the entire cluster
becomes unstable and affects other collections. Sometimes even the leader
node fails and we're unable to release that leadership (even by shutting
down the leader server or running the FORCELEADER API command), which makes
it hard to recover the cluster. If we're lucky, the cluster recovers by
itself even with a recovering leader (taking very long, of course), but
sometimes we have no luck and have to reboot all the machines to force a
full recovery.


>
> Charlie
> >
> > I'm confused, because the main reason to have a cluster is HA, and yet
> > every time the import node "fails" (it's not really failing, just taking
> > more time to finish), the entire cluster fails and I have to stop the
> > webpage until the nodes are green again.
> >
> > I don't know if I have to change something in the configuration to allow
> > the cluster to keep working even when the import freezes or the import
> > node dies, but it's very annoying to wake up at 3 AM to fix the cluster.
> >
> > Is there any way to avoid this? Maybe keeping the import node as NRT and
> > converting the rest to TLOG?
> >
> > I'm a bit of a noob with Solr, so I don't know what I should send to help
> > find the problem. The cluster was created by simply setting up a ZooKeeper
> > cluster, connecting the Solr nodes to it, importing the collections, and
> > adding replicas manually to every collection.
> > I've also upgraded that cluster from Solr 6 to Solr 7.1 and later to Solr
> > 7.2.1.
> >
> > Thanks and greetings!
> >
>
>
> --
> Charlie Hull
> Flax - Open Source Enterprise Search
>
> tel/fax: +44 (0)8700 118334
> mobile:  +44 (0)7767 825828
> web: www.flax.co.uk
>


Thanks, and greetings!!

-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_


Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Chris Ulicny
Dan,

Do you have any idea on the resource usage for the hosts when Solr starts
to become unresponsive? It could be that you need more resources or better
AWS instances for the hosts.

We had what sounds like a similar scenario when attempting to move one of
our solrcloud instances to a cloud computing platform. During periods of
heavy indexing, segment merging, and searching, the cluster would become
unresponsive because Solr was waiting on numerous I/O operations which were
being throttled. Solr can be very I/O intensive, especially when you can't cache
the entire index in memory.

Thanks,
Chris


On Tue, Oct 23, 2018 at 5:40 AM Daniel Carrasco 
wrote:

> Hi,
> On Tue, Oct 23, 2018 at 10:18, Charlie Hull () wrote:
>
> > On 23/10/2018 02:57, Daniel Carrasco wrote:
> > > Hello,
> > >
> > > I have a Solr cluster of 7 machines on AWS instances. The Solr version
> > > is 7.2.1 (b2b6438b37073bee1fca40374e85bf91aa457c0b), all nodes are
> > > running in NRT mode, and I have one replica per node (7 replicas). One
> > > node is used for imports, and the rest just serve data.
> > >
> > > My problem is that for about two weeks I've been having trouble with an
> > > MsSQL import on my Solr cluster: when the process becomes slow or takes
> > > too long, the entire cluster goes down.
> >
> > How exactly are you importing from MsSQL to Solr? Are you using the Data
> > Import Handler (DIH) and if so, how?
>
>
> Yeah, we're using the Data Import Handler with the JDBC connector:
>
> <dataConfig>
>   <dataSource type="JdbcDataSource"
>               driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
>               url="jdbc:sqlserver://..." user="..." password="..."/>
>   <document>
>     <entity name="..." query="A_Long_Query">
>       ... A lot of fields configuration ...
>     </entity>
>     ... some entities similar to above ...
>   </document>
> </dataConfig>
>
>
>
> > What evidence do you have that  this is slow or takes too long?
> >
>
> Well, the process normally takes less than 20 minutes and doesn't affect
> the cluster at all (normally it's near 15 minutes). I have a monit check
> that alerts when the process takes more than 25 minutes, and just a bit
> after that alert, the entire collection goes into recovery mode and we're
> unable to keep serving the requests made by the webpage. We have to stop
> all requests until the collection is OK again. The rest of the time the
> cluster works perfectly without downtime, but lately the problem is
> happening more often (I had to recover the cluster twice in less than an
> hour last night, and it only didn't fail again because we stopped the
> import cron).
> That's the mild version of the problem, because sometimes the entire
> cluster becomes unstable and affects other collections. Sometimes even the
> leader node fails and we're unable to release that leadership (even by
> shutting down the leader server or running the FORCELEADER API command),
> which makes it hard to recover the cluster. If we're lucky, the cluster
> recovers by itself even with a recovering leader (taking very long, of
> course), but sometimes we have no luck and have to reboot all the machines
> to force a full recovery.
>
>
> >
> > Charlie
> > >
> > > I'm confused, because the main reason to have a cluster is HA, and yet
> > > every time the import node "fails" (it's not really failing, just
> > > taking more time to finish), the entire cluster fails and I have to
> > > stop the webpage until the nodes are green again.
> > >
> > > I don't know if I have to change something in the configuration to
> > > allow the cluster to keep working even when the import freezes or the
> > > import node dies, but it's very annoying to wake up at 3 AM to fix the
> > > cluster.
> > >
> > > Is there any way to avoid this? Maybe keeping the import node as NRT
> > > and converting the rest to TLOG?
> > >
> > > I'm a bit of a noob with Solr, so I don't know what I should send to
> > > help find the problem. The cluster was created by simply setting up a
> > > ZooKeeper cluster, connecting the Solr nodes to it, importing the
> > > collections, and adding replicas manually to every collection.
> > > I've also upgraded that cluster from Solr 6 to Solr 7.1 and later to
> > > Solr 7.2.1.
> > >
> > > Thanks and greetings!
> > >
> >
> >
> > --
> > Charlie Hull
> > Flax - Open Source Enterprise Search
> >
> > tel/fax: +44 (0)8700 118334
> > mobile:  +44 (0)7767 825828
> > web: www.flax.co.uk
> >
>
>
> Thanks, and greetings!!
>
> --
> _
>
>   Daniel Carrasco Marín
>   Ingeniería para la Innovación i2TIC, S.L.
>   Tlf:  +34 911 12 32 84 Ext: 223
>   www.i2tic.com
> _
>


Re: Integrate nutch with solr

2018-10-23 Thread Elizabeth Haubert
Hi Dinesh,

This article is quite old (Nutch 1.x, Solr 4.x), but the high-level steps
are still pretty much the same: get your Java set up, kick off a Solr
instance, and then fire off your crawler.

If you are starting from scratch on both Solr and Nutch, I'd recommend
getting your Solr sandbox set up first.  The directions for setting up your
Solr collection are not specific to Nutch, and will be in the Solr
documentation.  The directions for setting up your crawler will be in the
Nutch documentation.

Good luck!
Elizabeth





On Thu, Oct 18, 2018 at 2:36 PM Dinesh Sundaram 
wrote:

> Hi Team,
> Can you please share the steps to integrate Nutch 2.3.1 with SolrCloud
> 7.1.0?
>
>
> Thanks,
> Dinesh Sundaram
>


Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Daniel Carrasco
Hello,

Thanks for your response.

We've already thought about that and doubled the instance sizes. Right now
every Solr instance has 60 GB of RAM (40 GB configured for Solr) and a
16-core CPU. The entire data set can be stored in RAM without filling it
(talking about raw data, of course, not processed data).

As for usage, I've checked RAM and CPU and they are not fully used.

Greetings!

On Tue, Oct 23, 2018 at 14:02, Chris Ulicny () wrote:

> Dan,
>
> Do you have any idea on the resource usage for the hosts when Solr starts
> to become unresponsive? It could be that you need more resources or better
> AWS instances for the hosts.
>
> We had what sounds like a similar scenario when attempting to move one of
> our solrcloud instances to a cloud computing platform. During periods of
> heavy indexing, segment merging, and searching, the cluster would become
> unresponsive because Solr was waiting on numerous I/O operations which were
> being throttled. Solr can be very I/O intensive, especially when you can't cache
> the entire index in memory.
>
> Thanks,
> Chris
>
>
> On Tue, Oct 23, 2018 at 5:40 AM Daniel Carrasco 
> wrote:
>
> > Hi,
> > On Tue, Oct 23, 2018 at 10:18, Charlie Hull () wrote:
> >
> > > On 23/10/2018 02:57, Daniel Carrasco wrote:
> > > > Hello,
> > > >
> > > > I have a Solr cluster of 7 machines on AWS instances. The Solr
> > > > version is 7.2.1 (b2b6438b37073bee1fca40374e85bf91aa457c0b), all
> > > > nodes are running in NRT mode, and I have one replica per node (7
> > > > replicas). One node is used for imports, and the rest just serve data.
> > > >
> > > > My problem is that for about two weeks I've been having trouble with
> > > > an MsSQL import on my Solr cluster: when the process becomes slow or
> > > > takes too long, the entire cluster goes down.
> > >
> > > How exactly are you importing from MsSQL to Solr? Are you using the
> Data
> > > Import Handler (DIH) and if so, how?
> >
> >
> > Yeah, we're using the Data Import Handler with the JDBC connector:
> >
> > <dataConfig>
> >   <dataSource type="JdbcDataSource"
> >               driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
> >               url="jdbc:sqlserver://..." user="..." password="..."/>
> >   <document>
> >     <entity name="..." query="A_Long_Query">
> >       ... A lot of fields configuration ...
> >     </entity>
> >     ... some entities similar to above ...
> >   </document>
> > </dataConfig>
> >
> >
> >
> > > What evidence do you have that  this is slow or takes too long?
> > >
> >
> > Well, the process normally takes less than 20 minutes and doesn't affect
> > the cluster at all (normally it's near 15 minutes). I have a monit check
> > that alerts when the process takes more than 25 minutes, and just a bit
> > after that alert, the entire collection goes into recovery mode and
> > we're unable to keep serving the requests made by the webpage. We have
> > to stop all requests until the collection is OK again. The rest of the
> > time the cluster works perfectly without downtime, but lately the
> > problem is happening more often (I had to recover the cluster twice in
> > less than an hour last night, and it only didn't fail again because we
> > stopped the import cron).
> > That's the mild version of the problem, because sometimes the entire
> > cluster becomes unstable and affects other collections. Sometimes even
> > the leader node fails and we're unable to release that leadership (even
> > by shutting down the leader server or running the FORCELEADER API
> > command), which makes it hard to recover the cluster. If we're lucky,
> > the cluster recovers by itself even with a recovering leader (taking
> > very long, of course), but sometimes we have no luck and have to reboot
> > all the machines to force a full recovery.
> >
> >
> > >
> > > Charlie
> > > >
> > > > I'm confused, because the main reason to have a cluster is HA, and
> > > > yet every time the import node "fails" (it's not really failing,
> > > > just taking more time to finish), the entire cluster fails and I
> > > > have to stop the webpage until the nodes are green again.
> > > >
> > > > I don't know if I have to change something in the configuration to
> > > > allow the cluster to keep working even when the import freezes or
> > > > the import node dies, but it's very annoying to wake up at 3 AM to
> > > > fix the cluster.
> > > >
> > > > Is there any way to avoid this? Maybe keeping the import node as NRT
> > > > and converting the rest to TLOG?
> > > >
> > > > I'm a bit of a noob with Solr, so I don't know what I should send to
> > > > help find the problem. The cluster was created by simply setting up
> > > > a ZooKeeper cluster, connecting the Solr nodes to it, importing the
> > > > collections, and adding replicas manually to every collection.
> > > > I've also upgraded that cluster from Solr 6 to Solr 7.1 and later to
> > > > Solr 7.2.1.
> > > >
> > > > Thanks and greetings!
> > > >
> > >
> > >
> > > --
> > > Charlie Hull
> > > Flax - Open Source Enterprise Search
> > >
> > > tel/fax: +44 

Join across shards?

2018-10-23 Thread e_briere
Hi all,

Sorry if the question was already covered.

We are using joins across documents with the limitation of having the documents 
to be joined sitting on the same shard. Is there a way around this limitation 
and even join across collections? Are there plans to support this out of the 
box?

Thanks!

Eric Briere.


Re: Inconsistent leader between ZK and Solr and a lot of downtime

2018-10-23 Thread Ben Knüttgen
Daniel Carrasco wrote
> Hello,
> 
> I'm investigating an 8-node Solr 7.2.1 cluster because we have a lot of
> problems: when a node fails to import from a DB (maybe it freezes), the
> entire cluster goes down; the leader won't change even when it is down
> (all nodes detect that it is down, but no leader election is triggered);
> and similar problems. Every few days we have to recover the cluster
> because it becomes unstable and goes down.
> 
> The latest problem I've got is three collections that have had nodes in
> "recovery" state for many hours, and the log shows an error saying that
> the "leader node is not the leader", so I'm trying to change the leader.

Make sure that the clocks on your servers are in sync. Otherwise inter-node
authentication tokens could time out, which could lead to the problems you
described. You should find hints to the cause of the communication problem
in your Solr logs.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


RE: Join across shards?

2018-10-23 Thread Vadim Ivanov
Hi, 
You CAN join across collections with runtime "join". 
The only limitation is that the FROM collection should not be sharded and the
joined data should reside on one node.
Solr cannot join across nodes (distributed search is not supported).
Though using streaming expressions it's possible to do various things...
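
For example, a runtime cross-collection join can look like this (the
collection and field names are invented for illustration):

q={!join from=parent_id to=id fromIndex=otherCollection}color:red

where otherCollection is a single-shard collection whose data lives on the
same node as the collection being queried.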
-- 
Vadim

-Original Message-
From: e_bri...@videotron.ca [mailto:e_bri...@videotron.ca] 
Sent: Tuesday, October 23, 2018 2:38 PM
To: solr-user@lucene.apache.org
Subject: Join across shards?

Hi all,

Sorry if the question was already covered.

We are using joins across documents with the limitation of having the
documents to be joined sitting on the same shard. Is there a way around this
limitation and even join across collections? Are there plans to support this
out of the box?

Thanks!

Eric Briere.



Re: Regarding multi keyword search

2018-10-23 Thread Walter Underwood
100% on mm is dangerous. If there is one misspelled or wrong word, there are 
zero matches.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Oct 23, 2018, at 8:25 AM, ANNAMANENI RAVEENDRA  
> wrote:
> 
> You should use mm parameter and it should be set to 100 if you use dismax
> or edismax
> 
> 
> On Tue, Oct 23, 2018 at 11:18 AM Gauri Dhawan 
> wrote:
> 
>> Hi!
>> I have been facing an issue for quite some time and haven't been able to
>> come to a solution as of yet. We are trying to implement search on our
>> platform and all our data is stored in Solr.
>> 
>> I have a field `description` which is the field where I have to search.
>> It is of the field type `text_edit_suggest` and it looks something like
>> this
>> 
>> 
>>>  
>>>
>>>
>>>
>>>>> pattern="([\.,;:-_])" replacement=" " replace="all"/>
>>>>> minGramSize="1"/>
>>>>> pattern="([^\w\d\*æøåÆØÅ ])" replacement="" replace="all"/>
>>>>> ignoreCase="true" expand="false"/>
>>>  
>>>  
>>>  
>>>  
>>>
>>>
>>>
>>>>> pattern="([\.,;:-_])" replacement=" " replace="all"/>
>>>>> pattern="([^\w\d\*æøåÆØÅ ])" replacement="" replace="all"/>
>>>>> pattern="^(.{30})(.*)?" replacement="$1" replace="all"/>
>>>>> ignoreCase="true" expand="false"/>
>>>  
>> 
>> 
>> 
>> When I search for multiple keywords, the results are unexpected.
>> For example :
>> I want to search for the words `first` and `post` and both these words
>> should be present in the description field of the document else it
>> shouldn't return the document.
>> I've tried some 50+ queries for this. Used `edismax` parser as well but in
>> vain.
>> 
>> Tried boosting as well, but most queries end up giving weight to only one
>> of the keywords and return documents that have that keyword but not the
>> other. Can you guys help? Thanks in advance!
>> 
>> 
>> Gauri Dhawan
>> Associate Software Engineer
>> SHEROES
>> 



Internal Solr communication question

2018-10-23 Thread Fernando Otero
Hey all
 I'm running some tests on SolrCloud (10 nodes, 3 shards, 3 replicas);
when I run queries I end up seeing 7x traffic (requests/minute) in
New Relic.

Could it be that the internal communication between nodes is done through
HTTP and New Relic counts those calls?

Thanks!


Regarding multi keyword search

2018-10-23 Thread Gauri Dhawan
Hi!
I have been facing an issue for quite some time and haven't been able to
come to a solution as of yet. We are trying to implement search on our
platform and all our data is stored in Solr.

I have a field `description` which is the field where I have to search.
It is of the field type `text_edit_suggest` and it looks something like this


>   
> 
> 
> 
>  pattern="([\.,;:-_])" replacement=" " replace="all"/>
>  minGramSize="1"/>
>  pattern="([^\w\d\*æøåÆØÅ ])" replacement="" replace="all"/>
>  ignoreCase="true" expand="false"/>
>   
>   
>   
>   
> 
> 
> 
>  pattern="([\.,;:-_])" replacement=" " replace="all"/>
>  pattern="([^\w\d\*æøåÆØÅ ])" replacement="" replace="all"/>
>  pattern="^(.{30})(.*)?" replacement="$1" replace="all"/>
>  ignoreCase="true" expand="false"/>
>   



When I search for multiple keywords, the results are unexpected.
For example :
I want to search for the words `first` and `post` and both these words
should be present in the description field of the document else it
shouldn't return the document.
I've tried some 50+ queries for this. Used `edismax` parser as well but in
vain.

Tried boosting as well, but most queries end up giving weight to only one of
the keywords and return documents that have that keyword but not the other.
Can you guys help? Thanks in advance!


Gauri Dhawan
Associate Software Engineer
SHEROES


Re: Regarding multi keyword search

2018-10-23 Thread ANNAMANENI RAVEENDRA
You should use the mm parameter, and it should be set to 100% if you use
dismax or edismax.
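
For example, with edismax (the field names here are only illustrative):

q=first post&defType=edismax&qf=description&mm=100%

With mm=100% every clause of the query must match, so a single misspelled or
wrong word yields zero matches.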


On Tue, Oct 23, 2018 at 11:18 AM Gauri Dhawan 
wrote:

> Hi!
> I have been facing an issue for quite some time and haven't been able to
> come to a solution as of yet. We are trying to implement search on our
> platform and all our data is stored in Solr.
>
> I have a field `description` which is the field where I have to search.
> It is of the field type `text_edit_suggest` and it looks something like
> this
>
> 
> >   
> > 
> > 
> > 
> >  > pattern="([\.,;:-_])" replacement=" " replace="all"/>
> >  > minGramSize="1"/>
> >  > pattern="([^\w\d\*æøåÆØÅ ])" replacement="" replace="all"/>
> >  > ignoreCase="true" expand="false"/>
> >   
> >   
> >   
> >   
> > 
> > 
> > 
> >  > pattern="([\.,;:-_])" replacement=" " replace="all"/>
> >  > pattern="([^\w\d\*æøåÆØÅ ])" replacement="" replace="all"/>
> >  > pattern="^(.{30})(.*)?" replacement="$1" replace="all"/>
> >  > ignoreCase="true" expand="false"/>
> >   
>
>
>
> When I search for multiple keywords, the results are unexpected.
> For example :
> I want to search for the words `first` and `post` and both these words
> should be present in the description field of the document else it
> shouldn't return the document.
> I've tried some 50+ queries for this. Used `edismax` parser as well but in
> vain.
>
> Tried boosting as well, but most queries end up giving weight to only one
> of the keywords and return documents that have that keyword but not the
> other. Can you guys help? Thanks in advance!
>
>
> Gauri Dhawan
> Associate Software Engineer
> SHEROES
>


Re: ZookeeperServer not running/Client Session timed out

2018-10-23 Thread Susheel Kumar
Hi Shawn,

Thanks for pointing out that it may be due to a network/VM issue. I looked
at the ZK logs in detail and I see the socket timeout below, after which ZK
shutdown is called.

Is that good enough to confirm it's a VM/network issue and not a ZK/Solr
issue? I am also including dmesg output from the timestamps when we had
issues.

...
...
2018-10-22 06:03:56,022 [myid:2] - INFO  [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2182:NIOServerCnxn@1044] - Closed socket connection for
client /192.3.101.219:55704 which had sessionid 0x5665c67cb0d
2018-10-22 06:03:56,022 [myid:2] - INFO  [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2182:Learner@108] - Revalidating client: 0x5665c67cb0d
2018-10-22 06:03:56,265 [myid:2] - WARN
[QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2182:Follower@89] - Exception when
following the leader
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
at java.io.DataInputStream.readInt(DataInputStream.java:387)
at
org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at
org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
at
org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
at
org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
at
org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
at
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:937)
2018-10-22 06:03:56,266 [myid:2] - INFO
[QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2182:Follower@166] - shutdown called
java.lang.Exception: shutdown Follower
at
org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:166)
at
org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:941)
2018-10-22 06:03:56,266 [myid:2] - INFO
[QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2182:NIOServerCnxn@1044] - Closed
socket connection for client /192.72.5.213:57834 which had sessionid
0x46591d67d0c0024
2018-10-22 06:03:56,266 [myid:2] - INFO
[QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2182:NIOServerCnxn@1044] - Closed
socket connection for client /192.3.95.181:55192 which had sessionid
0x3665c676caf0004
2018-10-22 06:03:56,266 [myid:2] - INFO
[QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2182:NIOServerCnxn@1044] - Closed
socket connection for client /192.3.224.15:38712 which had sessionid
0x2668d42319e0012
...
...

dmesg

srch0117
[Mon Oct 22 06:04:37 2018] mptscsih: ioc0: attempting task abort!
(sc=88081bdf90c0)
[Mon Oct 22 06:04:37 2018] sd 0:0:1:0: [sdb] tag#0 CDB: Write(10) 2a 00 00
50 43 df 00 00 10 00
[Mon Oct 22 06:04:41 2018] mptscsih: ioc0: task abort: SUCCESS (rv=2002)
(sc=88081bdf90c0)

srch0118
[Mon Oct 22 06:04:41 2018] mptscsih: ioc0: attempting task abort!
(sc=8807b7c7b200)
[Mon Oct 22 06:04:41 2018] sd 0:0:1:0: [sdb] tag#3 CDB: Write(10) 2a 00 00
33 da 80 00 00 08 00
[Mon Oct 22 06:04:49 2018] mptscsih: ioc0: task abort: SUCCESS (rv=2002)
(sc=8807b7c7b200)
[Mon Oct 22 06:04:49 2018] mptscsih: ioc0: attempting task abort!
(sc=88081c09a680)
[Mon Oct 22 06:04:49 2018] sd 0:0:1:0: [sdb] tag#4 CDB: Write(10) 2a 00 00
50 13 e8 00 00 0f 00
[Mon Oct 22 06:04:50 2018] mptscsih: ioc0: task abort: SUCCESS (rv=2002)
(sc=88081c09a680)

srch0119
[Mon Oct 22 06:04:30 2018] mptscsih: ioc0: attempting task abort!
(sc=880e63c0)
[Mon Oct 22 06:04:30 2018] sd 0:0:1:0: [sdb] tag#0 CDB: Write(10) 2a 00 00
38 06 b0 00 00 18 00
[Mon Oct 22 06:04:38 2018] mptscsih: ioc0: task abort: SUCCESS (rv=2002)
(sc=880e63c0)

srch0120
Nothing around 6

srch0121
[Mon Oct 22 06:00:01 2018] BTRFS info (device sda1): relocating block group
1273285836800 flags 1
[Mon Oct 22 06:00:02 2018] BTRFS info (device sda1): found 8 extents
[Mon Oct 22 06:00:05 2018] BTRFS info (device sda1): found 8 extents
[Mon Oct 22 06:00:05 2018] BTRFS info (device sda1): relocating block group
1274527350784 flags 1
[Mon Oct 22 06:00:05 2018] BTRFS info (device sda1): found 8 extents
[Mon Oct 22 06:00:07 2018] BTRFS info (device sda1): found 8 extents
[Mon Oct 22 06:00:07 2018] BTRFS info (device sda1): relocating block group
1275601092608 flags 1
[Mon Oct 22 06:00:08 2018] BTRFS info (device sda1): found 8 extents
[Mon Oct 22 06:00:10 2018] BTRFS info (device sda1): found 8 extents
[Mon Oct 22 06:00:10 2018] BTRFS info (device sda1): relocating block group
1274493796352 flags 34
[Mon Oct 22 06:00:10 2018] BTRFS info (device sda1): relocating block group
1277748576256 flags 34
[Mon Oct 22 06:00:10 2018] BTRFS info (device sda1): relocating block group
1277782130688 flags 34
[Mon Oct 22 06:00:10 

Re: Internal Solr communication question

2018-10-23 Thread Shawn Heisey

On 10/23/2018 9:31 AM, Fernando Otero wrote:

Hey all
  I'm running some tests on SolrCloud (10 nodes, 3 shards, 3 replicas);
when I run queries I end up seeing 7x traffic (requests/minute) in
New Relic.

Could it be that the internal communication between nodes is done through
HTTP and New Relic counts those calls?


The inter-node communication is indeed done over HTTP, using the same 
handlers that clients use, and if you have something watching Solr's 
statistics or watching Jetty's counters, one of the counters will go up 
when an inter-node request happens.


With 3 shards, one request coming in will generate as many as six 
additional requests -- one request to a replica for each shard, and then 
another request to each shard that has matches for the query, to 
retrieve the documents that will be in the response. The node that 
received the initial request will compile the results from all the 
shards and send them back in response to the original request.  
Nutshell:  One request from a client expands. With three shards, that 
will be four to seven requests total.  If you have 10 shards, it will be 
between 11 and 21 total requests.


Thanks,
Shawn



Re: AW: AW: AW: 6.6 -> 7.5 SolrJ, seeing many "Connection evictor"-Threads

2018-10-23 Thread Shawn Heisey

On 10/22/2018 9:44 PM, Clemens Wyss DEV wrote:

On 10/22/2018 6:15 AM, Shawn Heisey wrote:

autoSoftCommit is pretty aggressive. If your commits are taking 1-2 seconds or
less

well, some take minutes (re-index)!



Are you absolutely sure that you have commits taking that much time?  
I'm not talking about indexing, just the commit. Indexing a big batch of 
documents can take a while, but even on a huge index, commits shouldn't 
take a super long time, unless your cache warming is excessive.




autoCommit is quite long.  I'd probably go with 60 seconds

Which means every minute the "pending"/"soft" commits are effectively saved?

One additional question: having auto(soft)commits in place, do I need to
explicitly commit UpdateRequests from SolrJ at all?



With openSearcher set to false, the hard commits that autoCommit does do 
NOT make changes visible.  A hard commit flushes outstanding data to 
disk and starts a new transaction log.  If openSearcher is left at the 
default of "true" then it would also open a new searcher, making changes 
visible.


Hard commits are about durability, soft commits are about visibility.

If you have autoSoftCommit or use commitWithin, you do not need to send 
explicit commits.
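
For anyone following along, the solrconfig.xml shape for that arrangement
looks roughly like this (the intervals are example values only):

<updateHandler>
  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit: durability -->
    <openSearcher>false</openSearcher> <!-- don't open a new searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>120000</maxTime>          <!-- soft commit: visibility -->
  </autoSoftCommit>
</updateHandler>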


I see that Shalin has replied with info about his work on the class 
you're concerned about.


Thanks,
Shawn



Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Shawn Heisey

On 10/23/2018 7:15 AM, Daniel Carrasco wrote:

Hello,

Thanks for your response.

We've already thought about that and doubled the instance sizes. Right now
every Solr instance has 60 GB of RAM (40 GB configured for Solr) and a
16-core CPU. The entire data set can be stored in RAM without filling it
(talking about raw data, of course, not processed data).


Why are you making the heap so large?  I've set up servers that can 
handle hundreds of millions of Solr documents in a much smaller heap.  A 
40GB heap would be something you might do if you're handling billions of 
documents on one server.


When you say the entire data can be stored in RAM ... are you counting 
that 40GB you gave to Solr?  Because you can't count that -- that's for 
Solr, NOT the index data.


The heap size should never be dictated by the amount of memory in the 
server.  It should be made as large as it needs to be for the job, and 
no larger.


https://wiki.apache.org/solr/SolrPerformanceProblems#RAM


As for usage, I've checked RAM and CPU and they are not fully used.


What exactly are you looking at?  I've had people swear that they can't 
see a problem with their systems when Solr is REALLY struggling to keep 
up with what it has been asked to do.


Further down on the page I linked above is a section about asking for 
help.  If you can provide the screenshot it mentions there, that would 
be helpful.  Here's a direct link to that section:


https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue

Thanks,
Shawn



Re: Regarding multi keyword search

2018-10-23 Thread Shawn Heisey

On 10/23/2018 8:20 AM, Gauri Dhawan wrote:

I have been facing an issue for quite some time and haven't been able to
come to a solution as of yet. We are trying to implement search on our
platform and all our data is stored in Solr.

I have a field `description` which is the field where I have to search.
It is of the field type `text_edit_suggest` and it looks something like this


When I search for multiple keywords, the results are unexpected.
For example :
I want to search for the words `first` and `post` and both these words
should be present in the description field of the document else it
shouldn't return the document.


Your index analysis has two tokenizers.  You can only have one.  There 
is at least one typo in the fieldType definition provided.  After I fix 
that, Solr 7.5.0 won't load the core, with this as the error:


Plugin init failure for [schema.xml] fieldType "text_suggest_edge": 
Plugin init failure for [schema.xml] analyzer/tokenizer: The schema 
defines multiple tokenizers for: [tokenizer: null]


What version of Solr are you running?  Have you explicitly included the 
"sow" parameter on your query, or in the handler definition?


The KeywordTokenizerFactory that you're using probably doesn't do what 
you think it does.  It preserves the entire input as a single token -- 
doesn't split it into separate words.  The kind of searching you 
mentioned likely isn't possible with the analysis chain you've got.  It 
might take a bunch of back and forth question/answer cycles to get to 
something useful.
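
As a rough sketch of that direction (not a drop-in fix for the schema
above), a minimal analysis chain that actually splits the text into words
would look something like:

<fieldType name="text_words" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

Combined with edismax and mm=100% (or a query like description:(first AND
post)), both words would then have to be present in the field.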


In my strong opinion, that KeywordTokenizerFactory has a terrible name 
and needs a new one.  Anyone want to bikeshed the possibilities?


Thanks,
Shawn



Re: Setting up MiniSolrCloudCluster to use pre-built index

2018-10-23 Thread Ken Krugler
Hi Mark,

I’ll have a completely new, rebuilt index that’s (a) large, and (b) already 
sharded appropriately.

In that case, using the merge API isn’t great, in that it would take 
significant time and temporarily use double (or more) disk space.

E.g. I’ve got an index with 250M+ records, and about 200GB. There are other 
indexes, still big but not quite as large as this one.

So I’m still wondering if there’s any robust way to swap in a fresh set of 
shards, especially without relying on legacy cloud mode.

I think I can figure out where the data is being stored for an existing (empty) 
collection, shut that down, swap in the new files, and reload.

But I’m wondering if that’s really the best (or even sane) approach.

Thanks,

— Ken

> On May 19, 2018, at 6:24 PM, Mark Miller  wrote:
> 
> You create MiniSolrCloudCluster with a base directory and then each Jetty
> instance created gets a SolrHome in a subfolder called node{i}. So if
> legacyCloud=true you can just preconfigure a core and index under the right
> node{i} subfolder. legacyCloud=true should not even exist anymore though,
> so the long term way to do this would be to create a collection and then
> use the merge API or something to merge your index into the empty
> collection.
> 
> - Mark
> 
> On Sat, May 19, 2018 at 5:25 PM Ken Krugler 
> wrote:
> 
>> Hi all,
>> 
>> Wondering if anyone has experience (this is with Solr 6.6) in setting up
>> MiniSolrCloudCluster for unit testing, where we want to use an existing
>> index.
>> 
>> Note that this index wasn’t built with SolrCloud, as it’s generated by a
>> distributed (Hadoop) workflow.
>> 
>> So there’s no “restore from backup” option, or swapping collection
>> aliases, etc.
>> 
>> We can push our configset to Zookeeper and create the collection as per
>> other unit tests in Solr, but what’s the right way to set up data dirs for
>> the cores such that Solr is running with this existing index (or indexes,
>> for our sharded test case)?
>> 
>> Thanks!
>> 
>> — Ken
>> 
>> PS - yes, we’re aware of the routing issue with generating our own shards….
>> 
>> --
>> Ken Krugler
>> +1 530-210-6378 <(530)%20210-6378>
>> http://www.scaleunlimited.com
>> Custom big data solutions & training
>> Flink, Solr, Hadoop, Cascading & Cassandra
>> 
>> --
> - Mark
> about.me/markrmiller

--
Ken Krugler
+1 530-210-6378
http://www.scaleunlimited.com
Custom big data solutions & training
Flink, Solr, Hadoop, Cascading & Cassandra



Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Daniel Carrasco
Hello,

I've set that heap size because Solr receives a lot of queries every
second and I want to cache as much as possible. Also, I'm not sure about the
number of documents in the collection, but the webpage has a lot of
products.

Storing the index data in RAM was just a figure of speech. The data is stored
on SSD disks with XFS (faster than EXT4).

I'll take a look at the links tomorrow at work.

Thanks!!
Greetings!!


On Tue, Oct 23, 2018, 23:48, Shawn Heisey wrote:

> On 10/23/2018 7:15 AM, Daniel Carrasco wrote:
> > Hello,
> >
> > Thanks for your response.
> >
> > We've already thought about that and doubled the instance sizes. Right
> > now every Solr instance has 60 GB of RAM (40 GB configured for Solr) and
> > a 16-core CPU. The entire data set can be stored in RAM without filling
> > it (talking about raw data, of course, not processed data).
>
> Why are you making the heap so large?  I've set up servers that can
> handle hundreds of millions of Solr documents in a much smaller heap.  A
> 40GB heap would be something you might do if you're handling billions of
> documents on one server.
>
> When you say the entire data can be stored in RAM ... are you counting
> that 40GB you gave to Solr?  Because you can't count that -- that's for
> Solr, NOT the index data.
>
> The heap size should never be dictated by the amount of memory in the
> server.  It should be made as large as it needs to be for the job, and
> no larger.
>
> https://wiki.apache.org/solr/SolrPerformanceProblems#RAM
>
> > As for usage, I've checked RAM and CPU and they are not fully used.
>
> What exactly are you looking at?  I've had people swear that they can't
> see a problem with their systems when Solr is REALLY struggling to keep
> up with what it has been asked to do.
>
> Further down on the page I linked above is a section about asking for
> help.  If you can provide the screenshot it mentions there, that would
> be helpful.  Here's a direct link to that section:
>
>
> https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue
>
> Thanks,
> Shawn
>
>


Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Walter Underwood
We handle request rates at a few thousand requests/minute with an 8 GB heap. 
95th percentile response time is 200 ms. Median (cached) is 4 ms.

An oversized heap will hurt your query performance because everything stops for 
the huge GC.

RAM is still a thousand times faster than SSD, so you want a lot of RAM 
available for file system buffers managed by the OS.

I recommend trying an 8 GB heap with the latest version of Java 8 and the G1 
collector. 

We have this in our solr.in.sh:

SOLR_HEAP=8g
# Use G1 GC  -- wunder 2017-01-23
# Settings from https://wiki.apache.org/solr/ShawnHeisey
GC_TUNE=" \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=8m \
-XX:MaxGCPauseMillis=200 \
-XX:+UseLargePages \
-XX:+AggressiveOpts \
"

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Oct 23, 2018, at 9:51 PM, Daniel Carrasco  wrote:
> 
> Hello,
> 
> I've set that heap size because Solr receives a lot of queries every
> second and I want to cache as much as possible. Also, I'm not sure about the
> number of documents in the collection, but the webpage has a lot of
> products.
> 
> Storing the index data in RAM was just a figure of speech. The data is stored
> on SSD disks with XFS (faster than EXT4).
> 
> I'll take a look at the links tomorrow at work.
> 
> Thanks!!
> Greetings!!
> 
> 
> El mar., 23 oct. 2018 23:48, Shawn Heisey  escribió:
> 
>> On 10/23/2018 7:15 AM, Daniel Carrasco wrote:
>>> Hello,
>>> 
>>> Thanks for your response.
>>> 
>>> We've already thought about that and doubled the instance sizes. Right
>>> now every Solr instance has 60 GB of RAM (40 GB configured for Solr) and
>>> a 16-core CPU. The entire data set can be stored in RAM without filling
>>> it (talking about raw data, of course, not processed data).
>> 
>> Why are you making the heap so large?  I've set up servers that can
>> handle hundreds of millions of Solr documents in a much smaller heap.  A
>> 40GB heap would be something you might do if you're handling billions of
>> documents on one server.
>> 
>> When you say the entire data can be stored in RAM ... are you counting
>> that 40GB you gave to Solr?  Because you can't count that -- that's for
>> Solr, NOT the index data.
>> 
>> The heap size should never be dictated by the amount of memory in the
>> server.  It should be made as large as it needs to be for the job, and
>> no larger.
>> 
>> https://wiki.apache.org/solr/SolrPerformanceProblems#RAM
>> 
>>> As for usage, I've checked RAM and CPU and they are not fully used.
>> 
>> What exactly are you looking at?  I've had people swear that they can't
>> see a problem with their systems when Solr is REALLY struggling to keep
>> up with what it has been asked to do.
>> 
>> Further down on the page I linked above is a section about asking for
>> help.  If you can provide the screenshot it mentions there, that would
>> be helpful.  Here's a direct link to that section:
>> 
>> 
>> https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue
>> 
>> Thanks,
>> Shawn
>> 
>> 



Re: Join across shards?

2018-10-23 Thread Erick Erickson
In addition to Vadim's comment, Solr Streaming _can_
work across shards and even across collections.
Depending on your use-case this may work for you.
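
A sketch of what that can look like with an innerJoin streaming expression
(the collections, fields, and queries are invented for illustration; both
streams must be sorted on the join key):

innerJoin(
  search(people, q="*:*", fl="personId,name", sort="personId asc", qt="/export"),
  search(pets, q="type:cat", fl="personId,petName", sort="personId asc", qt="/export"),
  on="personId")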

Best,
Erick
On Tue, Oct 23, 2018 at 6:41 AM Vadim Ivanov
 wrote:
>
> Hi,
> You CAN join across collections with runtime "join".
> The only limitation is that the FROM collection should not be sharded and the
> data should reside on one node.
> Solr cannot join across nodes (distributed search is not supported).
> Though using streaming expressions it's possible to do various things...
> --
> Vadim
>
> -Original Message-
> From: e_bri...@videotron.ca [mailto:e_bri...@videotron.ca]
> Sent: Tuesday, October 23, 2018 2:38 PM
> To: solr-user@lucene.apache.org
> Subject: Join across shards?
>
> Hi all,
>
> Sorry if the question was already covered.
>
> We are using joins across documents with the limitation of having the
> documents to be joined sitting on the same shard. Is there a way around this
> limitation and even join across collections? Are there plans to support this
> out of the box?
>
> Thanks!
>
> Eric Briere.
>


Re: Slow import from MsSQL and down cluster during process

2018-10-23 Thread Daniel Carrasco
Thanks for everything, I'll try it later ;)

Greetings!!.

On Wed, Oct 24, 2018 at 7:13, Walter Underwood () wrote:

> We handle request rates at a few thousand requests/minute with an 8 GB
> heap. 95th percentile response time is 200 ms. Median (cached) is 4 ms.
>
> An oversized heap will hurt your query performance because everything
> stops for the huge GC.
>
> RAM is still a thousand times faster than SSD, so you want a lot of RAM
> available for file system buffers managed by the OS.
>
> I recommend trying an 8 GB heap with the latest version of Java 8 and the
> G1 collector.
>
> We have this in our solr.in.sh:
>
> SOLR_HEAP=8g
> # Use G1 GC  -- wunder 2017-01-23
> # Settings from https://wiki.apache.org/solr/ShawnHeisey
> GC_TUNE=" \
> -XX:+UseG1GC \
> -XX:+ParallelRefProcEnabled \
> -XX:G1HeapRegionSize=8m \
> -XX:MaxGCPauseMillis=200 \
> -XX:+UseLargePages \
> -XX:+AggressiveOpts \
> "
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> > On Oct 23, 2018, at 9:51 PM, Daniel Carrasco 
> wrote:
> >
> > Hello,
> >
> > I've set that heap size because Solr receives a lot of queries every
> > second and I want to cache as much as possible. Also, I'm not sure about
> > the number of documents in the collection, but the webpage has a lot of
> > products.
> >
> > Storing the index data in RAM was just a figure of speech. The data is
> > stored on SSD disks with XFS (faster than EXT4).
> >
> > I'll take a look at the links tomorrow at work.
> >
> > Thanks!!
> > Greetings!!
> >
> >
> > El mar., 23 oct. 2018 23:48, Shawn Heisey 
> escribió:
> >
> >> On 10/23/2018 7:15 AM, Daniel Carrasco wrote:
> >>> Hello,
> >>>
> >>> Thanks for your response.
> >>>
> >>> We've already thought about that and doubled the instance sizes. Right
> >>> now every Solr instance has 60 GB of RAM (40 GB configured for Solr)
> >>> and a 16-core CPU. The entire data set can be stored in RAM without
> >>> filling it (talking about raw data, of course, not processed data).
> >>
> >> Why are you making the heap so large?  I've set up servers that can
> >> handle hundreds of millions of Solr documents in a much smaller heap.  A
> >> 40GB heap would be something you might do if you're handling billions of
> >> documents on one server.
> >>
> >> When you say the entire data can be stored in RAM ... are you counting
> >> that 40GB you gave to Solr?  Because you can't count that -- that's for
> >> Solr, NOT the index data.
> >>
> >> The heap size should never be dictated by the amount of memory in the
> >> server.  It should be made as large as it needs to be for the job, and
> >> no larger.
> >>
> >> https://wiki.apache.org/solr/SolrPerformanceProblems#RAM
> >>
> >>> As for usage, I've checked RAM and CPU and they are not fully used.
> >>
> >> What exactly are you looking at?  I've had people swear that they can't
> >> see a problem with their systems when Solr is REALLY struggling to keep
> >> up with what it has been asked to do.
> >>
> >> Further down on the page I linked above is a section about asking for
> >> help.  If you can provide the screenshot it mentions there, that would
> >> be helpful.  Here's a direct link to that section:
> >>
> >>
> >>
> https://wiki.apache.org/solr/SolrPerformanceProblems#Asking_for_help_on_a_memory.2Fperformance_issue
> >>
> >> Thanks,
> >> Shawn
> >>
> >>
>
>

-- 
_

  Daniel Carrasco Marín
  Ingeniería para la Innovación i2TIC, S.L.
  Tlf:  +34 911 12 32 84 Ext: 223
  www.i2tic.com
_