Re: Arabic words search in solr

2017-02-08 Thread Steve Rowe
Hi Mohan,

I haven’t looked at the latest problems, but the ICU folding filter should be 
the last filter, to allow the Arabic normalization and stemming filters to see 
the original words.
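For illustration, a chain along these lines keeps the folding at the end (the
tokenizer and the other filters here are only an example, not necessarily what
you have in your field type):

  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="lang/stopwords_ar.txt"/>
    <filter class="solr.ArabicNormalizationFilterFactory"/>
    <filter class="solr.ArabicStemFilterFactory"/>
    <filter class="solr.ICUFoldingFilterFactory"/>
  </analyzer>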

--
Steve
www.lucidworks.com

> On Feb 8, 2017, at 10:58 PM, mohanmca01  wrote:
> 
> Hi Steve,
> 
> Thanks for your continued investigation of this issue.
> 
> I added the ICU Folding Filter in the schema.xml file and re-indexed all the data
> again. I noticed some improvements in search, but it's not really as expected.
> 
> Below is the configuration changed in the schema file:
> 
> -
> 
>   
>
> 
> 
> words="lang/stopwords_ar.txt" />
> 
>
>
>  
>
> -
> 
> Attached is the document for your reference; the ones highlighted in red are
> not working as expected.
> 
> Also, I have raised one point regarding jQuery autocomplete with unique
> records. Kindly let me know if you have any background on how to implement
> the same.
> 
> arabicSearch.docx
>   
> 
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Arabic-words-search-in-solr-tp4317733p4319436.html
> Sent from the Solr - User mailing list archive at Nabble.com.



Re: Arabic words search in solr

2017-02-08 Thread mohanmca01
Hi Steve,

Thanks for your continued investigation of this issue.

I added the ICU Folding Filter in the schema.xml file and re-indexed all the data
again. I noticed some improvements in search, but it's not really as expected.

Below is the configuration changed in the schema file:

-

   


 




  

-

Attached is the document for your reference; the ones highlighted in red are
not working as expected.

Also, I have raised one point regarding jQuery autocomplete with unique
records. Kindly let me know if you have any background on how to implement
the same.

arabicSearch.docx
  


 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Arabic-words-search-in-solr-tp4317733p4319436.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Interval Facets with JSON

2017-02-08 Thread deniz
Tom Evans-2 wrote
> I don't think there is such a thing as an interval JSON facet.
> Whereabouts in the documentation are you seeing an "interval" as JSON
> facet type?
> 
> 
> You want a range facet surely?
> 
> One thing with range facets is that the gap is fixed size. You can
> actually do your example however:
> 
> json.facet={height_facet:{type:range, gap:20, start:160, end:190,
> hardend:true, field:height}}
> 
> If you do require arbitrary bucket sizes, you will need to do it by
> specifying query facets instead, I believe.
> 
> Cheers
> 
> Tom


Nothing other than
https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-IntervalFaceting
for documentation on intervals... I am fine with range queries as well, but
intervals would fit better because of the different bucket sizes...

i have also checked the class FacetRequest after digging through the error
stack and found the lines below:

public Object parseFacetOrStat(String key, String type, Object args) throws SyntaxError {
  // TODO: a place to register all these facet types?

  if ("field".equals(type) || "terms".equals(type)) {
    return parseFieldFacet(key, args);
  } else if ("query".equals(type)) {
    return parseQueryFacet(key, args);
  } else if ("range".equals(type)) {
    return parseRangeFacet(key, args);
  }

  AggValueSource stat = parseStat(key, type, args);
  if (stat == null) {
    throw err("Unknown facet or stat. key=" + key + " type=" + type + " args=" + args);
  }

I couldn't find any other class extending this method either... so I will
simply switch to ranges for now...
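
(If arbitrary bucket sizes do become necessary later, query facets along these
lines should cover it; the field name is just the one from my example above:)

json.facet={
  height_160_180:{type:query, q:"height:[160 TO 180}"},
  height_180_190:{type:query, q:"height:[180 TO 190]"}
}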

thanks a lot for your suggestions





-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Interval-Facets-with-JSON-tp4319111p4319402.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Find groups where at least one item matches a query

2017-02-08 Thread Alexandre Rafalovitch
As per the documentation:
"So all downstream components (faceting, highlighting, etc...) will
work with the collapsed result set."

So, no, you cannot facet on expanded group. Partially because it is
not really fully expanded (there is a limit of items in each group).

But also, are you trying to facet per group or globally? If globally,
maybe you can override the faceting query or have a custom component
(before facet one) that generates facet.query parameters based on what
you got back from collapsing.

If locally, maybe there is something in json.facet module to do nested
facet for each domain.

Or maybe it is worth running two queries, one for results and one for
facets.
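
As a rough sketch with the fields from your example (collection name and the
exact parameters are only illustrative), that could look like:

1) results, one head document per matching group:
/solr/yourcollection/select?q=pathology:Normal&fq={!collapse field=groupId}&expand=true

2) facet counts over every document in those groups, with the groupId list
built from the first response:
/solr/yourcollection/select?q=groupId:(223 OR ...)&rows=0&facet=true&facet.field=modality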

Regards,
   Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 8 February 2017 at 16:02, Cristian Popovici
 wrote:
> Alexander - thanks! It seems to work great.
>
> I still have a question - if I want to do a facet query that includes also
> the documents in the expanded area. Is this possible? If I apply a facet
> query like "facet=true=modality" it counts only the head
> documents.
>
> Thanks,
> Cristian.
>
> On Sun, Feb 5, 2017 at 10:43 PM, Alexandre Rafalovitch 
> wrote:
>
>> What about collapse and expand with overriden query. Something like
>> this (against 6.4 techproducts example):
>> http://localhost:8983/solr/techproducts/select?expand.q=*:*&expand=true&
>> fq={!collapse%20field=manu_id_s}&indent=on&q=name:CORSAIR&wt=json
>>
>> Note that the main document area contains the head document and the
>> expanded area contains the rest of them, up to provided/default limit.
>> For further info, see
>> https://cwiki.apache.org/confluence/display/solr/
>> Collapse+and+Expand+Results
>>
>> Regards,
>>Alex.
>> 
>> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>>
>>
>> On 5 February 2017 at 14:55, Cristian Popovici
>>  wrote:
>> > Doesn't seem to work - I'm doing a query like this and I get only one
>> result
>> >
>> > q=pathology:normal&group=true&group.field=groupId&*group.limit=2*
>> >
>> > On Sun, Feb 5, 2017 at 7:20 PM, Nick Vasilyev 
>> > wrote:
>> >
>> >> Check out the group.limit argument.
>> >>
>> >> On Feb 5, 2017 12:10 PM, "Cristian Popovici" <
>> cristi.popov...@visionsr.com
>> >> >
>> >> wrote:
>> >>
>> >> > Erick, thanks for you answer.
>> >> >
>> >> > Sorry - I forgot to mention that I do not know the group id when I
>> >> perform
>> >> > the query.
>> >> > Grouping - I think - does not help for me as it filters out the
>> documents
>> >> > that do not meet the filter criteria.
>> >> >
>> >> > Example:
>> >> > *q=pathology:Normal&group=true&group.field=groupId*  will miss out
>> the
>> >> > "pathology":
>> >> > "Metastasis".
>> >> >
>> >> > I need to retrieve both documents in the same group even if only one
>> >> meets
>> >> > the search criteria.
>> >> >
>> >> > Thanks!
>> >> >
>> >> > On Sun, Feb 5, 2017 at 6:54 PM, Erick Erickson <
>> erickerick...@gmail.com>
>> >> > wrote:
>> >> >
>> >> > > Isn't this just "fq=groupId:223"?
>> >> > >
>> >> > > Or do you mean you need multiple _groups_? In which case you can use
>> >> > > grouping, see:
>> >> > > https://cwiki.apache.org/confluence/display/solr/
>> >> > > Collapse+and+Expand+Results
>> >> > > and/or
>> >> > > https://cwiki.apache.org/confluence/display/solr/Result+Grouping
>> >> > >
>> >> > > but do note there are some limitations in distributed mode.
>> >> > >
>> >> > > Best,
>> >> > > Erick
>> >> > >
>> >> > > On Sun, Feb 5, 2017 at 1:49 AM, Cristian Popovici
>> >> > >  wrote:
>> >> > > > Hi all,
>> >> > > >
>> >> > > > I'm new to Solr and I need a bit of help.
>> >> > > >
>> >> > > > I have a structure of documents indexed in Solr that are grouped
>> >> > together
>> >> > > > by a property. I need to retrieve all groups where at least one
>> entry
>> >> > in
>> >> > > > the group matches a query.
>> >> > > >
>> >> > > > Example:
>> >> > > > I have two documents indexed and both share the *groupId *property
>> >> that
>> >> > > > defines the grouping field.
>> >> > > >
>> >> > > > *{*
>> >> > > > *"groupId": "223",*
>> >> > > > *"modality": "Computed Tomography",*
>> >> > > > *"anatomy": "Subcutaneous fat",*
>> >> > > > *"pathology": "Metastasis",*
>> >> > > > *}*
>> >> > > >
>> >> > > > *{*
>> >> > > > *"groupId": "223",*
>> >> > > > *"modality": "Computed Tomography",*
>> >> > > > *"anatomy": "Subcutaneous fat",*
>> >> > > > *"pathology": "Normal",*
>> >> > > > *}*
>> >> > > >
>> >> > > > I need to retrieve both entries in the group when performing a
>> query
>> >> > > like:
>> >> > > >
>> >> > > > *(pathology:Normal)*
>> >> > > > Is this possible in solr?
>> >> > > >
>> >> > > > Thanks!
>> >> > >
>> >> >
>> >>
>>


Re: difference in json update handler update/json and update/json/docs

2017-02-08 Thread Alexandre Rafalovitch
/update/json expects Solr JSON update format.
/update is an auto-route that should be equivalent to /update/json
with the right content type/extension.

/update/json/docs expects random JSON and tries to extract fields for
indexing from it.
https://cwiki.apache.org/confluence/display/solr/Transforming+and+Indexing+Custom+JSON
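
A minimal illustration (core name and field names are placeholders):

# Solr's own update format: an array of documents (or an {"add":{...}} command object)
curl 'http://localhost:8983/solr/mycore/update/json?commit=true' \
  -H 'Content-type:application/json' \
  --data-binary '[{"id":"1","title_s":"first doc"}]'

# Arbitrary JSON: /update/json/docs maps keys to fields, optionally guided by
# the split and f parameters described on the wiki page above
curl 'http://localhost:8983/solr/mycore/update/json/docs?commit=true' \
  -H 'Content-type:application/json' \
  --data-binary '{"id":"2","title_s":"second doc"}'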

Regards,
   Alex.


http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 8 February 2017 at 15:54, Florian Meier
 wrote:
> dear solr users,
> Can somebody explain the exact difference between the two update handlers? I’m 
> asking because with some curl commands Solr fails to identify the fields of the 
> JSON doc and indexes everything in _str_:
>
> Those work perfectly:
> curl 'http://localhost:8983/solr/testcore2/update/json?commit=true' 
> --data-binary @example/exampledocs/cacmDocs.json
>
>
> curl 'http://localhost:8983/solr/testcore2/update?commit=true' --data-binary 
> @example/exampledocs/cacmDocs.json -H 'Content-type:application/json'
>
> But those two (both with update/json/docs) don't
>
> curl 'http://localhost:8983/solr/testcore2/update/json/docs?commit=true' 
> --data-binary @example/exampledocs/cacmDocs.json -H 
> 'Content-type:application/json'
>
> curl 'http://localhost:8983/solr/testcore2/update/json/docs?commit=true' 
> --data-binary @example/exampledocs/cacmDocs.json
>
> Cheers,
> Florian
>
>
>
>
>


Re: Find groups where at least one item matches a query

2017-02-08 Thread Cristian Popovici
Alexander - thanks! It seems to work great.

I still have a question - if I want to do a facet query that includes also
the documents in the expanded area. Is this possible? If I apply a facet
query like "facet=true=modality" it counts only the head
documents.

Thanks,
Cristian.

On Sun, Feb 5, 2017 at 10:43 PM, Alexandre Rafalovitch 
wrote:

> What about collapse and expand with overriden query. Something like
> this (against 6.4 techproducts example):
> http://localhost:8983/solr/techproducts/select?expand.q=*:*&expand=true&
> fq={!collapse%20field=manu_id_s}&indent=on&q=name:CORSAIR&wt=json
>
> Note that the main document area contains the head document and the
> expanded area contains the rest of them, up to provided/default limit.
> For further info, see
> https://cwiki.apache.org/confluence/display/solr/
> Collapse+and+Expand+Results
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 5 February 2017 at 14:55, Cristian Popovici
>  wrote:
> > Doesn't seem to work - I'm doing a query like this and I get only one
> result
> >
> > q=pathology:normal&group=true&group.field=groupId&*group.limit=2*
> >
> > On Sun, Feb 5, 2017 at 7:20 PM, Nick Vasilyev 
> > wrote:
> >
> >> Check out the group.limit argument.
> >>
> >> On Feb 5, 2017 12:10 PM, "Cristian Popovici" <
> cristi.popov...@visionsr.com
> >> >
> >> wrote:
> >>
> >> > Erick, thanks for you answer.
> >> >
> >> > Sorry - I forgot to mention that I do not know the group id when I
> >> perform
> >> > the query.
> >> > Grouping - I think - does not help for me as it filters out the
> documents
> >> > that do not meet the filter criteria.
> >> >
> >> > Example:
> > > *q=pathology:Normal&group=true&group.field=groupId*  will miss out
> the
> >> > "pathology":
> >> > "Metastasis".
> >> >
> >> > I need to retrieve both documents in the same group even if only one
> >> meets
> >> > the search criteria.
> >> >
> >> > Thanks!
> >> >
> >> > On Sun, Feb 5, 2017 at 6:54 PM, Erick Erickson <
> erickerick...@gmail.com>
> >> > wrote:
> >> >
> > > > Isn't this just "fq=groupId:223"?
> >> > >
> >> > > Or do you mean you need multiple _groups_? In which case you can use
> >> > > grouping, see:
> >> > > https://cwiki.apache.org/confluence/display/solr/
> >> > > Collapse+and+Expand+Results
> >> > > and/or
> >> > > https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> >> > >
> >> > > but do note there are some limitations in distributed mode.
> >> > >
> >> > > Best,
> >> > > Erick
> >> > >
> >> > > On Sun, Feb 5, 2017 at 1:49 AM, Cristian Popovici
> >> > >  wrote:
> >> > > > Hi all,
> >> > > >
> >> > > > I'm new to Solr and I need a bit of help.
> >> > > >
> >> > > > I have a structure of documents indexed in Solr that are grouped
> >> > together
> >> > > > by a property. I need to retrieve all groups where at least one
> entry
> >> > in
> >> > > > the group matches a query.
> >> > > >
> >> > > > Example:
> >> > > > I have two documents indexed and both share the *groupId *property
> >> that
> >> > > > defines the grouping field.
> >> > > >
> >> > > > *{*
> >> > > > *"groupId": "223",*
> >> > > > *"modality": "Computed Tomography",*
> >> > > > *"anatomy": "Subcutaneous fat",*
> >> > > > *"pathology": "Metastasis",*
> >> > > > *}*
> >> > > >
> >> > > > *{*
> >> > > > *"groupId": "223",*
> >> > > > *"modality": "Computed Tomography",*
> >> > > > *"anatomy": "Subcutaneous fat",*
> >> > > > *"pathology": "Normal",*
> >> > > > *}*
> >> > > >
> >> > > > I need to retrieve both entries in the group when performing a
> query
> >> > > like:
> >> > > >
> >> > > > *(pathology:Normal)*
> >> > > > Is this possible in solr?
> >> > > >
> >> > > > Thanks!
> >> > >
> >> >
> >>
>


RE: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Markus Jelsma
> Thank you, I will follow Erick's steps.
> BTW, I am also trying to ingest using Flume; Flume uses Morphlines along
> with Tika.
> Will Flume SolrSink have the same issue?

Yes, when using Tika you run the risk of it choking on a document, eating CPU 
and/or RAM until everything dies. This is also true when you run it standalone. 
The problem is usually caused by PDF and Office documents that are unusual, 
corrupt or incomplete (e.g. truncated in size) or extremely large. But even 
ordinary HTML can get you into trouble due to extreme sizes or very deep nested 
elements.

But, in general, it is not a problem you will experience frequently. We operate 
broad and large scale web crawlers, ingesting all kinds of bad stuff all the 
time. The trick to avoid problems is running each Tika parse in a separate 
thread, have a timer and kill the thread if it reaches a limit. It can still go 
wrong, but trouble is very rare.
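
A bare-bones sketch of that pattern in Java (assuming the Tika core and parser
jars are on the classpath; the timeout value is arbitrary):

import java.io.InputStream;
import java.util.concurrent.*;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class TimedTikaParse {
  private static final ExecutorService POOL = Executors.newCachedThreadPool();

  // Parse one document in its own thread and give up after the timeout.
  public static String parseWithTimeout(InputStream in, long timeoutSeconds) throws Exception {
    Future<String> job = POOL.submit(() -> {
      BodyContentHandler handler = new BodyContentHandler(-1); // no write limit
      new AutoDetectParser().parse(in, handler, new Metadata());
      return handler.toString();
    });
    try {
      return job.get(timeoutSeconds, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
      job.cancel(true); // interrupt the parse thread and skip the document
      return null;
    }
  }
}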

Running it standalone and talking to it over network is safest, but not very 
portable/easy distributable on Hadoop or other platforms.


difference in json update handler update/json and update/json/docs

2017-02-08 Thread Florian Meier
dear solr users,
Can somebody explain the exact difference between the two update handlers? I’m 
asking because with some curl commands Solr fails to identify the fields of the 
JSON doc and indexes everything in _str_:

Those work perfectly:
curl 'http://localhost:8983/solr/testcore2/update/json?commit=true' 
--data-binary @example/exampledocs/cacmDocs.json


curl 'http://localhost:8983/solr/testcore2/update?commit=true' --data-binary 
@example/exampledocs/cacmDocs.json -H 'Content-type:application/json'

But those two (both with update/json/docs) don't

curl 'http://localhost:8983/solr/testcore2/update/json/docs?commit=true' 
--data-binary @example/exampledocs/cacmDocs.json -H 
'Content-type:application/json'

curl 'http://localhost:8983/solr/testcore2/update/json/docs?commit=true' 
--data-binary @example/exampledocs/cacmDocs.json

Cheers,
Florian 







RE: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Anatharaman, Srinatha (Contractor)
Shawn,

Thank you, I will follow Erick's steps.
BTW, I am also trying to ingest using Flume; Flume uses Morphlines along
with Tika.
Will Flume SolrSink have the same issue?

Currently my SolrSink does not ingest the data, and I also do not see any error
in my logs.
I am seeing a lot of issues with Solr.

Could you please suggest what could be the issue with my Flume SolrSink?

I have attached another email I sent on the SolrSink issue.

Regards,
~Sri

-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Wednesday, February 08, 2017 2:21 PM
To: solr-user@lucene.apache.org
Subject: Re: DataImportHandler - Unable to load Tika Config Processing Document 
# 1

On 2/8/2017 9:08 AM, Anatharaman, Srinatha (Contractor) wrote:
> Thank you for your reply.
> The other archive message you mentioned was posted by me. I am new to
> Solr; when you say process it outside Solr in a program, what exactly should I do?
>
> I have lots of text documents which I need to index; what should I apply
> to these documents before loading them into Solr?

Did you not see Erick's reply, where he provided the following link, and said 
that the program shown there was a decent guide to writing your own program to 
handle Tika processing?

https://lucidworks.com/2012/02/14/indexing-with-solrj/

The blog post includes code that talks to a database, which would be fairly 
easy to remove/change.  Some knowledge of how to write Java programs is 
required.  Tika is a Java API, so writing the program in Java is a prerequisite.

The entire point of this idea is to take the Tika processing out of the Solr 
server(s).  If Tika runs within Solr, it can cause Solr to hang or crash.  The 
authors of Tika try as hard as they can to make sure it works well, but the 
software is dealing with proprietary data formats that are not publicly 
documented.  Sometimes one of those documents can cause Tika to explode.  
Crashes in client code won't break your application, and it is likely easier to 
recover from a crash at that level.
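
A stripped-down sketch of that approach, in the spirit of the blog post above
(SolrJ 6.x style client construction; the core URL, field names and file path
are placeholders):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class TikaToSolr {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
    File f = new File("/path/to/document.pdf");

    // Extract the text on the client side so a Tika crash cannot take Solr down.
    BodyContentHandler text = new BodyContentHandler(-1);
    try (InputStream in = new FileInputStream(f)) {
      new AutoDetectParser().parse(in, text, new Metadata());
    }

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", f.getName());
    doc.addField("content", text.toString());
    solr.add(doc);
    solr.commit();
    solr.close();
  }
}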

Thanks,
Shawn


--- Begin Message ---
Hi,





I am indexing text document using Flume,

I do not see any error or warning message but data is not getting ingested to 
Solr

Log level for both Solr and Flume is set to TRACE, ALL



Flume version : 1.5.2.2.3

Solr Version : 5.5

Config files are as below

Flume Config :

agent.sources = SpoolDirSrc

agent.channels = FileChannel

agent.sinks = SolrSink



# Configure Source

agent.sources.SpoolDirSrc.channels = fileChannel

agent.sources.SpoolDirSrc.type = spooldir

agent.sources.SpoolDirSrc.spoolDir = /home/flume/source_emails

agent.sources.SpoolDirSrc.basenameHeader = true

agent.sources.SpoolDirSrc.fileHeader = true

#agent.sources.src1.fileSuffix = .COMPLETED

agent.sources.SpoolDirSrc.deserializer = 
org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder

# Use a channel that buffers events in memory

agent.channels.FileChannel.type = file

agent.channels.FileChannel.capacity = 1

#agent.channels.FileChannel.transactionCapacity = 1

# Configure Solr Sink

agent.sinks.SolrSink.type = 
org.apache.flume.sink.solr.morphline.MorphlineSolrSink

agent.sinks.SolrSink.morphlineFile = /etc/flume/conf/morphline.conf

agent.sinks.SolrSink.batchsize = 1000

agent.sinks.SolrSink.batchDurationMillis = 2500

agent.sinks.SolrSink.channel = fileChannel

agent.sinks.SolrSink.morphlineId = morphline1

agent.sources.SpoolDirSrc.channels = FileChannel

agent.sinks.SolrSink.channel = FileChannel



Morphline Config

solrLocator: {

collection : gsearch

#zkHost : "127.0.0.1:9983"

zkHost : "codesolr-as-r3p:21810,codesolr-as-r3p:21811,codesolr-as-r3p:21812"

}

morphlines :

[

  {

id : morphline1

importCommands : ["org.kitesdk.**", "org.apache.solr.**"]

commands :

[

  { detectMimeType { includeDefaultMimeTypes : true } }

  {

solrCell {

  solrLocator : ${solrLocator}

  captureAttr : true

  lowernames : true

  capture : [_attachment_body, _attachment_mimetype, basename, content, 
content_encoding, content_type, file, meta]

  parsers : [ { parser : org.apache.tika.parser.txt.TXTParser } ]

 }

  }

  { generateUUID { field : id } }

  { sanitizeUnknownSolrFields { solrLocator : ${solrLocator} } }

  { logDebug { format : "output record: {}", args : ["@{}"] } }

  { loadSolr: { solrLocator : ${solrLocator} } }

]

  }

]



Please help me what could be the issue

Regards,

~Sri



--- End Message ---


Re: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Shawn Heisey
On 2/8/2017 9:08 AM, Anatharaman, Srinatha (Contractor) wrote:
> Thank you for your reply.
> The other archive message you mentioned was posted by me.
> I am new to Solr; when you say process it outside Solr in a program, what exactly
> should I do?
>
> I have lots of text documents which I need to index; what should I apply
> to these documents before loading them into Solr?

Did you not see Erick's reply, where he provided the following link, and
said that the program shown there was a decent guide to writing your own
program to handle Tika processing?

https://lucidworks.com/2012/02/14/indexing-with-solrj/

The blog post includes code that talks to a database, which would be
fairly easy to remove/change.  Some knowledge of how to write Java
programs is required.  Tika is a Java API, so writing the program in
Java is a prerequisite.

The entire point of this idea is to take the Tika processing out of the
Solr server(s).  If Tika runs within Solr, it can cause Solr to hang or
crash.  The authors of Tika try as hard as they can to make sure it
works well, but the software is dealing with proprietary data formats
that are not publicly documented.  Sometimes one of those documents can
cause Tika to explode.  Crashes in client code won't break your
application, and it is likely easier to recover from a crash at that level.

Thanks,
Shawn



RE: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Anatharaman, Srinatha (Contractor)
In my requirement, when a Solr search finds the string it has to return the
entire text document (emails in RTF format). If I process it outside Solr,
how do I achieve this?
When you say process outside, what do I do with the RTF document? Also, the
search result has to return the original document.

I was able to do this successfully with a stand-alone Solr core.



-Original Message-
From: Allison, Timothy B. [mailto:talli...@mitre.org] 
Sent: Wednesday, February 08, 2017 1:56 PM
To: solr-user@lucene.apache.org
Subject: RE: DataImportHandler - Unable to load Tika Config Processing Document 
# 1

>It is *strongly* recommended to *not* use >the Tika that's embedded within 
>Solr, but >instead to do the processing outside of Solr >in a program of your 
>own and index the results.  

+1 

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201601.mbox/%3CBY2PR09MB11210EDFCFA297528940B07C7F30%40BY2PR09MB112.namprd09.prod.outlook.com%3E
 


RE: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Allison, Timothy B.
>It is *strongly* recommended to *not* use >the Tika that's embedded within 
>Solr, but >instead to do the processing outside of Solr >in a program of your 
>own and index the results.  

+1 

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201601.mbox/%3CBY2PR09MB11210EDFCFA297528940B07C7F30%40BY2PR09MB112.namprd09.prod.outlook.com%3E
 


Re: Facets and docValues

2017-02-08 Thread Erick Erickson
Yes, all three fields should be docValues. The point of docValues is
to keep from "uninverting" the field into Java's heap. Any
time you have to answer the question "what is the value in
docX.fieldY?" it should be a docValues field. The way facets (and
function queries, for that matter) work is that the doc is scored. If
the doc has a non-zero score, the values in the fields need to be
evaluated. So picture:
1> score doc X
2> if the score is non-zero then, for doc X, field category, add one to the
facet bucket for that value. For x and y add their values to the facet stats.
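
For the facet in the example that means something like this in the schema (the
field types here are only illustrative):

<field name="category" type="string"  indexed="true" stored="true" docValues="true"/>
<field name="x"        type="tdouble" indexed="true" stored="true" docValues="true"/>
<field name="y"        type="tdouble" indexed="true" stored="true" docValues="true"/>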

Best,
Erick

On Wed, Feb 8, 2017 at 5:27 AM, Chris Ulicny  wrote:
> I've been trying to figure out how exactly docValues help with facet
> queries, and I only seem to find mention that they are beneficial to facet
> performance without many specifics. What I'd like to know is whether it
> applies to all fields used in the facet or just fields that are faceted on.
>
> For example, consider if we have the following facet
>
> catfacet:{
> type: terms,
> field: category,
> facet: {
> x_sum:"sum(x)",
> y_sum:"sum(y)"
> }
> }
>
> Is it beneficial to have docValues enabled for all three fields used or
> some specific subset of them?
>
> Thanks.


is there a way to match related multivalued fields of different types

2017-02-08 Thread Renee Sun
Hi -
I have a schema that looks like:





(text_nost and text_st are just field types defined without/with stopwords...
irrelevant to the issues here)

These 3 fields are parallel in terms of their values. I want to be able to
match these values and be able to search for something like:

give me all attachment_names if their corresponding attachment_size > 5000

I googled and saw someone mention using dynamic fields, but I think
dynamic fields are more suitable for 'type'-style values, whereas what I
have is attachment_names being just individual values.

Please advise what is the best way to achieve this.
Thanks in advance!
Renee 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/is-there-a-way-to-match-related-multivalued-fields-of-different-types-tp4319342.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Switching from Managed Schema to Manually Edited schema.xml --IS NOT WORKING

2017-02-08 Thread Anatharaman, Srinatha (Contractor)
Erick,

I have tested it in Solr stand-alone mode and it works perfectly fine.
To answer your other question: yes, I have uploaded all my config files,
including the tikaConfig file, to ZooKeeper using the solr upconfig command as below

./solr zk -upconfig -n gsearch -d 
/app/platform/solr1/server/solr/configsets/gsearch/conf -z 
codesolr-as-r3p:21810,codesolr-as-r3p:21811,codesolr-as-r3p:21812

After this I have created my collection as below command

./solr create_collection -c gsearch -d 
/app/platform/solr1/server/solr/configsets/gsearch/conf -n gsearch -shards 2 
-replicationFactor 2 -p 8983


Regards,
~Sri


-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com] 
Sent: Monday, February 06, 2017 10:21 PM
To: solr-user 
Subject: Re: Switching from Managed Schema to Manually Edited schema.xml --IS 
NOT WORKING

You did not answer whether you uploaded your configs to Zookeeper and reloaded 
the collection. Providing configs will not help you with that.

What I'd advise:

First get it working in stand-alone mode without Solr cloud at all.
That should be quite simple, all on your local machine. Then migrate to 
SolrCloud so you're only changing one thing at a time.

Best,
Erick

On Mon, Feb 6, 2017 at 9:54 AM, Anatharaman, Srinatha (Contractor) 
 wrote:
> Erick,
>
> I did as mentioned in that URL: made changes to solrconfig and kept 
> only the required fields in schema.xml. Would you mind sharing config files for 
> indexing a text document?
>
> Regards,
> ~Sri
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Monday, February 06, 2017 12:22 AM
> To: solr-user 
> Subject: Re: Switching from Managed Schema to Manually Edited 
> schema.xml --IS NOT WORKING
>
> This is still using the managed schema, specifically the data_driven_schema_configs 
> configset, as evidenced by the add-unknown-fields-to-the-schema part of the URL.
>
> It looks like you're not _really_ removing the managed schema 
> definitions from your solrconfig.xml. You must
> 1> change solrconfig.xml
> 2> push it to ZooKeeper
> 3> reload the collection
>
> before the config changes actually take effect.
>
> Best,
> Erick
>
> On Sun, Feb 5, 2017 at 9:05 PM, Anatharaman, Srinatha (Contractor) 
>  wrote:
>> Hi ,
>>
>> I am indexing a text document and followed the steps defined in the below
>> URL to create the schema.xml:
>> https://cwiki.apache.org/confluence/display/solr/Schema+Factory+Definition+in+SolrConfig#SchemaFactoryDefinitioninSolrConfig-SwitchingfromManagedSchematoManuallyEditedschema.xml
>>
>> After making above changes, When I try to index the document using curl 
>> command I get below error :
>>
>>   > name="responseHeader">400> name="QTime">147> name="metadata">> name="error-class">org.apache.solr.common.SolrException> name="root-error-class">org.apache.solr.common.SolrException> r 
>> name="error-class">org.apache.solr.update.processor.DistributedUpdate
>> P rocessor$DistributedUpdatesAsyncException> name="root-error-class">org.apache.solr.update.processor.DistributedU
>> p dateProcessor$DistributedUpdatesAsyncException> name="msg">Async exception during distributed update: Bad Request
>>
>>
>>
>> request:
>> http://165.137.46.219:8983/solr/gsearch_shard1_replica2/update?update.
>> chain=add-unknown-fields-to-the-schemaupdate.distrib=TOLEADER
>> p 
>> ;distrib.from=http%3A%2F%2F165.137.46.218%3A8983%2Fsolr%2Fgsearch_sha
>> r d2_replica1%2Fwt=javabinversion=2> name="code">400 
>>
>> Could someone help me to resolve this issue, How do I create a 
>> schema.xml file for a text document(document content varies for each 
>> files). I want to index entire document as whole and search on the 
>> document content
>>
>> Thanks & Regards,
>> ~Sri
>>
>>
>



RE: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Anatharaman, Srinatha (Contractor)
Shawn,

Thank you for your reply.
The other archive message you mentioned was posted by me.
I am new to Solr; when you say process it outside Solr in a program, what exactly
should I do?

I have lots of text documents which I need to index; what should I apply to
these documents before loading them into Solr?

Regards,
~Sri


-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org] 
Sent: Wednesday, February 08, 2017 9:46 AM
To: solr-user@lucene.apache.org
Subject: Re: DataImportHandler - Unable to load Tika Config Processing Document 
# 1

On 2/6/2017 3:45 PM, Anatharaman, Srinatha (Contractor) wrote:
> I am having below error while trying to index using dataImporthandler
>
> Data-Config file is mentioned below. zookeeper is not able to read 
> "tikaConfig.xml" on below statement
>
>   processor="TikaEntityProcessor" tikaConfig="tikaConfig.xml"
>
> Please help me to resolve this issue
>
> ion: java.lang.RuntimeException: 
> org.apache.solr.handler.dataimport.DataImportHandlerException: Unable 
> to load Tika Config Processing Document # 1

> Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
> ZkSolrResourceLoader does not support getConfigDir() - likely, what you are 
> trying to do is not supported in ZooKeeper mode
> at 
> org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoader.java:149)
> at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.firstInit(TikaEntityProcessor.java:91)
> ... 11 more

This sounds to me like there's something making TikaEntityProcessor 
incompatible with running in SolrCloud mode.  The way that this processor loads 
its config appears to NOT work when the config comes from zookeeper, which it 
always will when you're running SolrCloud.

I don't know if this is expected or not, or whether it will be considered a bug.

It is *strongly* recommended to *not* use the Tika that's embedded within Solr, 
but instead to do the processing outside of Solr in a program of your own and 
index the results.  Tika is very touchy software that sometimes hangs or 
crashes as it processes rich-text documents.  If that happens to the embedded 
Tika, then Solr itself will also be affected.

Doing Tika processing outside of Solr is more important with SolrCloud, because 
all replicas will need to independently index the data in cloud mode.  Here's 
an archive of a message from this list about pretty much the exact same problem:

https://www.mail-archive.com/solr-user@lucene.apache.org/msg127924.html

Note that this message was sent only a week ago.

Thanks,
Shawn




Re: Facing an Issue on SOLR box

2017-02-08 Thread Shawn Heisey
On 2/5/2017 9:21 PM, Arun Kumar wrote:
> We are facing an error "Cannot write to config directory
> /var/solr/data/marketing_prod_career_all_index/conf; switching to use
> InMemory storage instead." on our SOLR box. As it's occur, SOLR
> service stopped to response and we have to restart it again. Now it's
> happening very frequently.

The message tells you exactly what is wrong.  When Solr started, it was
started as a particular user.  In situations where Solr is not running
on Windows and the service installer was used, that user is typically
"solr".  This message says that the user being used to run Solr does not
have write access to the configuration directory, so Solr was set up to
use an in-memory copy of the configuration.  Solr's managed resource
capability is unable to alter the on-disk configuration.  This error is
unlikely to cause Solr to stop responding.  The permission problem might
cause other issues, though.

The directory mentioned suggests a non-Windows system, and that the
service installer script WAS used, and it was probably run with
defaults, so that user is probably "solr".

Because the service installer script was used, that directory would
normally be created with the correct permissions, so I am guessing that
somebody copied new data to that directory but was not careful about
ownership and permissions.  Problems like this can also be caused by
starting Solr as root and then trying to start it later as a service,
which will run it as a regular user.

If the install was was done with all defaults, the following commands
*MIGHT* fix permission problems causing the error you have mentioned. 
You'll also need to restart Solr.  I cannot be sure that this will fix
anything.

chown -R solr:solr /var/solr
chmod u+w -R /var/solr

Thanks,
Shawn



Re: DataImportHandler - Unable to load Tika Config Processing Document # 1

2017-02-08 Thread Shawn Heisey
On 2/6/2017 3:45 PM, Anatharaman, Srinatha (Contractor) wrote:
> I am having below error while trying to index using dataImporthandler
>
> Data-Config file is mentioned below. zookeeper is not able to read 
> "tikaConfig.xml" on below statement
>
>   processor="TikaEntityProcessor" tikaConfig="tikaConfig.xml"
>
> Please help me to resolve this issue
>
> ion: java.lang.RuntimeException: 
> org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to load 
> Tika Config Processing Document # 1

> Caused by: org.apache.solr.common.cloud.ZooKeeperException: 
> ZkSolrResourceLoader does not support getConfigDir() - likely, what you are 
> trying to do is not supported in ZooKeeper mode
> at 
> org.apache.solr.cloud.ZkSolrResourceLoader.getConfigDir(ZkSolrResourceLoader.java:149)
> at 
> org.apache.solr.handler.dataimport.TikaEntityProcessor.firstInit(TikaEntityProcessor.java:91)
> ... 11 more

This sounds to me like there's something making TikaEntityProcessor
incompatible with running in SolrCloud mode.  The way that this
processor loads its config appears to NOT work when the config comes
from zookeeper, which it always will when you're running SolrCloud.

I don't know if this is expected or not, or whether it will be
considered a bug.

It is *strongly* recommended to *not* use the Tika that's embedded
within Solr, but instead to do the processing outside of Solr in a
program of your own and index the results.  Tika is very touchy software
that sometimes hangs or crashes as it processes rich-text documents.  If
that happens to the embedded Tika, then Solr itself will also be affected.

Doing Tika processing outside of Solr is more important with SolrCloud,
because all replicas will need to independently index the data in cloud
mode.  Here's an archive of a message from this list about pretty much
the exact same problem:

https://www.mail-archive.com/solr-user@lucene.apache.org/msg127924.html

Note that this message was sent only a week ago.

Thanks,
Shawn



FINAL REMINDER: CFP for ApacheCon closes February 11th

2017-02-08 Thread Rich Bowen
Dear Apache Enthusiast,

This is your FINAL reminder that the Call for Papers (CFP) for ApacheCon
Miami is closing this weekend - February 11th. This is your final
opportunity to submit a talk for consideration at this event.

This year, we are running several mini conferences in conjunction with
the main event, so if you're submitting for one of those events, please
pay attention to the instructions below.

Apache: Big Data
* Event information:
http://events.linuxfoundation.org/events/apache-big-data-north-america
* CFP:
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp

Apache: IoT (Internet of Things)
* Event Information: http://us.apacheiot.org/
* CFP -
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
(Indicate 'IoT' in the Target Audience field)

CloudStack Collaboration Conference
* Event information: http://us.cloudstackcollab.org/
* CFP -
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
(Indicate 'CloudStack' in the Target Audience field)

FlexJS Summit
* Event information - http://us.apacheflexjs.org/
* CFP -
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
(Indicate 'Flex' in the Target Audience field)

TomcatCon
* Event information - https://tomcat.apache.org/conference.html
* CFP -
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
(Indicate 'Tomcat' in the Target Audience field)

All other topics and projects
* Event information -
http://events.linuxfoundation.org/events/apachecon-north-america/program/about
* CFP -
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp

Admission to any of these events also grants you access to all of the
others.

Thanks, and we look forward to seeing you in Miami!

-- 
Rich Bowen
VP Conferences, Apache Software Foundation
rbo...@apache.org
Twitter: @apachecon



(You are receiving this email because you are subscribed to a dev@ or
users@ list of some Apache Software Foundation project. If you do not
wish to receive email from these lists any more, you must follow that
list's unsubscription procedure. View the headers of this message for
unsubscription instructions.)


Facets and docValues

2017-02-08 Thread Chris Ulicny
I've been trying to figure out how exactly docValues help with facet
queries, and I only seem to find mention that they are beneficial to facet
performance without many specifics. What I'd like to know is whether it
applies to all fields used in the facet or just fields that are faceted on.

For example, consider if we have the following facet

catfacet:{
type: terms,
field: category,
facet: {
x_sum:"sum(x)",
y_sum:"sum(y)"
}
}

Is it beneficial to have docValues enabled for all three fields used or
some specific subset of them?

Thanks.


Re: alerting system with Solr's Streaming Expressions

2017-02-08 Thread Joel Bernstein
Can you post the final iteration of the model?

Also the expression you used to train the model?

How much training data do you have? How many positive examples and negative
examples?

Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Feb 7, 2017 at 2:14 PM, Susheel Kumar  wrote:

> Hello,
>
> I tried to follow http://joelsolr.blogspot.com/ to see if we can
> classify positive and negative feedback using streaming expressions. It all
> works, but the end result, the probability_d value of the classify expression,
> gives similar results for positive and negative feedback. See below.
>
> What may I be missing here? Do I need to put more data in the training set, or
> something else?
>
>
> { "result-set": { "docs": [ { "body_txt": [ "love the company" ],
> "score_d": 2.1892474120319667, "id": "6", "probability_d":
> 0.977944433135261 }, { "body_txt": [ "bad experience " ], "score_d":
> 3.1689453250842914, "id": "5", "probability_d": 0.9888109278133054 }, {
> "body_txt": [ "This company rewards its employees, but you should only work
> here if you truly love sales. The stress of the job can get to you and they
> definitely push you." ], "score_d": 4.621702323888672, "id": "4",
> "probability_d": 0.99898557 }, { "body_txt": [ "no chance for
> advancement with that company every year I was there it got worse I don't
> know if all branches of adp but Florence organization was turn over rate
> would be higher if it was for temp workers" ], "score_d":
> 5.288898825826228, "id": "3", "probability_d": 0.9956 }, {
> "body_txt": [ "It was a pleasure to work at the Milpitas campus. The team
> that works there are professional and dedicated individuals. The level of
> loyalty and dedication is impressive" ], "score_d": 2.5303947056922937,
> "id": "2", "probability_d": 0.990430778418 },
>


Re: Distributed Search (across collections) + partial Filter query

2017-02-08 Thread alessandro.benedetti
Hi all,
thanks to Andrea Gazzarini's suggestion I solved it using local params (which
are different from macro expansion, even if conceptually similar).
Local params were already available in Solr 4.10.x.

I appended this filter query in the request handler of interest:


  {!lucene df=filterField v=$allowed_values}
  *


This will appear in the request handlers of all the collections that want to use
the filter.
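
Roughly, in solrconfig.xml it looks like this (the handler name and the
placement of the "*" default are only an example):

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="appends">
    <str name="fq">{!lucene df=filterField v=$allowed_values}</str>
  </lst>
  <lst name="defaults">
    <str name="allowed_values">*</str>
  </lst>
</requestHandler>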

Cheers



-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Distributed-Search-across-collections-partial-Filter-query-tp4319166p4319292.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Interval Facets with JSON

2017-02-08 Thread Tom Evans
On Tue, Feb 7, 2017 at 8:54 AM, deniz  wrote:
> Hello,
>
> I am trying to run JSON facets with on interval query as follows:
>
>
> "json.facet":{"height_facet":{"interval":{"field":"height","set":["[160,180]","[180,190]"]}}}
>
> And related field is  stored="true" />
>
> But I keep seeing errors like:
>
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Unknown
> facet or stat. key=height_facet type=interval args={field=height,
> set=[[160,180], [180,190]]} , path=/facet
>

I don't think there is such a thing as an interval JSON facet.
Whereabouts in the documentation are you seeing an "interval" as JSON
facet type?


You want a range facet surely?

One thing with range facets is that the gap is fixed size. You can
actually do your example however:

json.facet={height_facet:{type:range, gap:20, start:160, end:190,
hardend:true, field:height}}

If you do require arbitrary bucket sizes, you will need to do it by
specifying query facets instead, I believe.

Cheers

Tom


Re: complex query is stumping me

2017-02-08 Thread alessandro.benedetti
Hi John,
let me try to recap:
Your Solr document is an Item with a price as one of the fields, a
purchaseGroupId and a groupId.
You filter by purchaseGroupId and then you group by (or collapse on) the
groupId.
At this point, how do you want to assign the score?
For each document in a groupId you want to calculate the score as:
max price - min price per ItemId?
How do you want to sort the groups then?
A simple score desc?

So you have a :
purchase_group_id
group_id
item_id
?


Cheers





-
---
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
Sease Ltd. - www.sease.io
--
View this message in context: 
http://lucene.472066.n3.nabble.com/complex-query-is-stumping-me-tp4319237p4319271.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to create solr custom filter

2017-02-08 Thread Andrea Gazzarini

Hi Mugeesh,
my fault: a point is missing there, as suggested by the error

"-ea was not specified but -Dtests.asserts=true"

You need to add the "-ea" VM argument. If you are in Eclipse:

Run >> Run Configurations

then in the dialog that appears, select the run configuration
corresponding to that class (StartDevSolr), and click on the second tab
("Arguments"). There you will find two text areas; type -ea in the "VM
Arguments" text area.


HTH
Andrea

On 08/02/17 06:09, Mugeesh Husain wrote:

thanks andrea for your help, I created a few Solr plugins that are working fine, but
I am still stuck on debugging the code using Eclipse, as you described at the URL below:
http://andreagazzarini.blogspot.in/2016/11/quickly-debug-your-solr-add-on.html
Following that URL, I could not run the JUnit code; I couldn't run the StartDevSolr
file (it doesn't show me the JUnit run/debug option). When I removed the abstract
method from the StartDevSolr class it showed me the error below:

Assertions mismatch: -ea was not specified but -Dtests.asserts=true
Feb 08, 2017 10:30:29 AM com.carrotsearch.randomizedtesting.RandomizedRunner runSuite
SEVERE: Panic: RunListener hook shouldn't throw exceptions.
java.lang.NullPointerException
   at org.apache.lucene.util.RunListenerPrintReproduceInfo.printDebuggingInformation(RunListenerPrintReproduceInfo.java:131)
   at org.apache.lucene.util.RunListenerPrintReproduceInfo.testRunFinished(RunListenerPrintReproduceInfo.java:118)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:706)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:140)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:591)

I tried the above code using Solr 6.2.0. I am new to JUnit; maybe that is why I am
getting this issue. If you have any more debugging URLs, let me know, or suggest one?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-create-solr-custom-filter-tp4317767p4319258.html
Sent from the Solr - User mailing list archive at Nabble.com.




Aw: Re: Solr 5.5.0 MSSQL Datasource Example

2017-02-08 Thread Per Newgro
Thank you Fuad,

with the dbcp2 BasicDataSource it is working.

First I needed to add the libraries to server/lib/ext:
commons-dbcp2-2.1.1.jar
commons-logging-1.2.jar
commons-pool2-2.4.2.jar
The current versions I found at http://mvnrepository.com/search?q=dbcp

Then my DataSource looks like this


java:comp/env/jdbc/myds


com.microsoft.sqlserver.jdbc.SQLServerDriver
jdbc:sqlserver://ip;databaseName=my_db
user
password
25
5000
SELECT 1
-1
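
Spelled out in server/etc/jetty.xml it is roughly the following (the dbcp2
property names are a best guess matched to the values above):

<New id="myds" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg></Arg>
  <Arg>java:comp/env/jdbc/myds</Arg>
  <Arg>
    <New class="org.apache.commons.dbcp2.BasicDataSource">
      <Set name="driverClassName">com.microsoft.sqlserver.jdbc.SQLServerDriver</Set>
      <Set name="url">jdbc:sqlserver://ip;databaseName=my_db</Set>
      <Set name="username">user</Set>
      <Set name="password">password</Set>
      <Set name="maxTotal">25</Set>
      <Set name="maxWaitMillis">5000</Set>
      <Set name="validationQuery">SELECT 1</Set>
    </New>
  </Arg>
</New>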




Thanks for your support
Per

> Gesendet: Dienstag, 07. Februar 2017 um 21:39 Uhr
> Von: "Fuad Efendi" 
> An: "Per Newgro" , solr-user@lucene.apache.org
> Betreff: Re: Solr 5.5.0 MSSQL Datasource Example
>
> Perhaps this answers your question:
> 
> 
> http://stackoverflow.com/questions/27418875/microsoft-sqlserver-driver-datasource-have-password-empty
> 
> 
> Try different one as per Eclipse docs,
> 
> http://www.eclipse.org/jetty/documentation/9.4.x/jndi-datasource-examples.html
> 
> 
> 
> 
>  
> 
>  jdbc/DSTest
> 
>  
> 
> 
> 
>user
> 
>pass
> 
>dbname
> 
>localhost
> 
>1433
> 
> 
> 
>  
> 
> 
> 
> 
> 
> 
> --
> 
> Fuad Efendi
> 
> (416) 993-2060
> 
> http://www.tokenizer.ca
> Search Relevancy, Recommender Systems
> 
> 
> From: Per Newgro  
> Reply: solr-user@lucene.apache.org 
> 
> Date: February 7, 2017 at 10:15:42 AM
> To: solr-user-group 
> 
> Subject:  Solr 5.5.0 MSSQL Datasource Example
> 
> Hello,
> 
> Does someone have a working example for an MSSQL DataSource with the 'Standard Microsoft
> SQL Driver'?
> 
> My environment:
> debian
> Java 8
> Solr 5.5.0 Standard (download and installed as service)
> 
> server/lib/ext
> sqljdbc4-4.0.jar
> 
> Global JNDI resource defined
> server/etc/jetty.xml
> 
> 
> java:comp/env/jdbc/mydb
> 
> 
> ip
> mydb
> user
> password
> 
> 
> 
> 
> or 2nd option tried
> 
> 
> java:comp/env/jdbc/mydb
> 
> 
> jdbc:sqlserver://ip;databaseName=mydb;
> user
> password
> 
> 
> 
> 
> 
> collection1/conf/db-data-config.xml
> 
> 
> ...
> 
> This leads to SqlServerException login failed for user.
> at
> com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
> 
> at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:254)
> at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:84)
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:2908)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:2234)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:41)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:2220)
> 
> at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696)
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1326)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:991)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:827)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnectionInternal(SQLServerDataSource.java:621)
> 
> at
> com.microsoft.sqlserver.jdbc.SQLServerDataSource.getConnection(SQLServerDataSource.java:57)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$1.getFromJndi(JdbcDataSource.java:256)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:182)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$1.call(JdbcDataSource.java:172)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource.getConnection(JdbcDataSource.java:463)
> 
> at
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:309)
> 
> ... 12 more
> 
> But when i remove the jndi datasource and rewrite the dataimport data
> source to
> 
> 
> driver="com.microsoft.sqlserver.jdbc.SQLServerDriver" br/>
> url="jdbc:sqlserver://ip;databaseName=mydb"
> user="user" password="password" />
> ...
> 
> Then it works.
> But this way I need to configure the db in every core. I would like to
> avoid that.
> 
> Thanks
> Per
>