RE: How to Add New Fields and Fields Types Programmatically Using Solrj

2016-07-18 Thread Jeniba Johnson

Thanks a lot Steve. It worked out.

Regards,
Jeniba Johnson




-Original Message-
From: Steve Rowe [mailto:sar...@gmail.com] 
Sent: Monday, July 18, 2016 7:57 PM
To: solr-user@lucene.apache.org
Subject: Re: How to Add New Fields and Fields Types Programmatically Using Solrj

Hi Jeniba,

You can add fields and field types using Solrj with SchemaRequest.Update 
subclasses - see here for a list: 


There are quite a few examples of doing both in the tests: 


--
Steve
www.lucidworks.com

> On Jul 18, 2016, at 1:59 AM, Jeniba Johnson  
> wrote:
> 
> 
> Hi,
> 
> I have configured Solr 5.3.1 and started Solr in schemaless mode. Using 
> SolrInputDocument, I am able to add new fields to the schema using Solrj.
> How do I specify the field type of a field using Solrj?
> 
> Eg: <field ... required="true" multivalued="false" />
> 
> How can I add field type properties using SolrInputDocument programmatically 
> using Solrj? Can anyone help with it?
> 
> 
> 
> Regards,
> Jeniba Johnson
> 
> 
> 
> 
> The contents of this e-mail and any attachment(s) may contain confidential or 
> privileged information for the intended recipient(s). Unintended recipients 
> are prohibited from taking action on the basis of information in this e-mail 
> and using or disseminating the information, and must notify the sender and 
> delete it from their system. L&T Infotech will not accept responsibility or 
> liability for the accuracy or completeness of, or the presence of any virus 
> or disabling code in this e-mail"



Re: Index and query brackets

2016-07-18 Thread Anil
Thanks Chris for the response.

I am using TextField and edismax query parser.

I have changed the filters so that brackets are not trimmed. I will test it and
let the group know whether it works.

Thanks.

On 19 July 2016 at 03:27, Chris Hostetter  wrote:

>
>
> If you index the literal string value of "[ DATA ]" and then you want to
> be able to query for "[ DATA ]" again later there are two things you have
> to consider:
>
> 1) How is your field value analyzed?
>
> If you use something like StrField then an index term for the literal
> string "[ DATA ]" is created and put in your index, but if you use
> TextField then the configured analyzer might do things like tokenize the
> string into 3 distinct terms, lowercase alpha characters, or perhaps even
> drop the bracket characters completely -- as long as that is consistent at
> index time and query time then you should be fine, as long as you pay
> attention to...
>
> 2) What query parser are you using?
>
> The default parser treats brackets and whitespace as special meta-syntax
> characters.  You can quote them, or backslash-escape them, but the
> whitespace itself may also need to be quoted/escaped to prevent the parser
> from trying to make a boolean query for 3 terms ("[", "DATA", "]") instead
> of 1.
>
> Alternatively you can use things like the "field" QParser, which lets you
> target a specific field by name, with a query string value that can be
> anything -- there are no special meta-syntax characters for the field
> parser, and the appropriate analyzer will be used to create a TermQuery or
> PhraseQuery (as needed).
>
> ie: q = {!field f=your_field_name}[ DATA ]
>
>
> https://cwiki.apache.org/confluence/display/solr/Other+Parsers
>
>
> : Date: Tue, 5 Jul 2016 08:45:57 +0530
> : From: Anil 
> : Reply-To: solr-user@lucene.apache.org
> : To: solr-user@lucene.apache.org
> : Subject: Re: Index and query brackets
> :
: No, Edwin. Thanks for your response.
> :
: I was checking how to search for [1 TO 5] as content, not as a range query.
> :
: I tried escaping [ and ] and it did not work. It seems I need to check the
: analyzers on the index side.
> :
> : Regards,
> : Anil
> :
> : On 5 July 2016 at 08:42, Zheng Lin Edwin Yeo 
> wrote:
> :
> : > Hi Anil,
> : >
> : > Are you referring to something like q=level:[1 TO 5] ? This will
> search for
> : > level that ranges from 1 to 5.
> : > You may refer to the documentation here:
> : > https://wiki.apache.org/solr/SolrQuerySyntax
> : >
> : > Regards,
> : > Edwin
> : >
> : >
> : > On 4 July 2016 at 15:05, Anil  wrote:
> : >
> : > > HI,
> : > >
: > > How can I index and query content with brackets, given that brackets
: > > are used for range queries?
> : > >
> : > > Ex : [DATA]
> : > >
> : > > -
> : > > Anil
> : > >
> : >
> :
>
> -Hoss
> http://www.lucidworks.com/
>


Re: Index and query brackets

2016-07-18 Thread Chris Hostetter


If you index the literal string value of "[ DATA ]" and then you want to 
be able to query for "[ DATA ]" again later there are two things you have 
to consider:

1) How is your field value analyzed?

If you use something like StrField then an index term for the literal 
string "[ DATA ]" is created and put in your index, but if you use 
TextField then the configured analyzer might do things like tokenize the 
string into 3 distinct terms, lowercase alpha characters, or perhaps even 
drop the bracket characters completely -- as long as that is consistent at 
index time and query time then you should be fine, as long as you pay 
attention to...

2) What query parser are you using?

The default parser treats brackets and whitespace as special meta-syntax 
characters.  You can quote them, or backslash-escape them, but the 
whitespace itself may also need to be quoted/escaped to prevent the parser 
from trying to make a boolean query for 3 terms ("[", "DATA", "]") instead 
of 1.
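For example, a backslash-escaping helper along these lines lets you search for the literal text. This is a sketch that roughly mirrors what SolrJ's ClientUtils.escapeQueryChars does; the exact character set escaped there may differ by version:

```java
public class QueryEscape {
    // Backslash-escape Lucene query-syntax metacharacters and whitespace
    // so that q=field:<escaped> matches the literal text "[ DATA ]"
    // instead of being parsed as a range or boolean query.
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;/".indexOf(c) >= 0 || Character.isWhitespace(c)) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "[ DATA ]" becomes "\[\ DATA\ \]"
        System.out.println(escape("[ DATA ]"));
    }
}
```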

Alternatively you can use things like the "field" QParser, which lets you 
target a specific field by name, with a query string value that can be 
anything -- there are no special meta-syntax characters for the field 
parser, and the appropriate analyzer will be used to create a TermQuery or 
PhraseQuery (as needed).

ie: q = {!field f=your_field_name}[ DATA ]


https://cwiki.apache.org/confluence/display/solr/Other+Parsers
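To illustrate point 1 above, here is a toy analysis chain in the spirit of what a TextField might be configured with (whitespace tokenization, lowercasing, punctuation-only tokens dropped). This is an illustration, not Solr's actual analyzer code:

```java
import java.util.ArrayList;
import java.util.List;

public class AnalysisSketch {
    // What an analyzed TextField might reduce "[ DATA ]" to. A StrField,
    // by contrast, would index the whole string as the single term "[ DATA ]".
    static List<String> analyze(String input) {
        List<String> tokens = new ArrayList<>();
        for (String t : input.trim().split("\\s+")) {
            String cleaned = t.replaceAll("[\\[\\]]", "").toLowerCase();
            if (!cleaned.isEmpty()) {
                tokens.add(cleaned); // brackets gone, case folded
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("[ DATA ]")); // [data]
    }
}
```

As long as the same chain runs at index time and query time, a query for "[ DATA ]" still matches, because both sides reduce to the same terms.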


: Date: Tue, 5 Jul 2016 08:45:57 +0530
: From: Anil 
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: Index and query brackets
: 
: No, Edwin. Thanks for your response.
: 
: I was checking how to search for [1 TO 5] as content, not as a range query.
: 
: I tried escaping [ and ] and it did not work. It seems I need to check the
: analyzers on the index side.
: 
: Regards,
: Anil
: 
: On 5 July 2016 at 08:42, Zheng Lin Edwin Yeo  wrote:
: 
: > Hi Anil,
: >
: > Are you referring to something like q=level:[1 TO 5] ? This will search for
: > level that ranges from 1 to 5.
: > You may refer to the documentation here:
: > https://wiki.apache.org/solr/SolrQuerySyntax
: >
: > Regards,
: > Edwin
: >
: >
: > On 4 July 2016 at 15:05, Anil  wrote:
: >
: > > HI,
: > >
: > > How can I index and query content with brackets, given that brackets
: > > are used for range queries?
: > >
: > > Ex : [DATA]
: > >
: > > -
: > > Anil
: > >
: >
: 

-Hoss
http://www.lucidworks.com/


Re: Problem using bbox in schema

2016-07-18 Thread Chris Hostetter

can you please send us the entire schema.xml file you were using when you 
got that error?

Would be nice to get to the bottom of how/why you got an error regarding 
"positionIncrementGap" on AbstractSpatialPrefixTreeFieldType.



: Date: Mon, 18 Jul 2016 18:10:40 +0200
: From: Rastislav Hudak 
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: Problem using bbox in schema
: 
: Thanks Erick.
: I wasn't able to track this down but when I re-set the schema to the sample
: one and added the bbox it was working. I thought it must be the bbox field
: because of that 'AbstractSpatialPrefixTreeFieldType' but it must have been
: something else. The only difference I can see now is that I left all the
: sample fieldtypes in, eg previously I deleted "point" as I didn't need it. So
: maybe there's some dependency I missed.
: 
: Thanks anyways!
: 
: Rasta
: 
: On 2016-07-18 17:56, Erick Erickson wrote:
: > Those work fine for me (latest 6x version). It looks to me like
: > you've changed some _other_ field, this error
: > 
: > Can't set positionIncrementGap on custom analyzer class
: > 
: > looks like you have some text-related field you've also changed
: > and likely have a custom analyzer somewhere?
: > 
: > Best,
: > Erick
: > 
: > On Mon, Jul 18, 2016 at 1:26 AM, Rastislav Hudak
: >  wrote:
: > > Hi all,
: > > 
: > > using solr 6.1.0, I'm trying to add bbox into my schema, the same way it's
: > > in all examples I could find:
: > > 
: > > <field name="bbox" type="bbox" />
: > > <fieldType name="bbox" class="solr.BBoxField" numberType="_bbox_coord" storeSubFields="false" />
: > > <fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>
: > > 
: > > However, if I add these lines the core cannot be initialized, I'm getting
: > > following exception:
: > > 
: > > java.util.concurrent.ExecutionException:
: > > org.apache.solr.common.SolrException: Unable to create core [phaidra]
: > > at java.util.concurrent.FutureTask.report(FutureTask.java:122)
: > > at java.util.concurrent.FutureTask.get(FutureTask.java:192)
: > > at org.apache.solr.core.CoreContainer$1.run(CoreContainer.java:494)
: > > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
: > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
: > > at
: > > 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
: > > at
: > > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
: > > at
: > > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
: > > at java.lang.Thread.run(Thread.java:745)
: > > Caused by: org.apache.solr.common.SolrException: Unable to create core
: > > [phaidra]
: > > at org.apache.solr.core.CoreContainer.create(CoreContainer.java:825)
: > > at
: > > org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:466)
: > > ... 5 more
: > > Caused by: org.apache.solr.common.SolrException: Could not load conf for
: > > core phaidra: Can't load schema
: > > /var/solr/data/phaidra/conf/managed-schema:
: > > Plugin Initializing failure for [schema.xml] fieldType
: > > at
: > > org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
: > > at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
: > > ... 6 more
: > > Caused by: org.apache.solr.common.SolrException: Can't load schema
: > > /var/solr/data/phaidra/conf/managed-schema: Plugin Initializing failure
: > > for
: > > [schema.xml] fieldType
: > > at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:600)
: > > at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
: > > at
: > > 
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
: > > at
: > > 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:174)
: > > at
: > > 
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:47)
: > > at
: > > 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
: > > at
: > > 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
: > > at
: > > org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
: > > ... 7 more
: > > Caused by: org.apache.solr.common.SolrException: Plugin Initializing
: > > failure for [schema.xml] fieldType
: > > at
: > > 
org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:194)
: > > at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:494)
: > > ... 14 more
: > > Caused by: java.lang.RuntimeException: Can't set positionIncrementGap on
: > > custom analyzer class
: > > org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType$1
: > > at org.apache.solr.schema.FieldType.setArgs(FieldType.java:182)
: > > at
: > > 
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:150)
: > > at
: > > org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:53)
: > > at
: > > org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:191)
: > > ... 15 more

ApacheCon: Getting the word out internally

2016-07-18 Thread Melissa Warnkin
ApacheCon: Getting the word out internally
Dear Apache Enthusiast,

As you are no doubt already aware, we will be holding ApacheCon in
Seville, Spain, the week of November 14th, 2016. The call for papers
(CFP) for this event is now open, and will remain open until
September 9th.

The event is divided into two parts, each with its own CFP. The first
part of the event, called Apache Big Data, focuses on Big Data
projects and related technologies.

Website: http://events.linuxfoundation.org/events/apache-big-data-europe
CFP:
http://events.linuxfoundation.org/events/apache-big-data-europe/program/cfp

The second part, called ApacheCon Europe, focuses on the Apache
Software Foundation as a whole, covering all projects, community
issues, governance, and so on.

Website: http://events.linuxfoundation.org/events/apachecon-europe
CFP: http://events.linuxfoundation.org/events/apachecon-europe/program/cfp

ApacheCon is the official conference of the Apache Software
Foundation, and is the best place to meet members of your project and
other ASF projects, and strengthen your project's community.

If your organization is interested in sponsoring ApacheCon, contact Rich Bowen
at e...@apache.org. ApacheCon is a great place to find the brightest
developers in the world, and experts on a huge range of technologies.

I hope to see you in Seville!
==

Melissa, on behalf of the ApacheCon Team


Re: Cold replication

2016-07-18 Thread Mahmoud Almokadem
Thanks Erick,

I'll take a look at replication in Solr, but I don't know whether it
supports incremental backup.

And I want to use SSDs because my index cannot be held in memory. The index
is about 200GB on each instance and the RAM is 61GB, and the update
frequency is high. So I want to use the SSDs equipped with the servers
instead of EBS.

Would you explain what you mean by proper warming?

Thanks,
Mahmoud


On Mon, Jul 18, 2016 at 5:46 PM, Erick Erickson 
wrote:

> Have you tried the replication API backup command here?
>
> https://cwiki.apache.org/confluence/display/solr/Index+Replication#IndexReplication-HTTPAPICommandsfortheReplicationHandler
>
> Warning, I haven't worked with this personally in this
> situation so test.
>
> I do have to ask why you think SSDs are required here and
> if you've measured. With proper warming, most of the
> index is held in memory anyway and the source of
> the data (SSD or spinning) is not a huge issue. SSDs
> certainly are better/faster, but have you measured whether
> they are _enough_ faster to be worth the added
> complexity?
>
> Best,
> Erick
>
>
> On Mon, Jul 18, 2016 at 4:05 AM, Mahmoud Almokadem
>  wrote:
> > Hi,
> >
> > We have SolrCloud 6.0 installed on 4 i2.2xlarge instances with 4 shards.
> We store the indices on EBS attached to these instances. Fortunately these
instances are equipped with TEMPORARY SSDs. We need to store the
> indices on the SSDs but they are not safe.
> >
> > The index is updated every five minutes.
> >
> > Could we use the SSDs to store the indices and create an incremental
> backup or cold replication on the EBS? So we use EBS only for storing
> indices not serving the data to the solr.
> >
> > In case of losing the data on SSDs we can restore a backup from the EBS.
> Is it possible?
> >
> > Thanks,
> > Mahmoud
> >
> >
>


Re: DateMath parsing change?

2016-07-18 Thread Timothy Potter
Got an answer from Hossman in another channel ... this syntax was not
officially supported and is no longer valid, i.e. my code must change
;-)

On Mon, Jul 18, 2016 at 8:02 AM, Timothy Potter  wrote:
> I have code that uses the DateMathParser and this used to work in 5.x
> but is no longer accepted in 6.x:
>
> time:[NOW-2DAY TO 2016-07-19Z]
>
> org.apache.solr.common.SolrException: Invalid Date in Date Math
> String:'2016-07-19Z'
> at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:241)
>
> This is in a unit test, so I can change it, but seems like regression to me?


Re: Problem using bbox in schema

2016-07-18 Thread Rastislav Hudak

Thanks Erick.
I wasn't able to track this down but when I re-set the schema to the 
sample one and added the bbox it was working. I thought it must be the 
bbox field because of that 'AbstractSpatialPrefixTreeFieldType' but it 
must have been something else. The only difference I can see now is that 
I left all the sample fieldtypes in, eg previously I deleted "point" as 
I didn't need it. So maybe there's some dependency I missed.


Thanks anyways!

Rasta

On 2016-07-18 17:56, Erick Erickson wrote:

Those work fine for me (latest 6x version). It looks to me like
you've changed some _other_ field, this error

Can't set positionIncrementGap on custom analyzer class

looks like you have some text-related field you've also changed
and likely have a custom analyzer somewhere?

Best,
Erick

On Mon, Jul 18, 2016 at 1:26 AM, Rastislav Hudak
 wrote:

Hi all,

using solr 6.1.0, I'm trying to add bbox into my schema, the same way it's
in all examples I could find:

<field name="bbox" type="bbox" />
<fieldType name="bbox" class="solr.BBoxField" numberType="_bbox_coord" storeSubFields="false" />
<fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>

However, if I add these lines the core cannot be initialized, I'm getting
following exception:

java.util.concurrent.ExecutionException:
org.apache.solr.common.SolrException: Unable to create core [phaidra]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer$1.run(CoreContainer.java:494)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core
[phaidra]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:825)
at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:466)
... 5 more
Caused by: org.apache.solr.common.SolrException: Could not load conf for
core phaidra: Can't load schema /var/solr/data/phaidra/conf/managed-schema:
Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
... 6 more
Caused by: org.apache.solr.common.SolrException: Can't load schema
/var/solr/data/phaidra/conf/managed-schema: Plugin Initializing failure for
[schema.xml] fieldType
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:600)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:174)
at
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:47)
at
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
... 7 more
Caused by: org.apache.solr.common.SolrException: Plugin Initializing
failure for [schema.xml] fieldType
at
org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:194)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:494)
... 14 more
Caused by: java.lang.RuntimeException: Can't set positionIncrementGap on
custom analyzer class
org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType$1
at org.apache.solr.schema.FieldType.setArgs(FieldType.java:182)
at
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:150)
at
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:53)
at
org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:191)
... 15 more

thanks for any help!

Rasta




Re: Indexing using SolrOutputFormat class

2016-07-18 Thread Erick Erickson
You're looking at a class that is specific to the
MapReduceIndexerTool which uses EmbeddedSolrServer
to build sub-indexes. These sub-indexes are then
merged via the tool into indexes suitable for
copying to (or merging with) existing Solr indexes
on a shard-by-shard basis.

If you're using MapReduce to process the files
and trying to index them to a live Solr cluster,
just use the regular SolrJ CloudSolrClient,
assemble lists of SolrInputDocuments and send
them to Solr with CloudSolrClient.add(doclist).

I usually use batches of 1,000 for small documents.
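A sketch of that batching pattern -- the CloudSolrClient calls appear only in comments, since exact builder and commit signatures vary by SolrJ version:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchIndexer {
    // Chunk documents into fixed-size batches before sending each batch
    // to Solr, e.g. (SolrJ calls sketched in comments, not verified here):
    //   for (List<SolrInputDocument> batch : batches(docs, 1000)) {
    //       client.add(batch);
    //   }
    //   client.commit();
    static <T> List<List<T>> batches(List<T> docs, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += batchSize) {
            out.add(docs.subList(i, Math.min(i + batchSize, docs.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(batches(List.of(1, 2, 3, 4, 5), 2)); // [[1, 2], [3, 4], [5]]
    }
}
```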

MRIT is intended for initial bulk loading. Last I knew,
for instance, it does not replace documents with the
same document ID (<uniqueKey>) since it relies on
merging the index using Lucene's merge capability
which does not check for duplicate doc IDs.

Best,
Erick

On Mon, Jul 18, 2016 at 1:15 AM, rashi gandhi  wrote:
> Hi All,
>
>
>
> I am using Solr-5.0.0 API for indexing data in our application and the
> requirement is to index the data in batches, using solr-mapreduce API.
>
>
>
> In our application, we may receive data from any type of input source, for
> example files, streams, or any other relational or non-relational DBs, in a
> particular format. And I need to index this data into Solr, by using
> SolrOutputFormat class.
>
>
>
> As per my analysis until now, I find that SolrOutputFormat works with the
> EmbeddedSolrServer and requires a path to the config files for indexing data,
> without the need to pass a host and port for creating the SolrClient.
>
>
>
> I checked for the documentation online, but couldn’t find any proper
> examples that make the use of SolrOutputFormat class.
>
> Does anybody have some implementations or a document, which mentions
> details like what exactly needs to be passed as input to SolrOutputFormat
> configuration, etc.?
>
>
>
> Any pointers would be helpful.


Re: Problem using bbox in schema

2016-07-18 Thread Erick Erickson
Those work fine for me (latest 6x version). It looks to me like
you've changed some _other_ field, this error

Can't set positionIncrementGap on custom analyzer class

looks like you have some text-related field you've also changed
and likely have a custom analyzer somewhere?

Best,
Erick

On Mon, Jul 18, 2016 at 1:26 AM, Rastislav Hudak
 wrote:
> Hi all,
>
> using solr 6.1.0, I'm trying to add bbox into my schema, the same way it's
> in all examples I could find:
>
> <field name="bbox" type="bbox" />
> <fieldType name="bbox" class="solr.BBoxField" numberType="_bbox_coord" storeSubFields="false" />
> <fieldType name="_bbox_coord" class="solr.TrieDoubleField" precisionStep="8" docValues="true" stored="false"/>
>
> However, if I add these lines the core cannot be initialized, I'm getting
> following exception:
>
> java.util.concurrent.ExecutionException:
> org.apache.solr.common.SolrException: Unable to create core [phaidra]
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at org.apache.solr.core.CoreContainer$1.run(CoreContainer.java:494)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Unable to create core
> [phaidra]
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:825)
> at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:466)
> ... 5 more
> Caused by: org.apache.solr.common.SolrException: Could not load conf for
> core phaidra: Can't load schema /var/solr/data/phaidra/conf/managed-schema:
> Plugin Initializing failure for [schema.xml] fieldType
> at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
> ... 6 more
> Caused by: org.apache.solr.common.SolrException: Can't load schema
> /var/solr/data/phaidra/conf/managed-schema: Plugin Initializing failure for
> [schema.xml] fieldType
> at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:600)
> at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
> at
> org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
> at
> org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:174)
> at
> org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:47)
> at
> org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
> at
> org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
> at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
> ... 7 more
> Caused by: org.apache.solr.common.SolrException: Plugin Initializing
> failure for [schema.xml] fieldType
> at
> org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:194)
> at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:494)
> ... 14 more
> Caused by: java.lang.RuntimeException: Can't set positionIncrementGap on
> custom analyzer class
> org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType$1
> at org.apache.solr.schema.FieldType.setArgs(FieldType.java:182)
> at
> org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:150)
> at
> org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:53)
> at
> org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:191)
> ... 15 more
>
> thanks for any help!
>
> Rasta


Re: index sql databases

2016-07-18 Thread Erick Erickson
I don't see how that relates to the original
question.

bq: when I display the field type date I get the
value in this form yyyy-MM-dd'T'hh:mm:ss'Z'

A regex on the _input_ side will have
no effect on what Solr returns. You'd have
to use a DocTransformer to change the output
on the query side. DIH is in the indexing side.
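For example, the ISO/UTC string Solr returns can be translated on the application side. A sketch -- the target pattern and time zone are placeholders:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class SolrDateDisplay {
    // Solr returns date fields as ISO-8601 UTC instants such as
    // "2016-07-16T18:17:08Z". Reformat them in the application (or a
    // DocTransformer); DIH regexes only affect the indexing side.
    static String toDisplay(String solrDate, ZoneId zone) {
        return Instant.parse(solrDate)
                      .atZone(zone)
                      .format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
    }

    public static void main(String[] args) {
        System.out.println(toDisplay("2016-07-16T18:17:08Z", ZoneId.of("UTC")));
        // 2016-07-16 18:17:08
    }
}
```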

Best,
Erick

On Mon, Jul 18, 2016 at 2:45 AM, kostali hassan
 wrote:
> Can we use transformer="RegexTransformer"
> and set in db_data_config.xml:
> <field ... regex="..." groupNames="date_t,time_t" />
>
> 2016-07-16 18:18 GMT+01:00 Shawn Heisey :
>
>> On 7/15/2016 3:10 PM, kostali hassan wrote:
>> > Thank you Shawn, the problem is when I display the field type date I get
>> > the value in this form yyyy-MM-dd'T'hh:mm:ss'Z'
>>
>> Solr only displays ISO date format for date fields -- an example is
>> 2016-07-16T18:17:08.497Z -- and only in the UTC timezone.  If you want
>> something else in your application, you'll have to translate it, or
>> you'll have to write a custom plugin to add to Solr that changes the
>> output format.
>>
>> Thanks,
>> Shawn
>>
>>


Re: Cold replication

2016-07-18 Thread Erick Erickson
Have you tried the replication API backup command here?
https://cwiki.apache.org/confluence/display/solr/Index+Replication#IndexReplication-HTTPAPICommandsfortheReplicationHandler

Warning, I haven't worked with this personally in this
situation so test.

I do have to ask why you think SSDs are required here and
if you've measured. With proper warming, most of the
index is held in memory anyway and the source of
the data (SSD or spinning) is not a huge issue. SSDs
certainly are better/faster, but have you measured whether
they are _enough_ faster to be worth the added
complexity?

Best,
Erick


On Mon, Jul 18, 2016 at 4:05 AM, Mahmoud Almokadem
 wrote:
> Hi,
>
> We have SolrCloud 6.0 installed on 4 i2.2xlarge instances with 4 shards. We 
> store the indices on EBS attached to these instances. Fortunately these 
> instances are equipped with TEMPORARY SSDs. We need to store the indices 
> on the SSDs but they are not safe.
>
> The index is updated every five minutes.
>
> Could we use the SSDs to store the indices and create an incremental backup 
> or cold replication on the EBS? So we use EBS only for storing indices not 
> serving the data to the solr.
>
> In case of losing the data on SSDs we can restore a backup from the EBS. Is it 
> possible?
>
> Thanks,
> Mahmoud
>
>


DIH - Need to externalize or encrypt username/password stored within data-config.xml

2016-07-18 Thread Aniket Khare
Hi,

Could you please suggest any document describing how I can encrypt the
connection string stored in data-config.xml?
Following is the URL for the closed JIRA issue, but I don't find any
documentation for it:

https://issues.apache.org/jira/browse/SOLR-4392
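For reference, the mechanism SOLR-4392 added works roughly like the fragment below. This is a sketch based on the Solr Reference Guide's DIH section; verify the attribute names and the exact openssl invocation against your Solr version:

```xml
<!-- data-config.xml (sketch): the password attribute holds the
     openssl-encrypted string, e.g. produced by something like
       openssl enc -aes-128-cbc -a -salt -in password.txt
     and encryptKeyFile points at the file holding the encryption key. -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb"
            user="db_user"
            password="U2FsdGVkX1_encrypted_value"
            encryptKeyFile="/var/solr/data/encrypt.key"/>
```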

-- 
Regards,

Aniket S. Khare


Re: SolrCloud - Query performance degrades with multiple servers(Shards)

2016-07-18 Thread Erick Erickson
+1 to Susheel's question. Sharding inevitably adds
overhead. Roughly each shard is queried
for its top N docs (10 if, say, rows=10). The
doc ID and sort criteria (score by default) are returned
to the node that originally got the request. That node
then sorts the lists into the real top 10 to return to
the user. Then the node handling the request re-queries
the shards for the contents of those docs.
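The merge in that first phase can be sketched like so -- an illustration of the idea, not Solr's actual implementation:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class ShardMergeSketch {
    static class Hit {
        final String docId;
        final double score;
        Hit(String docId, double score) { this.docId = docId; this.score = score; }
    }

    // Each shard returns its own top-N (doc ID + sort criteria only);
    // the coordinating node merges those lists into the global top-N,
    // then fetches the full documents in a second round trip.
    static List<Hit> mergeTopN(List<List<Hit>> perShardTopN, int n) {
        PriorityQueue<Hit> heap =
            new PriorityQueue<>(Comparator.comparingDouble((Hit h) -> h.score));
        for (List<Hit> shardHits : perShardTopN) {
            for (Hit h : shardHits) {
                heap.offer(h);
                if (heap.size() > n) heap.poll(); // evict the current lowest score
            }
        }
        List<Hit> out = new ArrayList<>(heap);
        out.sort(Comparator.comparingDouble((Hit h) -> h.score).reversed());
        return out;
    }

    public static void main(String[] args) {
        // two shard responses merged into a global top-2
        List<List<Hit>> shards = List.of(
            List.of(new Hit("a", 3.0), new Hit("b", 1.0)),
            List.of(new Hit("c", 2.0)));
        for (Hit h : mergeTopN(shards, 2)) System.out.println(h.docId + " " + h.score);
    }
}
```

This per-request fan-out and merge is the overhead that grows with shard count, which is why adding replicas, not shards, is the way to raise QPS.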

Sharding is a way to handle very large data sets, the
general recommendation is to shard _only_ when you
have too many documents to get good query perf
from a single shard.

If you need to increase QPS, add _replicas_ not shards.
Only go to sharding when you have too many documents
to fit on your hardware.

Best,
Erick

On Mon, Jul 18, 2016 at 6:31 AM, Susheel Kumar  wrote:
> Hello,
>
> Question: Do you really need sharding, or can you live without it, since you
> mentioned only 10K records in one shard? What's your index/document size?
>
> Thanks,
> Susheel
>
> On Mon, Jul 18, 2016 at 2:08 AM, kasimjinwala 
> wrote:
>
>> Currently I am using SolrCloud 5.0 and I am facing a query performance issue
>> while using 3 implicit shards; each shard contains around 10K records.
>> When I specify the shards parameter (*shards=shard1*) in the query it gives
>> 30K-35K qps, but when removing the shards parameter from the query it gives
>> *1000-1500 qps*. Performance decreases drastically.
>>
>> Please provide comments or suggestions to solve the above issue.
>>
>>
>>
>> --
>> View this message in context:
>> http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4287600.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>>


Re: How to Add New Fields and Fields Types Programmatically Using Solrj

2016-07-18 Thread Steve Rowe
Hi Jeniba,

You can add fields and field types using Solrj with SchemaRequest.Update 
subclasses - see here for a list: 


There are quite a few examples of doing both in the tests: 
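Those SchemaRequest.Update subclasses (in org.apache.solr.client.solrj.request.schema) wrap JSON payloads sent to the Schema API at /solr/<collection>/schema. The builder below is a dependency-free sketch of that payload, not the SolrJ implementation itself:

```java
public class SchemaApiPayload {
    // Approximate JSON body for adding a field via the Schema API --
    // SchemaRequest.AddField / SchemaRequest.AddFieldType send the
    // equivalent for you, e.g. (sketch; signatures vary by version):
    //   new SchemaRequest.AddField(
    //       Map.of("name", "title", "type", "text_general", "stored", true))
    //       .process(solrClient, collection);
    static String addFieldPayload(String name, String type, boolean stored) {
        return "{\"add-field\":{\"name\":\"" + name + "\",\"type\":\"" + type
             + "\",\"stored\":" + stored + "}}";
    }

    public static void main(String[] args) {
        System.out.println(addFieldPayload("title", "text_general", true));
    }
}
```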


--
Steve
www.lucidworks.com

> On Jul 18, 2016, at 1:59 AM, Jeniba Johnson  
> wrote:
> 
> 
> Hi,
> 
> I have configured Solr 5.3.1 and started Solr in schemaless mode. Using 
> SolrInputDocument, I am able to add new fields to the schema using Solrj.
> How do I specify the field type of a field using Solrj?
> 
> Eg: <field ... required="true" multivalued="false" />
> 
> How can I add field type properties using SolrInputDocument programmatically 
> using Solrj? Can anyone help with it?
> 
> 
> 
> Regards,
> Jeniba Johnson
> 
> 
> 
> 
> The contents of this e-mail and any attachment(s) may contain confidential or 
> privileged information for the intended recipient(s). Unintended recipients 
> are prohibited from taking action on the basis of information in this e-mail 
> and using or disseminating the information, and must notify the sender and 
> delete it from their system. L&T Infotech will not accept responsibility or 
> liability for the accuracy or completeness of, or the presence of any virus 
> or disabling code in this e-mail"



DateMath parsing change?

2016-07-18 Thread Timothy Potter
I have code that uses the DateMathParser and this used to work in 5.x
but is no longer accepted in 6.x:

time:[NOW-2DAY TO 2016-07-19Z]

org.apache.solr.common.SolrException: Invalid Date in Date Math
String:'2016-07-19Z'
at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:241)

This is in a unit test, so I can change it, but it seems like a regression to me?


Re: SolrCloud - Query performance degrades with multiple servers(Shards)

2016-07-18 Thread Susheel Kumar
Hello,

Question: Do you really need sharding, or can you live without it, since you
mentioned only 10K records per shard? What's your index/document size?

Thanks,
Susheel

On Mon, Jul 18, 2016 at 2:08 AM, kasimjinwala 
wrote:

> Currently I am using SolrCloud 5.0 and I am facing a query performance issue
> while using 3 implicit shards; each shard contains around 10K records.
> When I specify the shards parameter (*shards=shard1*) in the query it gives
> 30K-35K qps, but when I remove the shards parameter it gives only
> *1000-1500 qps*. Performance degrades drastically.
>
> Please provide comments or suggestions to solve the above issue.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4287600.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: JDBC: Collection not found with count(*) and uppercase name

2016-07-18 Thread Joel Bernstein
This looks like a bug in the SQL stats query, which uses the StatsStream.
The table name should be case-insensitive.

The simple select query is using CloudSolrStream which is properly handling
the table name.

Feel free to create a jira for this.

Joel Bernstein
http://joelsolr.blogspot.com/
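For reference, the report can be reproduced through Solr's JDBC driver roughly like this (the ZooKeeper address is hypothetical, and this assumes solr-solrj and its dependencies are on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrJdbcCaseRepro {
  public static void main(String[] args) throws Exception {
    // Solr's JDBC driver (org.apache.solr.client.solrj.io.sql.DriverImpl)
    // connects through ZooKeeper; host/port here are hypothetical.
    String url = "jdbc:solr://localhost:9983?collection=c_D02016";
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement()) {

      // Works: the plain select keeps the mixed-case collection name intact
      try (ResultSet rs = stmt.executeQuery("select id from c_D02016")) {
        while (rs.next()) {
          System.out.println(rs.getString("id"));
        }
      }

      // Fails on 6.1 with "Collection not found: c_d02016",
      // i.e. the stats path lower-cases the table name
      try (ResultSet rs = stmt.executeQuery("select count(*) from c_D02016")) {
        while (rs.next()) {
          System.out.println(rs.getLong(1));
        }
      }
    }
  }
}
```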

On Mon, Jul 18, 2016 at 12:35 AM, Damien Kamerman  wrote:

> Hi,
>
> I'm on Solr 6.1 and testing a JDBC query from SquirrelSQL and I find this
> query works OK:
> select id from c_D02016
>
> But when I try this query I get an error: Collection not found c_d02016
> select count(*) from c_D02016.
>
> It seems Solr is expecting the collection/table name to be lower-case. Has
> anyone else seen this?
>
> Here's the full log from the server:
> ERROR - 2016-07-18 13:46:23.711; [c:ip_0 s:shard1 r:core_node1
> x:c_0_shard1_replica1] org.apache.solr.common.SolrException;
> java.io.IOException: org.apache.solr.common.SolrException: Collection not
> found: c_d02016
> at
>
> org.apache.solr.client.solrj.io.stream.StatsStream.open(StatsStream.java:221)
> at
>
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1578)
> at
>
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
> at
>
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:423)
> at
>
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:304)
> at
>
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:168)
> at
>
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at
>
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at
>
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at
>
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at
>
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:731)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:473)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at
>
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
>
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
>
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
>
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
>
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
>
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
>
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
>
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at
>
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:318)
> at
>
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
>
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
>
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
>
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
>
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.solr.common.SolrException: Collection not 

Cold replication

2016-07-18 Thread Mahmoud Almokadem
Hi, 

We have SolrCloud 6.0 installed on 4 i2.2xlarge instances with 4 shards. We 
store the indices on EBS attached to these instances. Fortunately these 
instances are equipped with TEMPORARY SSDs. We need to store the indices on 
the SSDs but they are not safe.

The index is updated every five minutes. 

Could we use the SSDs to store the indices and create an incremental backup or 
cold replication on the EBS? That way we would use the EBS only for storing the 
indices, not for serving data to Solr.

In case we lose the data on the SSDs, we can restore a backup from the EBS. Is it 
possible?

Thanks, 
Mahmoud 
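For example, the kind of per-core backup call we are considering, issued from SolrJ against the replication handler (the core name, snapshot name, and EBS mount path below are made up):

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class EbsBackupSketch {
  public static void main(String[] args) throws Exception {
    // One call per core; core name and EBS mount point are hypothetical
    try (SolrClient core = new HttpSolrClient(
        "http://localhost:8983/solr/coll_shard1_replica1")) {

      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("command", "backup");
      params.set("name", "snapshot-2016-07-18");
      params.set("location", "/mnt/ebs/solr-backups");

      QueryRequest backup = new QueryRequest(params);
      backup.setPath("/replication");
      core.request(backup);
      // Restore later with command=restore and the same name/location
    }
  }
}
```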




Re: index sql databases

2016-07-18 Thread kostali hassan
Can we use transformer="RegexTransformer"
and set it up in db_data_config.xml?
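For example, something along these lines, where the entity, columns, and regex are made up and only illustrate the RegexTransformer attributes:

```xml
<entity name="item" transformer="RegexTransformer"
        query="select id, created from items">
  <!-- copy only the date part of the raw column into a new field -->
  <field column="createdDate" sourceColName="created"
         regex="(\d{4}-\d{2}-\d{2}).*" replaceWith="$1"/>
</entity>
```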

2016-07-16 18:18 GMT+01:00 Shawn Heisey :

> On 7/15/2016 3:10 PM, kostali hassan wrote:
> > Thank you Shawn. The problem is when I display a field of type date I get
> > the value in this format: yyyy-MM-dd'T'hh:mm:ss'Z'
>
> Solr only displays ISO date format for date fields -- an example is
> 2016-07-16T18:17:08.497Z -- and only in the UTC timezone.  If you want
> something else in your application, you'll have to translate it, or
> you'll have to write a custom plugin to add to Solr that changes the
> output format.
>
> Thanks,
> Shawn
>
>


BooleanQuery Migration from Solr 4 to Solr 6

2016-07-18 Thread Max Bridgewater
HI Folks,

I am tasked with migrating a Solr app from Solr 4 to Solr 6. This Solr app
is in essence a bunch of Solr components/handlers. One part that challenges
me is BooleanQuery immutability in Solr 6.

Here is the challenge: In our old code base, we had classes that
implemented custom interfaces and extended BooleanQuery. These custom
interfaces were essentially markers that told our various components where
the user came from. Based on the user's origin, different pieces of logic
would apply.

Now, in Solr 6, our custom boolean query can no longer extend BooleanQuery
since BooleanQuery only has a private constructor. I am looking for a clean
solution to this problem.

Here are some ideas I had:

1) Remove the logic that depends on the custom boolean query => Big risk to
our search logic
2) Simply remove BooleanQuery as super class of custom boolean query =>
Major risk. Wherever we do “if(query instanceof BooleanQuery) “, we would
not catch our custom queries.
3) Remove BooleanQuery as parent to the custom query (e.g. make it extend
Query) AND Refactor to move all “if(query instanceof BooleanQuery) “ into a
dedicated method: isCustomBooleanQuery. This would return “query instanceof
BooleanQuery || query instanceof CustomQuery“. We then need to change ALL
20 occurrences of this test and ensure we handle both cases appropriately.
==> Very invasive.
4) Add a method createCustomQuery() that would return a boolean query
wherein a special clause is added that allows us to identify our custom
queries.  This special clause should not impact search results. => Pretty
ugly.
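For reference, the option-3 helper would look roughly like this (CustomQuery stands in for our marker interface and is only a stub here):

```java
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

final class QueryKinds {
  // Stub marker for our own query classes once they stop extending BooleanQuery
  interface CustomQuery {}

  // Single place to test "boolean-like" queries instead of ~20 instanceof checks
  static boolean isCustomBooleanQuery(Query q) {
    return q instanceof BooleanQuery || q instanceof CustomQuery;
  }
}
```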


Other potential clean, low risk, and less invasive solution?


Max.


SOLR-9311

2016-07-18 Thread Kent Mu
Hi friends,

I have come across an issue and raised it in the Solr JIRA; the link
is below.

https://issues.apache.org/jira/browse/SOLR-9311

looking forward to your reply.

Thanks!
Kent


Re: solrcloud so many connections

2016-07-18 Thread Kent Mu
Hello, has anybody else come across this issue? Can anybody help me?


2016-07-14 13:01 GMT+08:00 Kent Mu :

> Hi friends!
>
> We are using SolrJ 4.9.1 to connect to ZooKeeper, and the Solr server
> version is 4.9.0. We are currently using CloudSolrServer as a singleton. I
> believe that SolrJ to ZooKeeper is a TCP connection, and that SolrJ talks to
> the SolrCloud nodes over HTTP connections.
>
> We use Zabbix to monitor the SolrCloud status, and we deploy Solr in
> WildFly, for example on port 8180. We find that the number of connections
> to Solr on that port is very high; for now the number
> can be around 4000, which is too large.
>
> We also find that with the increasing connections, query speed becomes
> slow.
>
> Does anyone else come across this issue?
>
> look forward to your reply.
>
> Thanks.
> Kent
>


Problem using bbox in schema

2016-07-18 Thread Rastislav Hudak
Hi all,

Using Solr 6.1.0, I'm trying to add a bbox field type to my schema, the same way it
appears in all the examples I could find:





However, if I add these lines the core cannot be initialized; I'm getting the
following exception:

java.util.concurrent.ExecutionException:
org.apache.solr.common.SolrException: Unable to create core [phaidra]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer$1.run(CoreContainer.java:494)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$22(ExecutorUtil.java:229)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core
[phaidra]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:825)
at org.apache.solr.core.CoreContainer.lambda$load$0(CoreContainer.java:466)
... 5 more
Caused by: org.apache.solr.common.SolrException: Could not load conf for
core phaidra: Can't load schema /var/solr/data/phaidra/conf/managed-schema:
Plugin Initializing failure for [schema.xml] fieldType
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:86)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:810)
... 6 more
Caused by: org.apache.solr.common.SolrException: Can't load schema
/var/solr/data/phaidra/conf/managed-schema: Plugin Initializing failure for
[schema.xml] fieldType
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:600)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:183)
at
org.apache.solr.schema.ManagedIndexSchema.<init>(ManagedIndexSchema.java:104)
at
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:174)
at
org.apache.solr.schema.ManagedIndexSchemaFactory.create(ManagedIndexSchemaFactory.java:47)
at
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:108)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:79)
... 7 more
Caused by: org.apache.solr.common.SolrException: Plugin Initializing
failure for [schema.xml] fieldType
at
org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:194)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:494)
... 14 more
Caused by: java.lang.RuntimeException: Can't set positionIncrementGap on
custom analyzer class
org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType$1
at org.apache.solr.schema.FieldType.setArgs(FieldType.java:182)
at
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:150)
at
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:53)
at
org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:191)
... 15 more

thanks for any help!

Rasta


Indexing using SolrOutputFormat class

2016-07-18 Thread rashi gandhi
Hi All,



I am using Solr-5.0.0 API for indexing data in our application and the
requirement is to index the data in batches, using solr-mapreduce API.



In our application, we may receive data from any type of input source, for
example files, streams, and other relational or non-relational DBs, in a
particular format. And I need to index this data into Solr by using
SolrOutputFormat class.



As per my analysis until now, I find that SolrOutputFormat works with the
EmbeddedSolrServer and requires a path to the config files for indexing data,
without the need of passing host and port for creating the SolrClient.



I checked for the documentation online, but couldn’t find any proper
examples that make use of the SolrOutputFormat class.

Does anybody have some implementations or a document that mentions
details like what exactly needs to be passed as input to the SolrOutputFormat
configuration, etc.?



Any pointers would be helpful.


Re: SolrCloud - Query performance degrades with multiple servers(Shards)

2016-07-18 Thread kasimjinwala
Currently I am using SolrCloud 5.0 and I am facing a query performance issue
while using 3 implicit shards; each shard contains around 10K records.
When I specify the shards parameter (*shards=shard1*) in the query it gives
30K-35K qps, but when I remove the shards parameter it gives only
*1000-1500 qps*. Performance degrades drastically.

Please provide comments or suggestions to solve the above issue.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-Query-performance-degrades-with-multiple-servers-tp4024660p4287600.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Find part of long query in shorter fields

2016-07-18 Thread CA
Hi Ahmet,


thank you for the link. It helped me to find more resources.

What I still don’t understand, though, is why the edismax returns one of the 
documents with a partial hit and not the other:


q=Braun Series 9 9095CC Men's Electric Shaver Wet/Dry with Clean and Renew 
Charger
// edismax with qf/pf : „name“ and „brand“ field

HIT:
name: "Braun Series Clean CCR2 Cleansing Dock Cartridges Lemonfresh 
Formula Cartrige (Compatible with Series 7,5,3) 2 pc“
brand: Braun

NOT A HIT:
name: "Braun 9095cc Series 9 Electric Shaver“
brand: Braun

(explainOther, schema, solrconfig for this, see my previous e-mail)


I’m still thinking that if I could understand what is happening then it would 
help me figure out what the solution for my use case is. Maybe edismax would be 
perfectly fine with the right combination of fieldtypes and config values?


Thanks for your input!
Chantal






FW: How to Add New Fields and Fields Types Programmatically Using Solrj

2016-07-18 Thread Jeniba Johnson

Hi,

I have configured Solr 5.3.1 and started Solr in schemaless mode. Using 
SolrInputDocument, I am able to add new fields in solrconfig.xml using Solrj.
How do I specify the field type of a field using Solrj?

Eg <field ... required="true" multivalued="false" />

How can I add field type properties using SolrInputDocument programmatically 
using Solrj? Can anyone help with it?



Regards,
Jeniba Johnson



