Re: Block Join Parent Query Parser range query search error

2016-07-25 Thread Zheng Lin Edwin Yeo
Thanks for the reply.

It works!

Regards,
Edwin


On 26 July 2016 at 00:08, Mikhail Khludnev  wrote:

> Hello Edwin,
>
> The issue is the space in child query clause.
> Please refer to child range query via v=$qparam like it's proposed in
>
> http://blog-archive.griddynamics.com/2013/12/grandchildren-and-siblings-with-block.html
>
> On Mon, Jul 25, 2016 at 1:40 PM, Zheng Lin Edwin Yeo
> wrote:
>
> > Hi,
> >
> > I am using Solr 6.1.0, and I'm indexing Parent-Child data into Solr.
> >
> > When I do my query, I use the Block Join Parent Query Parser, to return
> > only the parent's records, and not any of the child records, even though
> > there might be a match in the child record.
> >
> > However, I am not able to do the range query for the child record. For
> > example, if I search with this query
> > q=+title:join +{!parent which="content_type:parentDocument"}range_f:[2
> > TO 8]
> >
> > I will get the following error:
> >
> > {
> >   "responseHeader":{
> > "zkConnected":true,
> > "status":400,
> > "QTime":3},
> >   "error":{
> > "metadata":[
> >   "error-class","org.apache.solr.common.SolrException",
> >   "root-error-class","org.apache.solr.parser.ParseException"],
> > "msg":"org.apache.solr.search.SyntaxError: Cannot parse
> > 'range_f:[2': Encountered \"<EOF>\" at line 1, column 18.\r\nWas
> > expecting one of:\r\n\"TO\" ...\r\n<RANGE_QUOTED> ...\r\n
> > <RANGE_GOOP> ...\r\n",
> > "code":400}}
> >
> >
> > What could be the issue here?
> >
> > Regards,
> > Edwin
> >
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>


RE: No need white space split

2016-07-25 Thread Prasanna Josium
Hi,
One other possibility is to use the query operator 'q.op=AND' along with your
query.
By default this is 'OR'. Hope it helps your situation.
Prasanna  
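
As a concrete sketch of the suggestion (terms taken from Shashi's example, URL-encoding omitted), the request would add q.op:

```
q=CORSAIR ValueSelect&q.op=AND
```

With q.op=AND both terms must match, so a document containing only "CORSAIR" (such as "CORSAIR XMS 2GB") is excluded; with the default OR, documents containing either term match.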

-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID] 
Sent: 25 July 2016 19:50
To: solr-user@lucene.apache.org; shashirous...@gmail.com
Subject: Re: No need white space split

Hi,

May be you can simply use string field type?
Or KeywordTokenizerFactory?

Ahmet
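
A sketch of what the second suggestion could look like in schema.xml (the field type name here is illustrative): with KeywordTokenizerFactory the entire field value becomes a single token, so "CORSAIR ValueSelect" matches only as a whole value, not on individual words:

```xml
<fieldType name="text_exact" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- emit the whole field value as one token instead of splitting on whitespace -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <!-- optional: lowercase the single token for case-insensitive matching -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Note that queries against such a field then need to supply the full value (e.g. as a phrase), since partial words will not match the single indexed token.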



On Monday, July 25, 2016 4:38 PM, Shashi Roushan  
wrote:
Hi All,

I am Shashi.

I am using Solr 6.1. I want to get a result only when the whole word is matched.
Actually, I want to avoid the whitespace split.

Whenever we search for "CORSAIR ValueSelect", I want only the result "CORSAIR
ValueSelect"; currently I am also getting the result "CORSAIR XMS 2GB".

Can any one help me?


Re: error indexing spatial

2016-07-25 Thread David Smiley
Hi tig.  Most likely, you didn't repeat the first point as the last.  Even
though it's redundant, this is what WKT (and some other spatial formats)
calls for.
~ David
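
For illustration, a hedged example of a closed WKT polygon (the coordinates are made up): the last point repeats the first, so the ring is explicitly closed:

```
POLYGON ((10 10, 20 10, 20 20, 10 20, 10 10))
```

An open ring such as `POLYGON ((10 10, 20 10, 20 20, 10 20))` is the kind of input that typically triggers a parse error at index time.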

On Wed, Jul 20, 2016 at 10:13 PM tkg_cangkul  wrote:

> hi i try to indexing spatial format to solr 5.5.0 but i've got this error
> message.
>
> [image: error1]
>
> [image: error2]
> anybody can help me to solve this pls?
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


To use XML or database with DIH?

2016-07-25 Thread Wendy
Hi, I have a question regarding what type of data input to use with Solr DIH:
an XML file or a database? I have a large collection of data to be indexed
initially, then incremental indexing (add/update/delete) on a weekly basis.
Should I use a MySQL database or an XML file? I would also like the code to be
flexible enough to handle future data fields without code changes. Thanks, Wendy



--
View this message in context: 
http://lucene.472066.n3.nabble.com/To-use-XML-or-database-with-DIH-tp4288827.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: GC implications on Solr

2016-07-25 Thread Emir Arnautovic

Hi Madhur,

Shawn described an extreme case (not unusual, though) that is not hard to
detect since the effects will be catastrophic. You can use one of the Solr
monitoring tools to see how GC (and other interrupting events such as
commits, segment merges, or a saturated network) affects Solr's numbers. One
such tool is Sematext's SPM: http://sematext.com/spm/. It will give you
enough info to pinpoint issues and tune Solr and the JVM.


Thanks,

Emir

--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On 25.07.2016 18:07, Shawn Heisey wrote:

On 7/24/2016 11:13 PM, Madhur Adlakha wrote:

To whom so ever it may concern,

I have been fetching certain Solr metrics and keeping a track of them 
graphically on Grafana, there are a few queries I wanted to clear, as follows:

   *   How and which metrics of Solr are affected by the Garbage collector?
   *   Which metrics of garbage collector should we track that might have 
implications on Solr?

Garbage collection can result in a complete pause of the Java virtual
machine.  This pause can be quite long -- I've seen 15-20 seconds in a
single full GC pause in a running Solr 4.x install with no GC tuning.
When this happens, Java *only* does garbage collection.  *ANYTHING* that
Solr is doing when the pause occurs will be delayed, so any performance
metric might be affected.

Pause time is what you need to track.  When there are GC logs, you can
load those logs into GCViewer and get a lot of statistics.

https://github.com/chewiebug/GCViewer

Another good tool for calculating JVM pauses from *any* source is jHiccup.

https://www.azul.com/jhiccup/

The start script in Solr 5.0 and later has default GC tuning parameters
that work pretty well for typical heap sizes of a few gigabytes.  If you
have a particularly large heap, you may need different GC tuning.  I
have done some work on G1 tuning parameters.

https://wiki.apache.org/solr/ShawnHeisey#GC_Tuning_for_Solr

Thanks,
Shawn



Re: Sorting - uppercase value first or lowercase value

2016-07-25 Thread Erick Erickson
Well, since the ascii upper-case codes are smaller than lower case,
i.e.
A = 0x41
a = 0x61

upper case before lower case is correct IMO.

But you're being fooled by the "tiebreaker", I'd guess,
along with (I suppose) a small number of test docs. When
two docs have the same sort value, the internal Lucene
doc ID is used to break the tie. I suggest that it just happens
that you've indexed your docs with all the upper-case
versions first in your test set and all the lower-case
versions second. If I'm right, and you reverse
the sort order, the docs will still appear upper-case first.

Try interleaving upper and lower case values and I think you'll
see them mixed in the result, i.e.
doc1: APPLE
doc2: apple
doc3: APPLE
doc4: apple

Best,
Erick
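
The two orderings under discussion are easy to reproduce outside Solr; a small Python sketch using the field values from Vasu's post (byte-wise vs. case-folded ordering, with a stable sort falling back to insertion order on ties, analogous to Lucene's internal doc-ID tiebreak):

```python
words = ["APPLES", "ZUCCHINI", "banana", "BANANA", "apples", "zucchini"]

# Byte-wise (ASCII) ordering: every uppercase letter sorts before any
# lowercase one, so all-caps values come first as a group.
print(sorted(words))
# ['APPLES', 'BANANA', 'ZUCCHINI', 'apples', 'banana', 'zucchini']

# Case-folded ordering, as a lowercasing analysis chain produces:
# equal-ignoring-case values tie, and the stable sort keeps them in
# original insertion order -- note banana/BANANA come out in the order
# they were listed, which is exactly the tiebreak effect Erick describes.
print(sorted(words, key=str.lower))
# ['APPLES', 'apples', 'banana', 'BANANA', 'ZUCCHINI', 'zucchini']
```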

On Mon, Jul 25, 2016 at 9:59 AM, Vasu Y  wrote:
> Hi,
>  We are indexing our objects into Solr and let users to sort by different
> fields. The sort field is defined as specified below in schema.xml:
>
> <fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
>
> For a field of type "lowercase", if we have the field values: APPLES,
> ZUCCHINI, banana, BANANA, apples, zucchini and sort in ascending order,
> solr produces the result in the following sorted order:
> APPLES, apples, BANANA, banana, ZUCCHINI, zucchini.
>
> But we have another tool which also displays the same information from a
> database in the following sorted order:
> apples, APPLES, banana, BANANA, zucchini, ZUCCHINI
>
> But the database is using the SQL query "select column1 from table1 order
> by UPPER(column1) asc".
>
> I could either change SQL query to "select column1 from table1 order by
> LOWER(column1) asc" or change solr definition to include
> solr.UpperCaseFilterFactory instead of solr.LowerCaseFilterFactory so that
> both applications behave same in terms of sorting.
>
> But, in general, when we sort a collection of string values, what should be
> the correct sort order? Should upper case value ("APPLE") come before
> lowercase value ("apple") or the other way (lowercase value before
> uppercase value) when sorting in ascending order?
>
> Thanks,
> Vasu


Sorting - uppercase value first or lowercase value

2016-07-25 Thread Vasu Y
Hi,
 We are indexing our objects into Solr and let users to sort by different
fields. The sort field is defined as specified below in schema.xml:

<fieldType name="lowercase" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

For a field of type "lowercase", if we have the field values: APPLES,
ZUCCHINI, banana, BANANA, apples, zucchini and sort in ascending order,
solr produces the result in the following sorted order:
APPLES, apples, BANANA, banana, ZUCCHINI, zucchini.

But we have another tool which also displays the same information from a
database in the following sorted order:
apples, APPLES, banana, BANANA, zucchini, ZUCCHINI

But the database is using the SQL query "select column1 from table1 order
by UPPER(column1) asc".

I could either change SQL query to "select column1 from table1 order by
LOWER(column1) asc" or change solr definition to include
solr.UpperCaseFilterFactory instead of solr.LowerCaseFilterFactory so that
both applications behave same in terms of sorting.

But, in general, when we sort a collection of string values, what should be
the correct sort order? Should upper case value ("APPLE") come before
lowercase value ("apple") or the other way (lowercase value before
uppercase value) when sorting in ascending order?

Thanks,
Vasu


Re: min()/max() on date fields using JSON facets

2016-07-25 Thread Yonik Seeley
On Mon, Jul 25, 2016 at 9:57 AM, Tom Evans  wrote:
> For the equivalent JSON facet - "{'date.max': 'max(date_published)',
> 'date.min': 'min(date_published)'}" - I'm returned this:
>
> {u'count': 86760, u'date.max': 146836800.0, u'date.min': 129409920.0}
>
> What do these numbers represent - I'm guessing it is milliseconds
> since epoch? In UTC?

Yeah, that would probably be it.
IIRC min/max only currently support numerics.  Could you open a JIRA
issue for this?
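
A quick way to sanity-check the epoch guess (the value below is a hypothetical epoch-milliseconds number corresponding to the stats max of 2016-07-13; the facet values shown in the archive appear to have lost digits):

```python
from datetime import datetime, timezone

ms = 1468368000000  # hypothetical epoch-millis value for the max date
dt = datetime.fromtimestamp(ms / 1000, tz=timezone.utc)  # epoch millis -> UTC datetime
print(dt.isoformat())
# 2016-07-13T00:00:00+00:00
```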

-Yonik


Re: loading zookeeper data

2016-07-25 Thread Erick Erickson
A collection is simply the "SolrCloud" way of thinking about a logical
index that incorporates shards, replication factors, changing topology
of where the replicas live, and the like. In your case it's synonymous
with your core (master and slaves). Since there's no master or slave
role in SolrCloud, it's a little confusing (leader and
replica/follower roles can change in SolrCloud).

bq: Anyhow, the bottom line appears to be that 130Mb of jars are
needed to deploy my configuration to Zookeeper
bq:  I don't want production machines to require VCS checkout credentials

Huh? I think you're confusing deployment tools with how Zookeeper is
used in SolrCloud. Zookeeper has two major functions:

1> store the conf directory (schema.xml, solrconfig.xml and the like),
plus occasionally custom jars and make these automatically available
to all Solr nodes in the cluster. It does NOT store the whole Solr
deployment.

2> be aware of all Solr nodes in the system and notify all the other
Solr nodes when instances go up and down.

Zookeeper was never intended to hold all of Solr and take the place of
puppet or chef. It will not automatically provision a new bare-metal
node with a working Solr etc.

Especially the VCS comment. _Some_ node somewhere has to be VCS
conversant. But once that machine pushes config files to Zookeeper,
they're then automagically available to all the Solr nodes in the
collection, the Solr nodes need to know nothing about your VCS system.

Anyway, if you're happy with your current setup go ahead and continue
to use it. Just be clear what Zookeeper is intended to solve and what
it isn't. It's perfectly compatible with Puppet, Chef and the like

Best,
Erick

On Sun, Jul 24, 2016 at 4:46 PM, Aristedes Maniatis  wrote:
> Thanks so much for your reply. That's clarified a few things for me.
>
> Erick Erickson wrote:
>
>> Where SolrCloud becomes compelling is when you _do_ need to have
>> shard, and deal with HA/DR.
>
> I'm not using shards since the indices are small enough; however, I use 
> master/slave with 6 nodes for two reasons: having a single master poll the 
> database means less load on the database than have every node poll 
> separately. And of course we still want HA and performance, so we balance 
> load with haproxy.
>
>> Then the added step of maintaining things
>> in Zookeeper is a small price to pay for _not_ having to be sure that
>> all the configs on all the servers are all the same. Imagine a cluster
>> with several hundred replicas out there. Being absolutely sure that
>> all of them have the same configs, have been restarted and the like
>> becomes daunting. So having to do an "upconfig" is a good tradeoff
>> IMO.
>
> Saltstack (and ansible, puppet, chef, etc) all make distributed configuration 
> management trivial. So it isn't solving any problem for me, but I understand 
> how people without a configuration management tool would like it.
>
>
>
>> The bin/solr script has a "zk -upconfig" parameter that'll take care
>> of pushing the configs up. Since you already have the configs in VCS,
>> your process is just to pull them from vcs to "somewhere" then
>> bin/solr zk -upconfig -z zookeeper_asserss -n configset_name -d
>> directory_you_downloaded_to_from_VCS.
>
> Yep, I guess that's confirming my guess at how people are expected to use 
> this. It's pretty cumbersome for me because:
>
> 1. I don't want production machines to require VCS checkout credentials
> 2. I don't want to have to install Solr (and keep the version in sync with 
> production) on our build or configuration management machines
> 3. I still need files on disk in order to version control them and tie that 
> into our QA processes. Now I need another step to take those files and inject 
> them into the Zookeeper black box, ensuring they are always up to date.
>
> I do understand that people who managed hundreds of nodes completely by hand 
> would find it useful. But I am surprised that there were any of those people.
>
> I was hoping that Zookeeper had some hidden features that would make my life 
> easier.
>
>
>> Thereafter you simply refer to them by name when you create a
>> collection and the rest of it is automatic. Every time a core reloads
>> it gets the new configs.
>>
>> If you're trying to manipulate _cores_, that may be where you're going
>> wrong. Think of them as _collections_. What's not clear from your
>> problem statement is whether these cores on the various machines are
>> part of the same collection or not.
>
> I was unaware of the concept of collection until now. We use one core for 
> each type of entity we are indexing and that works well.
>
>> Do you have multiple shards in one
>> logical index?
>
> No shards. Every Solr node contains the complete set of all data.
>
>>  Or do you have multiple collections that have
>> masters/slaves (in which case the master and all the slaves that point
>> to it will be a "collection")?
>
> I'm not understanding from 

Re: Block Join Parent Query Parser range query search error

2016-07-25 Thread Mikhail Khludnev
Hello Edwin,

The issue is the space in child query clause.
Please refer to child range query via v=$qparam like it's proposed in
http://blog-archive.griddynamics.com/2013/12/grandchildren-and-siblings-with-block.html
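
A hedged sketch of what the fix looks like (the parameter name childq is illustrative): referencing the child clause through a v=$param indirection keeps the space in `[2 TO 8]` from being split by local-params parsing:

```
q=+title:join +{!parent which="content_type:parentDocument" v=$childq}
childq=range_f:[2 TO 8]
```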

On Mon, Jul 25, 2016 at 1:40 PM, Zheng Lin Edwin Yeo 
wrote:

> Hi,
>
> I am using Solr 6.1.0, and I'm indexing Parent-Child data into Solr.
>
> When I do my query, I use the Block Join Parent Query Parser, to return
> only the parent's records, and not any of the child records, even though
> there might be a match in the child record.
>
> However, I am not able to do the range query for the child record. For
> example, if I search with this query
> q=+title:join +{!parent which="content_type:parentDocument"}range_f:[2 TO 8]
>
> I will get the following error:
>
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":400,
> "QTime":3},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.parser.ParseException"],
> "msg":"org.apache.solr.search.SyntaxError: Cannot parse
> 'range_f:[2': Encountered \"<EOF>\" at line 1, column 18.\r\nWas
> expecting one of:\r\n\"TO\" ...\r\n<RANGE_QUOTED> ...\r\n
> <RANGE_GOOP> ...\r\n",
> "code":400}}
>
>
> What could be the issue here?
>
> Regards,
> Edwin
>



-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics


Re: GC implications on Solr

2016-07-25 Thread Shawn Heisey
On 7/24/2016 11:13 PM, Madhur Adlakha wrote:
> To whom so ever it may concern,
>
> I have been fetching certain Solr metrics and keeping a track of them 
> graphically on Grafana, there are a few queries I wanted to clear, as follows:
>
>   *   How and which metrics of Solr are affected by the Garbage collector?
>   *   Which metrics of garbage collector should we track that might have 
> implications on Solr?

Garbage collection can result in a complete pause of the Java virtual
machine.  This pause can be quite long -- I've seen 15-20 seconds in a
single full GC pause in a running Solr 4.x install with no GC tuning. 
When this happens, Java *only* does garbage collection.  *ANYTHING* that
Solr is doing when the pause occurs will be delayed, so any performance
metric might be affected.

Pause time is what you need to track.  When there are GC logs, you can
load those logs into GCViewer and get a lot of statistics.

https://github.com/chewiebug/GCViewer

Another good tool for calculating JVM pauses from *any* source is jHiccup.

https://www.azul.com/jhiccup/

The start script in Solr 5.0 and later has default GC tuning parameters
that work pretty well for typical heap sizes of a few gigabytes.  If you
have a particularly large heap, you may need different GC tuning.  I
have done some work on G1 tuning parameters.

https://wiki.apache.org/solr/ShawnHeisey#GC_Tuning_for_Solr
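
For reference, a hedged sketch of Java 8-era flags that produce a GC log GCViewer can read (the path is illustrative; later JVM versions use the unified -Xlog:gc option instead):

```
-verbose:gc -Xloggc:/var/solr/logs/solr_gc.log \
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime
```

PrintGCApplicationStoppedTime in particular surfaces the total stopped-the-world time, which is the pause metric discussed above.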

Thanks,
Shawn



Re: solr 5.5.2 loadOnStartUp does not work

2016-07-25 Thread Erick Erickson
"Load" is a little tricky here; it means "load the core and open a searcher."
The core _descriptor_, which is the internal structure of
core.properties (plus some other info) _is_ loaded and is what's
used to show the list of available cores. Else how would you
even know the core existed?

It's not until you actually try to do anything (even click on the
item in the "cores" drop-down) that the heavy-duty
work of opening the core actually executes.

So I think it's working as expected. But do note
that this whole area (transient cores, loading on
startup true/false) is intended for stand-alone
Solr and is unsupported in SolrCloud.

Best,
Erick
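
For stand-alone Solr, a hedged sketch of a core.properties for a core that is both lazily loaded and eligible for later unloading (transient cores are only evicted once more than transientCacheSize cores, configured in solr.xml, have been loaded):

```
name=indexer
loadOnStartup=false
transient=true
```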

On Mon, Jul 25, 2016 at 6:09 AM, elisabeth benoit
 wrote:
> Hello,
>
> I have a core.properties with content
>
> name=indexer
> loadOnStartup=false
>
>
> but the core is loaded on start up (it appears on the admin interface).
>
> I thought the core would not be loaded on startup. Did I miss something?
>
>
> best regards,
>
> elisabeth


Re: Downgraded Raid5 cause endless recovery and hang.

2016-07-25 Thread Shawn Heisey
On 7/24/2016 8:04 PM, forest_soup wrote:
> We have a 5 node solrcloud. When a solr node's disk had issue and
> RAID5 downgraded, a recovery on the node was triggered. But then a
> hang happens. The node disappears from the live_nodes list.

In my opinion, RAID5 (and RAID6) are bad ways to handle storage.  Cost
per usable gigabyte is the only real advantage, but the performance
problems are not worth that advantage.  If you care more about capacity
than performance, then it might be OK.

Under normal circumstances (no failed disk), if you're writing to the
array at all, all I/O (both read and write) is slow.  RAID5 can have
awesome read performance, but *only* if the array is healthy and there is
no writing happening at the same time.

If you lose a disk, the parity reads required to reconstruct the missing
data cause REALLY bad performance.

When you replace the failed disk and it is rebuilding, performance is
even worse.  The additional load is often enough to cause a second disk
to fail, which for RAID5 means the entire array is lost.

These I/O performance issues cause really big problems for Solr and
zookeeper.  There's no surprise to me that a degraded RAID5 array has
issues like you describe.

Thanks,
Shawn



Re: Using log4j.xml in Solr6

2016-07-25 Thread Erick Erickson
_How_ is it "not working"? You might want
to review:
http://wiki.apache.org/solr/UsingMailingLists

Best,
Erick

On Mon, Jul 25, 2016 at 7:09 AM, marotosg  wrote:
> Hi all,
>
> I am trying to upgrade Solr 4.11 to Solr 6 and am having some trouble with
> logging.
> I have Solr 4.11 running on Tomcat 6 as a solr.war. A few jar files inside
> my solr.war were updated as explained in this post, so I can use a "log4j.xml"
> with some advanced features to compress old files.
> https://wiki.apache.org/solr/SolrLogging
>
> This is not working for me on Solr6.2.
>
> Does anyone know how to achieve the use of log4j.xml in newer versions of
> Solr?
>
> Thanks
> Sergio
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Using-log4j-xml-in-Solr6-tp4288742.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Re: No need white space split

2016-07-25 Thread Ahmet Arslan
Hi,

May be you can simply use string field type?
Or KeywordTokenizerFactory?

Ahmet



On Monday, July 25, 2016 4:38 PM, Shashi Roushan  
wrote:
Hi All,

I am Shashi.

I am using Solr 6.1. I want to get a result only when the whole word is matched.
Actually, I want to avoid the whitespace split.

Whenever we search for "CORSAIR ValueSelect", I want only the result
"CORSAIR ValueSelect"; currently I am also getting the result "CORSAIR
XMS 2GB".

Can any one help me?


Using log4j.xml in Solr6

2016-07-25 Thread marotosg
Hi all,

I am trying to upgrade Solr 4.11 to Solr 6 and am having some trouble with
logging.
I have Solr 4.11 running on Tomcat 6 as a solr.war. A few jar files inside
my solr.war were updated as explained in this post, so I can use a "log4j.xml"
with some advanced features to compress old files.
https://wiki.apache.org/solr/SolrLogging

This is not working for me on Solr6.2.

Does anyone know how to achieve the use of log4j.xml in newer versions of
Solr?

Thanks
Sergio



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Using-log4j-xml-in-Solr6-tp4288742.html
Sent from the Solr - User mailing list archive at Nabble.com.


min()/max() on date fields using JSON facets

2016-07-25 Thread Tom Evans
Hi all

I'm trying to replace a use of the stats module with JSON facets in
order to calculate the min/max date range of documents in a query. For
the same search, "stats.field=date_published" returns this:

{u'date_published': {u'count': 86760,
 u'max': u'2016-07-13T00:00:00Z',
 u'mean': u'2013-12-11T07:09:17.676Z',
 u'min': u'2011-01-04T00:00:00Z',
 u'missing': 0,
 u'stddev': 50006856043.410477,
 u'sum': u'3814570-11-06T00:00:00Z',
 u'sumOfSquares': 1.670619719649826e+29}}

For the equivalent JSON facet - "{'date.max': 'max(date_published)',
'date.min': 'min(date_published)'}" - I'm returned this:

{u'count': 86760, u'date.max': 146836800.0, u'date.min': 129409920.0}

What do these numbers represent - I'm guessing it is milliseconds
since epoch? In UTC?
Is there any way to control the output format or TZ?
Is there any benefit in using JSON facets to determine this, or should
I just continue using stats?

Cheers

Tom


Re: solr extends query

2016-07-25 Thread sara hajili
I looked at the SynonymFilter, but it is not sufficient for me. As I saw it, I
must build the synonym filter map explicitly; that is, I must put into the
synonym map, for example, that "home" is a synonym of "house".
So at query time, when the user types "home", the filter adds "house" and
expands the query.
But I don't want this. I have an algorithm that takes n words and, using
WordNet, decides which words should be added to expand the query; the
decision depends on all the words present in the query together.
So I cannot tokenize my query word by word:
if I tokenize word by word, then the synonym filter factory
sees just one token at a time and cannot decide which word should be added
to the query. I need to have all the words together.
I also cannot tokenize my query on "." (for example), because I want
to tokenize the query word by word in order to normalize every word, so I
cannot change the tokenization, and I cannot use SynonymFilterFactory.
How can I take all the query words and expand the query with my algorithm
based on WordNet?

On Mon, Jul 25, 2016 at 5:32 AM, Erik Hatcher 
wrote:

> You’re going to need to tokenize to look up words, so a TokenFilter is a
> better place to put this sort of thing, I think.
>
> Build off of Lucene’s SynonymFilter (and corresponding
> SynonymFilterFactory)
> https://github.com/apache/lucene-solr/blob/5e5fd662575105de88d8514b426bccdcb4c76948/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilter.java
> <
> https://github.com/apache/lucene-solr/blob/5e5fd662575105de88d8514b426bccdcb4c76948/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilter.java
> >
>
> I’m not sure exactly what your code is trying to do (please share that as
> well, for best assistance), but I do not recommend putting custom code into
> Solr’s package namespaces (and obviously that has issues here, because of
> the separate JAR and visibility/access).
>
> Erik
>
>
> > On Jul 25, 2016, at 2:25 AM, sara hajili  wrote:
> >
> > i use solr 6-1-0.
> > and i write my own search Handler .i got this error:
> >
> > java.lang.IllegalAccessError: tried to access field
> > org.apache.solr.handler.component.ResponseBuilder.requestInfo from
> > class org.apache.solr.handler.component.MyResponseBuilder
> >   at
> org.apache.solr.handler.component.MyResponseBuilder.getRequestInfo(MyResponseBuilder.java:19)
> >   at
> org.apache.solr.handler.component.MySearchHandler.handleRequestBody(MySearchHandler.java:94)
> >   at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> >   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
> >   at
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
> >   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
> >   at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
> >   at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> >   at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> >   at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> >   at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> >   at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> >   at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> >   at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> >   at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> >   at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> >   at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> >   at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> >   at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> >   at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> >   at org.eclipse.jetty.server.Server.handle(Server.java:518)
> >   at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> >   at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> >   at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >   at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> >   at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> >  

No need white space split

2016-07-25 Thread Shashi Roushan
Hi All,

I am Shashi.

I am using Solr 6.1. I want to get a result only when the whole word is matched.
Actually, I want to avoid the whitespace split.

Whenever we search for "CORSAIR ValueSelect", I want only the result
"CORSAIR ValueSelect"; currently I am also getting the result "CORSAIR
XMS 2GB".

Can any one help me?


solr 5.5.2 loadOnStartUp does not work

2016-07-25 Thread elisabeth benoit
Hello,

I have a core.properties with content

name=indexer
loadOnStartup=false


but the core is loaded on start up (it appears on the admin interface).

I thought the core would not be loaded on startup. Did I miss something?


best regards,

elisabeth


Re: solr extends query

2016-07-25 Thread Erik Hatcher
You’re going to need to tokenize to look up words, so a TokenFilter is a better 
place to put this sort of thing, I think.

Build off of Lucene’s SynonymFilter (and corresponding SynonymFilterFactory) 
https://github.com/apache/lucene-solr/blob/5e5fd662575105de88d8514b426bccdcb4c76948/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilter.java
 


I’m not sure exactly what your code is trying to do (please share that as well, 
for best assistance), but I do not recommend putting custom code into Solr’s 
package namespaces (and obviously that has issues here, because of the separate 
JAR and visibility/access).

Erik


> On Jul 25, 2016, at 2:25 AM, sara hajili  wrote:
> 
> i use solr 6-1-0.
> and i write my own search Handler .i got this error:
> 
> java.lang.IllegalAccessError: tried to access field
> org.apache.solr.handler.component.ResponseBuilder.requestInfo from
> class org.apache.solr.handler.component.MyResponseBuilder
>   at 
Block Join Parent Query Parser range query search error

2016-07-25 Thread Zheng Lin Edwin Yeo
Hi,

I am using Solr 6.1.0, and I'm indexing Parent-Child data into Solr.

When I do my query, I use the Block Join Parent Query Parser, to return
only the parent's records, and not any of the child records, even though
there might be a match in the child record.

However, I am not able to do the range query for the child record. For
example, if I search with this query
q=+title:join +{!parent which="content_type:parentDocument"}range_f:[2
TO 8]

I will get the following error:

{
  "responseHeader":{
"zkConnected":true,
"status":400,
"QTime":3},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.parser.ParseException"],
"msg":"org.apache.solr.search.SyntaxError: Cannot parse
'range_f:[2': Encountered \"<EOF>\" at line 1, column 18.\r\nWas
expecting one of:\r\n\"TO\" ...\r\n<RANGE_QUOTED> ...\r\n
<RANGE_GOOP> ...\r\n",
"code":400}}


What could be the issue here?

Regards,
Edwin
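
As Mikhail's reply explains, the parser stops at the space inside the child
range clause; moving the range into a separate request parameter and
referencing it with v=$param avoids the problem. A minimal sketch of building
such a request (the parameter names "pq" and "childq" are illustrative
choices, not required by Solr):

```python
# Sketch: pass the child range query via a parameter reference (v=$childq)
# so the space in "[2 TO 8]" is not split by the outer query parser.
from urllib.parse import urlencode

params = {
    "q": "+title:join +{!parent which=$pq v=$childq}",
    "pq": "content_type:parentDocument",
    "childq": "range_f:[2 TO 8]",
}

query_string = urlencode(params)  # append to the collection's /select? URL
print(query_string)
```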


Credentials Implementation to Solr Admin page.

2016-07-25 Thread Swathika
How do I apply a username and password to the Solr 6.0.1 admin page? Please
let me know.

PS: We are using Solr master and slave setup.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Credentials-Implementation-to-Solr-Admin-page-tp4288708.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: solr extends query

2016-07-25 Thread sara hajili
I use Solr 6.1.0, and I wrote my own SearchHandler. I got this error:

java.lang.IllegalAccessError: tried to access field
org.apache.solr.handler.component.ResponseBuilder.requestInfo from
class org.apache.solr.handler.component.MyResponseBuilder
at 
org.apache.solr.handler.component.MyResponseBuilder.getRequestInfo(MyResponseBuilder.java:19)
at 
org.apache.solr.handler.component.MySearchHandler.handleRequestBody(MySearchHandler.java:94)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)


This happens because, in the Solr 6.1.0 SearchHandler, two fields in
ResponseBuilder (isDistrib and requestInfo) are declared with package access,
and the class has no getters for these fields.

So I implemented MySearchHandler: I created the new class "MySearchHandler"
in a new Java project, put it in the package "org.apache.solr.handler",

and made MySearchHandler extend SearchHandler.

At compile time, using the package-access fields is not an issue ("the IDE
assumes at compile time that they are in the same package").

But when I built a jar file, put it into Solr, and registered my handler
"MySearchHandler" as /mySearchhandler in the solrconfig file, I got the
error above.

So it seems that at runtime Solr finds two jar files with the same package
name, but does not recognize them as the same package, and does not let
MySearchHandler use the package-access fields isDistrib and requestInfo.

I saw that SearchHandler is in the solr-core jar file, so I merged that jar
into my SearchHandler jar, deleted the solr-core jar from the lib of the Solr
webapp, and then restarted Solr. "I did this so that there would not be two
jar files with the same package name; at runtime Solr would then find them in
the same package and let me use the two fields."

But after doing that, when I try to start Solr, it no longer starts, and I
get this on the command line:

"Solr could not start after 30 seconds." // something like a timeout exception


So now I think maybe I chose the wrong approach to expanding my queries and
adding WordNet to Solr.

Please describe how I can use the synonym filter factory, or the tokenizer
you mentioned, to expand queries.

tnx
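
To the last question: below is a minimal sketch of query-time synonym
expansion in schema.xml. The field type name "text_syn" and the file name
"synonyms.txt" are illustrative; the synonyms file goes in the core's conf
directory, with comma-separated lines like "small,little,tiny". For WordNet
specifically, SynonymFilterFactory also accepts format="wordnet" to read
WordNet prolog (wn_s.pl) files.

```xml
<!-- Sketch: query-time synonym expansion (names are illustrative). -->
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- expand="true" replaces each matched term with all of its synonyms -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
```

Applying synonyms only in the query analyzer (as above) expands searches
without having to re-index when the synonyms file changes.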


On Sun, Jul 24, 2016 at 12:09 PM, Erik Hatcher 
wrote:

> Have a look at Solr's source code and you will see many TokenFilter
> implementations.  The synonym token filter is the closest to what you want.
>
> But