Hi all,
I'm going to develop a Solr-based search architecture, and I wonder if you
could suggest which Solr version would best suit my needs.
I have 10 Solr machines which use replication, sharding and multi-core; 1
Solr server would index documents (XML, *PDF*, text ...) on an *NFS*
Hi all,
as I saw in this discussion [1], there were many issues with PDF indexing in
Solr 1.4 due to the Tika library (version 0.4).
In Solr 1.4.1 the Tika library is the same, so I guess the issues are the
same.
Could anyone, who contributed to the previous thread, help me in resolving
these issues?
Hi Jon,
Over the last few days we faced the same problem.
Using Solr 1.4.1 classic (Tika 0.4), from some PDF files we can't extract
content, and for others Solr throws an exception during the indexing
process.
You must:
update the Tika libraries (in /contrib/extraction/lib) with tika-core 0.8
. It would be very nice to have a Solr implementation using
the newest versions of PDFBox and Tika, and actually have content being
extracted... =)
Best,
Dave
-Original Message-
From: Alessandro Benedetti [mailto:benedetti.ale...@gmail.com]
Sent: Tuesday, July 27, 2010 6:09 AM
Hi all,
I need to retrieve query results with a ranking independent of each
result's default Lucene score, which means assigning the same score to
each query result.
I tried to use a zero boost factor ( ^0 ) to reset each
query result's score to zero.
This strategy seems to work within the
the exception?
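A common trick for a flat ranking (my assumption, not something stated in this thread) is to move the user query into a filter query: fq does not contribute to the score, and q=*:* gives every matching document the same constant score:

```
q=*:*&fq=title:solr
```

Every hit then comes back with the same score (1.0 from the match-all query), regardless of the fq terms.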
2010/9/7 Grant Ingersoll gsing...@apache.org
On Sep 7, 2010, at 7:08 AM, Alessandro Benedetti wrote:
Hi all,
I need to retrieve query-results with a ranking independent from each
query-result's default lucene score, which means assigning the same score
to
each query result
Any News?
I'm also interested in this topic :)
2011/12/12 Brian Lamb brian.l...@journalexperts.com
Hi all,
According to
http://wiki.apache.org/solr/DataImportHandler#Usage_with_XML.2BAC8-HTTP_Datasource
a
delta-import is not currently implemented for URLDataSource. I say
currently
Hi Guys,
I probably found a way to mimic the delta import for the FileListEntityProcessor
(I have used it for XML files ...)
Adding this configuration in the xml-data-config:
<entity name="personeImpreseList" rootEntity="false" dataSource="null"
        processor="FileListEntityProcessor"
        fileName="^.*\.xml$"
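For reference, FileListEntityProcessor also accepts a newerThan attribute, which can be pointed at the last import time to get delta-like behaviour. A hedged sketch (the entity name and baseDir are assumptions, not from this thread):

```xml
<entity name="xmlFiles" rootEntity="false" dataSource="null"
        processor="FileListEntityProcessor"
        baseDir="/data/xml"
        fileName="^.*\.xml$"
        newerThan="${dataimporter.last_index_time}">
  <!-- a nested XPathEntityProcessor entity would read each matched file -->
</entity>
```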
Hi guys,
I'm developing a custom SolrEventListener, and inside the postCommit()
method I need to execute some queries and collect results.
In my SolrEventListener class, I have a SolrCore
object (org.apache.solr.core.SolrCore) and a list of queries (Strings).
How can I use the SolrCore to
document; this is the principal need of
my plugin.
How can I search the recently indexed documents? How can I open the new
searcher? And where?
Inside postCommit doesn't seem to be a good place...
Any suggestion?
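A rough sketch of the idea (pseudocode against the older SolrCore API; treat the query and field names as assumptions): borrow the core's registered searcher, use it, and always release it. Note that postCommit fires before a new searcher is registered, so the borrowed searcher may not yet see the just-committed documents, which matches the behaviour described later in this thread.

```java
// Sketch only: getSearcher() returns a ref-counted searcher
// that MUST be released with decref(), or the core leaks searchers.
RefCounted<SolrIndexSearcher> ref = core.getSearcher();
try {
  SolrIndexSearcher searcher = ref.get();
  Query q = new TermQuery(new Term("id", "42")); // hypothetical query
  TopDocs hits = searcher.search(q, 10);
  // collect results from hits.scoreDocs ...
} finally {
  ref.decref();
}
```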
2011/12/29 Alessandro Benedetti benedetti.ale...@gmail.com
Hi guys,
I'm developing a custom
2011/12/31 Alessandro Benedetti benedetti.ale...@gmail.com
Ok, I have made progress: I built my architecture and I execute queries
inside the postCommit method, and they are launched as I want.
But the core can't see the recently updated documents, and the commit ends
after that
?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12994256#comment-12994256
).
Am I indexing wrong? Am I missing something?
The type of my spatial field is geohash ...
Cheers
--
---
Alessandro Benedetti
Sourcesense - making sense
with geohashes is an extension of what's in Solr,
it's not what's in Solr today. Recently I ported SOLR-2155 to Solr 3x, and
in a way that does NOT require that you patch Solr. I attached it to the
issue just now.
~ David Smiley
On Sep 29, 2011, at 9:37 AM, Alessandro Benedetti wrote:
Hi all,
I
you
2011/9/29 Smiley, David W. dsmi...@mitre.org
On Sep 29, 2011, at 5:10 PM, Alessandro Benedetti wrote:
Sorry David, probably I misunderstood your reply; what do you mean?
I'm using LucidWorks Enterprise 1.8 and, as far as I know, it includes the
geohash
patch.
Solr 3x, trunk, and I
We developed a custom highlighter to solve this issue.
We added a url field in the Solr schema doc for our domain, and when
highlighting is called, we access the file, extract the information and send
it to the custom highlighter.
If you still need some help, I can provide you our solution in
?
In simple words I want a bq to activate specific bf.
Cheers
--
---
Alessandro Benedetti
Sourcesense - making sense of Open Source: http://www.sourcesense.com
the date boost.
But this function I wrote has the wrong syntax; I need to correct the
exists part.
Any hint?
2012/10/26 Alessandro Benedetti a.benede...@sourcesense.com
Hi guys,
I was fighting with the boost factor in my edismax request handler:
<lst name="appends">
  <str name="defType">edismax</str>
Hi guys,
I was studying the join feature in depth, and I noticed that in Solr the
join query parser does not contribute to scoring.
If you add the parameter scoreMode, it is completely ignored...
Checking the source code, it's possible to see that the join query is built
as follows:
public class
It's really simple indeed. Solr provides the SpellCheck [1] feature that
allows you to do this.
You only have to configure the request handler and the search component.
And of course develop a simple UI (you can find an example in the Velocity
response handler, Solritas [2]).
Cheers
[1]
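A minimal hedged sketch of the two pieces mentioned above, a spellcheck component plus a handler wired to it (the field name, dictionary name and handler path are assumptions):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">text</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
  </lst>
</searchComponent>

<requestHandler name="/spell" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">default</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```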
I think this could help : http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning
Cheers
2013/9/27 ewinclub7 ewincl...@hotmail.com
With online football betting becoming so popular, the betting desks are now starting to expand into taking bets online themselves.
download goldclub http://www.goldclub.net/download/
Nope, it's not the last-components problem, but it's definitely the
request handler problem; it was the same for me ...
Switching to the /tvrh request handler solved my problem.
We should update the wiki!
2013/9/27 Shawn Heisey s...@elyograg.org
On 9/27/2013 4:02 PM, Jack Krupansky wrote:
Hi guys, I think this is a very simple bug, but I didn't know where to
quickly post it:
in schemaless mode, in the Solr admin UI, if you select a core and then
select the Schema tab, a wild error will appear, because no schema.xml file
exists:
Hi guys,
I was thinking about how to activate the DocValues approach for faceting.
Tell me if I am correct:
1) enable, in schema.xml for the field of interest, the docValues attribute
set to true.
2) use one of these 2 faceting strategies: fc (Field Cache) or fcs (per-
segment Field Cache)
3)
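A hedged sketch of steps 1 and 2 above (the field name is an assumption):

```xml
<!-- 1) enable docValues on the field to facet on -->
<field name="category" type="string" indexed="true" stored="true"
       docValues="true"/>
```

and at query time: facet=true&facet.field=category&facet.method=fc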
I have a ZooKeeper ensemble hosted on one Amazon server.
Using the CloudSolrServer and trying to connect, I obtain this really
unusual error:
969 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Client is
connected to ZooKeeper
1043 [main] INFO
running and what's the version of SolrJ? I am
guessing they are different.
On Wed, Oct 30, 2013 at 8:32 PM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
I have a ZooKeeper ensemble hosted on one Amazon server.
Using the CloudSolrServer and trying to connect, I obtain
Hi guys,
I was working with the ContentStreamUpdateRequest in Solr 4.5 to send
Solr a document with a set of metadata through an HTTP POST request.
Following the tutorial, it is easy to structure the request:
*contentStreamUpdateRequest.setParam("literal.field1", "value1");*
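A sketch of the full request (pseudocode; the file name, field values and the /update/extract handler path are assumptions based on the usual extracting-handler setup):

```java
// Sketch: send a file plus literal metadata in one POST
ContentStreamUpdateRequest req =
    new ContentStreamUpdateRequest("/update/extract");
req.addFile(new File("document.pdf"), "application/pdf");
req.setParam("literal.id", "doc-1");
req.setParam("literal.field1", "value1");
req.setParam("commit", "true");
solrServer.request(req); // solrServer is your SolrServer instance
```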
, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Hi guys,
I was working with the ContentStreamUpdateRequest in solr 4.5 to send to
Solr a document with a set of metaData through an HTTP POST request.
Following the tutorial is easy to structure the request
I'll copy this information here as well.
Another detail that comes to my mind is that the SolrServer used to process
the request is *CloudSolrServer*.
I will check the implementation of the method.
2013/12/14 Alessandro Benedetti benedetti.ale...@gmail.com
Thank you Raymond,
so what's wrong
Hi Nagendra,
really cool topic.
I'm really interested in discovering more information about the three
similarity algorithms you offer (Term Similarity, Document Similarity and
Term In Document Similarity).
I was looking for more details and explanations behind your Ranking
Algorithm.
Where could I
Hi guys, following this thread I have some questions:
1) regarding LUCENE-5350, what is the context quoted? Is the context a
filter query?
2) regarding https://issues.apache.org/jira/browse/SOLR-5378, do we have
the final documentation available?
Cheers
2014/1/16 Hamish Campbell
Any news regarding this?
I'm investigating Solr offline clustering as well (full index
clustering).
Cheers
2012-09-17 20:16 GMT+01:00 Denis Kuzmenok forward...@ukr.net:
Sorry for the late response. To be strict, here is what I want:
* I get documents all the time. Let's assume those are
Hi guys,
I'm looking around to find out if it's possible to have a full-index
/offline clustering.
My scope is to make a full index clustering and, for each document, have the
cluster field with the id/label of the cluster at indexing time.
Does anyone know more details regarding this kind of integration
for
offline clustering.
Ahmet
On Monday, March 10, 2014 4:11 PM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Hi guys,
I'm looking around to find out if it's possible to have a full-index
/offline clustering.
My scope is to make a full index clustering and for each document have
-classification-functions-of-lucene-and-mahout.html
May be others (Dawid Weiss) can clarify?
Ahmet
On Monday, March 10, 2014 4:24 PM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Thank you, Ahmet, I already know Mahout.
What I was curious about is whether there already exists an integration
Hi guys,
wondering if there is any proper way to access the Schema API via SolrJ.
Of course it is possible to reach it in Java with a specific HTTP request,
but in this way, using SolrCloud for example, we become coupled to one
specific instance (and we don't want that).
Code example:
)
Then you've got to parse the response using NamedList etc.etc.
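For reference, a sketch of the raw-request approach being discussed (pseudocode; the path and response key are assumptions): SolrJ of that era had no dedicated SchemaRequest, but a QueryRequest with a custom path still goes through the normal server selection, so CloudSolrServer can pick a live node instead of being coupled to one instance.

```java
// Sketch: issue a GET to the Schema API through SolrJ
QueryRequest req = new QueryRequest(new SolrQuery());
req.setPath("/schema/fields");
NamedList<Object> response = solrServer.request(req);
Object fields = response.get("fields"); // then walk the NamedList
```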
-Original Message-
From: Alessandro Benedetti [mailto:benedetti.ale...@gmail.com]
Sent: Tuesday, July 08, 2014 5:54 AM
To: solr-user@lucene.apache.org
Subject: [Solr Schema API] SolrJ Access
Hi guys,
wondering if there is any
mmm wondering how to pass the payload for the PUT using that structure with
SolrQuery...
2014-07-09 15:42 GMT+01:00 Alessandro Benedetti benedetti.ale...@gmail.com
:
Thanks Elaine!
It worked for the GET method!
I will test soon with the PUT method :)
One strange thing is that it is working
Alessandro Benedetti benedetti.ale...@gmail.com
:
mmm wondering how to pass the payload for the PUT using that structure
with SolrQuery...
2014-07-09 15:42 GMT+01:00 Alessandro Benedetti
benedetti.ale...@gmail.com:
Thank's Elaine !
Worked for the GET Method !
I will test soon with the PUT method
Hi guys,
I'm struggling to test the Schema API REST endpoints through EmbeddedSolrServer.
Out of the box the EmbeddedSolrServer is not able to recognize the schema
request handler, so I was trying to follow an approach like this:
public static void init() throws Exception {
final
Thank you Chris,
exactly as you suggested, I was looking into classes related to that one.
Playing with:
@BeforeClass
public static void init() throws Exception {
final SortedMap<ServletHolder,String> extraServlets = new
TreeMap<ServletHolder,String>();
final ServletHolder solrRestApi
I just had the very same problem, and I confirm that it is currently quite a
mess to manage suggestions in SolrJ!
I had to go with manual JSON parsing.
Cheers
2015-02-02 12:17 GMT+00:00 Jan Høydahl jan@cominvent.com:
Using the /suggest handler wired to SuggestComponent, the
SpellCheckResponse
Exactly, Tommaso,
I was referring to that!
I wrote another mail on the dev mailing list; I will open a JIRA issue for
that!
Cheers
2015-04-29 12:16 GMT+01:00 Tommaso Teofili tommaso.teof...@gmail.com:
2015-04-27 19:22 GMT+02:00 Alessandro Benedetti
benedetti.ale...@gmail.com
:
Just
Hi guys,
I was thinking about the clustering and the integration of the clustering search
component in an existing request handler.
I am talking about online clustering (the clustering of search results).
Once you configure the search component with the engine definition,
clustering will happen
Hi !
Currently Solr builds an FST to provide proper fuzzy search or spellcheck
suggestions based on string distance.
The current default algorithm is the Levenshtein distance (which returns the
number of edits as the distance metric).
In your case you should calculate, client side, the edits you want to
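To illustrate the metric mentioned above (a standalone sketch, not Solr code), the classic dynamic-programming Levenshtein distance counts the insertions, deletions and substitutions needed to turn one term into another:

```java
public class LevenshteinDemo {
    // Classic DP edit distance: d[i][j] is the distance between
    // the first i chars of a and the first j chars of b.
    public static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                    Math.min(d[i - 1][j] + 1,      // deletion
                             d[i][j - 1] + 1),     // insertion
                    d[i - 1][j - 1] + cost);       // substitution
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("kitten", "sitting")); // 3 edits
    }
}
```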
!2520Searches
Just to check, will this affect the performance of the system?
Regards,
Edwin
On 7 May 2015 at 20:00, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Hi !
Currently Solr builds FST to provide proper fuzzy search or spellcheck
suggestions based on the string
Let's explain a little bit better here:
first of all, the SynonymFilter is a token filter, and being a token filter
it can be part of an analysis pipeline at indexing and query time.
As the type of analysis explicitly states when the filtering
happens, let's go to the details of the
it won't be a problem in comparison with the actual query
time.
Regards,
Edwin
On 8 May 2015 at 16:53, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Hi Zheng,
actually that version of the fuzzy search syntax is deprecated!
Currently the fuzzy search syntax is:
query~1 or query~2
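For example (the field and terms here are made up; the optional integer is the maximum number of edits allowed, up to 2):

```
title:roam~1   e.g. matches "roams" and "foam" (one edit away)
title:roam~2   also matches terms two edits away
```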
at 17:10, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Let's explain little bit better here :
First of all, the SynonimFilter is a Token Filter, and being a Token
Filter
it can be part of an Analysis pipeline at Indexing and Query Time.
As the different type of analysis explicitly
, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
I found this very interesting article that I think can help in better
understanding the problem :
http://lucidworks.com/blog/solution-for-multi-term-synonyms-in-lucenesolr-using-the-auto-phrasing-tokenfilter/
And this :
http
Is it possible to know a little bit more about the nature of that
multi-lingual field?
I can see the KeywordTokenizer and then a lot of grams calculated from that
token.
What is that field used for?
2015-05-07 19:23 GMT+01:00 Kuntal Ganguly gangulykuntal1...@gmail.com:
Our current production
When working with suggesters, I suggest taking a deep look at this guide:
http://lucidworks.com/blog/solr-suggester/
It was really helpful.
Cheers
2015-05-07 16:58 GMT+01:00 Rajesh Hazari rajeshhaz...@gmail.com:
Good to know that it's working as expected.
I have a couple of questions on
This is a quite big synonym corpus!
If it's not feasible to have only 1 big synonym file (I haven't checked,
so I assume the 1 MB limit is true, even if strange),
I would do an experiment:
1) test query time with a Solr classic config
2) use an ad hoc Solr core to manage synonyms (in this
have), and index
them
into an Ad Hoc Solr Core to manage them?
I probably can only try them out properly when I can get the server
machine with more RAM.
Regards,
Edwin
On 8 May 2015 at 22:16, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
This is a quite big
A simple OR query should be fine :
tags:(T1 T2 T3)
Cheers
2015-05-11 15:39 GMT+01:00 Sujit Pal sujit@comcast.net:
Hi Naresh,
Couldn't you could just model this as an OR query since your requirement is
at least one (but can be more than one), ie:
tags:T1 tags:T2 tags:T3
-sujit
On
I think that with a proper configuration of the edismax query parser and
proper management of field boosting,
it's much more precise to use the list of interesting fields than a big
blob copyField.
Cheers
2015-05-13 15:54 GMT+01:00 Steven White swhite4...@gmail.com:
Hi Everyone,
In my search
There was a similar discussion a few days ago; take a look here:
I found this very interesting article that I think can help in better
understanding the problem :
http://lucidworks.com/blog/solution-for-multi-term-synonyms-in-lucenesolr-using-the-auto-phrasing-tokenfilter/
And this :
Hi Siamak,
1) You can do that with managed resources:
take a look at the synonym section.
https://cwiki.apache.org/confluence/display/solr/Managed+Resources
Specifically:
to determine the synonyms for a specific term, you send a GET request for
the child resource, such as
Hi,
One year ago or so, it was not possible to have the results of the
Solr join sorted (it was not using the Lucene sorting).
In Solr it was only a filter query with no scoring.
I should verify whether we are currently in the same scenario.
For sure it should not be a big deal to port the
So, have you customised your Solr with a plugin?
Do you have additional info or documentation? What is the new child
transformer? I have never used it!
Cheers
2015-05-12 16:12 GMT+01:00 StrW_dev r.j.bamb...@structweb.nl:
I actually did some digging and changed the default ScoreMode in the
Hi Bram,
what do you mean with:
I
would like it to provide the unique value myself, without having the
deduplicator create a hash of field values.
This is not deduplication, but simple document filtering based on a
constraint.
In the case you want de-duplication (which seemed from your very
is something you can play with and
customise if needed.
Having clarified that, do you think it can fit in some way, or are you
definitely not talking about dedupe?
2015-05-20 8:37 GMT+01:00 Bram Van Dam bram.van...@intix.eu:
On 19/05/15 14:47, Alessandro Benedetti wrote:
Hi Bram,
what do you mean
This scenario is a perfect fit for playing with Solr joins [1].
As you observed, you would prefer to go with a query-time join.
This kind of join can be done inter-collection.
You can have your deal collection and product collection.
Every product will have one field dealId to match all the parent
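A hedged sketch of such an inter-collection query-time join (the collection name, field names and inner query are assumptions), run against the product collection to return products whose parent deal matches the inner query:

```
q={!join fromIndex=deals from=id to=dealId}title:summer
```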
with the quality of the clustering results
when I change the hl.fragsize, even though I've set my
carrot.produceSummary to true.
carrot.produceSummary to true.
Regards,
Edwin
On 1 June 2015 at 17:31, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Only to clarify the initial mail, The carrot.fragSize has
Hi Edwin,
I have worked extensively with suggesters recently, and the blog I would
suggest is Erick's one.
It's really detailed and good for a beginner and an expert as well. [1]
Apart from that, let's look at your particular use case:
1) Do you want to be able to get also where the suggestions are coming from
no suggestions showed.
Regards,
Edwin
On 5 June 2015 at 17:54, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Have you verified that you actually have values stored for the field you
want to build suggestions from?
Was the field stored from the beginning, or did you change
at 18:28, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
To verify that you have values stored, simply run some simple query.
But if it was stored from the beginning, it is probably ok.
Please check the logs as well for anything.
If there is no problem there, I can take a better look at the config
I would like to add this to Shawn's description:
DocValues are only available for specific field types. The types chosen
determine the underlying Lucene docValues type that will be used. The
available Solr field types are:
- StrField and UUIDField.
- If the field is single-valued
so much for your advice.
Regards,
Edwin
On 4 June 2015 at 22:30, Alessandro Benedetti
benedetti.ale...@gmail.com
wrote:
Please remember this :
to be used as the basis for a suggestion, the field must be stored
From the official guide.
Cheers
2015-06-04 11:19 GMT+01
would like to do now:
bin/post -c mydb /DATA1
I would like to know if my Solr 5 will run fine and not produce a memory
error because there are too many files in one post without doing a commit.
The commit will be done at the end of the 1 000 000.
Is that ok?
Le 05/06/2015 16:59, Alessandro
suggestions from --
Is there a need to re-index the documents in order for this to work?
Regards,
Edwin
On 2 June 2015 at 17:25, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Hi Edwin,
I have worked extensively recently in Suggester and the blog I feel to
suggest is Erick's one
a new copyField. It works when I use the
copyField that was created before the indexing was done.
As I'm using the spellcheck dictionary as my suggester, does that mean I
just need to build the spellcheck dictionary?
Regards,
Edwin
On 3 June 2015 at 17:36, Alessandro Benedetti benedetti.ale
question was more related to checking whether the system
was healthy from the system-resources point of view.
Cheers
On Wed, Jun 10, 2015 at 2:13 PM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Let me try to help you, first of all I would like to encourage people to
post more information
PM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Let me try to help you, first of all I would like to encourage people
to
post more information about their scenario than This is my log, index
deleted, help me :)
This kind of Info can be really useful :
1) Solr
Let me try to help you. First of all, I would like to encourage people to
post more information about their scenario than This is my log, index
deleted, help me :)
This kind of info can be really useful:
1) Solr version
2) Solr architecture (SolrCloud? SolrCloud configuration? Manual
Hi Edwin,
let's do this step by step.
Clustering is a problem solved by unsupervised machine learning algorithms.
The scope of clustering is to group a corpus of documents by similarity,
trying to have meaningful groups for a human being.
Solr currently provides different approaches for *Query
Erick will correct me if I am wrong, but I don't think this function query
exists.
But maybe it could be a nice contribution.
It should take as input a date format and a field, and return the
newly formatted date.
It would then be simple to use it:
be 10K) to retrieve the docs in
core 1 with same id and
3) facet on tags in core1
so my /select is to run on core0 and facet on tag field of core1
thank you Alessandro
On Thu, Jun 4, 2015 at 9:28 AM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Let's try to make some points clear
Please remember this :
to be used as the basis for a suggestion, the field must be stored
From the official guide.
Cheers
2015-06-04 11:19 GMT+01:00 Alessandro Benedetti benedetti.ale...@gmail.com
:
If you are using an existing indexed field to provide suggestions, you
simply need to build
Hi Rob,
reading your use case, I cannot understand why the query-time join is not a
fit for you!
The documents returned by the query-time join will be from core1, so
faceting and filter-querying that core would definitely be possible!
I honestly cannot see your problem!
Cheers
2015-06-04
wrote:
Thank you for your suggestions.
Will try that out and update on the results again.
Regards,
Edwin
On 3 June 2015 at 21:13, Alessandro Benedetti
benedetti.ale...@gmail.com
wrote:
I can see a lot of confusion
more
frequently during my research.
Just to confirm, do I need to re-index the data in order for this new
approach to work if I'm using an existing field?
Regards,
Edwin
On 4 June 2015 at 16:58, Alessandro Benedetti benedetti.ale...@gmail.com
wrote:
Let me try to clarify things
I think this mail is really poor in terms of details.
Which version of Solr are you using?
Architecture?
Load expected?
Indexing approach?
When does your problem happen?
The more detail we give, the easier it will be to provide help.
Cheers
2015-06-04 12:19 GMT+01:00 Toke Eskildsen
Honestly your auto-commit configuration seems not alarming at all!
Can you give me more details regarding:
Load expected : currently it is 7-15, should be below 1
What does this mean? Without a unit of measure I find it hard to understand
plain numbers :)
I was expecting the number of documents
Hi Bruno,
I cannot see what your challenge is.
Of course you can index your data in the flavour you want and do a commit
whenever you want…
Are those XML files Solr XML?
If not, you would need to use the DIH, the extract update handler or any
custom indexer application.
Maybe I missed your point…
Give
I had the very same issue,
because I had some documents with a redundant field, and I was using the
Infix Suggester as well.
Because the Infix Suggester returns the whole field content, if you have
duplicated fields across your docs, you will see duplicate suggestions.
Do you have any intermediate
Hi Advait,
first of all, I suggest you study Solr a little bit [1], because your
requirements are actually really simple:
1) You can simply use more than one suggest dictionary if you care to keep
the suggestions separated (keeping track of whether a term is coming from
the name or from the category)
The syntax seems fine to me.
One of the requirements for the MLT is to have the field(s) used for the
processing stored (better if termVectors is enabled).
Apparently, from your snippets, this is not your problem. Can you confirm you
have the field you are interested in stored? (it seems so
Can any of our beloved super gurus take a look at my mail?
It could help Edwin as well :)
Cheers
2015-06-19 11:53 GMT+01:00 Alessandro Benedetti benedetti.ale...@gmail.com
:
Actually the documentation is not clear enough.
Let's try to understand this suggester.
*Building*
This suggester
with this
approach and others.
Cheers
On 22 Jun 2015, at 11:23, Alessandro Benedetti benedetti.ale...@gmail.com
mailto:benedetti.ale...@gmail.com wrote:
I would suggest you to take a look to the Solr Join ( block and query time
join) .
This is what you are looking for.
Anyway, this request points out
In these situations I would suggest you use debugQuery=true.
To simplify the understanding you can also use splainer.io, a
really nice tool to quickly identify why a document is there.
Cheers
2015-06-23 15:37 GMT+01:00 Freakheart subash.kouti...@classing.de:
I am trying to set up Solr
Let's start from this:
I have a search handler I wrote that runs a sub-query so that the main
query
I send to the SearchHandler is extended.
What is the problem you are trying to solve?
Why is a classic filter query or re-ranking query not ok for you?
Can you give us some indication of that
It sounds like the classic XY problem; can you explain your problem a little
bit better?
Why do you have such strange field content? How do you produce it?
Can this be solved with an ad hoc analysis for your language?
It sounds to me like a tokenization problem, and you are not going to solve
it
You should work at the UpdateProcessor level:
https://wiki.apache.org/solr/UpdateRequestProcessor#Implementing_a_conditional_copyField
This should give you some hints.
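A hedged sketch of that conditional-copyField idea, written as a script for a StatelessScriptUpdateProcessorFactory (the field names are assumptions; the chain wiring is described on the linked wiki page):

```javascript
// conditional-copy.js: copy title into alt_title only when a flag is set
function processAdd(cmd) {
  var doc = cmd.solrDoc;
  if (doc.getFieldValue("copy_enabled") == "true") {
    doc.addField("alt_title", doc.getFieldValue("title"));
  }
}
// the other handlers can be no-ops
function processDelete(cmd) { }
function processCommit(cmd) { }
function finish() { }
```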
Cheers
2015-06-23 13:45 GMT+01:00 Alistair Young alistair.yo...@uhi.ac.uk:
Hi folks,
is it possible to copyField only if
This is strictly front-end development.
You need to modify the Velocity template to provide that feature on the UI side.
Solr side, you configure your facet.limit (on a per-field basis if
necessary).
In the /browse example it's in the appended params under the specific
request handler.
We would like more information, but the first thing I notice is that it would
hardly make any sense to use a string type for file content.
Can you give more details about the exception?
Have you debugged a little bit?
How does the SolrInputDocument look before it is sent to Solr?
Furthermore
I would suggest you take a look at the Solr joins (block join and query-time
join).
This is what you are looking for.
Anyway, this request points out that the documentation is not good enough
to direct people to nested-objects problems.
Maybe this should highlight the need to improve that part
I have provided a patch; actually, I am not sure about the contribution
process, so I will read the documentation.
Can anybody give me feedback?
https://issues.apache.org/jira/browse/SOLR-7719
Cheers
2015-06-24 14:31 GMT+01:00 Alessandro Benedetti benedetti.ale...@gmail.com
:
https
Up. Can anyone kindly take a look at my considerations related to the FreeText
Suggester?
I am curious to have more insight.
Eventually I will deeply analyse the code to understand my errors.
Cheers
2015-06-19 11:53 GMT+01:00 Alessandro Benedetti benedetti.ale...@gmail.com
:
Actually
I agree with David; I see a ton of wrong configuration in yours …
Please have a read of the documentation linked,
and take a look at this mailing list; we have tons of related messages that
can help you.
A first suggestion for you anyway is to take care of your analysis.
The suggestions will be
I agree with Updaya;
furthermore, it doesn't make any sense to try to solve a phrase-search
problem without tokenising the text at all …
It's not going to work, and it is fundamentally wrong to not tokenise long
textual fields if you want to do free-text search in them.
Can you explain to us better your