, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
Hi guys,
I was working with the ContentStreamUpdateRequest in Solr 4.5 to send
Solr a document with a set of metadata through an HTTP POST request.
Following the tutorial, it is easy to structure the request.
I copy here this information as well.
Another detail that comes to my mind is that the SolrServer used to process
the request is CloudSolrServer.
I will check the implementation of the method.
2013/12/14 Alessandro Benedetti benedetti.ale...@gmail.com
Thank you Raymond,
so what's wrong
Hi guys,
I was working with the ContentStreamUpdateRequest in Solr 4.5 to send
Solr a document with a set of metadata through an HTTP POST request.
Following the tutorial, it is easy to structure the request:
contentStreamUpdateRequest.setParam("literal.field1", "value1");
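To make the setup above concrete, here is a minimal SolrJ sketch of sending a file with `literal.*` metadata to the extracting handler. This is a sketch only: it assumes a running SolrCloud cluster reachable via the given ZooKeeper host, an `/update/extract` handler, and the file name, field names, and values are all placeholders.

```java
import java.io.File;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ContentStreamExample {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper host; matches the CloudSolrServer mentioned above.
        SolrServer server = new CloudSolrServer("localhost:2181");

        ContentStreamUpdateRequest req =
                new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("document.pdf"), "application/pdf");

        // Each literal.* parameter becomes a field on the indexed document.
        req.setParam("literal.id", "doc-1");
        req.setParam("literal.field1", "value1");
        req.setParam("commit", "true");

        server.request(req);
        server.shutdown();
    }
}
```

Since this goes through HTTP POST under the hood, the same request can be reproduced with curl against `/update/extract` using `literal.field1=value1` as a query parameter.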
running and what's the version of SolrJ? I am
guessing they are different.
On Wed, Oct 30, 2013 at 8:32 PM, Alessandro Benedetti
benedetti.ale...@gmail.com wrote:
I have a ZooKeeper ensemble hosted on one Amazon server.
Using the CloudSolrServer and trying to connect, I obtain this really
unusual error:
969 [main] INFO org.apache.solr.common.cloud.ConnectionManager - Client is
connected to ZooKeeper
1043 [main] INFO
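For reference, a minimal CloudSolrServer connection sketch looks like the following. The ZooKeeper host string and collection name are placeholders for the actual ensemble; the log above shows the ZooKeeper session being established, so the failure presumably happens after this point.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrServer;

public class CloudConnectExample {
    public static void main(String[] args) throws Exception {
        // Comma-separated zkHost string: placeholder addresses for your ensemble.
        CloudSolrServer server = new CloudSolrServer(
                "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
        server.setDefaultCollection("collection1");

        // connect() fails fast here if the cluster state in ZooKeeper
        // cannot be read (e.g. no live nodes, wrong chroot path).
        server.connect();

        server.query(new SolrQuery("*:*"));
        server.shutdown();
    }
}
```

A common cause of errors at this stage is a zkHost string that points at the ZooKeeper port but with a missing or wrong chroot, so the client connects to ZooKeeper successfully but cannot find the Solr cluster state.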
Hi guys,
I was thinking about how to enable the DocValues approach for faceting.
Tell me if I am correct:
1) In schema.xml, set the docValues attribute to true for the field of
interest.
2) Use one of these two faceting methods: fc (Field Cache) or fcs (per-
segment Field Cache).
3)
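For step 1, a hedged schema.xml sketch (the field name and type are examples; this assumes a Solr 4.x schema, where docValues requires a compatible field type such as string, numeric, or date):

```xml
<!-- schema.xml: docValues enabled on a string field used for faceting -->
<field name="category" type="string" indexed="true" stored="true"
       docValues="true"/>
```

For step 2, the method is then selected per request, e.g. `facet=true&facet.field=category&facet.method=fcs`; with docValues enabled, the per-field data comes from the DocValues structures instead of the in-heap FieldCache.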
Hi guys, I think this is a very simple bug, but I didn't know where to
quickly post it:
In schemaless mode, in the Solr admin UI, if you select a core and then
select the Schema tab, a wild error appears, because no schema.xml file
exists:
Nope, it's not the last-components problem, but it's definitely the
request handler problem; it was the same for me...
Switching to the /tvrh request handler solved my problem.
We should update the wiki!
2013/9/27 Shawn Heisey s...@elyograg.org
On 9/27/2013 4:02 PM, Jack Krupansky wrote:
It's really simple indeed. Solr provides the SpellCheck [1] feature that
allows you to do this.
You only have to configure the RequestHandler and the Search Component.
And of course develop a simple UI (you can find an example in the Velocity
response writer, Solritas [2]).
Cheers
[1]
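A minimal solrconfig.xml sketch of the component-plus-handler setup described above, assuming Solr 4.x (the field name, dictionary name, and handler path are examples):

```xml
<!-- solrconfig.xml: spellcheck component over an example "text" field -->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">text</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
  </lst>
</searchComponent>

<!-- A dedicated handler wiring the component in as a last-component -->
<requestHandler name="/spell" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

The UI then only needs to read the `spellcheck` section of the response and offer the collated suggestion as a "did you mean" link.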
I think this could help: http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning
Cheers
Hi guys,
I was studying the join feature in depth, and I noticed that in Solr the
join query parser does not contribute to scoring.
If you add the parameter scoreMode, it is completely ignored...
Checking the source code, it's possible to see that the join query is built
as follows:
public class
?
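For reference, the join query parser in Solr 4.x is invoked like this (field names are examples):

```text
q={!join from=parent_id to=id}title:solr
```

As observed above, on 4.x every document matched through the join gets a constant score, so a scoreMode parameter has no effect. If I remember correctly, score propagation for joins (via a `score` local parameter) only arrived in later Solr releases, which would explain why the parameter is silently ignored here.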
In simple words I want a bq to activate specific bf.
Cheers
--
---
Alessandro Benedetti
Sourcesense - making sense of Open Source: http://www.sourcesense.com
the date boost.
But this function I wrote has a wrong syntax; I need to correct the
exists part.
Any hint?
2012/10/26 Alessandro Benedetti a.benede...@sourcesense.com
Hi guys,
I was fighting with the boost factor in my edismax request handler:
<lst name="appends">
  <str name="defType">edismax</str>
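A hedged sketch of how such an appends section could combine a bq with a guarded bf; the field names are examples, and the if(exists(...)) wrapper is one way to handle the "exists part" mentioned in these threads, so that documents without the field contribute 0 instead of breaking the function:

```xml
<lst name="appends">
  <str name="defType">edismax</str>
  <!-- bq: an additive boost query -->
  <str name="bq">category:featured^2.0</str>
  <!-- bf: a date-decay boost function, guarded so that documents
       missing publish_date simply add 0 -->
  <str name="bf">if(exists(publish_date),recip(ms(NOW,publish_date),3.16e-11,1,1),0)</str>
</lst>
```

Note there is no built-in way for a bq to conditionally "activate" a bf; the closest standard tool is gating the function itself with if()/exists() as above.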
document, this is the principal need of
my plugin.
How can I search the recently indexed documents? How can I open the new
searcher? And where?
Doing it inside postCommit does not seem to work...
Any suggestion?
2011/12/29 Alessandro Benedetti benedetti.ale...@gmail.com
Hi guys,
I'm developing a custom
2011/12/31 Alessandro Benedetti benedetti.ale...@gmail.com
Ok, I have made progress: I built my architecture and I execute queries
inside the postCommit method, and they are launched as I want.
But the core can't see the recently updated documents, and the commit ends
after that
Hi guys,
I'm developing a custom SolrEventListener, and inside the postCommit()
method I need to execute some queries and collect results.
In my SolrEventListener class, I have a SolrCore
object (org.apache.solr.core.SolrCore) and a list of queries (Strings).
How can I use the SolrCore to
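A sketch of how such a listener could borrow the core's registered searcher, written against the Solr 4.x SolrEventListener interface (on 3.x there is no postSoftCommit method). The class and query handling are illustrative only; the important parts are the RefCounted borrow/release pattern and the caveat in the comments, which matches the problem described above:

```java
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrCore;
import org.apache.solr.core.SolrEventListener;
import org.apache.solr.search.SolrIndexSearcher;
import org.apache.solr.util.RefCounted;

public class QueryCollectingListener implements SolrEventListener {
    private final SolrCore core;

    public QueryCollectingListener(SolrCore core) {
        this.core = core;
    }

    @Override
    public void init(NamedList args) {}

    @Override
    public void postCommit() {
        // Borrow the currently registered searcher. Caveat: at postCommit
        // time this searcher may NOT yet see the documents of the commit
        // that triggered the event.
        RefCounted<SolrIndexSearcher> ref = core.getSearcher();
        try {
            SolrIndexSearcher searcher = ref.get();
            // run Lucene-level queries here, e.g. searcher.search(query, 10)
        } finally {
            ref.decref(); // always release the reference-counted searcher
        }
    }

    @Override
    public void postSoftCommit() {}

    @Override
    public void newSearcher(SolrIndexSearcher newSearcher,
                            SolrIndexSearcher currentSearcher) {
        // This searcher DOES see the just-committed documents, so this
        // hook is usually the right place for post-commit queries.
    }
}
```

This is why the threads above hit a wall in postCommit: the event fires before the new searcher is registered, whereas the newSearcher hook receives the fresh searcher directly.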
Hi Guys,
I probably found a way to mimic the delta import for the
FileListEntityProcessor (I have used it for XML files...)
Adding this configuration in the xml-data-config:
<entity name="personeImpreseList" rootEntity="false" dataSource="null"
        processor="FileListEntityProcessor"
        fileName="^.*\.xml$"
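A fuller hedged sketch of the same idea: FileListEntityProcessor supports a newerThan attribute, which can reference the DIH last-index-time variable so that only files modified since the previous import are re-read, approximating a delta import. The baseDir, entity names, and paths below are examples:

```xml
<!-- data-config.xml sketch: re-read only files modified since the last import -->
<entity name="personeImpreseList" rootEntity="false" dataSource="null"
        processor="FileListEntityProcessor"
        baseDir="/data/xml"
        fileName="^.*\.xml$"
        newerThan="'${dih.last_index_time}'"
        recursive="true">
  <!-- an inner entity (e.g. XPathEntityProcessor) would parse each file -->
</entity>
```

Note the quoting around the variable; depending on the version, newerThan may need the value wrapped in single quotes to be parsed as a date.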
Any News?
I'm also interested in this topic :)
2011/12/12 Brian Lamb brian.l...@journalexperts.com
Hi all,
According to
http://wiki.apache.org/solr/DataImportHandler#Usage_with_XML.2BAC8-HTTP_Datasource,
a delta-import is not currently implemented for URLDataSource. I say
currently
you
2011/9/29 Smiley, David W. dsmi...@mitre.org
On Sep 29, 2011, at 5:10 PM, Alessandro Benedetti wrote:
Sorry David, probably I misunderstood your reply; what do you mean?
I'm using LucidWorks Enterprise 1.8, and, as far as I know, it includes the
geohashes
patch.
Solr 3x, trunk, and I
?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12994256#comment-12994256
).
Am I indexing wrong? Am I missing something?
The type of my spatial field is geohash ...
Cheers
--
---
Alessandro Benedetti
Sourcesense - making sense
with geohashes is an extension of what's in Solr,
it's not what's in Solr today. Recently I ported SOLR-2155 to Solr 3x, and
in a way that does NOT require that you patch Solr. I attached it to the
issue just now.
~ David Smiley
On Sep 29, 2011, at 9:37 AM, Alessandro Benedetti wrote:
Hi all,
I
We developed a custom highlighter to solve this issue.
We added a url field in the Solr schema doc for our domain, and when
highlighting is called, we access the file, extract the information and send
it to the custom highlighter.
If you still need some help, I can provide you our solution in
the exception?
2010/9/7 Grant Ingersoll gsing...@apache.org
On Sep 7, 2010, at 7:08 AM, Alessandro Benedetti wrote:
Hi all,
I need to retrieve query results with a ranking independent of each
result's default Lucene score, which means assigning the same score to
every result.
I tried to use a zero boost factor (^0) to reset each result's score to
zero.
This strategy seems to work within the
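For what it's worth: later Solr releases (around 5.1, if I remember correctly) added a constant-score boost operator `^=`, which makes this kind of flat ranking simpler than the `^0` trick; field names below are examples:

```text
q=title:solr^=1.0 OR body:solr^=1.0
```

With `^=`, every matching clause contributes exactly the fixed value instead of its computed similarity score, so all results can be given the same score directly.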
. It would be very nice to have a Solr implementation using the
newest versions of PDFBox and Tika and actually have content being
extracted... =)
Best,
Dave
-Original Message-
From: Alessandro Benedetti [mailto:benedetti.ale...@gmail.com]
Sent: Tuesday, July 27, 2010 6:09 AM
Hi Jon,
In the last few days we faced the same problem.
Using classic Solr 1.4.1 (Tika 0.4), from some PDF files we can't extract
content, and from others Solr throws an exception during the indexing
process.
You must:
Update the Tika libraries (in /contrib/extraction/lib) with tika-core 0.8
Hi all,
as I saw in this discussion [1], there were many issues with PDF indexing in
Solr 1.4 due to the Tika library (version 0.4).
In Solr 1.4.1 the Tika library is the same, so I guess the issues are the
same.
Could anyone who contributed to the previous thread help me resolve
these issues?
Hi all,
I'm going to develop a Solr-based search architecture, and I wonder if you
could suggest which Solr version would best suit my needs.
I have 10 Solr machines which use replication, sharding and multi-core; one
Solr server would index documents (XML, PDF, text...) on an NFS