Aaahhh.. missed that.
So if I'm using SolrJ, do I need to set that explicitly through set(); or
can I use setFacetSort() somehow? ('cause I can't find an example anywhere,
and it's not inherently obvious).
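A minimal sketch, assuming SolrJ 1.4+: SolrQuery.setFacetSort("count") is just a convenience for the generic query.set("facet.sort", "count"); both end up as the same raw facet.sort parameter on the wire. The snippet below only assembles that raw parameter string (the helper name is made up), so it runs without a Solr server:

```java
public class FacetSortSketch {
    // Hypothetical helper: builds the query string that either
    // query.set("facet.sort", sort) or query.setFacetSort(sort)
    // would send; "count" and "index" are the valid sort values.
    static String buildQuery(String field, String sort) {
        return "q=*:*&facet=true&facet.field=" + field + "&facet.sort=" + sort;
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("category", "count"));
        // prints: q=*:*&facet=true&facet.field=category&facet.sort=count
    }
}
```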
Hi All!
Can anyone explain what the runtimeParameters specified in the uimaConfig
are, as described at http://wiki.apache.org/solr/SolrUIMA? Also, how do I
integrate our own analysis engine with Solr? I am new to this.
Thanks in advance!
Hi Isha,
To integrate your UIMA analysis engine with Solr, try:
http://uima.apache.org/sandbox.html#solrcas.consumer
Regards,
Anuj
On Mon, Apr 18, 2011 at 12:05 PM, Isha Garg isha.g...@orkash.com wrote:
Hi All!
Can anyone explain
Hi,
when using
http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport to
periodically
run a delta-import, is it necessary to run a separate normal
delta-import after it to delete entries
from the index (using deletedPkQuery)?
If so, what's the point of using this method for
It runs delta imports faster. Normally you need to get the Pks that
changed, and then run it through query= which is slow when you have a
lot of Ids
It would be better if someone could just write a new fastDeltaQuery= so
that you could do it in one step and also remove the queries...
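For reference, a sketch of what that wiki page's pattern looks like in data-config.xml (the table and column names here are made up, not from the thread):

```xml
<!-- Delta-via-full-import sketch: the main query itself filters on
     last_modified, so a "full" import only touches changed rows.
     Deletes still need a separate mechanism (e.g. deletedPkQuery). -->
<entity name="item" pk="id"
        query="SELECT id, name FROM item
               WHERE '${dataimporter.request.clean}' != 'false'
                  OR last_modified &gt; '${dataimporter.last_index_time}'">
</entity>
```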
On
On 18.04.11 09:23, Bill Bell wrote:
It runs delta imports faster. Normally you need to get the Pks that
changed, and then run it through query= which is slow when you have a
lot of Ids
but the query= only adds/updates entries. I'm not sure how to delete
entries
by running a query like
Hi,
I am starting my solr instance with the command java
-Dsolr.solr.home=./test1/solr/ -jar start.jar
where I have a solr.xml file
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<solr sharedLib="lib" persistent="true">
  <cores adminPath="/admin/cores">
    <core default="false"
Also if I check
solr/tester/dataimport it responds:
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">0</int>
  </lst>
  <lst name="initArgs">
    <lst name="defaults">
      <str name="config">dataimporter.xml</str>
    </lst>
  </lst>
  <str name="status">idle</str>
  <str name="importResponse"/>
  <lst name="statusMessages">
    <str
Did you try with the complete xpath?
<field column="title" xpath="/ARTIKEL/DOKTITEL/OVERSKRIFT1" />
<field column="text" xpath="/ARTIKEL/AKROP/TXT" />
Ludovic.
-
Jouve
France.
--
View this message in context:
hah, actually I tried with complete xpaths earlier but they weren't
working but that was because I had made a mistake in my foreach.. and
then I decided that probably the foreach and the other xpaths were
being concatenated.
however it is not absolutely correct yet, if I run
If a document contains multiple 'txt' fields, it should be marked as
'multiValued'.
<field name="txt" type="text" indexed="true" stored="true"
       multiValued="true"/>
But if I understand correctly, you also tried this?
<field column="text" xpath="/ARTIKEL/AKROP" />
And for your search (MomsManual),
Can you include hadoop.log output? Likely the other commands fail as well but
don't write the exception to stdout.
On Monday 18 April 2011 12:47:41 McGibbney, Lewis John wrote:
Hi list,
I am using Nutch-1.3 branch, which I checked out today to crawl a couple of
urls in local mode. I have
well basically I copied out the RSS example as I figured that would be
the closest to what I wanted to do
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="tester" version="1.1">
  <types>
    <fieldType name="string" class="solr.StrField"
               sortMissingLast="true" omitNorms="true"/>
    <fieldType name="boolean"
Hmm, ok I see the schema was wrong - I was calling the TEXT field
txt... also now I am getting results on my title search after another
restart and reindex - setting the TXT fields to be multiValued.
Thanks,
Bryan Rasmussen
On Mon, Apr 18, 2011 at 1:09 PM, bryan rasmussen
Hello,
I think I have found something strange with local params and edismax. If I do
queries like:
"params": {
  "hl.requireFieldMatch": "true",
  "hl.fragsize": "200",
  "json.wrf": "callback0",
  "indent": "on",
  "hl.fl": "domicilio,deno",
  "wt": "json",
  "hl": "true",
  "rows": "5",
Hi Markus,
hadoop.log from the beginning of the solr commands is as follows:
2011-04-18 11:27:05,480 INFO solr.SolrIndexer - SolrIndexer: starting at
2011-04-18 11:27:05
2011-04-18 11:27:05,562 INFO indexer.IndexerMapReduce - IndexerMapReduce:
crawldb: crawl/crawldb
2011-04-18 11:27:05,562 INFO
And are you really sure there's a Solr instance running with an update
handler at http://localhost:8080/wombra/data/update? Anyway, your URL is
somewhat uncommon in Solr land. It's usually something like:
http://host:port/solr/[core]/update/
On Monday 18 April 2011 14:03:53 McGibbney,
This is a problem with these files in the Nutch lib. You can easily replace
these files with the ones in the Solr dist directory.
apache-solr-core-1.4.0.jar
apache-solr-solrj-1.4.0.jar
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexing-from-Nutch-crawl-tp2833862p2834270.html
Sent from
Hi,
I am using a DataImportHandler to get files from the file system, if I
do the url
http://localhost:8983/solr/tester/dataimport?command=full-import it
ends up indexing 11 documents.
If I do
http://localhost:8983/solr/tester/dataimport?command=full-import&rows=817
(the number of documents I
any ideas why in this case the stats summaries are so slow ? Thank you
very much in advance for any ideas/suggestions. Johannes
2011/4/5 Johannes Goll johannes.g...@gmail.com
Hi,
thank you for making the new apache-solr-3.1 available.
I have installed the version from
The first question I'd have is whether you're somehow not committing after your
full-import command.
And have you looked at:
http://wiki.apache.org/solr/DataImportHandler#interactive?
This is a little-known feature in Solr to help with DIH.
Is it possible that your JDBC configuration is
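A quick way to rule out the missing-commit theory (assuming the stock DIH setup, where commit defaults to true on full-import, you can still be explicit):

```
http://localhost:8983/solr/tester/dataimport?command=full-import&commit=true
http://localhost:8983/solr/tester/dataimport?command=status
```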
Hi Ramires,
I have been using Solr 1.4.1
My understanding from the example solrconfig.xml is that jars will be loaded
from the /lib directory. I do not have a /dist directory, as I have copied the
example directory as my solr home directory; therefore I have commented out
these entries in the
There is no problem with your files. Nutch still ships SolrJ 1.4.1. If you
were using Solr 3.1 you would get a javabin error, not a Not Found
error.
On Monday 18 April 2011 15:37:42 McGibbney, Lewis John wrote:
Hi Ramires,
I have been using Solr 1.4.1
My understanding from the
Hi Markus,
I misunderstood before.
I use Nutch 1.2-rc4 with Solr 4.0 trunk. You just need to replace these files:
apache-solr-core-4.0-SNAPSHOT.jar
apache-solr-solrj-4.0-SNAPSHOT.jar
which are in the solr/dist directory, with the Nutch 1.4.1 solrj and core jars.
--
View this message in context:
It's probably not accurate to say that a lot of sites were *relying* on that
feature. It's an optimization.
Getting a working patch applying to trunk is on my TODO-list within the next
couple months.
https://issues.apache.org/jira/browse/SOLR-752
Watch the issue to see when I get to it.
~
Hello,
I'm interested in trying out the new ICU features in Solr 3.1. However, when I
attempt to set up a field type using solr.ICUTokenizerFactory and/or
solr.ICUFoldingFilterFactory, Solr refuses to start up, issuing "Error loading
class" exceptions.
I did see the README.txt file that
I don't think you want to put them in solr_home, I think you want to put
them in solr_home/lib/. Or did you mean that's where you put them?
On 4/18/2011 1:31 PM, Demian Katz wrote:
Hello,
I'm interested in trying out the new ICU features in Solr 3.1. However, when I attempt
to set up a
On Mon, Apr 18, 2011 at 1:31 PM, Demian Katz demian.k...@villanova.edu wrote:
Hello,
I'm interested in trying out the new ICU features in Solr 3.1. However, when
I attempt to set up a field type using solr.ICUTokenizerFactory and/or
solr.ICUFoldingFilterFactory, Solr refuses to start up,
As far as I know, Solr will never arrive to a segment file greater than 2GB,
so this shouldn't be a problem.
Solr can easily create a file size over 2GB, it just depends on how much data
you index and your particular Solr configuration, including your
ramBufferSizeMB, your mergeFactor, and
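For reference, the settings mentioned above live in the index section of solrconfig.xml; a sketch with illustrative values (matching the 1.4-era example config, not recommendations):

```xml
<!-- Illustrative values only -->
<indexDefaults>
  <ramBufferSizeMB>32</ramBufferSizeMB>
  <mergeFactor>10</mergeFactor>
  <!-- maxMergeMB is a merge-policy setting (LogByteSizeMergePolicy)
       that caps the size of segments eligible for merging -->
</indexDefaults>
```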
Hello guys, how do you output the Solr data into the frontend? I know some of
you have 30M documents. Are you writing an application to do it, or are you
using a CMS with Solr integration? Thanks
Hi Li,
Who are you referring to in your question, having 30M docs?
Solr can be integrated in tons of different ways. Perhaps if you
describe your use case and requirements, we can suggest the best way for your
particular situation. Please elaborate on what you are trying to accomplish.
I'm sorry, you're right, I was thinking of the 2GB default value for
maxMergeMB.
*Juan*
On Mon, Apr 18, 2011 at 3:16 PM, Burton-West, Tom tburt...@umich.eduwrote:
As far as I know, Solr will never arrive to a segment file greater than
2GB,
so this shouldn't be a problem.
Solr can easily
Thanks Jan. lol.
1. For example, I have a large solr database that contains 30M documents.
I want to show the data in a web application; how should I do it? Write an
application or use a CMS like Liferay, Magnolia, or Drupal to do it.
On Mon, Apr 18, 2011 at 11:35 AM, Jan Høydahl
Hi All,
I am using apache-solr-1.4.1 and jdk1.6. I have the following scenario.
I have 3 categories of data indexed in solr i.e. CITIES, STATES, COUNTRY.
When I query Solr I need the data returned based on the following
criteria:
In a single query to Solr Engine I need data
I'm now having the same problem, but I haven't found the cause yet.
$ bin/nutch solrindex http://localhost:8080/solr crawl/crawldb/0
crawl/linkdb crawl/segments/0/20110418100309
SolrIndexer: starting at 2011-04-18 10:03:40
java.io.IOException: Job failed!
But everything else seems to have
Thanks! apache-solr-analysis-extras-3.1.jar was the missing piece that was
causing all of my trouble; I didn't see any mention of it in the documentation
-- might be worth adding!
Thanks,
Demian
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Monday, April 18,
Right, I placed my files relative to solr_home, not in it -- but obviously
having a solr_home/lucene-libs directory didn't do me any good. :-)
- Demian
-Original Message-
From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
Sent: Monday, April 18, 2011 1:46 PM
To:
Solr is a search *engine*. It doesn't have anything to do with the presentation.
Most users have an application layer that gets the documents from Solr
via http (in XML, JSON or other format) and then extracts the pieces to
create a web page. What you use for the application layer is totally your
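A minimal sketch of that pattern: the application layer fetches the response over HTTP (wt=xml or wt=json) and pulls fields out of it to build the page. The sample response below is hand-made to look like Solr's default XML writer output, and the HTTP fetch itself is elided so the snippet runs standalone:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

public class SolrResponseSketch {
    // Extract the "title" field of every doc from a Solr XML response.
    static List<String> titles(String solrXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(solrXml.getBytes("UTF-8")));
        List<String> out = new ArrayList<>();
        NodeList strs = doc.getElementsByTagName("str");
        for (int i = 0; i < strs.getLength(); i++) {
            Element e = (Element) strs.item(i);
            if ("title".equals(e.getAttribute("name"))) {
                out.add(e.getTextContent());
            }
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Hand-made sample shaped like Solr's XML response writer output.
        String sample =
            "<response><result name=\"response\" numFound=\"2\" start=\"0\">"
          + "<doc><str name=\"title\">First doc</str></doc>"
          + "<doc><str name=\"title\">Second doc</str></doc>"
          + "</result></response>";
        System.out.println(titles(sample));
        // prints: [First doc, Second doc]
    }
}
```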
Hi all,
Does the lucene-solr git repository have a tag that marks the 3.1 release?
Context: I want to apply a patch to 3.1 and wish to start from a
well-defined point (i.e. official 3.1 release)
Executing these commands, I would have expected to see a tag marking the 3.1
release. I only see
I think most people are probably writing an application, at least most people
on this list. I am not aware of any popular CMS that provides a way to be a
front-end to Solr. It seems a bit outside the mission of a CMS to me and
unlikely, but I'm not familiar with those CMSes (haven't
Hi All!
I want to integrate UIMA with Solr. I followed the steps in the
README file. I am using Apache Solr 3.1. The jar starts fine, but I
don't know the exact syntax in SolrJ to index my documents for the
UIMA-Solr integration. Can anyone help me out regarding this?
Thanks!
Isha Garg
Li,
there are many ways to output data to the front-end, including Solritas (a
Velocity front-end) and the XSLT response writer. Both work almost out of the
box (for /itas you need to use the things described in contrib).
Solr can be populated, at upload time, with verbatim view code (e.g. HTML)
which, I