Hi Gora,
Thanks for the answer. I want to index all the PDF and HTML documents
lying within a tree hierarchy on an FTP server.
In addition, can I add an attribute "location" whose value is the FTP
file location?
If you can give me a sample configuration, that would be great.
Hi Gora,
Thanks, however I think it would be a cumbersome process to do all
this manually.
Isn't there any plugin or extractor that does this automatically?
Has anyone in the group done this previously?
/ Pankaj Bhatt.
Hello,
I cannot find any examples of how to configure
FinnishLightStemFilterFactory (I understood that
SnowballPorterFilterFactory doesn't work correctly for the Finnish language).
I tried the following in my schema.xml, but I got
org.apache.solr.common.SolrException: Error loading class
Hello, I just finished implementing a master with two slaves (this is a test
for now :P).
I'm trying to figure out how to do backups and restores without stopping the
service or using a passive slave.
Right now I'm backing up using <str name="backupAfter">optimize</str>,
and it creates a snapshot
I have set up a master with two slaves. Let's call the master Jabba and the
slaves Leia and C3PO (very nerdy! lol).
Well, I have set up replication on Jabba with the following confFiles:
<str name="confFiles">solrconfig_slave.xml:solrconfig.xml,schema.xml,stopwords.txt,elevate.xml</str>
But in the
You can use a property and define it for each slave in solrcore.properties.
http://wiki.apache.org/solr/SolrConfigXml#System_property_substitution
On Tue, Jan 18, 2011 at 6:04 PM, Grant Ingersoll gsing...@apache.org wrote:
Where do you get your Lucene/Solr downloads from?
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
Thanks! That's what I needed!
There is always so much to learn about Solr/Lucene!
On Fri, Jan 21, 2011 at 10:08 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
solrcore.properties
--
Ezequiel.
http://www.ironicnet.com
Grant Ingersoll wrote:
As devs of Lucene/Solr, due to the way ASF mirrors, etc. works, we really
don't have a good sense of how people get Lucene and Solr for use in their
application. Because of this, there has been some talk of dropping Maven
support for Lucene artifacts (or at least
Somehow it's not working :(
I have set it up like:
#solrcore.properties
data.dir=D:\Solr\PAU\data
But it keeps going to the dataDir configured in the solrconfig.xml.
Also, when I go to the replication admin I see this:
*Master* http://10.11.33.180:8787/solr/replication *Poll Interval*
You have defined the property and its value but you're not using it. Set the
property in solrconfig.xml.
Did you modify the solrconfig file with:
<dataDir>${data.dir}</dataDir>
??
Ohh, I see... I was setting a default value in the solrconfig_slave like
this:
<dataDir>${solr.data.dir:.\data}</dataDir>
I will try the ${data.dir}
It worked! :)
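To summarize the working setup from this thread (the path is the one used above; substitute your own): define the property per slave in solrcore.properties, and reference it from the solrconfig.xml that the master ships to the slaves:

```
# solrcore.properties (one file per slave, next to solrconfig.xml)
data.dir=D:\Solr\PAU\data

<!-- solrconfig.xml deployed to the slaves -->
<dataDir>${data.dir}</dataDir>
```

A fallback can also be given inline, e.g. ${data.dir:./data}, which is what the earlier ${solr.data.dir:.\data} default was doing.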
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
Where do you get your Lucene/Solr downloads from?
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
[x] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
Out of curiosity, how would Lucandra help in the NRT use case?
On Thu, Jan 20, 2011 at 11:42 PM, Espen Amble Kolstad es...@trank.no wrote:
I haven't tried myself, but you could look at solandra :
https://github.com/tjake/Lucandra
- Espen
On Thu, Jan 20, 2011 at 6:30 PM, stockii
Hi,
To get phrase search with proximity working well, I am planning to integrate
SurroundQueryParser. However, I wanted to know whether the functionality
provided in SOLR-1604 (i.e. wildcards and ORs inside phrases) would work
with it or not?
If not, what's the alternative, as I need both
First, it's more Solandra now (although the project is still named
Lucandra) ;)
Second, it can help because data which is written to the index is
immediately (configurable) available for search.
Solandra is distributed + real-time Solr, with no changes required on the
client side (be it SolrJ or
It seems SurroundQueryParser is in Lucene NOT Solr. So does this mean I will
have to integrate it in Lucene and update that jar file in SOLR?
Thanks
On Fri, Jan 21, 2011 at 11:33 PM, Ahmet Arslan iori...@yahoo.com wrote:
Yes, it is in Lucene only. You need a custom QParserPlugin to port it to Solr.
You cannot use it as it is.
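A hedged sketch of what that port involves (the class name below is a placeholder, not shipped code): write a QParserPlugin subclass that delegates to the surround parser, drop the jar into Solr's lib directory, and register it in solrconfig.xml:

```xml
<!-- solrconfig.xml: register a custom parser plugin (class name is hypothetical) -->
<queryParser name="surround" class="com.example.solr.SurroundQParserPlugin"/>
```

It can then be selected per request with defType=surround or a {!surround} local param.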
Peter,
thanks for sharing.
It sounds interesting. However, what are the downsides of this combo?
Regards
Does anyone know if there is a function in Solr that allows us to log
optimize times, i.e. the length of time optimization takes?
I can find a lot of questions about how long optimization should take, but
thus far nothing on how to access how long a particular run actually took.
There is a
Hello All,
My index documents represent a set of papers, each with an author id and the id
of the referee that reviewed the paper.
I also end up with a field in each document that tells me whether the referee
still has the paper but has not graded it. This can be a boolean.
In my final result
It seems the proper XPath statement to select the href of the link child
when rel='self' is
/feed/link[@rel='self']/string(@href) for the root;
/feed/entry/link[@rel='alternate']/string(@href) should get the children.
But it doesn't work in the DIH, though it does work in other XPath query processors.
Can
Hi,
Please take a look at Apache Nutch. It can crawl through a file system over FTP.
After crawling, it can use Tika to extract the content from your PDF files and
other formats. Finally, you can send the data to your Solr server for indexing.
http://nutch.apache.org/
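A minimal sketch of what such an FTP crawl setup might look like (the host, paths, and exact command syntax are placeholders and vary by Nutch version; check the docs for yours):

```
# urls/seed.txt -- the crawl's starting point
ftp://ftp.example.com/docs/

# conf/regex-urlfilter.txt -- allow the ftp host
+^ftp://ftp\.example\.com/
# (also make sure protocol-ftp is enabled via plugin.includes in nutch-site.xml)

# run the crawl, then push parsed content to Solr
bin/nutch crawl urls -dir crawl -depth 10 -topN 1000
bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb crawl/linkdb crawl/segments/*
```

Nutch passes each document's URL to Solr as a field, which can serve as the "FTP file location" attribute asked about earlier in the thread.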
Hi All,
Is there any
Hi all,
I've been using Solr 1.4 and it's working great for what I'm
doing. However, I'm now finding a need to filter results by location.
Searching around, I see that the distance functions are implemented in
Solr 4.0, but there's no full release yet.
So my question is, is Solr
Hello,
I have been trying to integrate Katta with Solr, sadly without success.
I am using the information from JIRA issue SOLR-1395 as a guide. However, this
information seems out of date and incomplete.
So far, I have attempted to integrate Katta with both Solr trunk and
branch-1.4.
I am
Hi,
You can use Solr 1.4.1 and a third party plugin [1]. It does a pretty good job
in spatial search. You could also try the Solr 3.1 branch which also has some
spatial features on-board. It, however, does not return computed distances but
can filter and sort using the great circle algorithm
I would like to have two sets of data and search them separately (they are
used for two different websites).
How can I do it?
Thanks!
You can configure it as two different instances in a Tomcat server, or keep
running two Jetty apps... :P
Igor,
you can set two different Solr cores in solr.xml and search them separately.
See multicore example in Solr distribution.
-Alexander
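A sketch of the multicore route suggested above (core names and paths are illustrative):

```xml
<!-- solr.xml: two cores, each with its own schema, config, and index -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="siteA" instanceDir="siteA"/>
    <core name="siteB" instanceDir="siteB"/>
  </cores>
</solr>
```

Each website then queries its own core, e.g. /solr/siteA/select and /solr/siteB/select, inside a single Solr instance.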
The Solr 4 branch is nowhere near ready for prime time. For example,
within the past week code was added that forces you to completely
reindex all of the documents you had. Solr 4 is really the trunk.
The low-level stuff is being massively changed to allow very big
performance improvements and new
Most security systems don't authenticate by user, they authenticate by
roles. Each user has one or more roles. In one large enterprise there
are 7000 roles (including printer access), each user may have 300, and
each document may have 20-50.
It works well to add roles as a multivalued field in
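The role-based approach described above might look like this in practice (the field name and role values are illustrative):

```xml
<!-- schema.xml: one multivalued field holding every role allowed to see the doc -->
<field name="roles" type="string" indexed="true" stored="false" multiValued="true"/>
```

At query time, append a filter built from the current user's roles, e.g. fq=roles:(editor OR printer_access), so only documents sharing at least one role with the user are returned.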
Hello
We've just started using Solr to provide search functionality for our
application, with the DataImportHandler performing a delta-import every 1
fired by crontab. This works great; however, it does occasionally miss
records that are added to the database while the delta-import is running.
Hi Jerry,
Sorry, not a direct answer, but why Katta? Why not SolrCloud (i.e. trunk)
instead?
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
Steve,
Does http://wiki.apache.org/solr/FieldCollapsing do what you need?
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
[x] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
Hi, all,
In the past, I always used SolrNet to interact with Solr. It works great.
Now, I need to use SolrJ. I think it should be easier than
SolrNet, since Solr and SolrJ should be homogeneous. But I cannot find a
tutorial that is easy to follow. No tutorials explain the SolrJ
The timestamp thing is not perfect. You can instead do a search
against Solr and find the latest timestamp in the index. SOLR-1499
allows you to search against Solr in the DataImportHandler.
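For context, a standard delta setup looks like the sketch below (table and column names are illustrative). The missed-records problem comes from last_index_time being recorded when the import starts, so rows committed during a long-running import can fall outside the next window; a common workaround is to subtract a safety margin in the SQL, while SOLR-1499 lets you derive the cutoff from the newest timestamp actually present in the index instead.

```xml
<!-- db-data-config.xml: illustrative delta-import entity -->
<entity name="item" pk="id"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE updated &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT * FROM item
                          WHERE id = '${dataimporter.delta.id}'"/>
```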
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
[X]
The unit tests are simple and show the steps.
Lance
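Since the thread asks for a starting point, here is a minimal SolrJ sketch in the spirit of those unit tests. It is an assumption-laden example: it presumes a Solr 1.4-era SolrJ on the classpath, a server running at http://localhost:8983/solr, and id/title fields in the schema.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class SolrJExample {
    public static void main(String[] args) throws Exception {
        // Connect to a running Solr instance (URL is an assumption).
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Build and index a document.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "hello solrj");
        server.add(doc);
        server.commit();

        // Query it back.
        QueryResponse rsp = server.query(new SolrQuery("title:hello"));
        System.out.println("found " + rsp.getResults().getNumFound() + " docs");
    }
}
```

The SolrJ jar and its dependencies ship in the dist/ directory of the Solr download.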