Use Chrome.
On Thu, Sep 19, 2013 at 7:32 AM, Micheal Chao fisher030...@hotmail.com wrote:
hi, I have installed solr4.4 on tomcat7.0. The problem is I can't see the
solr admin page; it always shows "loading". I can't find any error in the
tomcat logs, and I can send search requests and get the
Doc count did not change after I restarted the nodes. I am doing a single
commit after all 80k docs. Using Solr 4.4.
Regards,
Saurabh
On Mon, Sep 23, 2013 at 6:37 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
Interesting. Did the doc count change after you started the nodes again?
(Your schema and query only appear on the nabble.com forum; the message is
mostly empty for me on the mailing list.)
What you probably want is to change OR to AND:
params.set("q.op", "AND");
André
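To make the suggestion above concrete, here is a minimal sketch of what the resulting HTTP request looks like, in plain Java with only the standard library. The core name, field, and terms are hypothetical; in SolrJ itself you would call `params.set("q.op", "AND")` on a `ModifiableSolrParams` instance rather than build the URL by hand.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch only: shows the HTTP form of the q.op change. Core name, field,
// and query terms are illustrative placeholders.
public class QopUrlSketch {
    static String buildQuery(String baseUrl, String userQuery) {
        String q = URLEncoder.encode(userQuery, StandardCharsets.UTF_8);
        // q.op=AND makes every whitespace-separated term mandatory
        return baseUrl + "/select?q=" + q + "&q.op=AND";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("http://localhost:8983/solr/collection1",
                                      "contents:acted contents:well"));
    }
}
```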
On 09/23/2013 04:44 PM, asuka wrote:
Hi Jack,
I've been working with the following schema field
Hi - I think there is a bug in the conversion methods for SolrParams. But it
seems that using ModifiableSolrParams (to add and remove parameters and values,
which is what I want to do), is the way to go.
/Peter
-Original Message-
From: Peter Kirk [mailto:p...@alpha-solutions.dk]
The rest of the queries work, and I have added the following in solrconfig.xml:
<requestHandler name="/update/extract"
  class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="map.Last-Modified">last_modified</str>
    <str name="fmap.content">contents</str>
    <str name="lowernames">true</str>
    <str
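Fully written out, a handler definition along these lines would be expected. The field mappings are taken from the message above; the surrounding structure is a sketch of the usual defaults block, and note that the standard Solr Cell mapping prefix is `fmap.`, so `map.Last-Modified` in the original may itself be the problem.

```xml
<requestHandler name="/update/extract"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="fmap.Last-Modified">last_modified</str>
    <str name="fmap.content">contents</str>
    <str name="lowernames">true</str>
  </lst>
</requestHandler>
```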
That is in the pipeline, within the next 3-4 months for sure.
On Mon, Sep 23, 2013 at 11:07 PM, lochri loc...@web.de wrote:
Yes, actually that would be a very comfortable solution.
Is that planned? And if so, when will it be released?
--
View this message in context:
Shawn: unfortunately the current problems are with facet.method=enum!
Erick: We already round our date queries so they're the same for at least
an hour so thankfully our fq entries will be reusable. However, I'll take a
look at reducing the cache and autowarming counts and see what the effect
on
First I indexed documents by posting XML files to Solr (sending docs to
Solr using XML files).
Then I made changes to schema.xml, i.e. I added an analyzer and tokenizer.
I then indexed some new documents using the same procedure; now my searching with
spaces works only for newly indexed files and not the
On 24 September 2013 14:34, Nutan nutanshinde1...@gmail.com wrote:
First I indexed documents by posting XML files to Solr (sending docs to
Solr using XML files).
Then I made changes to schema.xml, i.e. I added an analyzer and tokenizer.
I then indexed some new documents using the same procedure; now my
It's not always the case that you need to re-index when you change schema.xml.
For example, if you add a tokenizer to the query analyzer only, you don't need to reindex.
But in the case below I suppose your schema changes affect indexing
time; then you need to re-index.
Sequencing of documents depends
Okay thanks.
--
View this message in context:
http://lucene.472066.n3.nabble.com/searching-within-documents-tp4090173p4091705.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
Can we call a Java class inside a Solr data-config.xml file, similar to
calling a script function?
I have a few manipulations to do before sending data via the DataImportHandler.
For each row, can I pass that row to a Java class in the same way we pass
it to a script function?
Thanks,
Prasi
Why does it happen that for some words it shows output and for others it
does not?
For example,
1)
q=contents:Sushant
numfound is 0
q=contents:sushant
gives output
2)
q=contents:acted
numfound 0
q=contents:well
gives output
This is the document:
<result name="response" numFound="1" start="0">
<doc>
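The Sushant/sushant asymmetry above is the classic symptom of index-time and query-time analysis disagreeing about case, or of documents having been indexed before an analyzer change (as described earlier in the thread). Assuming the contents field should be case-insensitive, a sketch of a field type that lowercases symmetrically on both sides; the names are illustrative, and a full re-index is needed after changing it:

```xml
<fieldType name="text_general" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```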
I found the solution.
http://dzoessolr020:8080/solr4/person/select/?
q=
(
( ( GenderSFD:Male )
AND {!join from=PersonID to=CoreID fromIndex=personjob
v='((CoCompanyName:hospital) OR (PoPositionsAllS:developer))'}
AND {!join from=DocPersonAttachS to=CoreID fromIndex=document v='(DocNameS:
I am struggling to get a deep understanding of soft commit.
I have read Erick's post
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
which helped me a lot with when and why we should call each type of commit.
But still, I can't understand what
I think delta imports only work on the parent entity and cached child entities
will load in full, even if you only need to look up a few rows for the delta.
Others though might have a way to get this to work.
Here are two possible workarounds.
On the child entity, specify:
<entity
Use Mozilla for better results; even in IE it is not working properly.
-Original Message-
From: William Bell [mailto:billnb...@gmail.com]
Sent: Tuesday, September 24, 2013 12:02 PM
To: solr-user@lucene.apache.org
Subject: Re: solr4.4 admin page show loading
Use Chrome.
On Thu, Sep 19, 2013
You probably want to write a custom Transformer. See:
http://wiki.apache.org/solr/DIHCustomTransformer
Or maybe a custom Evaluator. See:
http://wiki.apache.org/solr/DataImportHandler#Evaluators_-_Custom_formatting_in_queries_and_urls
Possibly one or more of the built-in Transformers will do
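As a concrete illustration of the custom Transformer route: DIH resolves transformers by convention, so a plain class with a public `Object transformRow(Map<String, Object>)` method, referenced from the entity's `transformer` attribute, is enough. The class name and the "name" column below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a DIH custom transformer. DataImportHandler finds transformers
// by reflection: any class with a public Object transformRow(Map) method can
// be listed in the entity's transformer="..." attribute.
public class NameTransformer {
    public Object transformRow(Map<String, Object> row) {
        Object name = row.get("name");
        if (name != null) {
            // example manipulation: upper-case the value before it reaches Solr
            row.put("name", name.toString().toUpperCase());
        }
        return row;
    }

    // Standalone demonstration; inside DIH the framework calls transformRow.
    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("name", "prasi");
        System.out.println(new NameTransformer().transformRow(row));
    }
}
```

It would then be referenced as `<entity ... transformer="NameTransformer">` (fully-qualified if packaged), with the jar on Solr's classpath.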
My query looks like:
start=0&rows=10&hl=true&hl.fl=content&qt=dismax
&q=pookan
&fl=id,application,timestamp,name,score,metaData,metaDataDate
&fq=application:OnlineR3_6_4
&fq=(metaData:channelId/101 OR metaData:channelId/104)
&sort=score desc
but I am not getting the desired results.
<doc>
<str
I need to implement further functionality; a picture of it is attached below:
http://lucene.472066.n3.nabble.com/file/n4091734/iphone.png I have an
already running application based on Solr search.
In a few words about it: a drop-down will contain similar search phrases
within a concrete category and number
Luís, would you mind sharing your findings for others / the archive?
On Tuesday, September 10, 2013 at 6:49 PM, Luís Portela Afonso wrote:
Solved
On Sep 10, 2013, at 4:55 PM, Luís Portela Afonso meligalet...@gmail.com wrote:
Is it possible to execute
Hi,
I believe data is not fsynched to disk until a hard commit (and even
then disks can lie to you and tell you data is safe even though it's
still in disk cache waiting to really be written to the medium) ,
which is why you can lose it between hard commits. Soft commits just
make newly added
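The durability/visibility split described above maps onto the two commit intervals in solrconfig.xml. A typical sketch follows; the interval values are illustrative, not recommendations:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: fsyncs index files and truncates the transaction log -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: makes new docs searchable, but does not fsync -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```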
Is it possible that some of those 80K docs were simply not valid? e.g.
had a wrong field, had a missing required field, anything like that?
What happens if you clear this collection and just re-run the same
indexing process and do everything else the same? Still some docs
missing? Same number?
Peter
You can access request params that way: ${dataimporter.request.command} (from
https://wiki.apache.org/solr/DataImportHandler#Accessing_request_parameters) -
although I'm not sure what happens if you provide the same param multiple times.
Perhaps I'd go with oid=5,6 as a URL param and use
It's not clear what you're trying to do. Do you want to un-group the results?
By that I mean are you trying to take the grouped results you get back and
display them in one flat list ordered by score?
If that's the case, the simplest thing to do would be to do this on
the application side with
Consider using a SolrJ program, perhaps multiple
ones running in parallel.
See: http://searchhub.org/dev/2012/02/14/indexing-with-solrj/
Best,
Erick
On Mon, Sep 23, 2013 at 3:31 PM, Sadika Amreen samr...@pyaanalytics.com wrote:
Hi all,
I am looking to index the entire directory of PDF
Sure, index the parent node id (perhaps multiple) with each child
and add fq=parent_id:12.
You can do the reverse and index each node with its child node IDs
to ask the inverse question.
This won't extend to grandchildren/parents, but you haven't stated that you
need to do this.
Best,
Erick
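To make the suggestion above concrete, a sketch of what the child documents would look like; the field names are illustrative:

```xml
<!-- each child document stores its parent's id -->
<add>
  <doc>
    <field name="id">34</field>
    <field name="parent_id">12</field>
  </doc>
  <doc>
    <field name="id">35</field>
    <field name="parent_id">12</field>
  </doc>
</add>
```

The children of node 12 are then retrieved with q=*:*&fq=parent_id:12.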
bq: As an aside, it would be nice if the queryparser could do the same
thing in Lucene
Lucene does not and (probably) will not ever know anything about the
schema. It's
purposely unaware of this higher-level construct. I wish you great good luck
persuading the lucene guys to have anything like a
Jérôme
Just had a quick look at the source of
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathEntityProcessor.java?view=markup#l324
.. which looks like there is a LOG.warn(msg, e); statement on line 331, where msg
Thank you Erick.
I actually do need it to extend to grandchildren, as stated in "I need to
be able to find *all descendants* of a node with one query."
I already have an index that allows me to find the direct children of a
node, what I need is to be able to get all descendants of a node
Did all of the curl update commands return success? Any errors in the logs?
wunder
On Sep 24, 2013, at 6:40 AM, Otis Gospodnetic wrote:
Is it possible that some of those 80K docs were simply not valid? e.g.
had a wrong field, had a missing required field, anything like that?
What happens if
Hi Andre,
I don't want to get documents that fit my whole query, I want those
documents that are fully satisfied with some terms of the query.
In other words, I'm interested in an exact match from the point of view of
the document, not from the point of view of the query.
Asuka
Andre
Thanks Michael.
Arcadius.
On 23 September 2013 05:32, Michael Ryan mr...@moreover.com wrote:
Sounds like https://issues.apache.org/jira/browse/LUCENE-3821 (issue
seems to be fixed but still shows as open).
-Michael
-Original Message-
From: Arcadius Ahouansou
I discovered how to use the ScriptTransformer
(http://wiki.apache.org/solr/DataImportHandler#ScriptTransformer), which
worked to solve my problem. I had to make use
of context.setSessionAttribute(..., ..., 'global') to store a flag for the
value in the file, because the script is only called if there
Are you saying that the more times the word appears, the higher you want
it to score?
Note: add debugQuery=true to your query and look near the end of the
output; you will be able to see exactly how the score was calculated and
thus which component wasn't behaving as you expected (you might want
Thanks Erick for your response.
My goal is:
1. Search from Solr. In the search results, we would like to show no more
than two results from the same source id.
2. We would like all these results sorted by their
score.
So if I use Solr result grouping to get the top two
Summary - when constraining a search using filter query, how can I exclude
the constraint for a particular facet?
Detail - Suppose I have the following facet results for a query
q=mainquery:
<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">
    <lst name="foo">
      <int name="A">491</int>
      <int
: documentation that I can limit results to category A as follows:
:
: fq={!raw f=foo}A
:
: But I cannot seem to (Solr 3.6.1) exclude that way:
:
: fq={!raw f=foo}-A
with the raw qparser, there is no markup syntax at all -- so it's
interpreting the - as part of the literal term value you are
Your requirement is still somewhat ambiguous - you use "fully" and "some" in
the same sentence. Which is it?
If you simply want documents that contain every one of the query terms,
use the explicit AND operator ("+" or "AND") or set the default operator
to AND.
But... we are still in the dark as
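With illustrative terms, the equivalent ways of requiring every query term:

```text
q=solr AND lucene          explicit binary operator
q=+solr +lucene            '+' marks each term as required
q=solr lucene&q.op=AND     default operator set per request
```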
: Your requirement is still somewhat ambiguous - you use fully and some in
: the same sentence. Which is it?
the request seems pretty clear to me...
: I don't want to get documents that fit my whole query, I want those
: documents that are fully satisfied with some terms of the query.
...my
We ran into 1 snag during development with SOLR and I thought I'd run it by
anyone to see if they had any slick ways to solve this issue.
Basically, we're performing a SOLR query with grouping and want to be able
to sort by the number of documents found within each group.
Our query response from
: q=pookan
...
: Acutually i want particular word for that match max in content tag that
: come first (relevancy based)
the default TF-IDF scoring mechanism rewards documents for matching a term
multiple times (that's the TF part), but there is also a length
normalization factor that
Hi,
I'm using Solr's Suggester function to implement an autocomplete feature.
I have it set up to check against the username and name fields. The problem
is that when running a query against the name, the second term, after the
whitespace (the surname), returns 0 results. It works if the query is a partial
name
Thanks Chris,
that's exactly what I was looking for.
One last question. As far as I can see, the solution that you are offering
me, termfreq is for Solr 4+, isn't it?
Right now I'm working with Solr 3.6.2. Is there any solution for that
version, or do I need an upgrade?
Kind Regards
--
Why does stripHTML=false have no effect in DIH? The HTML is stripped in text
and text_nohtml when I display the index with select?q=*.
I'm trying to get one field without HTML and one with it, so I can also index the
links on the page.
data-config.xml:
<entity name="rec"
(NOTE: cross-posted to various lists, please reply only to general@lucene
w/ any questions or follow ups)
Hey folks,
2 announcements regarding the upcoming Lucene/Solr Revolution EU 2013 in
Dublin (November 4-7)...
## 1) Session List Now Posted
I'd like to thank everyone who helped vote
@Furkan Yes, I have run a commit, other text is searchable.
Not sure what you mean there for MultiPhraseQuery. It is mentioned in
context to SynonymFilterFactory, RemoveDuplicatesTokenFilterFactory and
PositionFilterFactory. Which part are you referring to?
@Jason I get this response (I have
On 9/24/2013 5:51 AM, adfel70 wrote:
My conclusion is that soft commit always flushes the data, but because of
the implementation of NRTCachingDirectoryFactory, the data will be written
to the disk when its getting too big.
The NRTCachingDirectoryFactory (which creates NRTCachingDirectory
Hi,
I'm new to SolrCloud , trying to set up a test environment based on
the wiki documentation. Based on the example, the setup and sample indexing
/ query works fine. But I need some pointers on the best practices of
indexing / querying in SolrCloud. For e.g. I've 2 shards, with 1 leader
WordDelimiterFilterFactory was the culprit. Removing that fixed the problem.
Thanks,
-Utkarsh
On Tue, Sep 24, 2013 at 12:17 PM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
@Furkan Yes, I have run a commit, other text is searchable.
Not sure what you mean there for MultiPhraseQuery. It is
On 9/24/2013 2:46 PM, Shamik Bandopadhyay wrote:
Now, I'm using SolrJ client (CloudSolrServer) to send documents for
indexing. Based on SolrCloud fundamentals, I can send the document to any
of the four servers or to a specific shard id. Is it advisable to use the
server information directly
Hi, I'm new to SolrCloud , trying to set up a test environment based on the
wiki documentation. Based on the example, the setup and sample indexing /
query works fine. But I need some pointers on the best practices of indexing
/ querying in SolrCloud. For e.g. I've 2 shards, with 1 leader and a
Thanks for the insight Shawn, extremely helpful.
Appreciate it.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Best-practice-to-index-and-query-SolrCloud-tp4091823p4091836.html
Sent from the Solr - User mailing list archive at Nabble.com.
Run the Maven dependency tree command (mvn dependency:tree) and you can
easily understand the cause of the dependency conflict; if not, you can send
your command-line output and we can help you.
On Saturday, 21 September 2013, Erick Erickson erickerick...@gmail.com
wrote:
bq: Caused by:
Ok, thanks for your answers!
Scott
-Original Message-
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
Sent: Wednesday, September 18, 2013 5:36 PM
To: solr-user@lucene.apache.org
Subject: Re: Querying a non-indexed field?
Moreover, you may be trying to save/optimize in
Hello,
I created my own codec and Solr can find it sometimes and not other times.
When I start fresh (delete the data folder and run Solr), it all works fine. I
can add data and query it. When I stop Solr and start it again, I get:
Caused by: java.lang.IllegalArgumentException: A SPI class
On 9/24/2013 6:32 PM, Scott Schneider wrote:
I created my own codec and Solr can find it sometimes and not other times.
When I start fresh (delete the data folder and run Solr), it all works fine.
I can add data and query it. When I stop Solr and start it again, I get:
Caused by:
Not sure, though; it depends on how your schema is configured. If it has WordDelimiter
filters at indexing or query time, then search will not behave as you desired.
Check your index creation using Solr analysis for this type of string.
Thanks,
Abhinav
-Original Message-
From: Viresh Modi