Field type is long and not multi-valued.
Using the Solr 3.3 war file.
Tried on a Solr 1.4.1 index and a Solr 3.3 index; in both cases it's not working.
query :
http://localhost:8091/Group/select?indent=on&q=studyid:120&sort=studyidasc,groupid asc,subjectid asc&start=0&rows=10
all the ID fields are long
Hi all,
In my dismax request handler I'm usually using both the qf and pf
parameters in order to do phrase and query search with different
boosting.
Now there are some scenarios where I want just pf active (without
qf). Other than surrounding my query with double quotes, is there
another way to do
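For reference, a minimal dismax handler configuration using both parameters might look like this sketch (the handler name and the field names title/body are placeholders):

```xml
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">title^2.0 body</str>
    <str name="pf">title^10.0 body^5.0</str>
  </lst>
</requestHandler>
```

Note that with dismax, pf only adds a phrase boost for documents that already match via qf, which is why quoting the whole query is the usual way to force phrase-only matching.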
On 14.11.2011 09:33, rajini maski wrote:
query :
http://localhost:8091/Group/select?indent=on&q=studyid:120&sort=studyidasc,groupid asc,subjectid asc&start=0&rows=10
Is it a copy-and-paste error, or did you really sort on studyidasc?
I don't think you have a field studyidasc, and Solr
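For reference, with the space restored (URL-encoded as %20) the sort parameter would presumably read:

```
sort=studyid%20asc,groupid%20asc,subjectid%20asc
```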
thanks for the replies... the problem with Synonyms is that they would need
to be tracked... there could be new words entered that will need to be
added to the list on a regular basis...
@Otis: As for the option of a custom TokenFilter, how would that work? I
have not coded anything into Solr or
Thanks for your reply Mr. Erick
All I want is this: I have indexed some of my PDF
and DOC files.
Now, for any changes I make to them, I want a
delta-import (incremental) so that
I do not have to re-index the whole document set with a full import.
Only the changes made
to these documents should get
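For incremental indexing of files with DIH, FileListEntityProcessor's newerThan attribute is one lever; a hedged sketch (baseDir, the file pattern, and the nested entity are placeholders, and details may vary by Solr version):

```xml
<entity name="files" processor="FileListEntityProcessor"
        baseDir="/var/data/solr" fileName=".*\.(pdf|doc)$"
        newerThan="${dataimporter.last_index_time}"
        recursive="true" rootEntity="false" dataSource="null">
  <!-- a nested entity (e.g. TikaEntityProcessor) would parse each file here -->
</entity>
```

Whether this is driven by full-import or delta-import differs between versions, so check the DIH documentation for yours.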
Hi Erick, hi Yury,
thanks to your input I found a perfect solution for my case. Even though
this is not a solr-only solution, I will just briefly describe how it works
since it might be of interest to others:
I have put up a MySQL database holding two tables. The first only has a
primary key with
Hi,
By counting in facet results I mean resolve the problem:
I have 7 documents:
A1 B1 C1
A2 B1 C1
A3 B2 C1
A4 B2 C2
A5 B3 C2
A6 B3 C2
A7 B3 C2
If I make the facet query by field B, get the result: B1=2, B2=2, B3=3.
A1 B1 C1
A2 B1 C1 2 - faceting by B
Thanks for your reply...my data-config.xml is
<dataConfig>
  <dataSource type="BinFileDataSource" name="bin"/>
  <document>
    <entity name="f" pk="id" processor="FileListEntityProcessor"
            recursive="true"
            rootEntity="false"
            dataSource="null" baseDir="/var/data/solr"
Hi, I think what you are looking for is *nested facets* or
HierarchicalFaceting: http://wiki.apache.org/solr/HierarchicalFaceting
Category A - Subcategory A1
Category A - Subcategory A1
Category B - Subcategory A1
Category B - Subcategory B2
Category A - Subcategory A2
Faceting by Category:
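One common way to implement this, assuming you index a combined path field (call it category_path) with values like A/A1, is prefix faceting:

```
facet=true&facet.field=category_path&facet.prefix=A/
```

This returns only the subcategory buckets under category A.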
Thanks for your reply...my
data-config.xml is
<dataConfig>
  <dataSource type="BinFileDataSource" name="bin"/>
  <document>
    <entity name="f" pk="id" processor="FileListEntityProcessor"
            recursive="true"
            rootEntity="false"
            dataSource="null" baseDir="/var/data/solr"
The earlier issue has been resolved, but I'm stuck on something else. Can you
tell me which POI jar version would work with Tika 0.6? Currently I have
poi-3.7.jar. The error which I am getting is this:
SEVERE: Exception while processing: js_logins document :
SolrInputDocument[{id=id(1.0)={100984},
Hi,
I'm planning to do some information retrieval experiments with Solr.
I'd like to compare different IR methods. I have a test collection
with topics and judgements available. I'm considering using Solr (and
not Lemur/Indri etc.) for the tests, because Solr supports several
nice methods
Hi Mark,
In the above case, what if the index is optimized partly, i.e. by
specifying the max number of segments we want.
It has been observed that after optimizing (even partial optimization),
indexing as well as searching has been faster than with an
unoptimized one.
Decreasing the merge
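For reference, a partial optimize as described above can be requested by posting an optimize command with maxSegments (the value 4 here is just an example):

```xml
<optimize maxSegments="4"/>
```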
There is no error as such.
When I do a basic sort on a *long* field, the sort doesn't happen.
Query is:
http://blr-ws-195:8091/Solr3.3/select/?q=2%3A104+AND+526%3A27747&version=2.2&start=0&rows=10&indent=on&sort=469%20asc&fl=469
<lst name="responseHeader">
<int name="status">0</int>
<int
I use Solandra that integrates Solr 3.4 with Cassandra. So, is there any way
to solve this problem with Solr 3.4 (without pivots)?
Your results are:
Cat: A=3
SubCat: A1=2 and A2=1
Cat: B=2
SubCat: A1=1 and B2=1
but I would like to have:
Cat: A=3
SubCat: 2 (losing information about the
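To make the desired counts concrete, here is a small client-side Python sketch (not a Solr feature; just post-processing of the sample Cat/SubCat pairs) that computes, per category, the document count together with the number of distinct subcategories:

```python
from collections import defaultdict

# sample documents as (category, subcategory) pairs from the thread
docs = [
    ("A", "A1"), ("A", "A1"), ("B", "A1"), ("B", "B2"), ("A", "A2"),
]

def facet_with_distinct_subcats(pairs):
    counts = defaultdict(int)    # documents per category
    subcats = defaultdict(set)   # distinct subcategories per category
    for cat, sub in pairs:
        counts[cat] += 1
        subcats[cat].add(sub)
    return {cat: (counts[cat], len(subcats[cat])) for cat in counts}

print(facet_with_distinct_subcats(docs))
# {'A': (3, 2), 'B': (2, 2)}
```

Without pivot faceting, fetching the matched Cat/SubCat pairs and aggregating client-side like this is one workable fallback.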
I'm planning to do some information retrieval experiments with Solr.
I'd like to compare different IR methods. I have a test collection
with topics and judgements available. I'm considering using Solr (and
not Lemur/Indri etc.) for the tests, because Solr supports several
nice methods
Hi,
Whenever I search with the words OfficeJet, officejet,
Officejet, or oFiiIcejET, I get different results for each
search. I am not able to understand why this is happening.
I want to solve this problem in such a way that search becomes case
insensitive and
When I do a basic sort on a *long* field, the sort doesn't happen.
Query is:
http://blr-ws-195:8091/Solr3.3/select/?q=2%3A104+AND+526%3A27747&version=2.2&start=0&rows=10&indent=on&sort=469%20asc&fl=469
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">3</int>
Check this :
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.LowerCaseFilterFactory
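A minimal sketch of a case-insensitive text field type (the name text_lc and the tokenizer are placeholders; the filter must apply at both index and query time, and the field must be reindexed after the change):

```xml
<fieldType name="text_lc" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```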
On Mon, Nov 14, 2011 at 3:24 PM, jayanta sahoo jsahoo1...@gmail.com wrote:
Hi,
Whenever I am searching with the words OfficeJet or officejet or
Officejet or oFiiIcejET. I am getting the
On Nov 14, 2011, at 8:27 AM, Isan Fulia wrote:
Hi Mark,
In the above case, what if the index is optimized partly, i.e. by
specifying the max number of segments we want.
It has been observed that after optimizing (even partial optimization),
indexing as well as searching has been faster than
I
On Mon, Nov 14, 2011 at 7:23 PM, Ahmet Arslan iori...@yahoo.com wrote:
When I do a basic sort on a *long* field, the sort doesn't
happen.
Query is :
http://blr-ws-195:8091/Solr3.3/select/?q=2%3A104+AND+526%3A27747&version=2.2&start=0&rows=10&indent=on&sort=469%20asc&fl=469
<lst
I tried this one:
<fieldType name="tlong" class="solr.TrieLongField"
           precisionStep="8" omitNorms="true"
           positionIncrementGap="0"/>
It didn't work :(
Sort didn't happen
Did you restart tomcat and perform re-index?
Hello All,
I am using XSLT to transform the Solr XML response; when I make a search I get
the warning below:
WARNING [org.apache.solr.util.xslt.TransformerProvider] The
TransformerProvider's simplistic XSLT caching mechanism is not appropriate
for high load scenarios, unless a single XSLT transform is used
Set the cache lifetime high, like it says.
Questions - why use the XSLT response writer? What are you transforming the
response into and digesting it with?
Erik
On Nov 14, 2011, at 09:31 , vrpar...@gmail.com wrote:
Hello All,
I am using XSLT to transform the Solr XML response, when
And you cannot update-in-place. That is, you can't update
just selected fields in a document, you have to re-index the
whole document.
Best
Erick
On Mon, Nov 14, 2011 at 6:11 AM, Ahmet Arslan iori...@yahoo.com wrote:
Thanks for your reply...my
data-config.xml is
<dataConfig>
Yes .
On 11/14/11, Ahmet Arslan iori...@yahoo.com wrote:
I tried this one:
<fieldType name="tlong" class="solr.TrieLongField"
           precisionStep="8" omitNorms="true"
           positionIncrementGap="0"/>
It didn't work :(
Sort didn't happen
Did you restart tomcat and perform re-index?
Yes .
Did you restart tomcat and perform re-index?
Okay, one thing left: HTTP caching may cause a stale response. Delete your
browser's cache if you are using a browser to query Solr.
In solrconfig.xml, change the xsltCacheLifetimeSeconds property of the
XSLTResponseWriter to the desired value (in this example 6000 seconds):
<queryResponseWriter name="xslt" class="solr.XSLTResponseWriter">
  <int name="xsltCacheLifetimeSeconds">6000</int>
</queryResponseWriter>
On Mon, 2011-11-14 at 15:31 +0100,
Hi Solr,
Does anyone know of an easy way to tell if there are pending documents waiting
for commit?
Our application performs operations that are never safe to perform while
commits are pending. We make this work by making sure that all indexing
operations end in a commit, and stop the unsafe
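One place to look, at least on Solr 1.4/3.x, is the update handler statistics page, which exposes a docsPending counter (host, port, and exact path are placeholders and vary by version):

```
http://localhost:8983/solr/admin/stats.jsp
```

Under the updateHandler section, docsPending greater than zero means there are uncommitted documents.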
Hello everyone,
A newbie question: how do I find out how many documents have been indexed
across all shards?
Thanks much!
Hi all,
I saw one issue: RAM usage keeps increasing when we run queries.
After looking in the code, it looks like Lucene uses MMapDirectory to map
index files to RAM.
According to the comments at
http://lucene.apache.org/java/3_1_0/api/core/org/apache/lucene/store/MMapDirectory.html
it will use a lot of memory.
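If the mapped address space is a concern, the directory implementation can be chosen in solrconfig.xml; a hedged sketch (factory availability depends on the Solr version):

```xml
<directoryFactory name="DirectoryFactory" class="solr.NIOFSDirectoryFactory"/>
```

Note that memory mapped by MMapDirectory is virtual address space backed by the OS page cache, not Java heap, so high reported usage is often harmless.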
I'm planning to do some information retrieval experiments with Solr.
There are some existing implementations in Lucene:
http://lucene.apache.org/java/3_0_2/api/contrib-benchmark/org/apache/lucene/benchmark/quality/trec/package-summary.html
Have you used that with Solr? How?
//Ismo
Could someone take a look at this page:
http://wiki.apache.org/solr/ContentStreamUpdateRequestExample
... and tell me what code changes I would need to make to be able to
stream a LOT of files at once rather than just one? It has to be
something simple like a collection of some sort but I
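The wiki example uses SolrJ, but the same loop can be sketched in Python against the /update/extract endpoint (the URL, port, and parameter choices here are assumptions; only the URL-building helper is exercised below, since the posting loop needs a running Solr):

```python
import os
import urllib.parse
import urllib.request

SOLR_EXTRACT_URL = "http://localhost:8983/solr/update/extract"  # assumed endpoint

def build_extract_url(base_url, doc_id):
    # literal.id sets the unique key for the streamed document
    params = urllib.parse.urlencode({"literal.id": doc_id, "commit": "false"})
    return "%s?%s" % (base_url, params)

def post_files(paths):
    # one request per file, then a single commit at the end
    for path in paths:
        url = build_extract_url(SOLR_EXTRACT_URL, os.path.basename(path))
        with open(path, "rb") as f:
            req = urllib.request.Request(
                url, data=f.read(),
                headers={"Content-Type": "application/octet-stream"})
            urllib.request.urlopen(req)
    urllib.request.urlopen(SOLR_EXTRACT_URL.rsplit("/", 2)[0] + "/update?commit=true")
```

Committing once after the loop, rather than per file, keeps the many-files case cheap.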
: Although I don't have statistics to back my claim, I suspect that the really
: nasty filters don't have as high a hitcount as the ones that are more simple.
: Typically the really nasty filters are used when an employee logs into the
: site. Employees have access to a lot more than customers
Hello All,
I am facing this strange issue of an HTTP 411 Length Required error. My Solr
is hosted with a third-party hosting company and it was working fine all this
while. I really don't understand why this happened. Attached is the stack
trace; any help will be appreciated.
: Thanks for the reply. There are many keyword terms (1000?) and not sure if
: Solr would choke on a query string that long. Perhaps solr is not built to
Did you try it?
1000 facet.query params is not a strain for Solr -- but you may find
problems with your servlet container if you try
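One way around a servlet container's URL-length limit is to send the parameters as a POST body instead of the query string; a sketch with curl (URL and parameters are placeholders):

```
curl http://localhost:8983/solr/select -d 'q=*:*&facet=true&facet.query=price:[0+TO+10]'
```

Containers usually accept much larger POST bodies, and the POST size limits in Tomcat/Jetty are themselves configurable.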
Hi All,
We are using Solr 1.4.1 in production and are considering an upgrade to a
newer version.
It seems that Solr 3.x requires a complete rebuild of the index, as the
format seems to have changed.
Is Solr 4.0 index file format compatible with Solr 3.x format?
Please advise.
Thanks
Saroj
Hi,
I have a very large index, and I'm trying to add a spell checker for it.
I don't want to copy all the text in the index to an extra spell field, since
that would be prohibitively big, and the index is already close to as big as
it can reasonably be,
so I just want to extract word frequencies as I index for
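One option that avoids a copyField is IndexBasedSpellChecker, which builds its spelling dictionary from the terms of an existing indexed field; a hedged solrconfig.xml sketch (the field name text and the index dir are placeholders):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="field">text</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
  </lst>
</searchComponent>
```

It still writes a separate (much smaller) spelling index on disk, built from that field's terms, so no extra stored copy of the text is needed.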
Hi,
Even though I have tried every possibility, like <filter
class="solr.LowerCaseFilterFactory"/>, I am still getting the same problem.
If anyone has faced the same problem before, please let me know how you
solved it.