Dear Apache Solr support,
I wish you a good day! I'm new to Solr; please help me confirm the information below:
1. The URL must use the standard ports for HTTP (80) and HTTPS (443).
The port is implied by the scheme, but may also be mentioned in the URL as
long as the port is
Hi all,
I have a server which uses Solr, and for some reason Solr got
terminated. When I restart it with java -jar start.jar, it uses stdout as
the logger. Should I just redirect this to a file location, or is there
an idiomatic Solr way this should be done?
Thanks,
Can
Solr already writes the logs to a file, 'solr.log'. It's located in the logs
folder next to start.jar (logs/solr.log).
I'm not sure if that's what you're looking for :-).
--
View this message in context:
http://lucene.472066.n3.nabble.com/start-jar-config-tp4119201p4119203.html
Sent from the Solr - User mailing list archive at Nabble.com.
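If a quick interim fix is still wanted before touching the logging config, the plain shell redirect works (a sketch; the log path is illustrative):

java -jar start.jar > logs/solr-console.log 2>&1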
Thank you Alexander for your reply.
Here I am posting my schema definition
<field name="doctor_id" type="int" indexed="true" stored="true"
       multiValued="false" required="false"/>
<field name="id" type="int" indexed="true" stored="true"
       multiValued="false"/>
<copyField source="doctor_id" dest="id"/>
But I
I'm not sure if I'm missing any configuration params here, but I ran into
an issue when I tried to assign an XPath field from a URLDataSource (XML
endpoint) to two fields defined in schema.xml.
Here is my scenario,
I have two fields
*profile_display* and *profile_indexed*
My assignment in DataImportHandler
On 24 February 2014 12:51, Chandan khatua chand...@nrifintech.com wrote:
Hi,
We have raw binary data stored in database(not word,excel,xml etc files) in
BLOB.
We are trying to index using TikaEntityProcessor but nothing seems to get
indexed.
But the same configuration works when
Yes, indeed. Every release of luke is tested on the corresponding solr
version's indexes. The indexes are created based on the exampledocs of the
solr package.
Dmitry
On Mon, Feb 17, 2014 at 12:41 AM, Bill Bell billnb...@gmail.com wrote:
Yes it works with Solr
Bill Bell
Sent from mobile
On 24 February 2014 12:39, Quốc Nguyễn nhquoc1...@gmail.com wrote:
Dear Apache Solr support,
I wish you a good day! I'm new to Solr; please help me confirm the information below:
1. The URL must use the standard ports for HTTP (80) and HTTPS (443).
The port is implied by
On 24 February 2014 14:45, manju16832003 manju16832...@gmail.com wrote:
I'm not sure if I would be missing any configuration params here, however
when I tried to assign an xpath field from URLDataSource (XML end point) to
two fields defined in schema.xml.
Here is my scenario,
I have two
I opened SOLR-5768
https://issues.apache.org/jira/browse/SOLR-5768
On Mon, Feb 24, 2014 at 12:56 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
Yes that should be simple. But regardless of the parameter, the
fl=id,score use-case should be optimized by default. I think I'll
commit the
Hi Gora!
Your concern was: what is the type of the column used to store the binary
data in Oracle?
The column type is BLOB in the DB. The column can also contain rich-text files.
Regards,
Chandan
-Original Message-
From: Gora Mohanty [mailto:g...@mimirtech.com]
Sent: Monday, February 24,
My input is that:
{| style=text-align: left; width: 50%; table-layout: fixed; border=0 |}
Analysis is as follows:
WT
text   raw_bytes          start  end  type      flags  position
style  [73 74 79 6c 65]   3      8    ALPHANUM  0      1
text   [74 65 78 74]      10     14   ALPHANUM  0      2
align  [61 6c 69 67 6e]   15     20   ALPHANUM  0      3
left   [6c 65 66 74]      22     26   ALPHANUM  0      4
I've done something like this; the key was to use a FieldStreamDataSource
to read from the BLOB field.
Something like
<dataSource name="main" ... />
<dataSource type="FieldStreamDataSource" name="fieldstream"/>
then
<entity name="tika" processor="TikaEntityProcessor"
        dataField="main.BLOB"
Hi Raymond!
My data-config.xml looks like below:
<?xml version="1.0" encoding="UTF-8" ?>
<dataConfig>
<dataSource name="db" driver="oracle.jdbc.driver.OracleDriver"
url="jdbc:oracle:thin:@//x.x.x.x:x/d11gr21" user="x" password="x"/>
<dataSource name="dastream" type="FieldStreamDataSource" />
<document>
<entity
Try replacing the inner entity with something like
<entity name="message"
        dataSource="dastream"
        processor="TikaEntityProcessor"
        dataField="messages.MESSAGE"
        format="xml">
  <field column="text" name="mxMsg"/>
</entity>
--- this assumes that you get the blob from a
I've tried as per your guide, but no data are being indexed.
The output of the Query screen looks like:
<doc>
  <str name="id">2158</str>
  <arr name="mxMsg">
    <str><?xml version="1.0" encoding="UTF-8"?><html
    xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta name="Content-Type" content="application/octet-stream"/>
    <title/>
The XPathEntityProcessor supports only one field mapping per XPath
expression, so using copyField is the only way.
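A sketch of that approach, using the profile_display/profile_indexed names from the earlier message (the xpath value and field types here are illustrative):

In data-config.xml, map the xpath to a single column:
<field column="profile_display" xpath="/profile/display"/>

In schema.xml, declare both fields and copy one into the other:
<field name="profile_display" type="string" indexed="false" stored="true"/>
<field name="profile_indexed" type="text_general" indexed="true" stored="false"/>
<copyField source="profile_display" dest="profile_indexed"/>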
On Mon, Feb 24, 2014 at 2:45 PM, manju16832003 manju16832...@gmail.com wrote:
I'm not sure if I would be missing any configuration params here, however
when I tried to assign an xpath field
We are currently using SolrCloud version 4.3 with the following setup: a
core with 2 shards, Shard1 and Shard2; each shard has replication factor 1.
We have noticed that in one of the shards, the document differs between the
leader and the replica. Though the doc exists in both the
Can you explain how to make a direct call to ZooKeeper instead of the Cloud
collection (currently I am querying the cloud with something like
http://192.168.2.183:8900/solr/collection1/select?q=*:* ) from the UI; now
if I assume shard 8900 is down
Try running the query for the outer entity (messages) in an sql client,
and verify that your blob column is called MESSAGE.
On Mon, Feb 24, 2014 at 12:22 PM, Chandan khatua chand...@nrifintech.comwrote:
I've tried as per your guide. But, no data are indexing.
The output of Query screen looks
On 24 February 2014 15:34, Chandan khatua chand...@nrifintech.com wrote:
Hi Gora !
Your concern was What is the type of the column used to store the binary
data in Oracle?
The column type is BLOB in DB. The column can also have rich text file.
Um, your original message said that it does
Hello!
Just a few random points:
1. Interesting site. I'd say there are similar sites, but this one has
cleaner interface. How does your site compare to this one, for example, in
terms of feature set?
http://qnalist.com/questions/4640870/luke-4-6-0-released
At least, the user ranking seems to be
Below is the URL, which hits the middle layer; the middle layer then forms
the Solr query and fires it:
listing?offset=0&sortparam=0&limit=20&q=Chennai~Tambaram~1~2,3~45~2500~800~2000~~24
Chennai--city
Tambaram--locality
1--blah
2,3--blah
45~2500--price_min and max
Vineet, I'm assuming that you are executing your search from a Java
client. If so, just use CloudSolrServer from the SolrJ API and
save yourself all this trouble. If you are not using a Java
client, then you need to put a few or all of your servers behind a load
balancer and invoke
Please provide the *Solr* queries that are being invoked by your
middle layer along with the results you expect and the results you
actually got from Solr with caching enabled.
On Mon, Feb 24, 2014 at 6:23 PM, Senthilnathan Vijayaraja
senthilnat...@8kmiles.com wrote:
Below is the url which will
This bug was fixed in Solr 4.6.1.
/Yago Riveiro
On Mon, Feb 24, 2014 at 11:56 AM, abhijit das abhijitdas1...@outlook.com
wrote:
We are currently using Solr Cloud Version 4.3, with the following set-up, a
core with 2 shards - Shard1 and Shard2, each shard has replication factor 1.
We have
On 24/02/14 13:04, Vineet Mishra wrote:
Can you brief as how to make a direct call to Zookeeper instead of Cloud
Collection(as currently I was querying the Cloud something like
http://192.168.2.183:8900/solr/collection1/select?q=*:* )
Yes, that issue is fixed. We are on trunk and seeing it happen again. Kill some
nodes while indexing, trigger an OOM, or reload the collection and you are in
trouble again.
-Original message-
From:Yago Riveiro yago.rive...@gmail.com
Sent: Monday 24th February 2014 14:54
To:
Probably, when it was originally started, whoever did it piped the output to
/dev/null.
You can also change this permanently by altering the logging, see:
https://wiki.apache.org/solr/SolrLogging
Best,
Erick
On Mon, Feb 24, 2014 at 12:56 AM, manju16832003 manju16832...@gmail.comwrote:
Solr already
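For reference, a minimal log4j.properties sketch along the lines of what that wiki page describes (appender names, sizes, and pattern are illustrative):

log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logs/solr.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=9
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n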
Hi,
we've built an index (Solr 4.3) which contains approximately 1 million docs;
its size is around 20 GB (optimized).
In our index we have one field which contains the tokenized words of
indexed documents and a second field with the stemmed contents
(SnowballFilter, German2).
During our tests
This is really strange. You should have _fewer_ tokens in your stemmed
field.
Plus, the up-front processing to stem the field in the query shouldn't be
noticeable.
Let's see the query and results from debug=all being added to the URL
because something is completely strange here.
Best,
Erick
On
Maybe some heap/GC issue from using more of this 20 GB index. Maybe it was
running at the edge and just one more field was too much for the heap.
The timing section of the debug query response should shed a little light.
-- Jack Krupansky
-Original Message-
From: Erick Erickson
On Mon, Feb 24, 2014 at 8:03 PM, Navaa
navnath.thomb...@xtremumsolutions.com wrote:
<copyField source="doctor_id" dest="id"/>
So you are probably supplying an id and then also merging the doctor_id
and id fields together, which gives you two field values in id. I
would have expected Solr to complain
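If the intent is for id to carry only the document's own value, a sketch of the schema without the conflicting copyField (types as in the earlier message):

<field name="doctor_id" type="int" indexed="true" stored="true"
       multiValued="false" required="false"/>
<field name="id" type="int" indexed="true" stored="true"
       multiValued="false"/>
<!-- no copyField here: supply id directly in each document instead -->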
Thanks.
We found some evidence that this could be the issue. We're monitoring closely
to confirm this.
One question though: none of our nodes shows more than 50% of physical memory
used, so there is enough memory available for memory-mapped files. Can this
kind of pause still happen?
I'll second that thank-you, this is awesome.
I asked about this issue in 2010, but when I didn't hear anything (and
disappointingly didn't find SOLR-1880), we ended up rolling our own
version of this functionality. I've been laboriously migrating it every
time we bump our Solr version ever
Hi
I have a 4 node solrcloud cluster with more than 50 collections with 4
shards each. Every time I want to make a schema change, I upload configs to
zookeeper and then restart all nodes. However the restart of every node is
very slow and takes about 20-30 minutes per node.
Is it recommended to
There is a RELOAD collection command you might try:
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api2
I think you'll find this a lot faster than restarting your whole JVM.
On 2/24/14, 4:12 PM, KNitin nitin.t...@gmail.com wrote:
Hi
I have a 4 node
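The RELOAD call itself looks like the following (host, port, and collection name are placeholders), issued once per collection after the new configs are in ZooKeeper:

http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection1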
I'm not sure how you're measuring free RAM. Maybe this will help:
http://www.linuxatemyram.com/play.html
Michael Della Bitta
Applications Developer
o: +1 646 532 3062
appinions inc.
The Science of Influence Marketing
18 East 41st Street
New York, NY 10017
t: @appinions
We fetch a large number of documents -- 1000+ -- for each search. Each
request fetches only the uniqueKey or the uniqueKey plus one secondary
integer key. Despite this, we find that we spent a sizable amount of time
in SolrIndexSearcher#doc(int docId, Set<String> fields). Time is spent
fetching the
Hi,
Slow startup: could it be that your transaction logs are being replayed? Are
they very big? Do you see lots of disk reads during those 20-30 minutes?
Shawn was referring to http://wiki.apache.org/solr/SolrPerformanceProblems
Otis
--
Performance Monitoring * Log Analytics * Search
What is your firstSearcher set to in solrconfig.xml? If you're
doing something really crazy there that might be an issue.
But I think Otis' suggestion is a lot more probable. What
are your autocommits configured to?
Best,
Erick
On Mon, Feb 24, 2014 at 7:41 PM, Shawn Heisey s...@elyograg.org
Dear all,
Could you guys please help me?
I am trying to index a document into Solr. It doesn't give me any error, but
it doesn't index the document either. It used to work, but not now. Please see:
#Solr Log
WARN - 2014-02-25 11:30:35.675; org.apache.solr.handler.loader.XMLLoader;
Unknown attribute
Well, what was that last thing you changed?
There's really not much here to go on, you
have to provide more details about what
you've tried, what evidence you have that
the doc isn't indexed, etc.
Have you looked at your Solr admin screen to
see if maxDoc has increased? Have you
committed your
Thank you Eric,
I figured out that the document is actually indexed, but it doesn't show
in my API result because it is missing some fields.
So I would like to delete this post; how can I?
Thank you very much,
Chun.
No, can't delete posts. Having them around keeps a history for
others as well, so that's an added benefit.
Glad you figured it out!
Erick
On Mon, Feb 24, 2014 at 9:20 PM, rachun rachun.c...@gmail.com wrote:
Thank you Eric,
I figured out something that actually the document is indexed but it
I have verified that the blob column is called MESSAGE.
In my data-config file, the field column named 'id' is indexed in Solr, but
the data (field column name mxMsg) is not indexed; it comes back as empty
quotes.
The same configuration works on XML data (stored as BLOB type in the DB), but
not on
Hi,
select?br=2+3&version=2&fl=id,level,name,city,amenities,$lanorm,$relscore,$bscore&q=*:*&fq={!lucene
q.op=OR df=property_type v=$ptype}&ptype=1&fq={!lucene q.op=OR df=city
v=$cit}&sort=$bscore desc,$relscore
desc&cit=Chennai&relscore=product($banorm,15)&bscore=banorm($la,amenities,10)&la=8
this is the
For more info : http://www.packtpub.com/apache-solr-beginners-guide/book
Yes, and? There are a bunch of books on Solr, including a couple for
beginners. Packt in particular has obviously gone for the volume
approach :-)
If you have a question about the book, you may want to send it to Packt,
the book's author, or another forum. Or rephrase it as a specific Solr
question
Hey,
I have a solr cloud setup with 2 shards. I am trying to use solr's field
grouping feature
My query looks like q=*:*&fq=field:value&group=true&group.field=otherValue
The *ordering* of the groups *differs based on which shard the query is
fired from* and it seems that docs located in that same