Hi Raymond,
I keep trying to encode the '&', but when I look at the Solr log it shows me
'%26'. I'm using urlencode but it didn't work. I'm using SolrPHPClient. What
should I do? Please advise.
Thank you very much,
Rachun
Facts:
OS: Windows Server 2008
4 CPUs
8 GB RAM
Tomcat service version 7.0 (64-bit)
Only running Solr
Optional JVM parameters set: Xmx = 3072m, Xms = 1024m
Solr version 4.5.0
One core instance (both for querying and indexing)
*Schema config:*
minGramSize=2, maxGramSize=20
most of the fields are
That's exactly what I would expect from url-encoding '&' ('%26' is the
encoded form of '&'). So, the thing you're doing works as it should, but
you're probably doing something that you should not do (in this case,
calling urlencode yourself).
I have not used SolrPHPClient myself, but from the example at
Followup: I *think* something like this should work:
$results = $solr->search($query, $start, $rows, array('sort' => 'price_min
asc,update_date desc', 'facet.query' => 'price_min:[* TO 1300]'));
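As an illustration of why '%26' in the log is expected, and why encoding twice is harmful, here is a minimal sketch using Python's standard urllib.parse (not SolrPHPClient itself, just the same URL-encoding rules):

```python
from urllib.parse import quote

# Encoding '&' once is correct: '%26' is what should appear on the wire
# and therefore in the Solr request log.
print(quote("&"))    # -> %26

# Encoding the already-encoded value again double-encodes it, so Solr
# receives the literal string '%26' instead of '&'.
print(quote("%26"))  # -> %2526
```

If the client library already URL-encodes parameters (as SolrPHPClient appears to, per the thread), passing it pre-encoded values produces the second case.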
On Mon, Jan 20, 2014 at 11:05 AM, Raymond Wiker rwi...@gmail.com wrote:
That's exactly what I would
Hi folks, have any of you successfully implemented LSH (MinHash) in
Solr? If so, could you share some details of how you went about it?
I know LSH is available in Mahout, but was hoping if someone has a
solr or Lucene implementation.
Thanks
The high memory consumption you're seeing may be a consequence of some heap
memory only being released after a full GC. With the VisualVM tool you can
force a full GC and see whether the memory is released.
/yago
—
/Yago Riveiro
On Mon, Jan 20, 2014 at 10:03 AM,
Another thing: Solr makes heavy use of the OS cache to cache the index and
gain performance. This can be another reason why the Solr process has a high
amount of memory allocated.
/yago
—
/Yago Riveiro
On Mon, Jan 20, 2014 at 10:03 AM, onetwothree joydivis...@telenet.be
wrote:
Facts:
OS Windows server
Hi,
I have a query on the multi-lingual analyser.
Which one of the below is the best approach?
1. To develop a translator that translates a/any language to English and
then use the standard English analyzer to analyse – using the translator
both at index time and at search time?
2.
Hi guys, following this thread I have some questions:
1) Regarding LUCENE-5350, what is the quoted context? Is the context a
filter query?
2) Regarding https://issues.apache.org/jira/browse/SOLR-5378, do we have
the final documentation available?
Cheers
2014/1/16 Hamish Campbell
Thank you very much, Mr. Raymond.
You just saved my world ;)
It worked, and the *sort by conditions* works too,
but facet.query=price_min:[* TO 1300] is not working yet; I will try to
google for the right solution.
Million thanks _/|\_
Rachun.
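On the facet.query part: one thing worth checking is how the parameter looks on the wire, since the range-query characters '[', ']' and '*' must be URL-encoded exactly once. A small sketch using Python's standard urllib.parse (a hypothetical illustration, not the PHP client) shows the correctly single-encoded form to compare against the Solr request log:

```python
from urllib.parse import urlencode

# The facet query from the thread, as a raw (unencoded) parameter value.
params = {"facet.query": "price_min:[* TO 1300]"}

# urlencode percent-encodes ':', '[', ']', '*' and spaces exactly once;
# this is the form the parameter should take in the Solr request log.
print(urlencode(params))
```

If the log shows the brackets double-encoded (e.g. %255B) or not encoded at all, the parameter is being mangled before it reaches Solr.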
Hi,
I had the same problem.
In my case the error was, I had a copy/paste typo in my solr.xml.
<str name="genericCoreNodeNames">${genericCoreNodeNames:true}</str>
!^! Ouch!
With the type 'bool' instead of 'str' it definitely works better. ;-)
Uwe
On 28.11.2013 08:53, lansing wrote:
On Mon, 2014-01-20 at 11:02 +0100, onetwothree wrote:
Optional JVM parameters set xmx = 3072, xms = 1024
directoryFactory: MMapDirectory
[...]
So it seems that filesystem buffers are consuming all the leftover memory
and don't release it, even after quite some time?
As long as
Well it is hard to get a specific anchor because there is usually more than
one. The content of the anchors field should be correct. What would you expect
if there are multiple anchors?
-Original message-
From:Teague James teag...@insystechinc.com
Sent: Friday 17th January 2014
Quoting Mikhail Khludnev mkhlud...@griddynamics.com:
On Sat, Jan 18, 2014 at 11:25 PM, d...@geschan.de wrote:
So, my question now: can I change my existing index by just adding an
is_parent and a _root_ field and saving the journal id there like I did
with j-id, or do I have to reindex all
It Depends (tm). Approach (2) will give you better, more specific
search results. (1) is simpler to implement and might be good
enough...
On Mon, Jan 20, 2014 at 5:21 AM, David Philip
davidphilipshe...@gmail.com wrote:
Hi,
I have a query on Multi-Lingual Analyser.
Which one of the
Hi Solr Users,
Drew Farris, Tom Morton and I are currently working on the 2nd Edition of
Taming Text (http://www.manning.com/ingersoll for first ed.) and are soliciting
interested parties who would be willing to contribute to a chapter on practical
use cases (i.e. you have something in
Hi!
I need a little help from you.
We have complex documents stored in a database, and we show them on the page
from the database. We index them in Solr but do not store them, so we can't
use the Solr Highlighter. But we would still like to highlight the search
words found in the document. What approach would
On Mon, Jan 20, 2014 at 6:11 PM, d...@geschan.de wrote:
Quoting Mikhail Khludnev mkhlud...@griddynamics.com:
On Sat, Jan 18, 2014 at 11:25 PM, d...@geschan.de wrote:
So, my question now: can I change my existing index in just adding a
is_parent and a _root_ field and saving the journal
On 1/20/2014 3:02 AM, onetwothree wrote:
OS Windows server 2008
4 Cpu
8 GB Ram
snip
We're using a .Net Service (based on Solr.Net) for updating and inserting
documents on a single Solr Core instance. The size of documents sent to Solr
vary from 1 Kb up to 8Mb, we're sending the documents in
Hello!
I've installed a classic two-shard Solr 4.5 topology (without SolrCloud),
balanced with an HA proxy. I've got a *copyField* like this:
<field name="tagValues" type="string" indexed="true" stored="true"
multiValued="false"/>
Copied from this one:
* field name=tags type=searchableTextTokenized
We are testing our shiny new Solr Cloud architecture but we are
experiencing some issues when doing bulk indexing.
We have 5 solr cloud machines running and 3 indexing machines (separate
from the cloud servers). The indexing machines pull off ids from a queue
then they index and ship over a
Hi Luis,
Do you have deletions? What happens when you expunge Deletes?
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22commit.22
Ahmet
On Monday, January 20, 2014 10:08 PM, Luis Cappa Banda luisca...@gmail.com
wrote:
Hello!
I've installed a classical two shards
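For reference, the wiki page Ahmet links describes expungeDeletes as an optional attribute of the commit update message; as a sketch in the standard update-XML syntax it would be:

```xml
<commit expungeDeletes="true"/>
```

This merges segments that contain deletions, so the space held by deleted documents can be reclaimed without running a full optimize.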
Questions: How often do you commit your updates? What is your
indexing rate in docs/second?
In a SolrCloud setup, you should be using a CloudSolrServer. If the
server is having trouble keeping up with updates, switching to CUSS
probably wouldn't help.
So I suspect there's something not optimal
We have a soft commit every 5 seconds and a hard commit every 30. As
far as docs/second, I would guess around 200/sec, which doesn't seem that
high.
On Mon, Jan 20, 2014 at 2:26 PM, Erick Erickson erickerick...@gmail.comwrote:
Questions: How often do you commit your updates? What is your
We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
updates get sent to one machine or something?
On Mon, Jan 20, 2014 at 2:42 PM, Software Dev static.void@gmail.comwrote:
We commit have a soft commit every 5 seconds and hard commit every 30. As
far as docs/second it
What version are you running?
- Mark
On Jan 20, 2014, at 5:43 PM, Software Dev static.void@gmail.com wrote:
We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
updates get sent to one machine or something?
On Mon, Jan 20, 2014 at 2:42 PM, Software Dev
4.6.0
On Mon, Jan 20, 2014 at 2:47 PM, Mark Miller markrmil...@gmail.com wrote:
What version are you running?
- Mark
On Jan 20, 2014, at 5:43 PM, Software Dev static.void@gmail.com
wrote:
We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
updates get sent to
MT is not nearly good enough to allow approach 1 to work.
On Mon, Jan 20, 2014 at 9:25 AM, Erick Erickson erickerick...@gmail.com wrote:
It Depends (tm). Approach (2) will give you better, more specific
search results. (1) is simpler to implement and might be good
enough...
On Mon, Jan 20,
All,
I know the index should normally be optimized on the master and then
replicated to the slaves, but we have an issue with network bandwidth.
We optimize indexes weekly (total size is around 1.5TB). We have a few
slaves set up on the local network, so replicating the whole index to them
is not a big issue.
Thanks for the reply, dropbox image added.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Memory-Usage-on-Windows-Os-while-indexing-tp4112262p4112403.html
Sent from the Solr - User mailing list archive at Nabble.com.