Hi,
I have been doing some research on highlighting partial matches. There is
some information on Google, but it is far from complete and I just can't get
it to work.
I have highlighting working, but it highlights complete words. Example:
http://localhost:8983/solr/pcsearch/select?indent=on&q=co
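One common approach (a sketch, not the only way) is to index edge n-grams so that a prefix query like `co` matches part of a word. The field type name below is hypothetical; the tokenizer and filter classes are standard Solr:

```xml
<!-- Hypothetical field type for prefix matching. Index-time edge
     n-grams let "co" match "computer"; the query-time analyzer
     deliberately omits the n-gram filter. -->
<fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Whether the highlighter then marks only the matched prefix or the whole stored token depends on which highlighter you use and on term vector/offset settings, so test against your own data.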
> For cleaner version control on my system, I create $SOLR_HOME/lib as a
> symlink to another directory that I create myself --
> $INSTALL_DIR/solrlib ... that way, all binary stuff like jars is in the
> *program* version control location, not the *data* location.
That’s right. It is possible to c
If Solr has GC pauses greater than 15 seconds, ZooKeeper is going to assume
the node is down, and hence will send it into recovery when the node comes
out of a GC pause and reconnects to ZooKeeper.
You should look into keeping GC pauses as short as possible.
Using G1GC with ParallelRefProcEnabled has helped
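For reference, in Solr 5.x and later those flags go into bin/solr.in.sh via the GC_TUNE variable. The pause target below is an illustrative starting point, not a recommendation:

```sh
# bin/solr.in.sh -- illustrative starting point only; tune heap size
# and pause target for your own workload before trusting it.
GC_TUNE="-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:MaxGCPauseMillis=250"
```

It can also be worth raising zkClientTimeout (in solr.xml) so that brief pauses don't expire the ZooKeeper session in the first place.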
Hi Dominique,
Unfortunately Solr doesn't support the metrics you are interested in. You
can, however, have another process that makes JMX queries against the Solr
process, does the required transformation, and stores the data in some kind
of data store.
Just make sure you are not DDoSing your Solr instances :-)
On Oct
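As a sketch of that "separate polling process" idea: instead of raw JMX you can also poll Solr's `/admin/mbeans?stats=true&wt=json` endpoint and transform the JSON yourself. The payload shape below matches what Solr 5.x/6.x returns for query handlers, but the sample data and handler name are fabricated for illustration:

```python
# Sketch, not an official client. In practice you would fetch
#   http://localhost:8983/solr/<core>/admin/mbeans?stats=true&wt=json
# on a schedule and ship the extracted numbers to your data store.

def extract_avg_time(mbeans_json, handler="/select"):
    """Return avgTimePerRequest (ms) for one request handler, or None."""
    # "solr-mbeans" is a flat list alternating category name / bean map.
    cats = mbeans_json.get("solr-mbeans", [])
    for category, beans in zip(cats[::2], cats[1::2]):
        if category == "QUERYHANDLER":
            return beans.get(handler, {}).get("stats", {}).get("avgTimePerRequest")
    return None

# Fabricated sample payload:
sample = {
    "solr-mbeans": [
        "QUERYHANDLER",
        {"/select": {"stats": {"requests": 120, "avgTimePerRequest": 4.2}}},
    ]
}
print(extract_avg_time(sample))  # 4.2
```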
Thanks John... yes, that was the first idea that came to our mind, but it
would require doubling our servers (in the replica data centers as well,
etc.); we definitely can't afford the cost.
We have thought of first establishing a small pool of 'hot' servers and using
them to take incoming new index data using u
If you will have numerous documents, splitting documents into shards is a
strategy. This split is independent of the language of the documents.
For documents in different languages, it's necessary to use language-specific
analyzers to obtain good search results. For example, assume you
have english language d
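A hypothetical schema fragment for that idea, using per-language field types like the `text_en`/`text_fr`/`text_de` types that ship with Solr's example configs (the field names here are made up):

```xml
<!-- One field per language, each with its own analyzer chain, rather
     than a single generic field shared by all languages. -->
<field name="body_en" type="text_en" indexed="true" stored="true"/>
<field name="body_fr" type="text_fr" indexed="true" stored="true"/>
<field name="body_de" type="text_de" indexed="true" stored="true"/>
```

At query time you then search the field that matches the user's (or the document's detected) language.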
Renee - you have probably already thought of this, but just in case it
helps... (It helped me a lot several years ago and I hadn't thought of it
at the time...)
If you end up needing to do a big re-index, Production doesn't have to be
affected (assuming you have the hardware/cloud resources). You
Thanks for the replies. I made the changes so that the external file field
is loaded per:
Shawn and Ari,
the 3rd party jars are exactly just one of the concerns I have.
We have more than just a multi-lingual integration; we have to integrate with
many other 3rd-party tools. We basically deploy all those jars into an
'external' lib extension path in production, then for each 3rd party too
On 10/10/2016 9:58 AM, Dominique De Vito wrote:
> It looks like the Solr metric "avgTimePerRequest" is computed with
> requests from t0 (startup time).
The percentile metrics (available in 4.1 and later if memory serves) are
generally far more useful than the average time.
> If so, is there a wa
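A toy illustration (plain Python, nothing Solr-specific) of why a since-startup average hides a recent surge while a windowed percentile exposes it:

```python
# Toy numbers: 10,000 fast requests followed by 100 slow ones.
samples = [10.0] * 10000 + [500.0] * 100

cumulative_avg = sum(samples) / len(samples)      # average since t0
recent = sorted(samples[-100:])                   # last-window samples
p95_recent = recent[int(0.95 * len(recent)) - 1]  # crude p95 of window

print(round(cumulative_avg, 1))  # 14.9  -- looks healthy
print(p95_recent)                # 500.0 -- the surge is visible
```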
Hi,
It looks like the Solr metric "avgTimePerRequest" is computed with requests
from t0 (startup time).
If so, it's quite useless, for example, for detecting a surge in latency
within the last 10 minutes.
Is my understanding correct?
If so, is there a way
(1) to configure Solr to comput
Hello,
Solr setup: Solr 5.4.1, ZooKeeper 3.4.6 (5-node ZooKeeper ensemble)
We have one collection which has multiple shards (two shards for each week).
Each shard has a leader and a replica. We only write to the latest week's two
shards (four cores), which we refer to as 'hot cores'. The rest,
Hi,
I've started working on a project which will likely have lots of
documents in every single language, and because of that I'm a bit worried
about storing everything in one single shard. What would be the best way
to store the data, and is there any advice on how I should split it? I was
thinking about go
For what it's worth / in case it's helpful...
I haven't dealt with JDBC in this context so I can't offer anything useful
there...
You can reference the data in Zookeeper when creating a new collection - so
you don't need to put the configs anywhere on the Solr boxes themselves.
It's not automati
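For context, the Solr 5.x workflow looks roughly like this (all names and paths below are placeholders; the script location may differ in your install):

```sh
# Upload a config set to ZooKeeper:
server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
  -cmd upconfig -confdir /path/to/conf -confname myconf

# Create a collection that references it by name -- no config files
# need to live on the Solr nodes' local disks:
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&collection.configName=myconf"
```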
>
> Just a sanity check. That directory mentioned, what kind of file system is
> that on? NFS, NAS, RAID?
I'm using Ext4 with options "noatime,nodiratime,barrier=0" on a hardware RAID10
with 4 SSD disks
>
> What I have been hoping to see is the exact text of an OutOfMemoryError
> in solr.log so I can tell whether it's happening because of heap space
> or some other problem, like stack space. The stacktrace on such an
> error might be helpful too.
>
Hi,
I did understand what you need; I'm a newbie
On 10/10/2016 6:59 AM, Aristedes Maniatis wrote:
> We also use com.vividsolutions.jts.geom so we need to add this jar
> somewhere. The only places that seem to work are inside the Jetty
> installation because that's where the classloader will find it.
> ${solr.installation.dir}/server/lib or into
>
Hi
I could not find "Could not download file" in the logs. Should I
increase the log level somewhere? Just let me know... so I can provide
you more detailed logs...
Thx!
Arkadi
On 02-09-16 11:21, Arkadi Colson wrote:
Hi
I cannot find a string in the logs matching "Could not download file".
On 10/10/2016 2:49 AM, Alexandre Rafalovitch wrote:
> Just a sanity check. That directory mentioned, what kind of file
> system is that on? NFS, NAS, RAID?
The original post says it's hardware RAID10 with locally installed SSD
disks. It doesn't mention what filesystem is on it. If I were buildi
Hi Shalin
Thanks for your reply.
On 10/10/16 3:28pm, Shalin Shekhar Mangar wrote:
> As far as configuration is concerned -- everything Solr specific goes to
> zookeeper. The solr.xml can also be put into zookeeper. JNDI is not
> supported -- can you explain why you need it?
Where do I put the JD
> Really, there is nothing in solr. log. I did not change any option
> related
> to this in config. Solr died again some hours ago and the last entry
> is: 2016-10-09 22:02:31.051 WARN (qtp225493257-1097) [ ]
> o.a.s.h.a.LukeRequestHandler Error getting file length for
> [segments_9102]
That's a
Hey,
is there a way to rename shards, or ideally give them names when creating a
collection? For example, instead of names like Shard1, Shard2 on the admin
page I would like to see countries, for example GB, US, AU, etc.
Is there any way to do that?
Thanks
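One approach that may fit (assuming SolrCloud, and that you are willing to route documents yourself): create the collection with the implicit router and name the shards explicitly. The collection and config names below are placeholders:

```sh
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=countries&router.name=implicit&shards=GB,US,AU&collection.configName=myconf&maxShardsPerNode=3"
```

Note the trade-off: with `router.name=implicit` Solr no longer hashes documents to shards, so each document must carry a `_route_` value (or you set `router.field`) to say which named shard it belongs to.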
Just a sanity check. That directory mentioned, what kind of file system is
that on? NFS, NAS, RAID?
Regards,
Alex
On 10 Oct 2016 1:09 AM, "Reinhard Budenstecher" wrote:
>
> That's considerably larger than you initially indicated. In just one
> index, you've got almost 300 million docs taki
I use XMLStarlet. We will also discuss any useful tools at the SolrERG
(original subject of the thread)
Regards,
Alex
On 10 Oct 2016 4:15 AM, wrote:
> Rick Leir said:
> > I commonly do a diff of the xmls to see what has changed in a new
> > release, or what differs in an example. The in
Will this be on the flax blog or somewhere else? Interested to read what you're
doing with streaming.
On Friday, October 7, 2016 5:24 PM, Charlie Hull wrote:
Yes I'll blog about it and we'll try and get as much as possible captured
in the Github folder. If you've got ideas for Tuesday
Rick Leir said:
> I commonly do a diff of the xmls to see what has changed in a new
> release, or what differs in an example. The indentation is often
> 'tidied up' in different ways, making the diff almost useless. Perhaps I
> need to run an xml formatter before doing any diffs on xmls. Als
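One way to sidestep the indentation problem (a sketch; `xmllint --format` or XMLStarlet's `xml fo` do the same job from a shell): re-serialize both files with the same pretty-printer before diffing.

```python
import xml.dom.minidom

def _strip_ws(node):
    """Drop whitespace-only text nodes so old indentation is discarded."""
    for child in list(node.childNodes):
        if child.nodeType == child.TEXT_NODE and not child.data.strip():
            node.removeChild(child)
        else:
            _strip_ws(child)

def normalize(xml_text):
    """Parse and re-serialize, so only real content differences remain."""
    dom = xml.dom.minidom.parseString(xml_text)
    _strip_ws(dom)
    return dom.toprettyxml(indent="  ")

# Same content, different indentation:
a = "<config><str name='a'>1</str></config>"
b = "<config>\n    <str name='a'>1</str>\n</config>"
print(normalize(a) == normalize(b))  # True
```

Run both files through `normalize()` (or the equivalent shell formatter) and then diff the normalized output.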