I've added the includeSpanScore parameter back in for 5.4, so you might do
better to wait until that's released. Otherwise, the code looks correct to me.
Alan Woodward
www.flax.co.uk
On 19 Oct 2015, at 21:57, William Bell wrote:
> Alan,
>
> Does this code look equivalent? And how do I
Jeff,
so far the test routine looks reasonable, but since we compute a facet, we expect
that filtering by one of those values is used in the following requests. I
suppose the next request with fq=popularity:1 or so might show reuse of that
cached filter, but that's just my speculation.
On Tue, Oct 6, 2015 at
Hello Hangu,
OPTION1. You can write complex/nested join queries with DIH and have
functions written in JavaScript for transformations in data-config, if that
meets your domain requirements.
OPTION2. Use a Java program with SolrJ, read the data using JDBC, apply
domain-specific rules, and create Solr
Hi - we have some code inside a unit test, extending
AbstractFullDistribZkTestBase. I am indexing thousands of documents as part of
the test via getCommonCloudSolrClient(). Somewhere down the line it trips over a
document. I've debugged and inspected the bad document but cannot find anything
wrong.
Is there a maximum size to objects in the blob store? How are objects
stored? As a stored field?
I've got some machine learning models that are 2-4GB in size, and whilst
machine learning models are one of the intended uses of the blob store,
putting GBs of data in it scares me a little. Is it
did you try something like
$> zkcli.sh -zkhost localhost:2181 -cmd putfile /solr.xml /path/to/solr.xml
?
On Mon, Oct 19, 2015 at 11:15 PM, hangu choi wrote:
> Hi,
>
> I am trying to start SolrCloud with embedded ZooKeeper.
>
> I know how to config solrconfig.xml and
No, the maximum size is limited to 2MB for now. The use-case behind
the blob store is to store small jars (custom plugins) and stopwords,
synonyms etc (even though those aren't usable right now) so maybe we
can relax the limits a little bit. However, it is definitely not meant
for GBs of data.
On
It's unfortunate that the Blob Store API wasn't named "Small File Store
API" to convey to users its intended purpose.
That said, maybe you could use the same technique as is recommended for
large synonym files: Break them into a sequence of smaller files and then
take advantage of the fact that
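That chunking idea can be sketched with plain coreutils; the file names and the 4MB piece size here are made up for illustration (pick any size safely under the store's limit):

```shell
# Sketch: split a large model file into pieces small enough for the blob store,
# then verify the pieces reassemble byte-for-byte. Names and sizes are hypothetical.
dd if=/dev/zero of=/tmp/model.bin bs=1M count=10 2>/dev/null
split -b 4m /tmp/model.bin /tmp/model.part.     # produces .aa, .ab, .ac suffixes
cat /tmp/model.part.* > /tmp/model.rejoined     # shell glob order restores the original order
cmp /tmp/model.bin /tmp/model.rejoined && echo "pieces reassemble cleanly"
```

Each piece then fits under the limit and can be uploaded as a separate blob, with the client concatenating them again at load time.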
Hi,
I am a newbie to Solr, and I'd like to check whether my idea is good or
terrible, if someone can help.
# background
* I have mysql as my primary data storage.
* I want to import data from mysql to solr (solrcloud).
* I have domain logics to make solr document - (means I can't make solr
Hi,
I am trying to start SolrCloud with embedded ZooKeeper.
I know how to configure solrconfig.xml and schema.xml, and other things for
the data import handler,
but when I try to configure it with SolrCloud, I don't know where to start.
I know there is no conf directory in SolrCloud because conf
Hi,
I hit the following exception when I try to build the suggester:
suggest?suggest.build=true
I am using Solr 5.3.0, OpenJDK 1.7.
any ideas?
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">autocomplete</str>
    <str name="suggestAnalyzerFieldType">text_lws</str>
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.count">10</str>
    <str name="suggest.dictionary">mySuggester</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
Hi,
on Solr 4.7 I've run into a strange issue. Whilst setting up a field I've
noticed in the analysis form that when I use a char filter factory (for example
HTMLStripCharFilterFactory) with a tokeniser (StandardTokenizer), the analysis
chain grinds to a halt. The char filter does not seem to pass anything into the
tokeniser.
Field
Instead of using SolrQuery to make a query request, you can use the
UpdateRequest class. Something like the following should do the same
as your intended request:
CloudSolrClient solr = new CloudSolrClient("127.0.0.1:2181,127.0.0.2:2181,127.0.0.3:2181/solr1");
solr.setIdField("cust");
UpdateRequest req = new UpdateRequest();
req.deleteByQuery("cust:b");
req.setParam("_route_", "b!");
req.process(solr, "asset");
On 20 October 2015 at 10:26, Lee Carroll wrote:
> B*ll*cks, before posting I spent an hour searching for issues, honest.
> Soon as I post within seconds I find
>
> https://issues.apache.org/jira/browse/SOLR-5800
We are always glad to be of help. Including by
I checked the code and the limit is actually 5MB and configurable via
the blob.max.size.mb config property. I posted a comment on the Solr doc
for this.
In any case, thanks for sharing info that you gleaned from the conference,
for all of us who couldn't make it.
-- Jack Krupansky
On Tue, Oct
Yes, sorry I checked as well and the limit is 5MB. And it is
configurable using the property mentioned by Jack. Thanks for
correcting me.
On Tue, Oct 20, 2015 at 7:48 PM, Jack Krupansky
wrote:
> I checked the code and the limit is actually 5MB and configurable via
> the
B*ll*cks, before posting I spent an hour searching for issues, honest.
Soon as I post within seconds I find
https://issues.apache.org/jira/browse/SOLR-5800
On 20 October 2015 at 15:21, Lee Carroll
wrote:
> Hi,
>
> on Solr 4.7 I've run into a strange issue.
Hey guys!
I had a 52GB solr-8983-console.log on my Solr 5.2.1 Amazon Linux
64-bit box and decided to `cat /dev/null > solr-8983-console.log` to
free space.
The weird thing is that when I checked Sematext I noticed the OS had
freed a lot of memory at the same exact instant I did that.
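For what it's worth, `cat /dev/null > file` truncates the file in place rather than replacing it: the inode is unchanged, so the running process's open file handle stays valid and the disk space is released immediately. A quick local sketch (file name made up):

```shell
# Show that redirecting /dev/null into a file truncates it in place
# (same inode before and after), rather than replacing the file.
dd if=/dev/zero of=/tmp/console.log bs=1024 count=64 2>/dev/null
inode_before=$(stat -c %i /tmp/console.log)
cat /dev/null > /tmp/console.log
inode_after=$(stat -c %i /tmp/console.log)
echo "size now: $(stat -c %s /tmp/console.log) bytes"
```

The memory Sematext reported as freed was most likely OS page cache that had been holding the 52GB file's pages; truncating the file lets the kernel drop them at that instant.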
Hi again,
After searching with grep, I figured out that bin/solr has Xss too.
Finder didn't turn it up the first time.
Sorry for the noise.
Thanks,
Ahmet
On Tuesday, October 20, 2015 7:12 PM, Ahmet Arslan
wrote:
Hi,
After getting stack overflow error from
What is this limit limiting? Is this effectively a stored field, and the
bigger it gets, the more issues we'll have with segment merges/etc?
Upayavira
On Tue, Oct 20, 2015, at 09:25 AM, Shalin Shekhar Mangar wrote:
> Yes, sorry I checked as well and the limit is 5MB. And it is
> configurable
You should fix your log4j.properties file to not log to the console ...
it's there for the initial getting-started experience, but you don't
need to send log messages to two places.
On Tue, Oct 20, 2015 at 10:42 AM, Shawn Heisey wrote:
> On 10/20/2015 9:19 AM, Eric Torti wrote:
>>
Hi,
we are trying to efficiently get the number of documents for given time slots
in the index.
For this, we query the Solr index like this:
On 10/20/2015 9:19 AM, Eric Torti wrote:
> I had a 52GB solr-8983-console.log on my Solr 5.2.1 Amazon Linux
> 64-bit box and decided to `cat /dev/null > solr-8983-console.log` to
> free space.
>
> The weird thing is that when I checked Sematext I noticed the OS had
> freed a lot of memory at the
Solr 4.6.1, 4 node cloud with 3 zk
I see the following thread as blocked. Could somebody please help me
understand what is going on here and how it will impact SolrCloud? All
four of these threads are blocked. Thanks.
"coreZkRegister-1-thread-1" id=74 idx=0x108 tid=32162 prio=5 alive,
parked,
Hi Hangu,
Scaling shouldn't be a problem if you follow the proposed approach, but if
the updates are frequent then there can be issues with high latency.
I am also following the same approach, where the data is first
updated/written to MySQL and then to MongoDB. I believe using this we can
take
Hi,
After getting stack overflow error from suggester component, I tried to
increase Xss.
I searched for 'Xss' in bin folder, only solr.cmd has -Xss256k.
I inserted Xss in solr.in.sh; however, the admin screen displays two entries
in the JVM args:
-Xss1024k -Xss256k
I am confused: where does Solr grab
Yep, I misunderstood the problem.
The multiple tokens at the same offset might be messing things up. One
thing you can do is copyField to a field that doesn't have n-grams and do
something like f.textng.hl.alternateField= in your solrconfig. That'll use
the other field during highlighting. Yeah,
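In config terms, that suggestion would look something like the following; the field names and type here (text_plain, text_general) are made up for illustration, not taken from the original mail:

```xml
<!-- schema.xml: copy the n-grammed field into a plain (non-n-gram) field.
     Field names are hypothetical; the fallback field must be stored. -->
<field name="text_plain" type="text_general" indexed="true" stored="true"/>
<copyField source="textng" dest="text_plain"/>

<!-- solrconfig.xml, in the request handler's defaults: when no snippet is
     produced for textng, highlight/display the plain field instead -->
<str name="f.textng.hl.alternateField">text_plain</str>
```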
No Alexandre its just Sod's law (http://www.thefreedictionary.com/Sod's+Law)
:-)
Lee C
On 20 October 2015 at 15:38, Alexandre Rafalovitch
wrote:
> On 20 October 2015 at 10:26, Lee Carroll
> wrote:
> > B*ll*cks, before posting I spent an hour
It appears that DIH entity caching (e.g. SortedMapBackedCache) does not work
with deltas... is this simply a bug with the DIH cache support or somehow by
design?
Any ideas on a workaround for this? Ideally, I could just omit the
"cacheImpl" attribute but that leaves the query (using the default
Thanks, Davis, Jeff.
We are not using AWS. Are there any scripts/frameworks already developed
using Puppet available?
On Tue, Oct 20, 2015 at 7:59 PM, Jeff Wartes wrote:
>
> If you’re using AWS, there’s this:
> https://github.com/LucidWorks/solr-scale-tk
> If you’re
If you’re using AWS, there’s this:
https://github.com/LucidWorks/solr-scale-tk
If you’re using chef, there’s this:
https://github.com/vkhatri/chef-solrcloud
(There are several other chef cookbooks for Solr out there, but this is
the only one I’m aware of that supports Solr 5.3.)
For ZK, I’m
Thank you Susheel, Chandan.
Susheel,
With Option 1, I have to manage my domain logic in two places: one in Java,
one in SQL.
And my domain logic is not stable and may change frequently... I think
changing a complex SQL query whenever my domain logic changes will be
painful...
And yes, I've thought
Is it possible to execute the following via CloudSolrClient? It works via
curl.
curl 'http://localhost:8983/solr/asset/update/json?_route_=b!' \
  -H 'Content-type:application/json' \
  -d '{"delete":{"query":"cust:b"},"commit":{},"optimize":{}}'
I've tried the following, but
Okay, thx. I heard it mentioned at Lucene Revolution as a location for
storing machine learning models. Do people really have models coming in
at under 2MB?
It'd be good to get this limitation into the BlobStore docs.
Upayavira
On Tue, Oct 20, 2015, at 07:19 AM, Shalin Shekhar Mangar wrote:
>
Hello,
Resending to get opinions from a DevOps perspective on the tools for
installing/deploying Solr & ZK on a large number of machines and maintaining
them. I have heard of BladeLogic or HP OO (commercial tools) etc. being used.
Please share your experience or the pros/cons of such tools.
Thanks,
Waste of money in my opinion. I would point you towards other tools: bash
scripts and free configuration managers such as Puppet, Chef, Salt, or Ansible.
Depending on what development you are doing, you may want a continuous
integration environment. For a small company starting out,