Erick,
bin/start pipes stdout to solr-$PORT-console.log or such. With no
rotation. So we are setting people up to fail right from the get-go.
That's what I'm hoping the attached ticket will resolve.
Upayavira
On Fri, Nov 6, 2015, at 03:52 PM, Erick Erickson wrote:
> How do you start s
and use log4j.properties to send log
events to a file that *is* rotated.
Upayavira
the update expansion before your lang detect processor,
but there is no gap between them.
From my reading of the code, you could create an AtomicUpdateProcessor
that simply expands updates, and insert that before the
LangDetectUpdateProcessor.
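As a hedged sketch, such a chain might look like this in solrconfig.xml (the processor class names and field names here are assumptions; check what your Solr version actually ships):

```xml
<!-- Hypothetical sketch: expand atomic updates into full documents
     before language detection runs on them. -->
<updateRequestProcessorChain name="langdetect">
  <processor class="solr.AtomicUpdateProcessorFactory"/>
  <processor class="solr.LangDetectLanguageIdentifierUpdateProcessorFactory">
    <str name="langid.fl">title,body</str>
    <str name="langid.langField">language_s</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```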
Upayavira
On Tue, Nov 3, 2015, at 06:38 AM, Chau
I think it was around 4.7 that the Java7 requirement was introduced. You
may find trying 4.6 will get you what you are needing. I'd expect the
artifacts in the Maven repo should be compiled with Java6 from that
point backwards.
Upayavira
On Tue, Nov 3, 2015, at 10:33 PM, Erick Erickson wrote
Actually, you are right. It would be executed on every node if you put
LangDetect after a deliberately inserted
DistributedUpdateProcessorFactory entry.
Not optimal, but would work.
Upayavira
On Tue, Nov 3, 2015, at 12:26 PM, Alexandre Rafalovitch wrote:
> I wonder what would hap
an excellent blog post on how to write query objects),
so really the important part is how to do the matching on many terms
efficiently.
Upayavira
On Mon, Nov 2, 2015, at 06:47 PM, Erick Erickson wrote:
> Or a really simple--minded approach, just use the frequency
> as a ratio of numFound to es
to make this more efficient? Does the TermsQuery
work differently from the BooleanQuery regarding large numbers of terms?
Upayavira
cloud option is starting it in SolrCloud mode, in which
case you should be using the collections API to create a collection. The
two scenarios above *aren't* the same.
Upayavira
Solr won't index arbitrary json.
Please research the format that solr expects.
Upayavira
On Tue, Oct 27, 2015, at 07:23 AM, Prathmesh Gat wrote:
> Hi,
>
> Using Solr Ver 4.10, when we try to import the attached JSON we get an
> error saying:
> { "responseHeader": {
ngs like log files, use time based
collections, then create/destroy collection aliases to point to them.
I've had a "today" alias that points to logs_20151027 and logs_20151026,
meaning all content for the last 24hrs is available via
http://localhost:8983/solr/today. I had "week" and "month" also.
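The naming scheme behind those aliases can be sketched like this (a toy illustration only; the logs_YYYYMMDD pattern comes from this thread, the helper name is made up, and the actual alias is created via the collections API's CREATEALIAS action):

```python
from datetime import date, timedelta

def alias_members(today, days):
    """Collections a rolling alias should point at: one
    'logs_YYYYMMDD' collection per day, covering `days` days."""
    return ["logs_%s" % (today - timedelta(days=n)).strftime("%Y%m%d")
            for n in range(days)]

# A "today" alias spanning the last 24h needs the current and the
# previous day's collections; "week" would span seven.
print(alias_members(date(2015, 10, 27), 2))  # ['logs_20151027', 'logs_20151026']
```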
Dunno if that works for you.
Upayavira
ion.
Hope this helps.
Upayavira
On Mon, Oct 26, 2015, at 11:21 AM, Chaushu, Shani wrote:
> Hi,
> Is there an API to copy all the documents from one collection to another
> collection in the same solr server simply?
> I'm using solr cloud
Use the analysis tab on the admin UI to see what analysis is doing to
your terms.
Then bear in mind that a query parser will split on space. So, you might
want to do clientName:"st ju me" to make the tokenisation happen within
the analysis chain rather than the query parser.
Upayavi
r when a
new document is posted. You would need, somehow, to post every document
again in order to trigger the update processor's ability to do its work.
Upayavira
requently.
Splitting such an index into shards is one approach to dealing with this
issue.
Upayavira
er? My bias is that we should not do
> that, but I do not see it as particularly harmful.
Or to have the upconfig command barf if there isn't a solrconfig.xml
file in the directory concerned. That'd give quick feedback that
something is being done wrong.
Upayavira
In the meantime, a
switch to disable the solrconfig check sounds reasonable.
Wanna create the ticket?
Upayavira
be an easier way - index a field that is the
same for all documents, and facet on it. Instead of counting the number
of documents, calculate the sum() of your word count field.
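As a sketch of the sum() idea using the JSON Facet API (available from Solr 5.1; the field name word_count_i is made up for illustration):

```json
{
  "query": "*:*",
  "facet": {
    "total_words": "sum(word_count_i)"
  }
}
```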
I *think* that should work.
Upayavira
On Sat, Oct 24, 2015, at 04:24 PM, Aki Balogh wrote:
> Hi Jack,
>
> I'm just u
it to one segment, and thus might
ever so slightly speed the aggregation of term frequencies, but I doubt
it'd make enough difference to make it worth doing.
Upayavira
On Sat, Oct 24, 2015, at 03:37 PM, Aki Balogh wrote:
> Thanks, Jack. I did some more research and found similar resu
Can you explain more what you are using TF for? Because it sounds rather
like scoring. You could disable field norms and IDF and scoring would be
mostly TF, no?
Upayavira
On Sat, Oct 24, 2015, at 07:28 PM, Aki Balogh wrote:
> Thanks, let me think about that.
>
> We're using termfr
ng if
> there's a more direct way.
> On Oct 24, 2015 4:00 PM, "Upayavira" <u...@odoko.co.uk> wrote:
>
> > Can you explain more what you are using TF for? Because it sounds rather
> > like scoring. You could disable field norms and IDF and scoring would be
> >
add debugQuery=true to your query, and look at the parsed query - it'll
show you a lot about what's going on.
Also, try the phrase "good building constructor" in the admin UI
analysis tab, for your full_text field. It'll help you understand what's
happening in terms of tokenisation.
latest-and-greatest
release of anything).
Upayavira
On Fri, Oct 23, 2015, at 07:50 PM, Alexandre Rafalovitch wrote:
> Definitely 5.x. Lots of new goodies. It is true that some of the
> startup scripts are different and the example schemas could be
> slightly confusing if following a bo
are coded on disk, and see if you can make changes that
would benefit all users across the board.
Upayavira
On Wed, Oct 21, 2015, at 08:52 AM, Robert Krüger wrote:
> Thanks everyone, for your answers. I will probably make a simple
> parametric
> test pumping a solr index full of those
No, you cannot tell Solr to handle wildcards differently. However, you
can use regular expressions for searching:
title:/magnet.?/ should do it.
Upayavira
On Wed, Oct 21, 2015, at 11:35 AM, Bruno Mannina wrote:
> Dear Solr-user,
>
> I'm surprise to see in my SOLR 5.0 that the
a regexp (and a wildcard for that matter) is:
* search through the list of terms in your field for terms that match
your regexp (uses an FST for speed)
* search for documents that contain those resulting terms
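That two-phase execution can be shown with a toy inverted index (pure sketch: Lucene walks the term dictionary via an FST rather than a Python dict, but the shape is the same):

```python
import re

def regex_search(index, pattern):
    """Toy two-phase regexp query: (1) scan the term dictionary for
    terms matching the pattern, then (2) union the posting lists of
    those terms to get matching documents."""
    rx = re.compile(pattern)
    terms = [t for t in index if rx.fullmatch(t)]              # phase 1
    docs = set().union(*(index[t] for t in terms)) if terms else set()
    return sorted(docs)                                         # phase 2

# index: term -> doc ids
index = {"magnet": {1, 3}, "magnets": {2}, "magneto": {4}, "magma": {5}}
print(regex_search(index, r"magnet.?"))  # [1, 2, 3, 4]
```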
Upayavira
On Wed, Oct 21, 2015, at 12:08 PM, Bruno Mannina wrote:
> title:/mag
. Is it reasonable and does
it work?
Upayavira
What is this limit limiting? Is this effectively a stored field, and the
bigger it gets, the more issues we'll have with segment merges/etc?
Upayavira
On Tue, Oct 20, 2015, at 09:25 AM, Shalin Shekhar Mangar wrote:
> Yes, sorry I checked as well and the limit is 5MB. And it is
> config
Okay, thx. I heard it mentioned at Lucene Revolution as a location for
storing machine learning models. Do people really have models coming in
at under 2Mb?
It'd be good to get this limitation into the BlobStore docs.
Upayavira
On Tue, Oct 20, 2015, at 07:19 AM, Shalin Shekhar Mangar wrote
en, I regularly run svn update which keeps this checkout up-to-date,
and confirm it hasn't broken things.
If you wanted to run against a specific version in Solr, you could force
SVN to a specific revision (e.g. of the 5x branch) - the one that was
released, and git merge your patches into it, etc, etc, etc.
Upayavira
The fuzzy query does not need mentioning in schema.xml. a search for
Steve~ or Steve~0.5 will trigger a fuzzy query.
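What the ~ actually does is match terms within a bounded edit distance of the query term. A toy sketch (plain Levenshtein here; Lucene's fuzzy query uses Damerau-Levenshtein and caps the distance at 2 edits from 4.x onwards):

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions to turn `a` into `b` (classic DP over two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # delete from a
                           cur[j - 1] + 1,         # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

# Steve~ (default max edits 2) would match the first, not the second:
print(levenshtein("steve", "stve"))     # 1 edit: within range
print(levenshtein("steve", "stephen"))  # 3 edits: too far
```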
Upayavira
On Sat, Oct 10, 2015, at 08:27 PM, vit wrote:
> I am using Solr 4.2
> For some reason I cannot find an example of FuzzyQuery
> filter in schema.xml.
>
Do you use it? If so, how?
Upayavira
On Mon, Oct 12, 2015, at 02:05 AM, Bill Au wrote:
> admin-extra allows one to include additional links and/or information in
> the Solr admin main page:
>
> https://cwiki.apache.org/confluence/display/solr/Core-Specific+Tools
>
> Bill
I think Walter suggested the simplest: make two requests. When you've
got both results back, you can stick them together to make results.
At present, there is no method to do multiple actions within a single
request.
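The client-side merge is straightforward. A minimal sketch, assuming the 3-paid-of-12 layout discussed in this thread (field names and the rule itself are assumptions, not a Solr API):

```python
def merge_results(paid, organic, max_paid=3, page_size=12):
    """Stitch two separate Solr responses into one page: at most
    `max_paid` paid docs on top, organic results filling the rest."""
    head = paid[:max_paid]
    return (head + organic)[:page_size]

paid = [{"id": "p1"}, {"id": "p2"}, {"id": "p3"}, {"id": "p4"}]
organic = [{"id": "o%d" % n} for n in range(1, 15)]
page = merge_results(paid, organic)
print(len(page), page[0]["id"], page[3]["id"])  # 12 p1 o1
```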
Upayavira
On Sun, Oct 11, 2015, at 01:38 PM, liviuchrist...@yahoo.com.INVALID
a little.
Upayavira
On Sat, Oct 10, 2015, at 03:13 PM, liviuchrist...@yahoo.com.INVALID
wrote:
> Hi Upayavira & Walter & everyone else
>
> About the requirements:
> 1. I need to return no more than 3 paid results on a page of 12 results.
> 2. Paid results should be sorted like this
In which case you'd be happy to wait for 30s for it to complete, in
which case the func or frange function query should be fine.
Upayavira
On Fri, Oct 9, 2015, at 05:55 PM, Aman Tandon wrote:
> Thanks Mikhail the suggestion. I will try that on monday will let you
> know.
>
&g
ff already in
your index. And the reason you are likely getting all that stuff is
because you have a copyField that copies it over into the 'text' field.
If you'll never want to search on some fields, switch them to
indexed="false", make sure you aren't doing a copyField on them, and then
reindex.
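A hedged schema sketch of that change (field names are made up for illustration):

```xml
<!-- Fields you never search can skip the index entirely: -->
<field name="raw_payload" type="string" indexed="false" stored="true"/>
<!-- ...and make sure no rule like this funnels them into 'text':
<copyField source="raw_payload" dest="text"/>
-->
```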
Upayavira
, but might
catch you out later.
Upayavira
On Fri, Oct 9, 2015, at 10:59 AM, Aman Tandon wrote:
> Hi,
>
> I tried to use the same as mentioned in the url
> <http://stackoverflow.com/questions/16258605/query-for-document-that-two-fields-are-equal>
> .
>
> And I used the
son=true, with a
> > Content-Type: text/xml header. I still noted the consistent loss of
> > another document with the update above.
> >
> > John
> >
> >
> > On 08/10/15 00:38, Upayavira wrote:
> >> What ID are you using? Are you possibly using
Yay!
On Thu, Oct 8, 2015, at 08:38 AM, John Smith wrote:
> Yes indeed, the update chain had been activated... I commented it out
> again and the problem vanished.
>
> Good job, thanks Erick and Upayavira!
> John
>
>
> On 08/10/15 08:58, Upayavira wrote:
> > Loo
Are all instances of Solr the same version? Mixing versions could cause
what Erick describes.
On Thu, Oct 8, 2015, at 03:19 AM, Erick Erickson wrote:
> Sounds like you're somehow mixing old and new versions of the ZK state
> when you restart. I have no idea how that would be happening, but...
>
You can either specify the update chain via an update.chain request
parameter, or you can configure a new request handler with its own URL
and separate update.chain value.
I have no idea how you would then reference that in the DIH - I've never
really used it.
Upayavira
On Thu, Oct 8, 2015
field.
Upayavira
On Thu, Oct 8, 2015, at 01:34 PM, NutchDev wrote:
> Hi Christian,
>
> You can take a look at Solr's QueryElevationComponent
> <https://wiki.apache.org/solr/QueryElevationComponent> .
>
> It will allow you to configure the top results for a
take on the requirements,
e.g. how should paid results be sorted, how many paid results do you
show, etc, etc. Without these details we're all guessing.
Upayavira
On Thu, Oct 8, 2015, at 04:45 PM, Walter Underwood wrote:
> Sorting all paid above all unpaid will give bad results w
in
another way.
Upayavira
solr 5.1.0
How are you starting zookeeper? Embedded within Solr? Stand-alone?
If you are starting it embedded, check the solr/zoo_data directory -
that's where Zookeeper is writing its info. If that is getting lost
somehow, you could lose your collections/etc.
Upayavira
What ID are you using? Are you possibly using the same ID field for
both, so the second document you visit causes the first to be
overwritten?
Upayavira
On Wed, Oct 7, 2015, at 06:38 PM, Erick Erickson wrote:
> This certainly should not be happening. I'd
> take a careful look at wh
Do you use admin-extra within the admin UI?
If so, please go to [1] and document your use case. The feature
currently isn't implemented in the new admin UI, and without use-cases,
it likely won't be - so if you want it in there, please help us
understand how you use it!
Thanks!
Upayavira
[1
.
Upayavira
On Wed, Oct 7, 2015, at 11:49 AM, Adrian Liew wrote:
> Hi Edwin,
>
> You may want to try explore some of the configuration properties to
> configure in zookeeper.
>
> http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#sc_zkMulitServerSetup
>
> My recommend
the authentication framework be able to prevent
access to the HTML/CSS/JS, as this is what users expect of a UI. Hiding
the API is needed for security, hiding the UI is valuable in terms of
user experience - e.g. what does a user see if the API is blocked?
Probably a heap of nasty exceptions.
Upayavira
On Mon, Oct
The new collections UI isn't yet committed. It is close, and I would
like to have it in 5.4.
Upayavira
On Fri, Oct 2, 2015, at 09:35 PM, Ravi Solr wrote:
> Thank you very much Erick and Uchida. I will take a look at the URL u
> gave
> Erick.
>
> Thanks
>
> Ravi Kiran Bha
could give us
clues.
Upayavira
On Fri, Oct 2, 2015, at 06:58 AM, Shawn Heisey wrote:
> On 10/1/2015 1:26 PM, Rallavagu wrote:
> > Solr 4.6.1 single shard with 4 nodes. Zookeeper 3.4.5 ensemble of 3.
> >
> > See following errors in ZK and Solr and they are connecte
be done.
It would then be great to put your thoughts and ideas into a JIRA
ticket.
Upayavira
On Thu, Oct 1, 2015, at 11:31 PM, Teague James wrote:
> Hi everyone!
>
> Pardon if it's not proper etiquette to chime in, but that feature would
> solve some issues I have with my app for the sam
What are you trying to achieve by using virtualisation?
If it is just code separation, consider using containers and Docker
rather than fully fledged VMs.
CPU is shared, but each container sees its own view of its file system.
Upayavira
On Thu, Oct 1, 2015, at 07:47 AM, Bernd Fehling wrote
Why don't you create DNS names, or such, so that you can replace a
zookeeper instance at the same hostname:port rather than having to edit
solr.xml across your whole Solr farm?
The idea is that your list of zookeeper hostnames is a virtual one, not
a real one.
Upayavira
On Wed, Sep 30, 2015
those in a separate core. Then you can
request those immediately after your search query. Or reindex your
content with that data stored alongside.
Upayavira
On Wed, Sep 30, 2015, at 09:16 AM, Alessandro Benedetti wrote:
> I am still missing why you quote the number of the documents...
> If yo
an burying ourselves neck-deep in MLT
problems.
Upayavira
[1]
http://mylazycoding.blogspot.co.uk/2012/03/cluster-apache-solr-data-using-apache_13.html
[2] https://cwiki.apache.org/confluence/display/solr/Result+Clustering
On Tue, Sep 29, 2015, at 12:42 PM, Szűcs Roland wrote:
> Hello Upayavira
e much faster.
Thus, on the content field of your documents, add termVectors="true" in
your schema, and re-index. Then you could well find MLT becoming a lot
more efficient.
Upayavira
On Tue, Sep 29, 2015, at 10:39 AM, Szűcs Roland wrote:
> Hi Alessandro,
>
> My origi
You can change the strings that are inserted into the text, and could
place markers that you use to identify the start/end of highlighting
elements. Does that work?
Upayavira
On Mon, Sep 28, 2015, at 09:55 PM, Mark Fenbers wrote:
> Greetings!
>
> I have highlighting turned on i
anything you can do here (without a substantial
programming effort) other than add a layer in front of Solr that adds
x+X, y+Y and z+Z.
As such, Solr doesn't have an enumeration data type - you'd have to just
use a string field and enforce it outside of Solr.
Upayavira
You could use the MLT query parser, and combine that with other queries,
whether as filters or boosts.
You can't yet use stream.body yet, so would need to use the handler if
you need that.
Upayavira
On Mon, Sep 28, 2015, at 09:53 AM, Alessandro Benedetti wrote:
> Hi Upaya,
>
I would expect this to be negligible.
Upayavira
On Mon, Sep 28, 2015, at 01:30 PM, Oliver Schrenk wrote:
> Hi,
>
> I want to register multiple but identical search handler to have multiple
> buckets to measure performance for our different apis and consumers (and
> to find out
- hence try
facet.limit = 100 or such
Upayavira
On Mon, Sep 28, 2015, at 11:47 AM, Moen Endre wrote:
> How does facet_count work with a facet field that is defined as solr.
> PathHierarchyTokenizerFactory?
>
> I have multiple records that contains field Parameter whic
. Look at
the maxDocs vs numDocs (visible via the admin UI for your
core/collection). If maxDocs>numDocs, it means that some docs have been
overwritten - i.e. the ID field that Nutch is using is not unique.
Upayavira
On Mon, Sep 28, 2015, at 10:19 AM, Daniel Holmes wrote:
> Hi,
> I am usi
Once you understand what MLT is doing, you will probably not find it so
hard to create your own version which is better suited to your own
use-case.
Of course, this would probably be better constructed as a QueryParser
rather than a request handler, but that's a detail.
Upayavira
On Fri, Sep 25, 2015
and data
dir aren't needed.
Upayavira
On Wed, Sep 23, 2015, at 10:46 PM, Erick Erickson wrote:
> OK, this is bizarre. You'd have had to set up SolrCloud by specifying the
> -zkRun command when you start Solr or the -zkHost; highly unlikely. On
> the
> admin page there would be a
, but it has
a precision of 0, meaning it is only indexed once.
Upayavira
On Thu, Sep 24, 2015, at 03:00 AM, Ravi Solr wrote:
> Recently I installed 5.3.0 and started seeing weird exception which
> baffled
> me. Has anybody encountered such an issue ? The indexing was done via
> DIH,
ith Docker?
Hi Aurelien,
I'm wondering if there's anything specific that is needed to run Solr
inside Docker? Is there something you have in mind?
Upayavira
ut generally, we can expect that
there will be occasions when something that seems obviously spam gets
through our systems.
Upayavira
You cannot do multi valued fields with LatLongType fields. Therefore, if
that is a need, you will have to investigate RPT fields.
I'm not sure how you do distance boosting there, so I'd suggest you ask
that as a separate question with a new title.
Upayavira
On Mon, Sep 21, 2015, at 01:27 PM
can index
shapes into it as well as locations.
I'd suggest you read this page, and pay particular attention to mentions
of RPT:
https://cwiki.apache.org/confluence/display/solr/Spatial+Search
Upayavira
On Mon, Sep 21, 2015, at 10:36 AM, Aman Tandon wrote:
> Upayavira, please help
>
>
As it says below, -c enables a Zookeeper node within the same JVM as
Solr. You don't want that, as you already have an ensemble up and
running.
Upayavira
On Mon, Sep 21, 2015, at 09:35 PM, Ravi Solr wrote:
> Can somebody kindly help me understand the difference between the
> following
>
Can you show the error you are getting, and how you know it is because
of stored="true"?
Upayavira
On Mon, Sep 21, 2015, at 09:30 AM, Aman Tandon wrote:
> Hi Erick,
>
> I am getting the same error because my dynamic field *_coordinate is
> stored="true".
&
in relation to
nested structures) as that isn't what it was designed for. Really,
you're gonna want to identify what you want OUT of your data, and then
identify a data structure that will allow you to achieve it. You cannot
assume that there is a standard way of doing it that will support every
use-case.
Upayavira
() {
return SolrRequestInfo.getRequestInfo().getNOW();
}
};
}
}
Effectively, all it does is return the value of NOW according to the
request, as the default value.
You could construct that on a per invocation basis, using
System.getMillis() or whatever.
Upayavira
On Mon, Sep 21
It is worth noting that the ref guide page on configsets refers to
non-cloud mode (a useful new feature) whereas people may confuse this
with configsets in cloud mode, which use Zookeeper.
Upayavira
On Sun, Sep 20, 2015, at 04:59 AM, Ravi Solr wrote:
> Cant thank you enough for clarify
set wrong. Watch for more than
one wt=, I bet Solr is always honouring the first.
Upayavira
On Fri, Sep 18, 2015, at 06:39 PM, Mark Fenbers wrote:
> Greetings!
>
> I cannot seem to configure the spell-checker to return results in XML
> instead of JSON. I tried prog
and replicationFactor is the number of copies of your data, not the
number of servers marked 'replica'. So as has been said, if you have one
leader, and three replicas, your replicationFactor will be 4.
Upayavira
On Thu, Sep 17, 2015, at 03:29 AM, Erick Erickson wrote:
> Ravi:
>
&g
How many CPUs on that machine? How many other requests using the server?
On Thu, Sep 17, 2015, at 09:58 AM, Zheng Lin Edwin Yeo wrote:
> Thanks for the information.
>
> I was trying with 2 shards and 4 shards but all on the same machine, and
> they have the same performance (no improvement in
other". This would have been impossible (or very hard indeed) to
implement using a pure scoring algorithm.
Upayavira
On Wed, Sep 16, 2015, at 12:21 PM, Parvesh Garg wrote:
> Hi All,
>
> I wanted to understand the difference between CustomScoreQuery and
> RankQuery. From the outside, it
the config (in solrconfig.xml) for your spellchecker?
Also, a simple way to get spell checking started is to look at the
/browse example that comes with the techproducts sample configs. It has
spellchecking already working, so starting there can be a way to get
something going easily.
Upayavira
That is, use a TextField plus a KeywordTokenizerFactory, rather than a
StringField
On Wed, Sep 16, 2015, at 09:03 PM, Upayavira wrote:
> If you want to analyse a string field, use the KeywordTokenizer - it
> just passes the whole field through as a single token.
>
> Does
ex based
> fuzzy
> matching from multi-valued field. However, the solr string field does not
> allow to customise the default analyser. Is there any other way to
> circumvent the problem?
>
> thanks,
> Jerry
>
>
>
> On 16 September 2015 at 19:55, Upayavira <
can configure scheme to
> do phonetic matching/query?
Phonetic matching is done at index time - that is - you use a
PhoneticFilterFactory in your analysis chain, such that you are doing
exact match lookups on the phonetic terms.
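A sketch of such an analysis chain (the encoder choice and field type name are assumptions; both index and query sides must run the same phonetic filter so lookups become exact matches on the encoded terms):

```xml
<fieldType name="text_phonetic" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PhoneticFilterFactory"
            encoder="DoubleMetaphone" inject="false"/>
  </analyzer>
</fieldType>
```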
Make sense?
Upayavira
ether or not you will recommend Solr 5.3.0?
Upayavira
I bet the terms component does not analyse the terms, so you will need
to hand in already analysed phonetic terms. You could use the
http://localhost:8983/solr/YOUR-CORE/analysis/field URL to have Solr
analyse the field for you before passing it back to the term component.
Upayavira
On Wed, Sep
I bet you have the admin UI open on your second slave. The _=144... is
the give-away. Those requests are the admin UI asking the replication
handler for the status of replication.
Upayavira
On Wed, Sep 9, 2015, at 06:32 AM, Kamal Kishore Aggarwal wrote:
> Hi Team,
>
> I am currentl
Your "correct" doc isn't valid json. Try tag:["tag1", "tag2"] which
would be valid.
Upayavira
On Sat, Sep 12, 2015, at 08:49 AM, sara hajili wrote:
> hi
> in my schema i have a tag field.
> this field set multiValued="true".
> now my quest
Are you getting out of order scores? Or does the score change between
requests? Can you show us some results that you are getting so we might
see what's going on?
Upayavira
On Fri, Sep 11, 2015, at 05:07 AM, Modassar Ather wrote:
> Thanks Erick and Upayavira for the responses. One thing whic
rms
from your 100k that are included in that particular document.
Does that get it?
Upayavira
On Fri, Sep 11, 2015, at 03:21 AM, Francisco Andrés Fernández wrote:
> Yes.
> I have many drug products leaflets, each corresponding to 1 product. In
> the
> other hand we have a medical dictio
good documents to study. It would be very helpful if you could shed
> some lights into this matter.
How are you going to do this with machine learning? What corpus are you
going to use to learn from? Do you have some documents that have been
manually stemmed for which you also have the originals?
Upayavira
That's curious. Have a look at both the parsed query, and the explains
output for a very simple (even *:*) query. You should see the boost
present there and be able to see whether it is applied once or twice.
Upayavira
On Thu, Sep 10, 2015, at 06:16 AM, Aman Tandon wrote:
> Hi,
>
&g
Add fl=id,score,[shard] to your query, and show us the results of two
differing executions.
Perhaps we will be able to see the cause of the difference.
Upayavira
On Thu, Sep 10, 2015, at 05:35 AM, Modassar Ather wrote:
> Thanks Erick. There are no replicas on my cluster and the indexing is
Aman,
If you are using edismax then what you have written is just fine.
For Lucene query parser queries, wrap them with the boost query parser:
q={!boost b=product_guideline_score v=$qq}&qq=jute
Note in your example you don't need product(), just do
boost=product_guideline_score
Upayavira
, you could map:
run,running,runs,ran,runner=>run
walk,walked,walking,walker=>walk
Then all you need to do is generate a synonym file and use the
SynonymFilterFactory with it, in place of a stemmer.
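A sketch of wiring that in (file name is made up; the synonym file would hold generated lines like the run/walk mappings above):

```xml
<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <!-- stands in for a stemmer: maps each inflected form to its base -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms-stems.txt"
          ignoreCase="true" expand="false"/>
</analyzer>
```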
Would that work?
Upayavira
On Thu, Sep 10, 2015, at 09:59 AM, Imtiaz Shakil Siddique
What scores are you getting? If two documents come back from different
shards with the same score, the order would not be predictable -
probably down to which shard responds first.
Fix it with something like sort=score desc,timestamp desc or some other time
related field.
Upayavira
On Thu, Sep 10, 2015
That would be fantastic, Erik. I've got a somewhat complex setup where I
rsync between folders. Being able to serve directly from the SVN
location would be very handy.
Upayavira
On Thu, Sep 10, 2015, at 04:58 PM, Erik Hatcher wrote:
> With the exploded structure, maybe we can move the web
r-webapp/webapp, and refresh my
browser when I edit them. I've never had any issue with that, doing most
of my development in Chrome, because I find its dev tools to be better.
I then rsync those files into webapp/web in order to commit them.
Upayavira
ve an edismax query in the main query
and a join in a filter. You could combine multiple queries to have an
edismax clause and a join clause.
Depends on what you're trying to do but as far as you have phrased the
question, I don't see any issues.
Upayavira
On Thu, Sep 10, 2015, at 10:52 PM, Erik Hatcher wrote:
> Upayavira, could you give this a try and see if this works (patch is for
> trunk): https://issues.apache.org/jira/browse/SOLR-8035
> <https://issues.apache.org/jira/browse/SOLR-8035>
Will look :-)
> And when do we m
stopwords. However:
q=jack and jill
will score docs that have "jack" or "jill" or preferably both way above
docs that just have "and".
If I needed stopwords, I'd do something like you suggested, then show
the results to a native speaker and see what they think.
Upayavir
join core, you will be doing a
100k term search, which will invariably be painful, because the more
terms you include in the search, the slower it will be.
How many matching docs do you have on the other side of your query?
Upayavira
On Tue, Sep 8, 2015, at 02:09 PM, Russell Taylor wrote:
>