Yes, that's right, there is no "best" setup at all, only one that
gives the most advantage for your requirements.
And any setup has some disadvantages.
Currently I'm short on time and have to bring our Cloud to production,
but a write-up is in the queue, as already done with other developments.

On Tue, 2018-08-28 at 09:37 +0200, Bernd Fehling wrote:
> Yes, I tested many cases.

Erick is absolutely right about the challenge of finding "best" setups.
What we can do is gather observations, as you have done, and hope that
people with similar use cases find them. With that in mind, have you
>> ... are not seen with a multi instance setup.
>>
>> Tested about 2 months ago with SolrCloud 6.4.2.
>>
>> Regards,
>> Bernd
>>
>> On 26.08.2018 at 08:00, Wei wrote:
>>> Hi,
I have a question about the deployment configuration in Solr Cloud. When
we need to increase the number of shards in Solr Cloud, there are two
options:

1. Run multiple Solr instances per host, each with a different port and
hosting a single core for one shard.

2. Run one Solr instance per host, and have multiple cores (shards) in the
same Solr instance.

Which would be better performance-wise? For the first option I think the JVM
heap size for each Solr instance can be smaller, but deployment is more
complicated? Are there any differences in CPU utilization?

Thanks,
Wei
To: solr-user@lucene.apache.org
Subject: Re: Multiple cores versus a "source" field.

One more opinion on source field vs separate collections for multiple corpora.
Index statistics don't really settle down until at least 100k documents. Below
that, idf is pretty noisy. With Ultraseek, we
I'll have a play with that now.

-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tuesday, 5 December 2017 4:11 p.m.
To: solr-user <solr-user@lucene.apache.org>
Subject: Re: Multiple cores versus a "source" field.
That's the unpleasant part of semi-structured documents (PDF, Word,
whatever). You never know the relationship between raw size and
indexable text.

Basically anything that you don't care to contribute to _scoring_ is
often better in an fq clause. You can also use {!cache=false} to
bypass caching the filter entirely.
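As an illustration of that advice, a filter that should not hit or pollute the filterCache might look like this (the field name `source` and its value are invented for the example):

```
q=title:report&fq={!cache=false}source:corpusA
```

The source selection stays out of the main query, so it cannot affect scoring, and {!cache=false} keeps a rarely-reused filter from evicting useful cache entries.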
> At that scale, whatever you find administratively most convenient.
> You'll have a few economies of scale I think with a single core, but frankly I
> don't know if they'd be enough to measure. You say the docs are "quite large"
> though, are you talking books? Magazine articles? Is 20K large or are they 20M?

Technical reports. Sometimes up to 200MB pdfs, but that
I have two different document stores that I want to index. Both are quite small
(<50,000 documents, though documents can be quite large). They are quite capable
of using the same schema, but you would not want to search both simultaneously.
I can see two approaches to handling this case.
1/ Create
This question has been asked before. I found a few postings to Solr user and a
couple on Google-in-the-large.
But I am still not sure which is best.
My project currently has two distinct datasets (documents) with no shared
fields.
But at times, we need to query across both of them.
So we
... and "operate" on "generic" documents (faceting, etc.) regardless of the source.

From the management (e.g. import) and search relevance (e.g. analysis,
relevance, etc.) point of view, what is considered "best practice":

one core for all sources and import through different entities
one core per source and search across multiple cores
something else?
It would be great if you can share your experience or point me to some articles.
Thank you in advance!
> Can I write a query such as the following in Solr?
>
> SELECT child.*, parent.*
> FROM child
> JOIN parent ON child.parent_id = parent.id
> WHERE parent.tag = 'hoge'
Let's back up a bit and ask what your primary goal is. Just indexing a
bunch of stuff as fast as possible? By and large, I'd index to a
single core with multiple threads rather than the approach you're
taking (I'm assuming that there's a MERGEINDEXES somewhere in this
process). You should be able
Hi,
I wanted to check if the following would work:
1. Spawn n threads
2. Create n-cores
3. Index n records simultaneously in n-cores
4. Merge all core indexes into a single master core
I have been able to successfully do this for 5 threads (5 cores) with 1000
documents each. However, I wanted
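Erick's pattern above - many threads feeding one core instead of one core per thread - can be sketched generically. The `FakeCore` below is a stand-in for a real Solr client (it only appends to a list), so this illustrates the concurrency shape, not the Solr API:

```python
import threading
from queue import Queue, Empty

# Stand-in for a Solr client: a real version would POST each batch to a
# single core's /update handler. Only the concurrency shape matters here.
class FakeCore:
    def __init__(self):
        self._lock = threading.Lock()
        self.docs = []

    def add(self, batch):
        with self._lock:
            self.docs.extend(batch)

def index_with_threads(core, records, n_threads=5, batch_size=100):
    q = Queue()
    for i in range(0, len(records), batch_size):
        q.put(records[i:i + batch_size])

    def worker():
        while True:
            try:
                batch = q.get_nowait()
            except Empty:
                return
            core.add(batch)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

core = FakeCore()
index_with_threads(core, [{"id": i} for i in range(1000)])
print(len(core.docs))  # 1000 - every document lands in the one core
```

With a real client you would batch documents per update request; the point is that all threads target one core, so no merge step is needed afterwards.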
> WHERE child.parent_id = parent.id AND parent.tag = 'hoge'
>
> child and parent are in a many-to-one relationship (many child documents
> per parent). I tried this but it does not work:
>
> /select/?q={!join from=parent_id to=id fromIndex=parent}id:1+tag:hoge
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/How-to-get-the-join-data-by-multiple-cores-tp4235799.html
> Sent from the Solr - User mailing list archive at Nabble.com.
--
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics
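For what it's worth, my understanding of the {!join} parser is that the query after the local params runs against fromIndex, so from should name the parent core's key field and to the child's reference field. A query for the children of parents tagged hoge would then look more like this (core and field names taken from the thread):

```
/select?q={!join from=id to=parent_id fromIndex=parent}tag:hoge
```

Unlike the SQL version quoted above, a cross-core join only returns documents from the core being queried; fields from the parent core are not included in the results.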
I'm wondering what different folks do out there for a health monitor for Solr.
I'm running Solr 5.2.1, so far without Solr Cloud, and I anticipate having
multiple cores.
For now, I can make use solr/corename/admin/ping, but how can I have Solr ping
all cores?
Dan Davis, Systems/Applications Architect (Contractor),
Office of Computer and Communications Systems,
National Library of Medicine, NIH
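One way to avoid hard-coding core names into the monitor is the CoreAdmin STATUS action, which lists every core on the node; the monitor can then ping each one by name (host and port here are placeholders):

```
http://localhost:8983/solr/admin/cores?action=STATUS&wt=json
http://localhost:8983/solr/<corename>/admin/ping
```

The STATUS response includes one entry per core, so new cores are picked up without reconfiguring the health check.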
For backup purposes to an offsite data center, I need to make sure that each
core's configuration has replication to a consistently defined backup directory
on a Netapp filer. The Netapp filer's snapshot can be invoked manually, and
its SnapMirror will copy the data to the offsite data center.
https://issues.apache.org/jira/browse/SOLR-6234
{!scorejoin}, which is a Solr QParser, brings Lucene JoinUtil, for sure.
Replying to the appropriate list.
On Wed, Dec 10, 2014 at 10:14 PM, Parnit Pooni parni...@gmail.com wrote:
Hi,
I'm running into an issue attempting to sort, here is the
As mentioned in another post we (already) have a (Lucene-based) generic
indexing framework which allows any source/entity to provide
indexable/searchable data.
Sources may be:
pages
events
products
customers
...
As their names imply they have nothing in common ;) Nevertheless we'd like to
Depending on the size, I'd go for (a). IOW, I wouldn't change the
sharding to use (a), but if you have the same shard setup in that
case, it's easier.
You'd index a type field with each doc indicating the source of your
document. Then use the grouping feature to return the top N from each
of the sources.
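A sketch of that query shape (the field name `doc_type` is invented for illustration):

```
q=foo&group=true&group.field=doc_type&group.limit=10
```

group.limit controls the N documents returned per source.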
You really can't tell until you prototype and measure. Here's a long
blog on why what you're asking, although a reasonable request,
is just about impossible to answer without prototyping and measuring.
I need to store in SOLR all data of my clients' mailing activity.
The data contains metadata like From, To, Date, Time, Subject, etc.
I would easily have 1000 million records every 2 months.
What I am currently doing is creating cores per client. So I have 400 cores
already.
Is this a good idea to
Hi Ramprasad,
You can certainly have a system with hundreds of cores. I know of more than
a few people who have done that successfully in their setups.
At the same time, I'd also recommend to you to have a look at SolrCloud.
SolrCloud takes away the operational pains like replication/recovery
On Tue, 2014-08-12 at 08:40 +0200, Ramprasad Padmanabhan wrote:
> I need to store in SOLR all data of my clients' mailing activity.

If standard searches are always inside a
I think this question is more aimed at design and performance of large
number of cores.
Also solr is designed to handle multiple cores effectively; however, it
would be interesting to know if you have observed any performance problems
with growing number of cores, with number of nodes and solr
Obviously I can always add more nodes to solr, but I need to justify how
much I need.
On 12 August 2014 12:48, Harshvardhan Ojha ojha.harshvard...@gmail.com
wrote:
On Tue, 2014-08-12 at 11:50 +0200, Ramprasad Padmanabhan wrote:
Are there documented benchmarks with number of cores
As of now I just have a test bed.
We have 150 million records ( will go up to 1000 M ) , distributed in 400
cores.
A single machine 16GB RAM + 16 cores search is working
Sorry for missing information. My solr-cores take less than 200MB of disk
What I am worried about is If I run too many cores from a single solr
machine there will be a limit to the number of concurrent searches it can
support. I am still benchmarking for this.
Also another major bottleneck I
On Tue, 2014-08-12 at 14:14 +0200, Ramprasad Padmanabhan wrote:
> Sorry for missing information. My solr-cores take less than 200MB of disk.
So ~3GB/server. If you do not have special heavy queries, high query
rate or heavy requirements for index availability, that really sounds
like you could
Hi Ramprasad,
I have used it in a cluster with millions of users (1 user per core) in
legacy cloud mode. We used the on-demand core loading feature where each
Solr had 30,000 cores and at a time only 2000 cores were in memory. You are
just hitting 400 and I don't see much of a problem. What is
Hi Paul and Ramprasad,
I follow your discussion with interest as I will have more or less the
same requirement.
When you say that you use on-demand core loading, are you talking about
the LotsOfCores stuff?
Erick told me that it does not work very well in a distributed
environment.
How do you
On 12 August 2014 18:18, Noble Paul noble.p...@gmail.com wrote:
Ramprasad Padmanabhan [ramprasad...@gmail.com] wrote:
I have a single machine 16GB Ram with 16 cpu cores
Ah! I thought you had more machines, each with 16 Solr cores.
This changes a lot. 400 Solr cores of ~200MB ~= 80GB of data. You're aiming for
7 times that, so about 500GB of data. Running
The machines were 32GB RAM boxes. You must do the RAM requirement
calculation for your indexes. Just the number of indexes alone won't be
enough to arrive at the RAM requirement.
On Tue, Aug 12, 2014 at 6:59 PM, Ramprasad Padmanabhan
ramprasad...@gmail.com wrote:
And how many machines are running the SOLR?
On 12 August 2014 22:12, Noble Paul noble.p...@gmail.com wrote:
I expect that I will have to add more servers. What I am looking for is how
much I need.
Any inputs will be of great help.
Thanks
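Making the back-of-the-envelope numbers from this thread explicit (400 cores at ~200MB each today, with data growing from 150M to 1000M records):

```python
# Numbers from the thread: 400 cores of ~200MB each at 150M records,
# growing to 1000M records.
cores = 400
mb_per_core = 200
current_records = 150_000_000
target_records = 1_000_000_000

current_gb = cores * mb_per_core / 1024    # disk used today (~78 GB)
growth = target_records / current_records  # expected growth factor (~6.7x)
target_gb = current_gb * growth            # projected size (~520 GB)

print(round(current_gb), round(growth, 1), round(target_gb))
```

This matches the "~80GB now, about 500GB at full size" estimate given earlier in the thread; RAM needs then depend on how much of that must be cached for the query load.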
--
View this message in context:
http://lucene.472066.n3.nabble.com/search-multiple-cores-tp4136059p4139063.html
Sent from the Solr - User mailing list archive at Nabble.com.
It seems as if the location of the suggester dictionary directory is not
core-specific, so when the suggester is defined for multiple cores, they
collide: you get exceptions attempting to obtain the lock, and the
suggestions bleed from one core to the other. There is an (undocumented)
indexPath parameter that can be used to control this, so I think I can
work around it.
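If the indexPath parameter behaves as described, giving each core's suggester its own directory would look something like this in solrconfig.xml (an untested sketch; the suggester name, field, and lookup implementation are assumptions):

```
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="field">title</str>
    <str name="indexPath">suggester_index_core1</str>
  </lst>
</searchComponent>
```

A distinct indexPath per core keeps each suggester's on-disk index (and its lock) separate.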
Hi,
I am trying to join across multiple cores using query time join. Following
is my setup
3 cores - Solr 4.7
core1: 0.5 million documents
core2: 4 million documents and growing. This contains the child documents
for documents in core1.
core3: 2 million documents and growing. Contains records
I want consolidation of fields from multiple cores and there are two fields
in common across all cores.
I have data stored in normalized form across 3 cores on same JVM. Want to
merge and select multiple fields depending on WHERE clause/common fields in
each core.
Any help would be appreciated
1. Are the cores join-able?
2. Could you give me an example of how to write a multiple core join?
3. Can we do the equivalent of a SQL JOIN in SOLR across multiple cores?
Select T1.*, T2.*
FROM Table1 T1, Table2 T2
WHERE T1.id = T2.id
--
View this message in context:
http://lucene.472066.n3.nabble.com/Equivalent-of-SQL-JOIN-in-SOLR-across-multiple-cores-tp4106152.html
Sent from the Solr - User mailing list archive at Nabble.com.
Any good/recent documentation that I can reference on setting up multiple cores
in Solr 4.5.0?
Thanks all,
Mark
IMPORTANT NOTICE: This e-mail message is intended to be received only by
persons entitled to receive the confidential information it may contain. E-mail
messages sent from
Hi,
I'm using solr 4.3 and I have data in multiple cores which are different in
structure like (Core1 - col1 col2) (Core2 - col3 col4).
Now I would like to run a search query on both of the cores and in the end
get a single result set from the 2 cores combined.
Please help me out.
Hello,
I still have this issue using Solr 4.4, removing firstSearcher queries did
make the problem go away.
Note that I'm using Tomcat 7 and that if I'm using my own Java application
launching an Embedded Solr Server pointing to the same Solr configuration
the server fully starts with no hang.
Hi
I want to display results as one dataset through Solr using multicore. One
core contains EnglishCollectionData and another contains HindiCollectionData.
When I join the two cores, the result is displayed when I give an English
parameter, but it does not work for a Hindi parameter. Could you give me the
Did you try latest solr? There was a library loading bug with multiple
cores. Not a perfect match to your description but close enough.
Regards,
Alex
On 21 Sep 2013 02:28, Hayden Muhl haydenm...@gmail.com wrote:
I have two cores favorite and user running in the same Tomcat instance.
In each of these cores I have identical field types text_en, text_de,
text_fr, and text_ja. These fields use some custom token filters I've
written. Everything was going smoothly when I only had the favorite core.
When I added
in the name of them.

-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, September 06, 2013 9:18 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 4.3 Startup with Multiple Cores Hangs on Registering Core

bq: I'm actually not using the transaction log (or the
NRTCachingDirectoryFactory); it's currently set up to use the
MMapDirectoryFactory.
This isn't relevant to whether you're using the update log or not, this is
just how the index is handled. Look for something in your solrconfig.xml
like:
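From memory of the Solr 4.x example solrconfig.xml, the entry in question looks roughly like this (a sketch; the exact default class name may differ by version):

```
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>
```

This setting only controls how Lucene reads and writes index files; it is independent of whether the update log is enabled.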
: Sorry for the multi-post, seems like the .tdump files didn't get
: attached. I've tried attaching them as .txt files this time.
Interesting ... it looks like 2 of your cores are blocked in loading while
waiting for the searchers to open ... not clear if it's a deadlock or why
though - in
: Do all of your cores have newSearcher event listners configured or just
: 2 (i'm trying to figure out if it's a timing fluke that these two are
stalled, or if it's something special about the configs)
All of my cores have both the newSearcher and firstSearcher event listeners
configured. (The
Hello,
I currently have Solr 4.3 set up with about 400 cores set to load upon start
up. When starting Solr with an empty index for each core, Solr is able to load
all of the cores and start up normally as expected. However, after running a
dataimport on all cores and restarting Solr, it
Hi,
At lucene level we have MultiSearcher to search a few cores at the same time
with same query,
at solr level can we perform such a search (if using the same config/schema)?
Here I do not mean to search across shards of the same collection, but
independent collections.
Thanks very much for any help.
To: solr-user@lucene.apache.org
Subject: Re: How to share config files in SolrCloud between multiple
cores (collections)

To share configs in SolrCloud you just upload a single config set and then
link it to multiple collections. You don't actually use solr.xml to do it.

- Mark

On Mar 19, 2013, at 10:43 AM, Li, Qiang qiang...@msci.com wrote:
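With the zkcli script shipped in Solr's cloud-scripts, Mark's upload-then-link advice is roughly (ZooKeeper address, paths, and names are placeholders):

```
# upload one config set to ZooKeeper
./zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir ./conf -confname shared

# link it to each collection that should use it
./zkcli.sh -zkhost localhost:2181 -cmd linkconfig -collection collection1 -confname shared
./zkcli.sh -zkhost localhost:2181 -cmd linkconfig -collection collection2 -confname shared
```

Both collections then read the same solrconfig.xml and schema.xml from ZooKeeper; collection-specific files would need separate config sets.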
On 3/20/2013 1:28 PM, Li, Qiang wrote:
I just want to share the solrconfig.xml and schema.xml. As there should be
differences between collections for other files, such as the DIH's
configurations.
I believe that SolrCloud treats each config set as a completely separate
entity, with no
We have multiple cores with the same configurations. Before using SolrCloud, we
could use a relative path in solr.xml. But with Solr4, it seems relative paths
for the schema and config are not allowed in solr.xml.
Regards,
Ivan
This email message and any attachments are for the sole use
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a Game
On Wed, Feb 6, 2013 at 6:09 AM, Marcos Mendez mar...@jitisoft.com wrote:
Hi,
I'm deploying the SOLR war in Geronimo, with multiple cores. I'm seeing the
following issue and it eats up a lot of memory when shutting down. Has
anyone seen this and have an idea how to solve it?
Exception in thread DefaultThreadPool 196 java.lang.OutOfMemoryError:
PermGen space
2013-02-05
Hi,
I need to build a UI that can access multiple cores. And combine them all on
an Everything tab.
The solrajax example only has 1 core.
How do I set up multicore with solrajax?
Do I set up 1 manager per core? How much of a performance hit will I take
with multiple managers running?
Would http://wiki.apache.org/solr/Solrj#EmbeddedSolrServer save you some
work?
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
On Mon, Nov 26, 2012 at 7:18 PM, Nicholas Ding nicholas...@gmail.com wrote:
You can simplify your code by searching across cores in the SearchComponent:
1) public class YourComponent implements SolrCoreAware
-- Grab instance of CoreContainer and store (mCoreContainer =
core.getCoreDescriptor().getCoreContainer();)
2) In the process method:
* grab the core requested
Hi Otis,
Thank you so much, that's exactly what I need!
Thanks
Nicholas
On Mon, Nov 26, 2012 at 10:28 PM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
On 11/14/2012 10:19 AM, Carlos Alexandro Becker wrote:
What's the best way to search in multiple cores and merge the results using
solrj?
Your best bet really is to have Solr do this for you with distributed
search. You can add the shards parameter to your queries easily with
SolrJ, or you
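The shards parameter version of this, with invented host and core names (the cores must have compatible schemas for the merged results to make sense):

```
/solr/core1/select?q=foo&shards=host1:8983/solr/core1,host2:8983/solr/core2
```

The core receiving the request fans the query out to every listed core (which can all live on one host) and merges the results, so SolrJ only needs to set the extra parameter.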
Hm, and in the case where my cores have different schemas?
Thanks in advance.
On Wed, Nov 14, 2012 at 3:35 PM, Shawn Heisey s...@elyograg.org wrote:
On 11/14/2012 10:48 AM, Carlos Alexandro Becker wrote:
Hm, and in the case where my cores have different schemas?
You might have to do all the heavy lifting yourself, after using SolrJ
to retrieve the results. I will say that I have no idea -- there may be
ways you can avoid doing that. I
hmm... the less-horrible way I could think of (if solr doesn't support it by
default) is to create another core that mixes the information from the other
cores, and then search in it.
But, well, it would be ugly.
On Wed, Nov 14, 2012 at 5:14 PM, Shawn Heisey s...@elyograg.org wrote:
thanks anyway, Shawn.
On Wed, Nov 14, 2012 at 5:24 PM, Carlos Alexandro Becker caarl...@gmail.com
wrote:
is hard.
quote: http://it.wikipedia.org/wiki/Bruno_Munari
--
View this message in context:
http://lucene.472066.n3.nabble.com/Searching-in-multiple-cores-via-SolrJ-tp4020320p4020359.html
Sent from the Solr - User mailing list archive at Nabble.com.