--- On Wed, 10/27/10, mike anderson saidthero...@gmail.com wrote:
From: mike anderson saidthero...@gmail.com
Subject: Re: how well does multicore scale?
To: solr
Creating a unique id for a schema is one of those design tasks:
http://wiki.apache.org/solr/UniqueKey
A marvelously lucid and well-written page, if I do say so. And I do.
On Tue, Oct 26, 2010 at 10:16 PM, Tharindu Mathew mcclou...@gmail.com wrote:
Really great to know you were able to fire up
Tagging every document with a few hundred thousand 6 character user-ids
would increase the document size by two orders of magnitude. I can't
imagine why this wouldn't mean the index would increase by just as much
(though I really don't know much about that file structure). By my simple
math, this
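As a rough sketch of the back-of-the-envelope math mike gestures at here (the 2 KB article size and the 200,000-id count are assumed for illustration, not numbers from the thread):

```java
// Rough sketch: how much tagging every document with a few hundred
// thousand 6-character user-ids inflates the stored document size.
// All numbers below are assumptions for illustration.
public class TagOverhead {
    public static void main(String[] args) {
        long baseDocBytes = 2_000;            // assumed average article size
        long userIds = 200_000;               // "a few hundred thousand"
        long bytesPerId = 7;                  // 6 chars plus a separator
        long tagBytes = userIds * bytesPerId; // 1,400,000 bytes of tags
        double factor = (double) (baseDocBytes + tagBytes) / baseDocBytes;
        System.out.println("growth factor ~ " + Math.round(factor) + "x");
    }
}
```

With these assumed numbers the document grows by roughly 700x, i.e. between two and three orders of magnitude, which matches the concern raised above.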
Hi mike,
I think I wasn't clear,
Each document will only be tagged with one user_id, or to be specific
one tenant_id. Users of the same tenant can't upload the same document
to the same path.
So I use this to make the key unique for each tenant. So I can index,
delete without a problem.
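A minimal sketch of the composite-key idea described here: the uniqueKey value is built from the tenant id plus the upload path, so the same path uploaded under two different tenants produces two distinct documents. The separator, field values, and method name are illustrative, not from the thread.

```java
// Sketch: composing a per-tenant unique key for the Solr uniqueKey field.
// Names and the ":" separator are illustrative assumptions.
public class TenantKey {
    static String uniqueKey(String tenantId, String path) {
        // Same path under different tenants -> different keys.
        return tenantId + ":" + path;
    }

    public static void main(String[] args) {
        System.out.println(uniqueKey("tenant42", "/docs/report.pdf"));
        System.out.println(uniqueKey("tenant43", "/docs/report.pdf"));
    }
}
```

Indexing and deleting by this key then works per tenant without collisions, as described above.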
On Wed, 2010-10-27 at 14:20 +0200, mike anderson wrote:
[...] By my simple math, this would mean that if we want each shard's
index to be able to fit in memory, [...]
Might I ask why you're planning on using memory-based sharding? The
performance gap between memory and SSDs is not very big so
That's a great point. If SSDs are sufficient, then what does the Index size
vs Response time curve look like? Since that would dictate the number of
machines needed. I took a look at
http://wiki.apache.org/solr/SolrPerformanceData but only one use case seemed
comparable. We currently have about
mike anderson [saidthero...@gmail.com] wrote:
That's a great point. If SSDs are sufficient, then what does the Index size
vs Response time curve look like? Since that would dictate the number
of machines needed. I took a look at
http://wiki.apache.org/solr/SolrPerformanceData but only one use
So I fired up about 100 cores and used JMeter to fire off a few thousand
queries. It looks like the memory usage isn't much worse than running a
single shard. So that's good.
I'm really curious if there is a clever solution to the obvious problem
with: So you're better off using a single index and
mike anderson wrote:
I'm really curious if there is a clever solution to the obvious problem
with "So you're better off using a single index with a user id and using
a query filter with the user id when fetching data", i.e. when you have
hundreds of thousands of user IDs tagged on each article.
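The single-index approach being discussed can be sketched as a select URL with a filter query (fq) restricting results to one user's documents. The host, core path, field names, and values below are assumptions for illustration.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of the "single index plus user-id filter query" approach:
// the fq parameter restricts results to one user's documents and is
// cached independently of the main query q.
public class FilteredQuery {
    public static void main(String[] args) throws Exception {
        String q  = URLEncoder.encode("title:solr",
                StandardCharsets.UTF_8.name());
        String fq = URLEncoder.encode("user_id:u123456",
                StandardCharsets.UTF_8.name());
        String url = "http://localhost:8983/solr/select?q=" + q + "&fq=" + fq;
        System.out.println(url);
    }
}
```

Because fq results are cached in Solr's filterCache separately from the main query, this is cheaper than baking the user id into q itself; whether it holds up with hundreds of thousands of ids per article is exactly the open question above.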
Really great to know you were able to fire up about 100 cores. But
when it scales up to around 1000 or even more, I wonder how it would
perform.
I have a question regarding ids i.e. the unique key. Since there is a
potential use case that two users might add the same document, how
would we set
On Fri, Oct 22, 2010 at 11:18 AM, Lance Norskog goks...@gmail.com wrote:
There is an API now for dynamically loading, unloading, creating and
deleting cores.
Restarting a Solr with thousands of cores will take, I don't know, hours.
Is this in the trunk? Any docs available?
On Thu, Oct 21,
On 10/22/10 1:44 AM, Tharindu Mathew wrote:
Hi Mike,
I've also considered using separate cores in a multi-tenant
application, i.e. a separate core for each tenant/domain. But the cores
do not suit that purpose.
If you check out the documentation, no real API support exists for this
so that it can
Thanks for the advice, everyone. I'll take a look at the API mentioned and
do some benchmarking over the weekend.
-Mike
On Fri, Oct 22, 2010 at 8:50 AM, Mark Miller markrmil...@gmail.com wrote:
On 10/22/10 1:44 AM, Tharindu Mathew wrote:
Hi Mike,
I've also considered using a separate
http://wiki.apache.org/solr/CoreAdmin
Since Solr 1.3
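For reference, a hedged sketch of what the CoreAdmin HTTP interface linked above looks like: cores can be created and unloaded at runtime by hitting /admin/cores with an action parameter. The host, core name, and instanceDir below are made up for illustration; see the wiki page for the full parameter list.

```java
// Sketch of CoreAdmin HTTP requests (available since Solr 1.3).
// Core name and instanceDir are illustrative assumptions.
public class CoreAdminUrls {
    public static void main(String[] args) {
        String base = "http://localhost:8983/solr/admin/cores";
        // Create a core for a new tenant at runtime (CREATE takes name=).
        System.out.println(base
                + "?action=CREATE&name=tenant42&instanceDir=tenant42");
        // Unload it again when the tenant goes away (UNLOAD takes core=).
        System.out.println(base + "?action=UNLOAD&core=tenant42");
    }
}
```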
On Fri, Oct 22, 2010 at 1:40 PM, mike anderson saidthero...@gmail.com wrote:
Thanks for the advice, everyone. I'll take a look at the API mentioned and
do some benchmarking over the weekend.
-Mike
On Fri, Oct 22, 2010 at 8:50 AM, Mark
No, it does not seem reasonable. Why do you think you need a separate
core for every user?
mike anderson wrote:
I'm exploring the possibility of using cores as a solution to bookmark
folders in my solr application. This would mean I'll need tens of thousands
of cores... does this seem
Hi Mike,
I've also considered using separate cores in a multi-tenant
application, i.e. a separate core for each tenant/domain. But the cores
do not suit that purpose.
If you check out the documentation, no real API support exists for this
so that it can be done dynamically through SolrJ. And all use cases