On Jan 25, 2007, at 5:30 PM, Alex Karasulu wrote:
BJ Hargrave wrote:
Another solution to this is to cache resolve state results across
invocations. You will only need to invalidate the cache if the set of
installed bundles changes or some other event triggers the resolver
to modify the state.
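A minimal sketch of that caching idea (class and method names here are hypothetical, not Felix APIs): keep resolved results keyed by requirement, and clear the whole cache whenever a bundle event changes the installed set or otherwise modifies resolver state.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cache resolve results across invocations and
// invalidate wholesale when the installed-bundle set changes.
class ResolveCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    Object get(String requirement) {
        return cache.get(requirement);
    }

    void put(String requirement, Object wiring) {
        cache.put(requirement, wiring);
    }

    // Call from a bundle listener on install/uninstall/update, or on any
    // other event that would cause the resolver to modify its state.
    void invalidate() {
        cache.clear();
    }
}
```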
IMO this is the only option. Indices have value if the set of
properties you have are relatively constant. Let me explain:
If bundles represent entries with properties which you need to test
for filter evaluation, then you need some kind of master table with an
id-to-properties mapping. Then you need to build indices mapping into
that master table, where the key of the index is a property value and
the value is the id into the master. For example, you might have
something like this:
Master
======
1 | Bundle A Properties
2 | Bundle B Properties
3 | Bundle C Properties
4 | Bundle D Properties
5 | Bundle E Properties
ObjectClass Property Index
==========================
Foo | 3
Bar | 4

XYZ Property Index
==================
... etc
Then search would use these indices to evaluate the filter.
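In code, the master-table-plus-index layout above might look like this (a sketch with illustrative names, not Felix internals): the master maps an id to a bundle's full property map, and each per-property index maps a property value back to the matching ids.

```java
import java.util.*;

// Sketch of the master table and per-property indices described above.
class PropertyIndex {
    // id -> full property map for that bundle
    private final Map<Integer, Map<String, String>> master = new HashMap<>();
    // property name -> (property value -> ids of matching bundles)
    private final Map<String, Map<String, Set<Integer>>> indices = new HashMap<>();

    void add(int id, Map<String, String> props) {
        master.put(id, props);
        for (Map.Entry<String, String> e : props.entrySet()) {
            indices.computeIfAbsent(e.getKey(), k -> new HashMap<>())
                   .computeIfAbsent(e.getValue(), v -> new HashSet<>())
                   .add(id);
        }
    }

    // Evaluate a simple (key=value) filter term via the index.
    Set<Integer> lookup(String key, String value) {
        return indices.getOrDefault(key, Collections.emptyMap())
                      .getOrDefault(value, Collections.emptySet());
    }
}
```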
The problem is that the properties you'll encounter are unbounded (I
may be wrong), so the number of indices you'll need will be as large
as the number of unique properties used.
So I think BJ your idea may be the best option.
No, it is not for the other reasons I explained in response to BJ's
message.
I think you hit the nail on the head: we don't need to index all of the
properties, only the important ones, like package name and bundle
symbolic name, for starters. The set of important properties should be
pretty easy to figure out and could possibly be configurable for those
who have different use cases. For non-indexed properties, we can resort
to the slow method.
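That hybrid could be sketched roughly like this (names and structure are my own illustration, not the Felix implementation): a configurable set of "important" keys gets indexed, and any other property falls back to a linear scan of all capabilities.

```java
import java.util.*;

// Sketch: index only a configurable set of important properties
// (e.g. package name, bundle symbolic name); fall back to a slow
// linear scan for everything else.
class HybridLookup {
    private final Set<String> indexedKeys;
    private final Map<String, Map<String, List<Map<String, String>>>> indices = new HashMap<>();
    private final List<Map<String, String>> all = new ArrayList<>();

    HybridLookup(Set<String> indexedKeys) {
        this.indexedKeys = indexedKeys;
    }

    void add(Map<String, String> capability) {
        all.add(capability);
        for (String key : indexedKeys) {
            String value = capability.get(key);
            if (value != null) {
                indices.computeIfAbsent(key, k -> new HashMap<>())
                       .computeIfAbsent(value, v -> new ArrayList<>())
                       .add(capability);
            }
        }
    }

    // Fast path for indexed (key=value) terms, slow scan otherwise.
    List<Map<String, String>> match(String key, String value) {
        if (indexedKeys.contains(key)) {
            return indices.getOrDefault(key, Collections.emptyMap())
                          .getOrDefault(value, Collections.emptyList());
        }
        List<Map<String, String>> result = new ArrayList<>();
        for (Map<String, String> cap : all) {
            if (value.equals(cap.get(key))) {
                result.add(cap);
            }
        }
        return result;
    }
}
```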
-> richard
Then the performance of the filter parser and evaluator is not so
critical, since you do not have to generate a complete resolve state
every time you start the framework.
BJ Hargrave
Senior Technical Staff Member, IBM
OSGi Fellow and CTO of the OSGi Alliance
[EMAIL PROTECTED]
office: +1 386 848 3788
mobile: +1 386 848 3788
"Richard S. Hall" <[EMAIL PROTECTED]> 01/25/2007 04:09 PM
Please respond to
felix-dev@incubator.apache.org
To
felix-dev@incubator.apache.org
cc
Subject
Re: Needed: LDAP expression evaluation optimization
Jan,
The issue is that we have a set of capabilities (each a set of
property-value pairs) that we must run filters over to find matching
capabilities. The set of capabilities can grow to be pretty large if
you consider OBR repositories, but even at run-time you can have
quite a few capabilities, since the set includes all exported packages
and bundles (I have heard of examples of bundles exporting hundreds of
packages).
So, assuming that we have a large set of capabilities, our new
generic approach requires even more filters to be evaluated over
this set of capabilities. So, the main reason we have slowed
down now is that we have to use filters more, and this will only be
exacerbated as the number of capabilities grows in more complex use
cases.
I assume what we need is some way to index the properties of our
capabilities so that we can really quickly evaluate a given filter
over the set of capabilities to find all that match. This is just my
hunch.
If you think that you have something that can help out, I am all
ears. :-)
-> richard
Jan S. Rellermeyer wrote:
Hi Rick,
Is your concern more that the evaluation of the expressions is not
performing well because of the performance of the filter
implementation, or is it that you have redundancy in the expressions
and you want to optimize the expressions themselves at runtime? If the
first is the case, I might be able to contribute my LDAP filter
implementation used in Concierge and jSLP. It went through a large
series of profiling and benchmarking and is really quite speedy in
both parsing and evaluation. In case you want to test it, let me know.
Cheers,
Jan.
-----------------------------------------------------------
ETH Zurich, MSc Jan S. Rellermeyer,
Information and Communication Systems Research Group (IKS),
Department of Computer Science, IFW B 47.1, Haldeneggsteig 4,
CH-8092 Zürich, Tel +41 44 632 30 38, http://www.iks.inf.ethz.ch
-----------------------------------------------------------
-----Original Message-----
From: Richard S. Hall [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 24, 2007 9:04 PM
To: felix-dev@incubator.apache.org
Subject: Needed: LDAP expression evaluation optimization
Consider this a call for contributions.
The latest changes to Felix' resolver adopt a generic
capability/requirement approach for resolving package export/import
and bundle provide/require constraints (with the goal of also using
this approach for host/fragment constraints).
The benefit of this approach is that it provides a nice generic way
of adding and resolving additional types of constraints in the
Felix resolver. Another benefit is that this resolver
implementation can be shared with OBR, so the same resolver can be
used for deployment as well as runtime wiring.
The downside of this approach is that it relies heavily on LDAP
expressions and their evaluation, which tends to slow things down a
bit.
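To see why the generic approach leans so hard on filter evaluation, here is a toy model (my own illustration, not the Felix internals): each requirement carries an LDAP-style filter tree that must be matched against every candidate capability's property map, so resolution cost grows with both the number of capabilities and the filter size.

```java
import java.util.*;

// Toy LDAP-style filter tree evaluated against a capability's properties.
abstract class FilterNode {
    abstract boolean matches(Map<String, String> props);
}

// (key=value) equality term.
class Eq extends FilterNode {
    final String key, value;
    Eq(String key, String value) { this.key = key; this.value = value; }
    boolean matches(Map<String, String> props) {
        return value.equals(props.get(key));
    }
}

// (&(...)(...)) conjunction: all children must match.
class And extends FilterNode {
    final List<FilterNode> children;
    And(FilterNode... children) { this.children = Arrays.asList(children); }
    boolean matches(Map<String, String> props) {
        for (FilterNode child : children) {
            if (!child.matches(props)) return false;
        }
        return true;
    }
}
```

Walking every capability through such a tree is O(capabilities × filter size), which is exactly the cost that indexing or caching would avoid.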
To offset this slowdown, I have cut some corners, making the
capabilities/requirements not as generic as I would like. I want
this approach to be as generic as possible, but that requires that
we optimize LDAP expression evaluation.
If anyone has experience in this area and is willing to look into
it for Felix, please let me know and I can explain more
precisely what we need. Overall, I think the work should be pretty
localized, so it should be an easy way for someone with the right
experience to get involved.
Search your soul, you know you want to contribute! ;-)
-> richard