I tried several enhancements in BeanCacheKey but none was enough (maybe I
didn't cut the right part)
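For illustration, here is the kind of simplification I experimented with: a
standalone sketch of a cache key that skips both the qualifier sorting and the
defensive array copy. This is not the real BeanCacheKey (qualifiers are modeled
as plain strings here); the trade-off is that the same qualifier set in a
different order produces a second cache entry.

```java
import java.util.Arrays;

// Simplified stand-in for a bean cache key: it keeps the qualifier array
// exactly as given (no defensive copy, no sorting), so key construction is
// cheap, but the same set of qualifiers in a different order hashes and
// compares as a different key, i.e. a second cache entry.
final class UnsortedCacheKey {
    private final Class<?> type;
    private final String[] qualifiers; // not copied, not sorted
    private final int hash;            // precomputed, order-sensitive

    UnsortedCacheKey(Class<?> type, String... qualifiers) {
        this.type = type;
        this.qualifiers = qualifiers;
        this.hash = 31 * type.hashCode() + Arrays.hashCode(qualifiers);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof UnsortedCacheKey)) {
            return false;
        }
        UnsortedCacheKey k = (UnsortedCacheKey) o;
        return type == k.type && Arrays.equals(qualifiers, k.qualifiers);
    }

    @Override
    public int hashCode() {
        return hash;
    }
}
```

With such a key, ("@Default", "@Any") and ("@Any", "@Default") are two distinct
entries, which is only acceptable if injection points rarely list the same
qualifiers in different orders.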

*Romain Manni-Bucau*
*Twitter: @rmannibucau <https://twitter.com/rmannibucau>*
*Blog: http://rmannibucau.wordpress.com/*
*LinkedIn: **http://fr.linkedin.com/in/rmannibucau*
*Github: https://github.com/rmannibucau*



2013/1/21 Arne Limburg <[email protected]>

> Neither solution seems to be perfect.
>
>
> What about improving the BeanCacheKey performance?
>
> The creation of the bean cache key uses many computations; what about
> simply removing them? E.g. if we remove the sorting of the qualifiers, it
> would potentially end up in more cache entries for the same set of
> annotations, but how often does this occur in practice? Even the copying
> of the array could be removed.
> Maybe also the path to the BDA should be replaced by the injection point
> class. This would blow up the cache, but maybe the performance improvement
> is worth it?
>
> Cheers,
> Arne
>
> On 21.01.13 at 10:51, "Romain Manni-Bucau"
> <[email protected]> wrote:
>
> >well, I didn't manage to find a lot of optimizations. The only relevant
> >one I found consists of 2 steps:
> >
> >1) cache the BeanCacheKey used for the instance
> >2) cache results of select
> >
> >here's a prototype (patch for trunk): https://gist.github.com/4584921
> >
> >it solves the isAmbiguous()/get() pattern pretty well
> >
> >here are the perf numbers:
> >
> >direct: 21ms
> >instance: 3257ms
> >direct: 0ms
> >instance: 2670ms
> >direct: 0ms
> >instance: 2790ms
> >direct: 0ms
> >instance: 2540ms
> >direct: 0ms
> >instance: 2717ms
> >direct: 0ms
> >instance: 2790ms
> >
> >do you think it's worth using it, or something close to it?
> >
> >
> >
> >
> >2013/1/21 Romain Manni-Bucau <[email protected]>
> >
> >> you are right, the cache is well done... but we do use it.
> >>
> >> let me explain: if I use a direct injection I have the instance itself
> >> and all is fine. If not, we go through the resolution and, as you said,
> >> we use the cache.
> >>
> >> To use the cache we need a key (since that's a map) and the key is a
> >> BeanCacheKey: that's what takes time, mainly its construction.
> >>
> >> I'll have a look at whether it can be enhanced a bit
> >>
> >> FYI here is my test:
> >>
> >> public class TmpInstanceImpl extends AbstractUnitTest {
> >>     @Test
> >>     public void perf()
> >>     {
> >>
> >>         final Collection<Class<?>> classes = new ArrayList<Class<?>>();
> >>         classes.add(Foo.class);
> >>         classes.add(DirectInjection.class);
> >>         classes.add(InstanceInjection.class);
> >>
> >>         startContainer(classes);
> >>
> >>         final Bean<?> dir = getBeanManager().resolve(
> >>                 getBeanManager().getBeans(DirectInjection.class));
> >>         final DirectInjection di = (DirectInjection)
> >> getBeanManager().getReference(dir, DirectInjection.class,
> >> getBeanManager().createCreationalContext(dir));
> >>         Assert.assertNotNull(di);
> >>
> >>         final Bean<?> inst = getBeanManager().resolve(
> >>                 getBeanManager().getBeans(InstanceInjection.class));
> >>         final InstanceInjection ii = (InstanceInjection)
> >> getBeanManager().getReference(inst, InstanceInjection.class,
> >> getBeanManager().createCreationalContext(inst));
> >>         Assert.assertNotNull(ii);
> >>
> >>         long start, end;
> >>
> >>         for (int i = 0; i < 10; i++) {
> >>             start = System.nanoTime();
> >>             for (int k = 0; k < 1000000; k++) {
> >>                 di.resolve();
> >>             }
> >>             end = System.nanoTime();
> >>             System.out.println("direct: " +
> >> TimeUnit.NANOSECONDS.toMillis(end - start) + "ms");
> >>
> >>             start = System.nanoTime();
> >>             for (int k = 0; k < 1000000; k++) {
> >>                 ii.resolve();
> >>             }
> >>             end = System.nanoTime();
> >>             System.out.println("instance: " +
> >> TimeUnit.NANOSECONDS.toMillis(end - start) + "ms");
> >>         }
> >>
> >>         shutDownContainer();
> >>     }
> >>
> >>     public static class Foo {
> >>         public void touch() {
> >>             // no-op
> >>         }
> >>     }
> >>
> >>     public static class DirectInjection {
> >>         @Inject
> >>         private Foo foo;
> >>
> >>         public void resolve() {
> >>             foo.touch();
> >>         }
> >>     }
> >>
> >>     public static class InstanceInjection {
> >>         private static final AnnotationLiteral<Default> AL = new
> >> DefaultLiteral();
> >>
> >>         @Inject @Any
> >>         private Instance<Foo> foo;
> >>
> >>         public void resolve() {
> >>             foo.select(AL).get().touch();
> >>         }
> >>     }
> >> }
> >>
> >> and here is what the result looks like:
> >>
> >> direct: 5ms
> >> instance: 10191ms
> >> direct: 0ms
> >> instance: 10487ms
> >> direct: 0ms
> >> instance: 9722ms
> >> direct: 0ms
> >> instance: 9171ms
> >> direct: 0ms
> >>
> >>
> >>
> >>
> >> 2013/1/20 Arne Limburg <[email protected]>
> >>
> >>> Yes, that's interesting and unexpected.
> >>>
> >>> Can you please profile where the time is lost?
> >>> In theory, we could even cache the instance when it is a normal-scoped
> >>> proxy.
> >>>
> >>> But for the overall performance it is interesting to know where the
> >>> time is lost...
> >>>
> >>> Cheers,
> >>> Arne
> >>>
> >>>
> >>> On 20.01.13 at 19:43, "Romain Manni-Bucau"
> >>> <[email protected]> wrote:
> >>>
> >>> >Well, it doesn't seem to be as efficient as expected. Replacing a
> >>> >plain old injection by an Instance, I expected no more than x3 in
> >>> >execution time but it was easily x10... and I had fewer than 5 beans.
> >>> >
> >>> >Not sure what takes the time... I thought of caching the whole result
> >>> >in the Instance, but if you have better ideas it is still open for
> >>> >1.1.8 and 1.2.0 :)
> >>> >On 20 Jan 2013 at 19:24, "Arne Limburg"
> >>> ><[email protected]> wrote:
> >>> >
> >>> >> Hi Romain,
> >>> >>
> >>> >> It is cached in the InjectionResolver; that should suffice.
> >>> >>
> >>> >>
> >>> >> Cheers,
> >>> >> Arne
> >>> >>
> >>> >> On 19.01.13 at 17:44, "Romain Manni-Bucau"
> >>> >> <[email protected]> wrote:
> >>> >>
> >>> >> >Hi guys,
> >>> >> >
> >>> >> >is there any reason why
> >>> >> >org.apache.webbeans.inject.instance.InstanceImpl#resolveBeans() is
> >>> >> >not cached in the instance of InstanceImpl (whatever the cache is -
> >>> >> >a volatile var, an AtomicReference or even a synchronized block)?
> >>> >> >
> >>> >> >typically, with this pattern: if (!instance.isAmbiguous()) { return
> >>> >> >instance.get(); } you'll call it twice for nothing (and it can be
> >>> >> >O(n) with n the number of beans)... or just regarding runtime it
> >>> >> >seems weird, no?
> >>> >> >
> >>> >> >wdyt?
> >>> >> >
> >>> >>
> >>> >>
> >>>
> >>>
> >>
>
>
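To make the memoization discussed in this thread concrete, here is a standalone
sketch (assumed names, not the actual
org.apache.webbeans.inject.instance.InstanceImpl API) of caching the result of
resolveBeans() inside the Instance wrapper, so the isAmbiguous()-then-get()
pattern resolves only once. This is only safe as long as the set of matching
beans cannot change for the lifetime of the wrapper.

```java
import java.util.Set;

// Hypothetical sketch: memoize the bean resolution inside an Instance-like
// wrapper so that isAmbiguous() followed by get() runs the expensive
// resolution only once per wrapper instance.
abstract class CachingInstance<T> {
    private volatile Set<T> resolved; // cached result of the first resolution

    // the expensive part, e.g. O(n) over all beans of the container
    protected abstract Set<T> resolveBeans();

    private Set<T> beans() {
        Set<T> local = resolved;
        if (local == null) {
            synchronized (this) { // double-checked: resolve at most once
                local = resolved;
                if (local == null) {
                    local = resolveBeans();
                    resolved = local;
                }
            }
        }
        return local;
    }

    public boolean isAmbiguous() {
        return beans().size() > 1;
    }

    public T get() {
        Set<T> beans = beans();
        if (beans.size() != 1) {
            throw new IllegalStateException("zero or multiple matching beans");
        }
        return beans.iterator().next();
    }
}
```

The volatile field plus double-checked locking keeps the fast path lock-free
after the first resolution; an AtomicReference or a plain synchronized method
would work equally well, as mentioned in the original question.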
