Salut,

replying to each other :)

@Daniel:

> But reading these APIs feels like trying to use fluent style where it doesn't fit.

please explain why :) Since when have there been established cases
where fluent APIs can or cannot be adopted? :)

> cacheConfigurator.allocate(MemoryUnit); //(or what was your class? ram?)

we don't have a memory model - anyway, even if we had one, it wouldn't
sound very fluent... how would it look?

cacheConfigurator.allocate( new Ram(...) ); ?

> cacheConfigurator.dispose().every(TimeUnit);

same as above...

> cacheConfigurator.bind(Map.class).to(..); //use guice EDSL - Default Implementations
>                                           //should never have to be bound manually ;)
> cacheConfigurator.bind(MemoryManager.class).to(..);
> cacheConfigurator.bind(Serializer.class).to(..);

yup, default values shall not be bound - this is something I can easily work on.
Anyway, this is not a DI container nor Guice :)
cacheConfigurator.bind(...) allows too much freedom... what if I, as a
user, expect bind(MySuperRocketLauncher.class).to(...) to work? Any suggestion?
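To illustrate one possible alternative - a quick sketch with made-up names, nothing here is real DM code - the configurator could expose one dedicated binder method per known service instead of an open bind(Class), so users simply cannot bind arbitrary types:

```java
// Hypothetical sketch (all names assumed): a closed set of binder methods
// instead of a generic bind(Class).to(..).
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

interface Serializer
{
    byte[] serialize( Object o );
}

class CacheConfigurator
{
    private ConcurrentMap<String, Object> map;
    private Serializer serializer;

    // only the services DM actually knows about can be configured
    void bindConcurrentMap( ConcurrentMap<String, Object> map ) { this.map = map; }
    void bindSerializer( Serializer serializer ) { this.serializer = serializer; }

    boolean isFullyConfigured() { return map != null && serializer != null; }
}

public class Main
{
    public static void main( String[] args )
    {
        CacheConfigurator configurator = new CacheConfigurator();
        configurator.bindConcurrentMap( new ConcurrentHashMap<String, Object>() );
        configurator.bindSerializer( o -> new byte[0] );
        System.out.println( configurator.isFullyConfigured() ); // prints "true"
    }
}
```

With this shape there is no MySuperRocketLauncher problem, because the API simply doesn't offer a way to bind it.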

> Do you really have a put and putByteArray? That's hard. ;) And for my
> cases, I would add a putStream(..)?   :)

this is something we already have in the current APIs - see
<http://s.apache.org/2o2> at lines 44, 50, 44 - that is why I added
those signatures.
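That said, Daniel's suggested putStream(..) could probably be layered on top of the existing putByteArray(..) by draining the stream first; a rough sketch, where the helper below is a stand-in and not the real Cache API:

```java
// Hedged sketch: putStream(..) implemented on top of putByteArray(..);
// the putByteArray helper here is a stand-in, not the real DM method.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class Main
{
    // stand-in for cache.putByteArray( payload ).identifiedBy( key )
    static byte[] putByteArray( byte[] payload )
    {
        return payload;
    }

    static byte[] putStream( InputStream input ) throws IOException
    {
        // drain the stream into a buffer, then delegate to the byte[] API
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int read;
        while ( ( read = input.read( chunk ) ) != -1 )
        {
            buffer.write( chunk, 0, read );
        }
        return putByteArray( buffer.toByteArray() );
    }

    public static void main( String[] args ) throws IOException
    {
        byte[] stored = putStream( new ByteArrayInputStream( "Stored!".getBytes() ) );
        System.out.println( new String( stored ) ); // prints "Stored!"
    }
}
```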

@Michael

while we are not NIH nor PFE guys... are we too happy with 3rd party
frameworks? :)
I mean, I really don't see DirectMemory as just a backend for existing
caching solutions - which is needed anyway, and I would be the first to
use it in production - but as an OffHeap memory store that users
could also integrate directly in their applications, without any
dependency.

We are not reinventing the wheel - and I personally don't have the
intention/time/energy/... to do so - but what does caching
configuration have to do with the stuff in DM that already needs to be
configured? We configure MemoryManager, Serializer... seriously,
that doesn't concern the cache behavior; it is pure DM stuff presented
in a different way.
Anyway... is "I use XXXCache which uses YYYlang to configure it" reason
enough not to offer our users different APIs?!? Not sure that this is
on topic with the proposal.

I agree with Daniel's latest statement that "What is DM" != "DM APIs".

If you want to focus your energies on fixing the concurrency level,
that is *cool* - provide patches and we will be happy to commit them -
but this is something that doesn't have anything to do with the APIs.

Anyway, that is a discussion that risks going nowhere and hijacking
the thread, so I'll stop here - and I invite you to discuss it in
another thread.

@Raf

The static Cache façade is fine for some use cases and I don't intend
to drop it; it will use the default DM configuration, as it already does.
Maybe my sample creating Cache rather than CacheService got you
confused? Maybe that is why you said it is not so easy? I already have
the code, and have used the same style in other ASF projects...
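To show what I mean, here is a rough sketch of how the static façade can keep working by delegating to a lazily created default instance - all class and method names below are made up for illustration, not the actual DirectMemory code:

```java
// Hedged sketch: a static Cache facade delegating to one default,
// lazily-initialized CacheService instance (names are assumptions).
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CacheService
{
    private final ConcurrentMap<String, Object> store = new ConcurrentHashMap<String, Object>();

    void put( String key, Object value ) { store.put( key, value ); }
    Object retrieve( String key ) { return store.get( key ); }
}

final class Cache
{
    // default instance, created on first use with the default DM configuration
    private static volatile CacheService instance;

    private Cache() { }

    private static CacheService service()
    {
        if ( instance == null )
        {
            synchronized ( Cache.class )
            {
                if ( instance == null )
                {
                    instance = new CacheService(); // default configuration here
                }
            }
        }
        return instance;
    }

    static void put( String key, Object value ) { service().put( key, value ); }
    static Object retrieve( String key ) { return service().retrieve( key ); }
}

public class Main
{
    public static void main( String[] args )
    {
        Cache.put( "Simo", "hello" );
        System.out.println( Cache.retrieve( "Simo" ) ); // prints "hello"
    }
}
```

So users who want the simple harness call the static methods, while users who want control go through the configuration APIs and get their own instance.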

Thanks all for the feedback!
-Simo

http://people.apache.org/~simonetripodi/
http://simonetripodi.livejournal.com/
http://twitter.com/simonetripodi
http://www.99soft.org/



On Sun, Feb 19, 2012 at 9:26 AM, Raffaele P. Guidi
<[email protected]> wrote:
> While I agree with Michael (it's my personal point of view on DM since
> even before bringing it into Apache), I have to say that having a simple,
> ready-to-use harness would have been useful and would help adoption. That's
> why I adopted the Cache static facade, which has served the case well. I
> also have to agree with Daniel - this seems fluent but not so easy.
>
> My 2 cents.
>
> Ciao,
>   R
> Il giorno 19/feb/2012 05:24, "Michael André Pearce" <
> [email protected]> ha scritto:
>
>> Seeing the word "user" throughout all of this raises a question which I
>> think needs to be asked.
>>
>> What are you seeing DirectMemory as: a CacheStore which is the
>> OffHeapStore (which is what Terracotta's own BigMemory is to EHCache) that
>> you can plug into existing cache frameworks, with the project providing
>> the modules to plug into such frameworks as JCR, JCache, OSCache, EHCache
>> etc., or are you trying to write a cache framework from scratch?
>>
>> If it's a CacheStore, then really I think each plugin module, such as the
>> one I'm starting to write for EHCache and Mir is writing for JCR, is where
>> the configuration style and method sit; as long as developers writing the
>> plugins know how to interact with the cache, that should be enough.
>> Users will use the already existing frameworks for configuration, and
>> these are well documented in those projects.
>>
>> If we're trying to write yet another caching framework, I don't personally
>> see the point; there are already good frameworks out there, and you're
>> better off concentrating on the actual improvements of the OffHeap store,
>> e.g. bugs, speed, sizing needed on heap, concurrency etc.
>>
>> Mike
>>
>>
>> On 19 Feb 2012, at 00:43, Daniel Manzke wrote:
>>
>> > Hey guys,
>> >
>> > Simone asked me to write my 2 cents for it. ;)
>> >
>> > Since I have had a lot to do with Simone and Guice, I can just say
>> > that I also really like fluent APIs. But reading these APIs feels like
>> > trying to use fluent style where it doesn't fit.
>> >
>> > I often have problems with fluent APIs which have really long method
>> > names, because without them it is not clear what they are doing
>> > (because they're too complex?!).
>> >
>> > {code}
>> >       /* Basic configuration APIs */
>> >       Cache cache = DirectMemory.createNewInstance( new
>> > CacheConfiguration()
>> >       {
>> >
>> >           @Override
>> >           public void configure( CacheConfigurator cacheConfigurator )
>> >           {
>> >               cacheConfigurator.buffers().count( 1 );
>> >               cacheConfigurator.allocate(MemoryUnit); //(or what was your class? ram?)
>> >               cacheConfigurator.dispose().every(TimeUnit);
>> >
>> >               cacheConfigurator.bind(Map.class).to(..); //use guice EDSL -
>> >                                                         //Default Implementations should
>> >                                                         //never have to be bound manually ;)
>> >               cacheConfigurator.bind(MemoryManager.class).to(..);
>> >               cacheConfigurator.bind(Serializer.class).to(..);
>> >           }
>> >
>> >       } );
>> > {code}
>> >
>> > {code}
>> >       Pointer a = cache.allocatePointer( 1 ).Gb().identifiedBy( "Simo" );
>> >       Pointer b = cache.put( "Stored!" ).identifiedBy( "Raf" ); //expiring
>> >                         //depends on the default of the cacheConfiguration
>> >       Pointer b2 = cache.put( "Stored!" ).identifiedBy( "Raf" ).expires().in(TimeUnit);
>> >
>> >       Pointer d = cache.update( new Object() ).identifiedBy( "Olivier" );
>> >       Pointer e = cache.updateByteArray( new byte[0] ).identifiedBy(
>> > "TomNaso" )
>> > {code}
>> >
>> >
>> > Do you really have a put and putByteArray? That's hard. ;) And for my
>> > cases, I would add a putStream(..)?   :)
>> >
>> > (we have created a cache implementation with a focus on files/document
>> > caching (onheap, offheap, ...))
>> >
>> > 2012/2/19 Simone Tripodi (Created) (JIRA) <[email protected]>
>> >
>> >> Adopt fluent APIs for bootstrapping the Cache and manage stored objects
>> >> -----------------------------------------------------------------------
>> >>
>> >>                Key: DIRECTMEMORY-62
>> >>                URL: https://issues.apache.org/jira/browse/DIRECTMEMORY-62
>> >>            Project: Apache DirectMemory
>> >>         Issue Type: New Feature
>> >>           Reporter: Simone Tripodi
>> >>           Assignee: Simone Tripodi
>> >>
>> >>
>> >> Hi all guys,
>> >>
>> >> as discussed some days ago, I started prototyping an EDSL embedded in DM
>> >> to make Cache APIs more "sexy" :)
>> >>
>> >> So, influenced by past experience with Commons Digester - itself
>> >> influenced by Google Guice - I tried to identify the 2 phases in the
>> >> Cache lifecycle:
>> >>
>> >> * the "bootstrap" phase - where settings are used to instantiate the
>> >> Cache;
>> >>
>> >> * the object store management.
>> >>
>> >> The current codebase has a mix of both, and users have to be aware of the
>> >> correct sequence of operation calls; I mean, calling {{Cache.retrieve( "aaa" )}}
>> >> without having previously called {{Cache.init( 1, Ram.Mb( 100 ) )}} at the
>> >> beginning of the program would cause an error. That is why I resorted to a
>> >> kind of "configuration" pattern, to make sure the Cache instance has to be
>> >> configured first and only then can be used:
>> >>
>> >> {code}
>> >>       /* Basic configuration APIs */
>> >>       Cache cache = DirectMemory.createNewInstance( new
>> >> CacheConfiguration()
>> >>       {
>> >>
>> >>           @Override
>> >>           public void configure( CacheConfigurator cacheConfigurator )
>> >>           {
>> >>               cacheConfigurator.numberOfBuffers().ofSize( 1 );
>> >>               cacheConfigurator.allocateMemoryOfSize( 128 ).Mb();
>> >>               cacheConfigurator.scheduleDisposalEvery( 10 ).hours();
>> >>
>> >>               cacheConfigurator.bindConcurrentMap().withJVMConcurrentMap();
>> >>               cacheConfigurator.bindMemoryManagerService().withDefaultImplementation();
>> >>               cacheConfigurator.bindSerializer().fromServiceProviderInterface();
>> >>           }
>> >>
>> >>       } );
>> >> {code}
>> >>
>> >> Hoping that the code itself is clear enough: the {{DirectMemory}} class
>> >> accepts a {{CacheConfiguration}}, users have to override the
>> >> {{configure( CacheConfigurator )}} method, in which they describe the basic
>> >> Cache behavior, and a Cache instance is returned.
>> >>
>> >> According to the DRY (Don't Repeat Yourself) principle, repeating
>> >> "cacheConfigurator" over and over for each binding can get a little
>> >> tedious, so there is an abstract support class:
>> >>
>> >> {code}
>> >>       cache = DirectMemory.createNewInstance( new
>> >> AbstractCacheConfiguration()
>> >>       {
>> >>
>> >>           @Override
>> >>           protected void configure()
>> >>           {
>> >>               numberOfBuffers().ofSize( 1 );
>> >>               allocateMemoryOfSize( 128 ).Mb();
>> >>               scheduleDisposalEvery( 10 ).hours();
>> >>
>> >>               bindConcurrentMap().withJVMConcurrentMap();
>> >>               bindMemoryManagerService().withDefaultImplementation();
>> >>               bindSerializer().fromServiceProviderInterface();
>> >>           }
>> >>
>> >>       } );
>> >> {code}
>> >>
>> >> Once the Cache instance is obtained, users can start storing objects:
>> >>
>> >> {code}
>> >>       Pointer a = cache.allocatePointer( 1 ).Gb().identifiedBy( "Simo" );
>> >>       Pointer b = cache.put( "Stored!" ).identifiedBy( "Raf" ).thatNeverExpires();
>> >>       Pointer c = cache.putByteArray( new byte[0] ).identifiedBy( "Izio" ).thatExpiresIn( 1 ).days();
>> >>
>> >>       Pointer d = cache.update( new Object() ).identifiedBy( "Olivier" );
>> >>       Pointer e = cache.updateByteArray( new byte[0] ).identifiedBy( "TomNaso" );
>> >> {code}
>> >>
>> >> WDYT?
>> >>
>> >
>> >
>> > --
>> > Viele Grüße/Best Regards
>> >
>> > Daniel Manzke
>>
>>
