Imre said:
> -----Original Message-----
>
> In my opinion, you are making a strong case *against* using
> large-granularity beans. Here is why:
> 1) since you must synchronize with the database at both the beginning and
> the end of every transaction, at least you should minimize the work this
> entails. That is difficult with large beans, unless you transfer large
> blocks of data blindly, which leads to complicated code, fragile,
> hard-to-maintain data structures, and cumbersome error handling on the
> bean's end.
>
I disagree. If your OR mapping layer knows what has changed, then only the
deltas need be flushed at commit time.
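
For illustration, one way a mapping layer can know what changed is to snapshot an object's state when it is read and diff against it at commit. A minimal sketch (the class and method names here are hypothetical, not any particular product's API):

```java
import java.util.*;

// Sketch: snapshot-based dirty checking. State is recorded at load time;
// at commit, only the fields that differ form the delta to flush.
class DirtyChecker {
    private final Map<Object, Map<String, Object>> snapshots = new HashMap<>();

    // Record a copy of the object's state when it is first read from the database.
    void snapshot(Object entity, Map<String, Object> state) {
        snapshots.put(entity, new HashMap<>(state));
    }

    // At commit time, return only the fields that changed -- the UPDATE
    // statement need only mention these columns.
    Map<String, Object> delta(Object entity, Map<String, Object> current) {
        Map<String, Object> before = snapshots.get(entity);
        Map<String, Object> changed = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : current.entrySet()) {
            if (!Objects.equals(before.get(e.getKey()), e.getValue())) {
                changed.put(e.getKey(), e.getValue());
            }
        }
        return changed;
    }
}
```

If nothing changed, the delta is empty and the commit costs no database write at all.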

> 2) the larger your beans are, the more "server" logic you have to
> duplicate into them to support selective updates. Note: it is very
> difficult to do en masse storing of state with JDBC.
>
The OR mapping layer needs a "unit of work" concept which can track the
deltas. I agree this would be difficult to take on in the application layer,
but that doesn't necessarily mean it has to be in the entity bean
infrastructure of the container. Many OR mapping products support this sort
of thing. I am not familiar with all of them, but both TopLink and Cocobase
support such constructs.

I also agree with a point Hal made a while back that this stuff ought to be
standardized.
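
A unit of work, in the sense used here, can be sketched as an object that registers everything a transaction touches and flushes only the dirty subset at commit. This is an illustrative shape only, not TopLink's or Cocobase's actual interface:

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of a "unit of work": tracks objects touched inside a transaction
// and, at commit, flushes only those actually modified.
class UnitOfWork<T> {
    // Registered but unmodified objects; they never cost a database write.
    private final Set<T> clean = new HashSet<>();
    private final Set<T> dirty = new LinkedHashSet<>();
    private final Consumer<T> flusher; // e.g. issues an UPDATE for one object

    UnitOfWork(Consumer<T> flusher) { this.flusher = flusher; }

    void registerClean(T entity) { clean.add(entity); }

    // The mapper calls this when a setter or field write is intercepted.
    void registerDirty(T entity) { dirty.add(entity); }

    // Commit: only dirty objects hit the database.
    void commit() {
        for (T entity : dirty) flusher.accept(entity);
        dirty.clear();
    }
}
```

The point of the sketch: the application never enumerates what to store; the infrastructure layer already knows.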

> 3) the larger your beans are, the more likely they will be accessed often
> and, consequently, locked often and for longer periods of time by
> transactions. This fact single-handedly will eliminate the scalability of
> your system.
>
It is not necessary that the lock apply to the whole bean object graph.
Again, good OR mapping layers allow for locking only what changes, and even
optimistic concurrency checking based on stamps or changed-value checking...
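
Optimistic checking with a stamp can be sketched as follows: a write succeeds only if the version stamp is unchanged since it was read, the in-memory analogue of the usual `UPDATE ... WHERE id = ? AND version = ?` pattern. Names here are illustrative:

```java
// Sketch: optimistic concurrency via a version stamp. No lock is held
// between read and commit; a stale stamp simply makes the commit fail,
// and the caller reloads and retries (or rolls back).
class VersionedRow {
    private Object value;
    private long version = 0;

    synchronized long readVersion() { return version; } // stamp seen at load time

    synchronized Object value() { return value; }

    // Returns true only if nobody else committed since the stamp was read.
    synchronized boolean commit(long stampAtRead, Object newValue) {
        if (version != stampAtRead) return false; // stale -- concurrent update won
        value = newValue;
        version++;
        return true;
    }
}
```

Because readers take no lock, contention shows up only at commit, which is what preserves scalability.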

> 4) the larger your beans are, the more likely you will lock more database
> objects (tables, rows etc.) during calls. Excellent way to reduce the
> scalability of your database.
>
Same point as #3?

> 5) it is said frequently on this list that transactions should be short.
> The shorter your transactions are, the faster your claimed benefits of
> large beans evaporate.
>
Please substantiate this claim. Just because a bean is coarse-grained does
not imply that all of its state must be loaded at ejbLoad time. Sub-objects
can be fetched lazily, so if the transaction does not touch a sub-object, it
never contributes overhead.
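
Lazy fetching can be sketched with a holder that defers the database hit until first navigation. This is illustrative, not any specific container's mechanism:

```java
import java.util.function.Supplier;

// Sketch: a lazy reference -- the sub-object is fetched from the database
// only if the transaction actually navigates to it.
class Lazy<T> {
    private Supplier<T> loader; // e.g. () -> loadLineItems(orderId), hypothetical
    private T value;
    private boolean loaded = false;

    Lazy(Supplier<T> loader) { this.loader = loader; }

    T get() {
        if (!loaded) {           // first touch: pay the load cost now
            value = loader.get();
            loaded = true;
            loader = null;       // let the loader and its captures be collected
        }
        return value;
    }

    boolean isLoaded() { return loaded; }
}
```

A coarse-grained bean whose sub-objects sit behind such holders adds no ejbLoad overhead for the parts a short transaction never visits.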

> 6) the larger, more complicated your beans are, the fewer chances you have
> to find a CMP implementation to support your scheme. <vendor>Ejipt CMP
> will let you do it  :-)</vendor>
>
Given that the OR mapping vendors solved most of these issues before EJB
existed, and that all EJB vendors are either OR mapping vendors themselves
or working in close partnership with OR vendors, any reasonable EJB server
should provide you a solution in this area. If it were me, this would be
core to my selection criteria.

> 7) and then you should consider the impact of large beans on the resources
> of your EJB server (see my virtual memory page size rant in the archives).
>
I still think the opposite is true: fine-grained beans multiply the
overhead, since each entity bean requires more infrastructure around it than
a regular Java object (e.g. its EJBObject, stubs, skeletons, bookkeeping by
the container...).

> Clearly, I am not convinced about the benefits of large-granularity beans.
> I still maintain that you should use the *right* size bean that fits your
> conceptual model, usage, load and tools. Go figure...
>
Concur on "right size". I don't buy broad brush generalizations either. Do
the design work, think it through... There ain't no silver bullets.

-Chris.

GemStone

===========================================================================
To unsubscribe, send email to [EMAIL PROTECTED] and include in the body
of the message "signoff EJB-INTEREST".  For general help, send email to
[EMAIL PROTECTED] and include in the body of the message "help".
