Hi Bill,
From: Bill Moseley
On Tue, Jan 10, 2012 at 1:18 PM, Jason Galea <[email protected]>
wrote:
hehe.. you want layers, I got layers..
I just got out of yet another meeting about this architecture redesign.
(I'd like to see the graph that relates productivity to the number of people
involved some day...)
Jason, this is probably a question best suited to you and your experience, but
(ignoring that graph above) I would like to hear others' opinions and reasoning.
My goal was to put a layer between Catalyst and DBIC for a few reasons,
including:
1. To have a place to put common model code that cannot be represented in
DBIC (e.g. data from other sources).
2. To be able to split up the model into logical units that can be tested
and used independently.
3. To abstract out the physical layout of the database -- to be able to
change the data layer w/o changing the API.
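A layer like that can be sketched in a few lines of plain Perl. Everything here is hypothetical (the class name, the hash standing in for real storage); in a real app the backend would be a DBIC resultset, a DBI handle, or some external service:

```perl
package My::BusinessModel::Users;
use strict;
use warnings;

# Hypothetical storage: in a real app this might be a DBIC resultset,
# a raw DBI handle, or some external service.
sub new {
    my ($class, %args) = @_;
    return bless { storage => $args{storage} }, $class;
}

# Catalyst (or a CLI script, or a test) calls this and never sees how
# the data is stored; swapping the backend only changes this class.
sub find_user {
    my ($self, $id) = @_;
    my $row = $self->{storage}{$id} or return;
    return { id => $id, %$row };    # plain hashref, no ORM object
}

package main;
use strict;
use warnings;

my $users = My::BusinessModel::Users->new(
    storage => { 7 => { email => '[email protected]' } },
);
my $user = $users->find_user(7);
print "$user->{email}\n";
```

The Catalyst model then becomes a thin adapter around this class, so the same code can be tested and reused outside the web app.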
I also needed that flexibility, for exactly the same reasons, but the "bad"
thing is that Catalyst and DBIC each do so many things automatically that
re-implementing them in the app would mean a drop in productivity...
At least if the app is not very big and complex.
Some actions, like fetching records from a database, are surely the job of a
model, but that model could represent records as simple hashrefs (as returned
by DBI's fetchrow_hashref), as DBIC row objects, or in some other structure
used by other models. There is no standard structure defined for a model that
would unify the data from several such models and offer it to the view. I
guess it would be hard to define such a structure, because it would differ
from app to app, and it might also cost some performance.
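As a sketch of that "unifying structure" idea (all names and row shapes here are invented), the model can normalize rows from different sources into one plain-hashref format before anything reaches the view:

```perl
use strict;
use warnings;

# Two hypothetical data sources return rows in different shapes; the
# model normalizes both into one plain-hashref "standard" format.

# Source A: a DBI-style hashref, as from fetchrow_hashref.
my $dbi_row = { user_name => 'Anna', user_email => '[email protected]' };

# Source B: a positional arrayref from some other backend.
my $csv_row = [ 'Bob', '[email protected]' ];

# One tiny adapter per source, each emitting the same shape.
sub from_dbi { my ($r) = @_; { name => $r->{user_name}, email => $r->{user_email} } }
sub from_csv { my ($r) = @_; { name => $r->[0],         email => $r->[1]          } }

# The view only ever sees the normalized shape.
for my $rec ( from_dbi($dbi_row), from_csv($csv_row) ) {
    print "$rec->{name} <$rec->{email}>\n";
}
```

The performance cost Octavian worries about is exactly the extra copy these adapters make for every row.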
Other actions, such as authentication/authorization, are usually considered
the job of the controller, or at least of the web framework. But sometimes
that authentication/authorization needs to happen outside the web entirely --
from a simple command-line script, or from a GUI interface.
I guess that to totally decouple the web interface from the app, the app
should offer an interface compatible with Catalyst, and the developer would
just configure Catalyst to mount the app foo at /foo, another app bar at /bar,
and another app baz at /. The interface of all those apps should accept an
authentication/authorization object in a standard format, with the actual
authentication done by Catalyst, the GUI app, or the CLI script...
The apps used by Catalyst could also offer their own
authentication/authorization, and the developer could configure Catalyst to
use the authentication offered by app foo, bar, or baz, or an external
authenticator that uses another app's database -- an authenticator that would
do the validation and return the authentication object in that standard
format accepted by the apps.
That way it would be simpler to create adapters for existing apps and combine
them into a single web site, or to change the authentication...
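A minimal sketch of such a standard authentication object, with every class and method name invented for illustration -- the point is only that Catalyst, a CLI script, or a GUI would all construct the same kind of object and hand it to the apps:

```perl
package My::AuthInfo;
use strict;
use warnings;

# Hypothetical "standard" credential object. Any front end (Catalyst,
# a CLI script, a GUI) builds one of these after doing its own
# authentication, then passes it to the apps.
sub new {
    my ($class, %args) = @_;
    return bless {
        user  => $args{user},
        roles => { map { $_ => 1 } @{ $args{roles} || [] } },
    }, $class;
}
sub user     { $_[0]{user} }
sub has_role { my ($self, $role) = @_; exists $self->{roles}{$role} }

# A hypothetical app that only checks the standard object, without
# caring who authenticated the user or how.
package My::App::Foo;
sub list_invoices {
    my ($class, $auth) = @_;
    die "forbidden\n" unless $auth->has_role('accounting');
    return "invoices for " . $auth->user;
}

package main;
my $auth = My::AuthInfo->new( user => 'octavian', roles => ['accounting'] );
print My::App::Foo->list_invoices($auth), "\n";
```

The same My::App::Foo code then runs unchanged under Catalyst, under a cron job, or behind a WxPerl GUI, because only the construction of the auth object differs.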
Anyway, the question about a common format for the data the model returns to
the view remains, and because converting the data structures returned by the
underlying modules could degrade performance, it may not be a good approach.
I also suspect there are many developers who like the very constrained style
of other web frameworks, which accept a single ORM and a single templating
system, and who don't even think about decoupling the app from the web
framework...
Just thoughts.... Yeah I know, patches welcome. :-)
My idea was that Catalyst would call a method in the new model layer and
possibly get a DBIC object back. There is concern from some at my meeting that
we don't want to give the Catalyst app developer a "raw" DBIC object and that
we should wrap it (as it appears you are doing, Jason) in yet another object.
That is, we want to allow $user->first_name, but not $user->search_related or
$user->delete.
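One cheap way to build such a wrapper in Perl is an AUTOLOAD proxy with a whitelist of accessors. This is only a sketch -- Fake::Row stands in for a DBIC result, and the allowed-method list is invented:

```perl
package My::ReadOnlyUser;
use strict;
use warnings;
use Carp ();

# Wrap a row object (e.g. a DBIC result) and expose only a fixed set
# of read accessors -- no search_related, no delete.
my %ALLOWED = map { $_ => 1 } qw(first_name email);

sub new { my ($class, $row) = @_; bless { row => $row }, $class }

our $AUTOLOAD;
sub AUTOLOAD {
    my $self = shift;
    ( my $method = $AUTOLOAD ) =~ s/.*:://;
    return if $method eq 'DESTROY';
    Carp::croak("'$method' is not part of the public API")
        unless $ALLOWED{$method};
    return $self->{row}->$method;    # delegate to the real row
}

# A stand-in for a DBIC row, just for illustration.
package Fake::Row;
sub new        { bless { first_name => 'Bill', email => '[email protected]' }, shift }
sub first_name { $_[0]{first_name} }
sub email      { $_[0]{email} }
sub delete     { die "should never be reachable through the wrapper" }

package main;
my $user = My::ReadOnlyUser->new( Fake::Row->new );
print $user->first_name, "\n";                               # works
print eval { $user->delete } ? "deleted" : "blocked", "\n";  # blocked
```

The whitelist is one hash per wrapper class rather than one method per accessor, which reduces (but does not remove) the per-result-class boilerplate Bill objects to below.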
That requires writing new wrapper classes for every possible result -- not
just mirroring DBIC's result classes, but possibly many more, because the new
model might have multiple calls (with different access levels) for fetching
user data. That is, $user->email might be allowed on a user returned by some
model methods but not on a user returned by others.
Frankly, to me this seems like a lot of code and work and complexity just to
prevent another developer from doing something stupid -- which we cannot
prevent anyway. And smart programmers can get at whatever they want,
regardless. Seems more risky to make the code more complex and thus harder to
understand. The cost/benefit ratio just doesn't seem that great.
***
Yep, partly to keep the developer from doing something stupid, but also to
keep the application from depending so heavily on the underlying model --
DBIC, for example.
So if the team later decides to replace DBIC with something else, they should
be able to keep using $user->email without changing the controller or the
views.
But in this style of work (fat models, thin controllers), most of the code
lives in the model anyway. So whether the DBIC model or the business model
holds the bigger share of the code, replacing DBIC with something else would
mean a lot of work if the new underlying module has a totally different
interface than DBIC.
It then matters much less whether the developer also has to change a few
lines of code in the controller and/or the templates.
And that is the theory -- I wonder how many times, in practice, a team has
actually decided to replace DBIC with another ORM, or with another
source/destination of data. I suspect that if they did decide to do that, it
would be easier to rewrite the entire application.
As I said above, making an app whose interface is totally decoupled would be
wonderful, but only if it doesn't cost much performance, which I doubt. There
would also need to be a standard interface defined for Perl programs, one
that is widely accepted and lets the developer choose to publish the app with
Catalyst or with any other web framework that accepts that interface. But
that is complicated, because the interface would depend on the app, would be
less flexible, and might degrade performance.
Am I missing something?
I suppose this is not unlike the many discussions about what to pass to the
view. Does the controller, for example, fetch a user object and pull the data
required for the view into a hash and then pass that to the view? Or does the
controller just fetch a user object and pass that directly to the view to
decide what needs to display?
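The two styles can be sketched with a plain hashref standing in for $c->stash (no real Catalyst context here; the field names are invented):

```perl
use strict;
use warnings;

# A user record with one field the view must never see.
my $user = {
    first_name => 'Bill',
    email      => '[email protected]',
    password   => 'secret',
};

# Style 1: the controller flattens exactly the fields the view may use.
my $stash_flat = {
    first_name => $user->{first_name},
    email      => $user->{email},
};

# Style 2: the controller passes the whole object and lets the view
# decide what to display (and trusts it not to show too much).
my $stash_obj = { user => $user };

print join( ',', sort keys %$stash_flat ), "\n";
```

Style 1 makes the allowed data explicit at the cost of repeating field lists in every action; style 2 keeps the controller thin, which is the trade-off discussed below.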
***
As its name implies, the controller should control things. It should decide
what gets presented, not the view; the view should just present the data
offered by the controller.
The view should not be able to present something which is not allowed. But if
many things are allowed, then the controller could offer all of them, and not
restrict the user object by creating and offering another, more limited
object. The controller should be in control, even if that control is
sometimes very limited.
I prefer just passing the object to the view. The controller code is much
cleaner, and when the view needs to change you don't also need to change the
controller. And when there's a different view (like an API or mobile), the
same controller action can be used.
***
Yes, I also prefer it that way, because I usually don't need many
restrictions. But sometimes the view should not get too much data: the view
could be, say, a WxPerl app at a remote location, which can't receive a live
object and call methods on it. It has to receive a serialized string instead,
which should not be too big, for a faster transfer, and in that case the
controller should choose to offer a smaller, serialized object.
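For that remote-view case, the controller can serialize just the fields the view needs, e.g. with the core JSON::PP module (the field names here are invented):

```perl
use strict;
use warnings;
use JSON::PP ();

# A record with fields that must not leave the server, or that are
# too heavy to ship to a remote client.
my $user = {
    first_name => 'Bill',
    email      => '[email protected]',
    password   => 'secret',        # must stay on the server
    big_blob   => 'x' x 10_000,    # too heavy for the wire
};

# Copy only the whitelisted fields, then serialize.
my %wanted  = map { $_ => $user->{$_} } qw(first_name email);
my $payload = JSON::PP->new->canonical->encode( \%wanted );

print $payload, "\n";   # {"email":"[email protected]","first_name":"Bill"}
```

The remote WxPerl view then decodes the string on its side, and the wire carries only the two small fields.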
Octavian
_______________________________________________
List: [email protected]
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/[email protected]/
Dev site: http://dev.catalyst.perl.org/