On Friday, January 13, 2017 at 8:21:48 AM UTC-8, Anthony wrote:
>>> You can use the DAL from modules as well. As Niphlod suggested, you can
>>> also use the scheduler to schedule and run a task in the context of another
>>> app, though that might not be as fast as you'd like, as even setting
>>> immediate=True, it could take up to "heartbeat" seconds for the worker to
>>> pick up and execute the task, and then you have to check for the results.
>> I can use the DAL, but can I use the "db" object defined in my models
>> file? Or do you mean I could use the DAL if I rewrote my other app to use
>> a "model-less" approach where the models were defined in some other manner?
> Yes, the point was that you can put whatever you want into modules, which
> means the code can be shared across apps. If you want to share the code to
> create a database connection object, then put that code in a module. Here
> is the basic idea:
> import os
> from gluon import *
>
> def dal(**kwargs):
>     web2py_folder = current.request.env.applications_parent
>     db_folder = os.path.abspath(os.path.join(web2py_folder, 'applications',
>                                              'app1', 'databases'))
>     return DAL('sqlite://storage.sqlite', folder=db_folder, **kwargs)
>
> def define_tables(db):
>     db.define_table('table1', Field('field1'))
>
> from applications.app1.modules.db import dal, define_tables
> app1_db = dal()
> # Now use app1_db as usual.
> Of course, the module defining the database connection and tables doesn't
> have to be in the /modules folder of any particular application, nor does
> the folder specified as the "folder" argument -- that can all be
> centralized somewhere if desired.
Okay. So I've been experimenting with different approaches here.
In your example above, it's one model defining tables from another model.
But in my case, what I have is a controller function that needs access to
the db, which is defined in the model file. But I found out the hard way
(and later also from an old thread) that exec_environment does not load
models. So this puts me kind of back to where I was.
The simplest solution I found was to put, in the controller file, code that
looks like this:
    db = db_definer.define_db()
Then every time my controller is run, it will redefine the DB, using the
code set up in the other app's module to do this in a consistent way (along
the lines you suggested).
What I don't like about this is that it means the DB-setup code is run
multiple times on a "normal" request. That is, if I have code to set up
the DB in my model file, and I visit my db-accessing controller, it runs
the code in the model file to set up the DB, then runs the code again when
it runs the controller file. I imagine this is not a huge penalty, but it
still seems wasteful.
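One way to avoid running the setup twice would be to memoize the connection at module level. This is only a sketch of the guard pattern, not web2py-specific code: `get_db` and `_cached_db` are names I made up, and `define_db` stands in for db_definer.define_db from the other app's module. Whether caching a DAL connection across requests is actually safe depends on the adapter and pooling settings, so treat this as an illustration of the guard, not a recommendation.

```python
# Minimal memoization sketch: run the (hypothetical) define_db factory
# only once per process instead of on every call.
_cached_db = None

def get_db(define_db):
    """Return the cached connection, creating it on first use.

    `define_db` stands in for db_definer.define_db from the other app.
    """
    global _cached_db
    if _cached_db is None:
        _cached_db = define_db()
    return _cached_db
```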
I tried the suggestion from that old thread to use gluon.shell.env, but
that did not work for me. gluon.shell.env seems to have two main
problems. First, it stomps on the "current" object, causing the
newly-created environment to have a new request/response/etc. and losing
the old ones from the original request. Second, the new request apparently
isn't considered as an HTTPS request, which caused problems for me because
my models file has request.requires_https().
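For the first problem, one possible defense would be to snapshot the relevant attributes of `current` before calling gluon.shell.env and restore them afterwards. Sketched generically below, since it needs web2py to run for real: `preserve_attrs` is a helper name I invented, not part of web2py.

```python
# Generic save/restore sketch: snapshot selected attributes of an object
# (e.g. web2py's thread-local `current`) before code that may overwrite
# them, and put them back afterwards.
class preserve_attrs:
    def __init__(self, obj, names):
        self.obj = obj
        self.names = names
        self.saved = {}

    def __enter__(self):
        # Snapshot current values (None if the attribute is unset; note
        # that restore will then set it to None rather than delete it).
        self.saved = {n: getattr(self.obj, n, None) for n in self.names}
        return self.obj

    def __exit__(self, exc_type, exc, tb):
        # Restore the snapshot even if the body raised.
        for name, value in self.saved.items():
            setattr(self.obj, name, value)
        return False

# Intended use (requires web2py, so shown only as a comment):
# with preserve_attrs(current, ['request', 'response', 'session']):
#     other_env = gluon.shell.env('app1', import_models=True)
```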
I do have a couple questions about this: First, am I right that the
performance impact of re-running the DAL object creation is negligible? Is
it something I should be concerned about? Second, is there anything wrong
with using request.requires_https() in the top-level code of a model file?
This part of the codebase was done by someone else, so if that is a misuse
of web2py I could potentially try to get it changed. I think the intent
was to ensure that every single request must be HTTPS. The problem is that
I don't want these internal cross-app calls to be considered "requests"; I
just want them to be calls to controller functions, so anything like
requires_https should be ignored (or just proxied to the "real" request
which is making this sub-call to another app.)
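If the model code can be changed, one possible compromise is to make the HTTPS check conditional on a flag that internal calls set. This is a hypothetical sketch: `is_internal_call` is not a web2py attribute, just a marker the caller would set on the request before invoking the other app's code.

```python
# Hypothetical sketch: skip the HTTPS redirect for internal cross-app
# calls.  `is_internal_call` is an invented flag, not part of web2py.
def enforce_https(request):
    if not getattr(request, 'is_internal_call', False):
        request.requires_https()
```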
The upshot of all this appears to be that there is no real way to get
web2py to give me access to the model/controller combination without an
HTTP request. In web2py the models and controllers are tightly coupled to
the HTTP request. I find this somewhat irritating, as to my mind in an MVC
framework the models and controllers shouldn't be so closely tied to the
transport mechanism. That is, I should be able to say "use this model and
this controller and give me the data", without having to involve HTTP at
all. It seems that, in web2py, this coupling is encouraged because
controller functions directly access objects like request and response to
get their arguments, rather than having a separate "routing" mechanism that
maps HTTP request parameters to Python functions. (This is what the
"Service" function does, but it requires a level of indirection with a
"call" entry point.)
In essence, the way I think of it, when a request comes in, it is routed to
a controller function. That controller then runs along with its associated
models, and the controller function is called. It returns some data, which
is passed to a view. What I would like is the ability to do just the
middle part of that: take a controller, load its models, and run a
function. No HTTP. No request. No response. No network. No view. No
nothing. Just the controller and the model. For now I have a way to do
it, but it involves duplicating the model-controller linkage that web2py
already does (by "manually" recreating the DAL object from within the
controller).
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)