Hi,
I've discovered what I believe to be a strange behaviour, but it might be
my mistake.
So, given two tables, for example:
db.define_table('group_of_people',
    Field('name', 'string'),
)
db.define_table('person',
    Field('name', 'string'),
    Field('age', 'integer'),
    Field('group_of_people', 'reference group_of_people'),
)
If I do a select with a join that does not include the "id" field, and then
convert each row to a dict like so:
def data():
    persons = db((db.person.id > 0) &
                 (db.group_of_people.id == db.person.group_of_people)).select(
        db.person.name, db.person.age, db.group_of_people.name)
    parsed_people = {'data': []}
    for person in persons:
        p = {}
        logger.debug("tests::data - person: %s" % pformat(person))
        for table in person:
            logger.debug("tests::data - table: %s" % pformat(table))
            logger.debug("tests::data - table data: %s" % pformat(person[table]))
            for field in person[table]:
                logger.debug("tests::data - %s: %s" % (field, pformat(person[table][field])))
                p[field] = person[table][field] if person[table][field] is not None else ''
        logger.debug("tests::data - p: %s" % pformat(p))
        parsed_people['data'].append(p)
    return json.dumps(parsed_people)
then each row contains only the fields I requested, grouped by table, and I
get output like:
{"data": [{"age": 1, "name": "person 1"}, {"age": 2, "name": "person 2"},
... }
And in the log, everything is normal:
DEBUG:tests:tests::data - person: <Row {'group_of_people': {'name': 'group2'}, 'person': {'age': 1L, 'name': 'person 1'}}>
DEBUG:tests:tests::data - table: 'group_of_people'
DEBUG:tests:tests::data - table data: <Row {'name': 'group2'}>
DEBUG:tests:tests::data - name: 'group2'
DEBUG:tests:tests::data - table: 'person'
DEBUG:tests:tests::data - table data: <Row {'age': 1L, 'name': 'person 1'}>
DEBUG:tests:tests::data - age: 1L
DEBUG:tests:tests::data - name: 'person 1'
DEBUG:tests:tests::data - p: {'age': 1L, 'name': 'person 1'}
But if I add the db.person.id field to the select, this breaks: the row
suddenly contains extra entries such as update_record and delete_record, and
even references to rows that reference the table. The json.dumps call then
fails because it receives Row objects, which are not serializable. (That
failure does not show up in this simple example, but it does with more
complex joins over multiple tables; I can try to reproduce it in a test to
include here if necessary.)
The log becomes:
DEBUG:tests:tests::data - person: <Row {'group_of_people': {'name': 'group2'}, 'person': {'age': 1L, 'id': 1L, 'name': 'person 1'}}>
DEBUG:tests:tests::data - table: 'group_of_people'
DEBUG:tests:tests::data - table data: <Row {'name': 'group2'}>
DEBUG:tests:tests::data - name: 'group2'
DEBUG:tests:tests::data - table: 'person'
DEBUG:tests:tests::data - table data: <Row {'age': 1L, 'id': 1L, 'name': 'person 1'}>
DEBUG:tests:tests::data - update_record: <pydal.helpers.classes.RecordUpdater object at 0x10e8e3810>
DEBUG:tests:tests::data - age: 1L
DEBUG:tests:tests::data - id: 1L
DEBUG:tests:tests::data - delete_record: <pydal.helpers.classes.RecordDeleter object at 0x10e8e3850>
DEBUG:tests:tests::data - name: 'person 1'
DEBUG:tests:tests::data - p: {'age': 1L,
 'delete_record': <pydal.helpers.classes.RecordDeleter object at 0x10e8e3850>,
 'id': 1L,
 'name': 'person 1',
 'update_record': <pydal.helpers.classes.RecordUpdater object at 0x10e8e3810>}
Is this expected? Do I have to restrict my loops to the columns I need, even
though I already requested exactly those columns in the select?
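For now I am working around it by whitelisting the real column names and
skipping everything else. Here is a minimal, standalone sketch of that inner
loop (plain dicts stand in for the Row objects, and the hard-coded fields
dict stands in for what would be db[table].fields in web2py):

```python
# Hypothetical data mimicking the Row contents from the log above;
# object() stands in for the RecordUpdater/RecordDeleter helpers.
row = {
    'group_of_people': {'name': 'group2'},
    'person': {'age': 1, 'id': 1, 'name': 'person 1',
               'update_record': object(), 'delete_record': object()},
}

# Real column names per table (in web2py this would be db[table].fields).
fields = {
    'group_of_people': ['id', 'name'],
    'person': ['id', 'name', 'age', 'group_of_people'],
}

p = {}
for table in row:
    for field in row[table]:
        if field not in fields[table]:
            continue  # skip update_record/delete_record and friends
        value = row[table][field]
        p[field] = value if value is not None else ''

print(p)
```

I also noticed pydal offers Rows.as_list() / Row.as_dict(); if those already
strip the helper objects, that might be a cleaner route, but I have not
verified it.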
This was tested with Version 2.14.6-stable+timestamp.2016.05.10.00.21.47 on
Python 2.7.13.
TIA,
Ricardo.