Not sure how in-depth you want to go with this, but here is an O(log
n/log log n) generalised solution to your query [pardon the pun]:

Jørgensen, Allan Grønlund. “Data Structures: Sequence Problems, Range
Queries, and Fault Tolerance”. Aarhus University, 2010.
http://cs.au.dk/fileadmin/site_files/cs/PhD/PhD_Dissertations__pdf/thesis_Allan_G_J%C3%B8rgensen.pdf.

Specifically read Chapter 4.

On Sat, Dec 15, 2012 at 12:42 AM, paolo.vall...@gmail.com
<paolo.vall...@gmail.com> wrote:
> Hi Niphlod, thanks for clarifying this for me. I will go with aggregating by
> hand, which is more portable than a database view. However, we could provide
> something like that in the web2py library, or somewhere, to speed up future
> development.
>
>
>
> 2012/12/14 Niphlod <niph...@gmail.com>
>>
>> there are no "native" database functions to group by, e.g., a quarter of an
>> hour.
>> For that kind of thing, you either aggregate by hand in python or code a
>> view directly in the database to exploit its functions (depending on the
>> engine, that could be an impossible or a semi-complicated task :-P) and
>> access it with migrate=False within web2py.
>>
>>
>> On Friday, December 14, 2012 12:53:43 PM UTC+1, Paolo wrote:
>>>
>>> Dear all,
>>> I am looking for advice on how to perform time aggregations. So far I have
>>> only been able to aggregate by hours or by minutes, but nothing in between,
>>> for example every 10 or 15 minutes. For instance, this query counts the
>>> number of records aggregated by hour:
>>> c = db.record.id.count()
>>> s = db.record.created_on.year() | db.record.created_on.month() |
>>> db.record.created_on.day() | db.record.created_on.hour()
>>> rows = db(db.record.id > 0).select(db.record.created_on, c, groupby=s)
>>> while this one aggregates by minute:
>>> c = db.record.id.count()
>>> s = db.record.created_on.year() | db.record.created_on.month() |
>>> db.record.created_on.day() | db.record.created_on.hour() |
>>> db.record.created_on.minutes()
>>> rows = db(db.record.id > 0).select(db.record.created_on, c, groupby=s)
>>> How can I make the above queries more flexible, i.e., aggregate over
>>> arbitrary time frames such as 10 minutes, 400 seconds, and so on?
>>> So far I have worked around this by running a plain query without
>>> aggregation and then counting and grouping by hand. I hope there is a
>>> better, more direct approach.
>>>
>>> Regards
>>> Paolo
>>>
>>>
> --
>  Paolo