On Sun, Jan 12, 2014 at 3:32 PM, Sameer Verma <sve...@sfsu.edu> wrote:
> On Sun, Jan 12, 2014 at 6:33 AM, Walter Bender <walter.ben...@gmail.com> 
> wrote:
>> On Fri, Jan 10, 2014 at 3:37 PM, Sameer Verma <sve...@sfsu.edu> wrote:
>>> On Fri, Jan 10, 2014 at 3:26 AM, Martin Dluhos <mar...@gnu.org> wrote:
>>>> On 7.1.2014 01:49, Sameer Verma wrote:
>>>>> On Mon, Jan 6, 2014 at 12:28 AM, Martin Dluhos <mar...@gnu.org> wrote:
>>>>>> For visualization, I have explored using LibreOffice and SOFA, but
>>>>>> neither of those was flexible enough to allow customization of the
>>>>>> output beyond a few rudimentary options, so I started looking at
>>>>>> various JavaScript libraries, which are much more powerful. Currently,
>>>>>> I am experimenting with Google Charts, which I found the easiest to
>>>>>> get started with. If I run into limitations with Google Charts in the
>>>>>> future, others on my list are the InfoVis Toolkit
>>>>>> (http://philogb.github.io/jit) and Highcharts (http://highcharts.com).
>>>>>> Then there is also D3.js, but that's a bigger animal.
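
Just to make the Google Charts option concrete: as far as I can tell, its
DataTable can be fed a JSON object of "cols" and "rows", so the
schoolserver could emit something like the sketch below. The activity
names, counts, and file name are only placeholders, not a real schema.

    import json

    # Placeholder per-activity launch counts; in practice these would come
    # from whatever the schoolserver actually collects.
    launch_counts = {"Turtle Blocks": 42, "Write": 17, "Moon": 5}

    # Google Charts' DataTable accepts a JSON object with "cols" and "rows".
    data_table = {
        "cols": [
            {"id": "activity", "label": "Activity", "type": "string"},
            {"id": "launches", "label": "Launches", "type": "number"},
        ],
        "rows": [
            {"c": [{"v": name}, {"v": count}]}
            for name, count in sorted(launch_counts.items())
        ],
    }

    with open("launches.json", "w") as out:
        json.dump(data_table, out, indent=2)

The same JSON could just as easily be handed to a locally bundled
charting library, which also speaks to the offline point below.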
>>>>>
>>>>> Keep in mind that if you want to visualize at the school's local
>>>>> XS[CE], you may have to rely on a locally hosted JS library instead
>>>>> of an online one.
>>>>
>>>> Yes, that's a very good point.  Originally, I was only thinking about 
>>>> collecting
>>>> and visualizing the information centrally, but there is no reason why it
>>>> couldn't be viewed by teachers and school administrators on the 
>>>> schoolserver
>>>> itself. Thanks for the warning.
>>>>
>>>>
>>>
>>> In fact, my guess would be that what the teachers and principal want
>>> to see at the school will be different from what OLE Nepal and the
>>> government would want to see, with interesting overlaps.
>>
>> You left out one important constituent: the learner. Ultimately we are
>> responsible for making learning visible to the learner. Claudia and I
>> touched on this topic in the attached paper.
>>
>
> Thanks for the paper. While we did point to the Portfolio and Analyze
> Journal activities in our session at the OLPC SF Summit in 2013, I
> didn't include them in the scope of the blog post. I'll go back and
> update it when I get a chance.
>
>> Just to place all my cards on the table, as much as I hate to suggest
>> we head down this route, I think we really need to instrument the
>> activities themselves (and build analyses of activity output) if we
>> want to provide meaningful statistics about learning. We've done some
>> of this with Turtle Blocks, even capturing the mistakes the learner
>> makes along the way. We still lack decent visualizations of these
>> data, however.
>>
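
By "instrument," I mean something as lightweight as the sketch below.
This is not how Turtle Blocks actually logs; the event names and file
location are placeholders, and a real activity would more likely write
into the Journal entry's metadata.

    import json
    import os
    import time

    # Placeholder location, chosen only for this sketch.
    LOG_DIR = os.path.expanduser("~/.sugar/default/analytics")

    def log_event(activity, event, detail=None):
        """Append one timestamped event record for later analysis."""
        if not os.path.isdir(LOG_DIR):
            os.makedirs(LOG_DIR)
        record = {"ts": time.time(), "activity": activity,
                  "event": event, "detail": detail}
        with open(os.path.join(LOG_DIR, activity + ".log"), "a") as log:
            log.write(json.dumps(record) + "\n")

    # Capturing a mistake along the way might look like:
    # log_event("turtleblocks", "run_error", "division by zero in a repeat block")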
>
> I haven't had a chance to read the paper in depth (which I intend to
> do this afternoon), but how much of this approach would be shareable
> across activities? Or would the depth of analysis be on a per-activity
> basis? If the latter, then I'd imagine it would be simpler for
> something like the Moon activity than for the Turtle Blocks activity.
>
>> Meanwhile, I remain convinced that the portfolio is our best tool.
>>
>
> I think the approaches differ in scope and purpose. In the RFPs I've
> been involved in, the funding agencies and/or the decision makers
> either request or outright require "dashboard style" features to
> report frequency of use, time of day, and in some cases even GPS-based
> location in addition to theft-deterrence, remote provisioning, etc.
> The same goes for going back to an agency to get renewed funding or to
> raise funds for a new site expansion. In a way, the scope of the
> "learner<->teacher" bubble is significantly different from that of the
> "principal<->minister of edu". One is driven by learning and pedagogy,
> while the other is driven by administration. Accordingly, the reports
> they want to see are also different. While the measurements from the
> Activity may be distilled into coarser indicators for the MoE, I think
> it is important to keep the entire scope in mind.

Don't get me wrong: satisfying the needs of funders, administrators,
etc. is important too. They have metrics that they value, and we should
gather those data as well. My earlier post was just to suggest that,
ultimately, we need to consider the learner and how making learning
visible can be of use. That theme seemed to be missing from the earlier
discussion.
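
To picture the "distilled into coarser indicators" step you describe,
here is a rough sketch that rolls per-event records (in the log format
sketched above) up into the frequency-of-use and time-of-day numbers a
dashboard would want. The "launch" event name is again only a
placeholder.

    import json
    from collections import Counter
    from datetime import datetime

    def summarize(log_path):
        """Roll raw event records up into dashboard-style indicators."""
        per_activity = Counter()
        per_hour = Counter()
        with open(log_path) as log:
            for line in log:
                record = json.loads(line)
                if record["event"] != "launch":
                    continue
                per_activity[record["activity"]] += 1
                per_hour[datetime.fromtimestamp(record["ts"]).hour] += 1
        return {"frequency_of_use": dict(per_activity),
                "time_of_day": dict(per_hour)}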

>
> I am mindful of the "garbage in, garbage out" problem. In building
> this pipeline (which is where my skills lie), I hope that the data
> going into it are representative of what is measured at the child's
> end. I am glad that you and Claudia are the experts on that end :-)
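
On the garbage-in, garbage-out point: the pipeline could at least refuse
records that don't match whatever schema we agree on before they pollute
the aggregates. A minimal sketch, with the required fields being only my
guess:

    # Assumed required fields; the real schema is whatever we agree on.
    REQUIRED_FIELDS = {"ts": (int, float), "activity": str, "event": str}

    def is_valid(record):
        """Reject malformed records before they enter the aggregates."""
        for field, expected in REQUIRED_FIELDS.items():
            if field not in record or not isinstance(record[field], expected):
                return False
        return record["ts"] > 0

    def clean(records):
        return [r for r in records if is_valid(r)]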
>
> cheers,
> Sameer
>
>> regards.
>>
>> -walter
>>
>>
>>>
>>> cheers,
>>> Sameer
>>
>>
>>
>> --
>> Walter Bender
>> Sugar Labs
>> http://www.sugarlabs.org



-- 
Walter Bender
Sugar Labs
http://www.sugarlabs.org
