December 10, 2013
New Publications in November
Even though I’m no longer working at the university, November was an excellent month publication-wise. Two conference papers written in the spring were published at Koli Calling 2013, while a long journal paper process finally ended in a published article in ACM Transactions on Computing Education. The journal paper is titled “A Review of Generic Program Visualization Systems for Introductory Programming Education” (ACM, pdf) and was written with Juha Sorva and Lauri Malmi:
This article is a survey of program visualization systems intended for teaching beginners about the runtime behavior of computer programs. Our focus is on generic systems that are capable of illustrating many kinds of programs and behaviors. We inclusively describe such systems from the last three decades and review findings from their empirical evaluations. A comparable review on the topic does not previously exist; ours is intended to serve as a reference for the creators, evaluators, and users of educational program visualization systems. Moreover, we revisit the issue of learner engagement, which has been identified as a potentially key factor in the success of educational software visualization, and summarize what little is known about engagement in the context of the generic program visualization systems for beginners that we have reviewed; a proposed refinement of the frameworks previously used by computing education researchers to rank types of learner engagement is a side product of this effort. Overall, our review illustrates that program visualization systems for beginners are often short-lived research prototypes that support the user-controlled viewing of program animations; a recent trend is to support more engaging modes of user interaction. The results of evaluations largely support the use of program visualization in introductory programming education, but research to date is insufficient for drawing more nuanced conclusions with respect to learner engagement. On the basis of our review, we identify interesting questions for future research to answer in relation to themes such as engagement, the authenticity of learning tasks, cognitive load, and the integration of program visualization into introductory programming pedagogy.
The first of the conference papers is “How to study programming on mobile touch devices: interactive Python code exercises” (ACM), written with Petri Ihantola and Juha Helminen:
Scaffolded learning tasks where programs are constructed from predefined code fragments by dragging and dropping them (i.e. Parsons problems) are well suited to mobile touch devices, but quite limited in their applicability. They do not adequately cater for different approaches to constructing a program. After studying solutions to automatically assessed programming exercises, we found that many different solutions are composed of a relatively small set of mutually similar code lines. Thus, they can be constructed using the drag-and-drop approach if only it were possible to edit some small parts of the predefined fragments. Based on this, we have designed and implemented a new exercise type for mobile devices that builds on Parsons problems and falls somewhere between their strict scaffolding and full-blown coding exercises. In these exercises, we can gradually fade the scaffolding and allow programs to be constructed more freely, so as not to restrict thinking and limit creativity too much, while still making sure we are able to deploy them to small-screen mobile devices. In addition to the new concept and the related implementation, we discuss other possibilities for how programming could be practiced on mobile devices.
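To give a rough idea of what such partially editable Parsons problems could look like, here is a minimal sketch in Python. The fragment list, the `??` blank token, and the exec-based check are my own illustration of the general concept, not the implementation described in the paper:

```python
# Hypothetical sketch: a Parsons problem whose predefined fragments
# contain small editable blanks, marked with the token "??".
BLANK = "??"

fragments = [
    "def total(numbers):",
    "    result = ??",               # learner edits the initial value
    "    for n in numbers:",
    "        result = result + ??",  # learner edits the accumulator step
    "    return result",
]

def assemble(order, fills):
    """Build the program from fragments in the learner-chosen order,
    substituting the learner's text for each editable blank in turn."""
    lines, fill_iter = [], iter(fills)
    for index in order:
        line = fragments[index]
        while BLANK in line:
            line = line.replace(BLANK, next(fill_iter), 1)
        lines.append(line)
    return "\n".join(lines)

# The learner drags the fragments into order and fills both blanks.
program = assemble(order=[0, 1, 2, 3, 4], fills=["0", "n"])

# Simple automatic assessment: run the assembled code and test behavior.
namespace = {}
exec(program, namespace)
assert namespace["total"]([1, 2, 3]) == 6
print("exercise solved")
```

The appeal of the format is that the drag-and-drop scaffolding does most of the structural work, while the blanks leave just enough room for the learner's own thinking on a small touch screen.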
The second Koli paper was written by the same trio and is titled “Recording and analyzing in-browser programming sessions” (ACM):
In this paper, we report on the analysis of a novel type of automatically recorded detailed programming session data collected on a university-level web programming course. We present a method and an implementation of collecting rich data on how students learning to program edit and execute code and explore its use in examining learners’ behavior. The data collection instrument is an in-browser Python programming environment that integrates an editor, an execution environment, and an interactive Python console and is used to deliver programming assignments with automatic feedback. Most importantly, the environment records learners’ interaction within it. We have implemented tools for viewing these traces and demonstrate their potential in learning about the programming processes of learners and of benefiting computing education research and the teaching of programming.
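For the flavor of what such a session trace might contain, here is a minimal Python sketch; the event kinds, field names, and JSON layout are my own assumptions for illustration, not the environment's actual trace format:

```python
# Hypothetical sketch: recording timestamped edit/run/console events
# from a programming session and serializing them for later analysis.
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class Event:
    kind: str         # e.g. "edit", "run", "console" (assumed event types)
    timestamp: float  # seconds since the epoch
    payload: dict     # kind-specific data, e.g. a snapshot of the code

@dataclass
class SessionRecorder:
    student: str
    exercise: str
    events: list = field(default_factory=list)

    def record(self, kind, **payload):
        """Append one interaction event to the session trace."""
        self.events.append(Event(kind, time.time(), payload))

    def dump(self):
        """Serialize the whole trace, e.g. for sending to a course server."""
        return json.dumps(
            {"student": self.student,
             "exercise": self.exercise,
             "events": [asdict(e) for e in self.events]})

# Example session: the learner edits code, runs it, and tries the console.
rec = SessionRecorder(student="s123", exercise="week1/total")
rec.record("edit", code="def total(ns): return sum(ns)")
rec.record("run", result="ok")
rec.record("console", input="total([1, 2])", output="3")
print(rec.dump())
```

Traces like this can then be replayed or aggregated to study, for example, how often learners run their code between edits or where they get stuck.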
As soon as these become available in the ACM Author-Izer service, you’ll be able to download the PDFs from my publications page.