vendredi 13 avril 2012

Science 2.0, is it a new practice or the leveraged version of an old one?

Retrieved from a post by Nicolas Balacheff on the SOA scientific portal on September 25th, 2009

Since the beginning of its history, scientific research has been a social activity with a large place given to communication, debate and collaboration. There is abundant evidence of this social dimension of scientific activity, including famous and extensive exchanges of letters. The development of the publishing industry and business leveraged the capacity to disseminate research results, questions and debates. The impact of IT-based tools has accelerated the phenomenon, but has not changed its meaning or its scientific raison d'être. The real revolution lies in today's capacity to share not only theories and results, but also instruments and data, to an extent never experienced before. This is the core of Science 2.0. As Barbara Kieslinger and Stefanie Lindstaedt emphasise in an analysis of Science 2.0 practices, in our field it means "the possibility for researchers to share lab results, protocols, class activities, etc." (p.2). We fully agree.

But there is one point worth discussing. The authors mention (p.1) the effort invested in "publishing one's ideas". The word "idea" deserves some attention. Scientific activity, in my opinion, is less about sharing ideas than about agreeing on results which can be turned into shareable knowledge. This means that the issue is to discuss the rationale and the argumentation (if not proof) supporting the claimed results. Sharing ideas leads to an overemphasis on social interactions and concerns about ownership; sharing results would call for paying more attention to the data, the theories and the methods we use, and would downplay the issue of ownership (which actually can never be avoided, e.g. polemics about priority). If sharing and publishing ideas is our core business, then I understand those who fear theft (plagiarism) and vandalism. If we privilege the sharing of results, the risk is smaller but the challenge more difficult to take up, because it requires some consensus on the theoretical frameworks and the related methods. Indeed, by results I mean the outcome of work advanced enough that it makes sense to share it (not unfinished work, see p.2 sec. 3).

The discussion on TEL Science 2.0 is actually a discussion about our scientific practice (whatever the technology): What do we publish? What does it mean to publish "ideas" and "unfinished" (and sometimes not started) projects? Data cannot be published without being documented; what can we say here, and what are the conditions for sharing data in our domain?

In the end the problem is less to open up our research than to demonstrate, by reaching reasonable theoretical and technological consensus, that it produces something tangible and not only discourse. The risk of Science 2.0 is exactly that: increasing and accelerating the production of discourse at the price of neglecting the production of high-quality results and the development of the TEL knowledge base.

A note after the reading of: Kieslinger B., Lindstaedt S. N. (2009) Science 2.0 Practices in the Field of Technology Enhanced Learning. In: Science 2.0 for TEL Workshop. EC-TEL, Nice, France.

dimanche 8 avril 2012

The TEL Dictionary initiative at the MEI spring school

Jointly held with the first Medical Education Informatics (MEI2012) conference, the Medical Education Content Sharing Technologies Spring School included in its programme a presentation of the TEL Dictionary initiative. A slide show introduced the project; participants were then invited to react and comment (see the report here).

vendredi 6 avril 2012

Questions from the MEI2012 Spring school about the TEL Dictionary initiative

About 20 PhD students and senior researchers from different disciplines participated in the TEL Dictionary session of the Medical Education Content Sharing Technologies Spring School held in Thessaloniki on April 5th. After a short presentation of the TEL Dictionary initiative, participants were invited to scan the current lists of terms and expressions included in the TEL Thesaurus, in order to make remarks and suggestions and express their own priorities. Here are the results and some comments.

Participants expressed their wish to see in the list terms and expressions from disciplines which provide TEL research with important concepts, namely: Connectionism, Connectivism, Case-based learning, Community of practice, Active learning, Interactive learning, Worked examples, Digital literacy. These come from the learning sciences. Only one term from computer science was suggested: Intelligent agents. Worth emphasizing is that none of these terms is specific to TEL research; they point to concepts and theories from education and psychology that researchers need. So here is the needed extension for the next release of the thesaurus.

Four expressions from the thesaurus were singled out as deserving priority: Distributed learning, Game-based learning, Ubiquitous learning, Collaborative learning. This corresponds well to one of the prominent streams of communication at the MEI 2012 conference: the Internet as the place where content is shared and learning communities are emerging.

Then, three questions:

Why is "constructionism" in the thesaurus and not "constructivism"?
Both terms are used as keywords to tag papers in the TeLearn open archive, hence both could have been in the first version of the thesaurus. However, "constructivism" is one of the big concepts in psychology, for which it is rather easy to find well-documented definitions. Since the strategy is to develop the thesaurus incrementally, this term was not included at the first stage. "Constructionism" is a term coined by S. Papert as a response, within the Logo framework, to the limitations found in referring only to "constructivism" (one of the foundational references of Logo). It is thus a term specifically introduced in TEL research, and hence we took it (see the definition prepared by Richard Noss).

Why is "Virtual campus" in the thesaurus? It seems to be a direct translation of a French expression (campus virtuel) and not a genuine English keyword.  
It is true that "campus virtuel" is a keyword in the French TEL research area. However, "virtual campus" is a Wikipedia entry, where it is defined as "the online offerings of a college or university where college work is completed either partially or wholly online, often with the assistance of the teacher, professor, or teaching assistant." A quick look at Scholar shows that this expression has been rather popular internationally for quite a while. As suggested by the participant, there is also the expression "digital campus", which looks rather close and possibly more English. But maybe we have to be cautious with such impressions and take the time to go back to the literature to check the claim against the evidence.

One should notice that "teaching" is not in the thesaurus, why?
To some extent it is a curious fact that the word "teaching" is not present. There is the word "tutor", which suggests that teaching is not completely foreign to the TEL research area, as it were. But it is true that the word is not in the set of keywords from which we started -- those of the TeLearn repository and of a questionnaire to the community. One reason may be that the focus on learning and the learner tended to push aside teaching, if not the teacher. This is reflected in the keywords chosen by researchers, even if they use the word in their writings. Another reason may be that in English "learning" can carry some meaning of teaching (as in French, where you can say "les élèves apprennent l'anglais" but also "j'apprends l'anglais à mes élèves").

What would make social software a Science 2.0 tool?

Retrieved from a post by Nicolas Balacheff on the SOA scientific portal on September 25th, 2009

Moving "away from managing generic individual networks to managing contextual shared spaces", Graaasp seems to be a smart tool to shape a learning community, be it in TEL or in another domain. If I understand correctly, its key characteristics are the richness of the support for social interactions on top of content, the proactive support for establishing links, and the dynamics of roles within the community.

Reading a scenario of use of Graaasp, I wondered what would be the specific added value of this environment for a researcher in TEL. I came to think that it is not its versatility, in resonance with a young domain which rapidly changes, moves, evolves. It is not its openness to the variety of disciplines and competences in a multidisciplinary domain. It might be its capacity to dynamically create a common knowledge base as a side effect, if I may say so, of the creation of trusted communities and working groups. Indeed, there are other domains which are young, not well established and multidisciplinary. But I wonder whether there are other domains in which there is so little agreement on the theoretical and methodological frameworks, such uncertainty about what is known and accepted, such reluctance to build a common knowledge base -- if not sometimes serious doubt about the fact that there can be "results" in a scientific sense in TEL.

We know how to build FAQs from the analysis of a flow of questions and answers; can we build a knowledge base from the analysis of students' queries and the feedback and support from knowledgeable others -- supervisors, senior researchers or peers? Would Graaasp be instrumental in doing that? If so, I would see it as a fully-fledged Science 2.0 infrastructure, which is more than being software supporting the construction and shaping of a community from a social perspective.
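The FAQ-building process alluded to above can be sketched in a few lines. The following is a minimal, hypothetical illustration (none of it comes from Graaasp or any cited system): it groups near-duplicate questions from a flow of question/answer exchanges into FAQ entries, using a simple string-similarity heuristic.

```python
from difflib import SequenceMatcher

def build_faq(exchanges, threshold=0.6):
    """Group near-duplicate questions from a flow of (question, answer)
    pairs into FAQ entries: (representative question, list of answers)."""
    faq = []
    for question, answer in exchanges:
        for entry in faq:
            # crude similarity test between the new question and an entry
            if SequenceMatcher(None, question.lower(),
                               entry[0].lower()).ratio() >= threshold:
                entry[1].append(answer)
                break
        else:  # no similar question found: open a new FAQ entry
            faq.append((question, [answer]))
    return faq

# Toy flow of exchanges (invented for illustration)
exchanges = [
    ("How do I share my data?", "Deposit it in the repository."),
    ("How can I share my data?", "Use the open repository."),
    ("What licence should I choose?", "An open licence is recommended."),
]
faq = build_faq(exchanges)
for question, answers in faq:
    print(question, "->", len(answers), "answer(s)")
```

A real system would of course need semantic rather than string similarity, and some editorial curation of the representative answer; the sketch only shows the shape of the aggregation step.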

A note after the reading of: Gillet D., El Helou S., Joubert M., Sutherland R. (2009) Science 2.0: Supporting a Doctoral Community of Practice in Technology Enhanced Learning using Social Software. Science2.0 for TEL Workshop. EC-TEL

lundi 26 mars 2012

They play. So far, so good. But, what do they learn?

Retrieved from a post by Nicolas Balacheff on the SOA scientific portal on February 26th, 2010
 
Martin Oliver and Caroline Pelletier's contribution to the edited book "Digital Generations" is quite interesting and stimulating, taking up the challenge of contributing to our effort to understand "what, if anything, people are learning by playing games" (p.69). Their contribution is based on activity theory, referring primarily to Vygotsky, built on the system formed by the Tool as a mediator between the Subject and the Object (the latter meaning the intention of the subject), and on its contemporary extension by Engeström and others, which takes into account the social determination of both the Subject and (his/her) Object(ive). The authors make the relevant remark that taking a systemic perspective means that properties which may be identified cannot be ascribed to the Subject as an isolated part of that system. This raises a theoretical and methodological difficulty when the problématique is to understand learning (or the semantics/meaning the Subject attaches to a behaviour). Meeting this difficulty with the cK¢ model [*], I solved it (if I may say so) by considering what could be seen as the projection of the system model onto one of its components, the learner (or onto the Tool). In the case of cK¢ this led me to propose the (P, R, L, Σ) quadruplet to model the learner's conception (which could be mirrored by a quadruplet of the same kind to model the Tool). So, clearly, I am interested in the method of analysis which the authors propose in order to operationalize the theory.

Then, looking precisely at the proposed methodology, I see a few issues which may be interesting to discuss: contradiction, action/operation and, in the end, the reference to learning and the related question "what is learned?"

Contradiction is a difficult concept to manipulate from a methodological point of view. As Piaget analysed it, contradiction exists only if there is a witness of its existence, and it can be noticed only if there is an explicit awareness of an objective or an expectation. So there may be a contradiction from the point of view of the observer and not from the point of view of the Subject. How to decide on that? Which observed behaviours can inform the observer? These are difficult questions, but critical ones when learning is at stake (as pointed out by the authors). So we cannot diagnose a contradiction if there is no evidence that it exists for the Subject, and hence if we cannot state what the Object is from the Subject's point of view. This raises a new question: is the Object what the designers, the researchers or the observers claim it to be? This question, which is important for modelling the game-playing activity, is indeed critical from a learning perspective (it is directly related to identifying learning outcomes). The authors acknowledge in the discussion section, in relation to the interpretation and classification of observed behaviours, that such claims "are difficult to justify without assuming (rather than knowing) the intention of the player" (p.83). My own position is that this is a central issue for learning and that our research effort must start from that point: an explicit hypothesis on the learner's intention.

The delicate distinction between action and operation could be better addressed if it were contextualised by such a claim about the intention of the Subject or the Object of the activity. The authors express their expectation of progress if a finer-grained reading of the actions or the behaviours (e.g. eye tracking) were possible. My claim is that it may be of no help if the observer cannot relate it to an intention or an objective. Actually, it is the identification of the Object in the system and/or of the intended learning outcomes (at least as research hypotheses) which will determine the reasonable level of granularity we have to reach.

Finally, in my opinion, the question "what is learned?" cannot be answered without responding to the question of the objective, intention or aim of the game and of the situation which contextualises it. If we do not start from that point, we will progress as blind researchers and in the end respond "they learn how to play" (p.86), which may be a disappointing and quite unhelpful answer. The same applies to the problem of understanding the Subject's intention, then the nature of the Object, and in the end the whole question of learning in a game environment. This issue may be peripheral from a strict game-play perspective, where whatever is learned, the motivation and the interest in the game are what count, but it is critical from an educational point of view.

Notes:
- Piaget et al. (1974) Recherches sur la contradiction. Paris: Presses univ. de France, 2 volumes.
- (P, R, L, Σ) stands for Problems, Operators (in French "règles"), Representation (semiotic system), Control structure.
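To make the quadruplet a little more concrete, here is a minimal sketch of my own (not taken from the cK¢ literature) of (P, R, L, Σ) rendered as a data structure; the field names and the toy conception are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Conception:
    """A learner's conception as a (P, R, L, Sigma) quadruplet."""
    problems: frozenset     # P: problems the conception allows to handle
    operators: frozenset    # R: operators ("regles") acting on those problems
    representation: str     # L: semiotic/representation system
    controls: frozenset     # Sigma: control structure (criteria of validity)

# Toy rendering of the classic "multiplication makes bigger" conception
mult_makes_bigger = Conception(
    problems=frozenset({"7 x 3", "12 x 4"}),
    operators=frozenset({"repeated addition"}),
    representation="decimal notation for whole numbers",
    controls=frozenset({"the product must exceed both factors"}),
)
print(mult_makes_bigger.representation)
```

A mirror structure of the same kind could, as suggested above, describe the Tool rather than the learner.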

A note after the reading of: Oliver M., Pelletier C. (2006) Activity theory and learning from digital games: developing an analytical methodology. In: Buckingham D., Willett R. (eds) Digital Generations (pp. 67-92). Mahwah, NJ: Lawrence Erlbaum.

vendredi 23 mars 2012

Retrieved from a post by Nicolas Balacheff on the SOA scientific portal on February 27th, 2010

I recently read an article by Begoña Gros on the use of digital games in education, which offers a general overview of video games and their contribution to learning, with an interesting discussion of their use in a school context. While focused on instructional design and not on computer science design, it still touches on a few technological issues.

After a short history of the area from the research perspective, Begoña Gros reports on what we can learn from research on the contribution of games to learning. Several general cognitive competences are mentioned: improved spatial skills, iconic and spatial representations, ability to read images, divided visual attention, keeping track of events at multiple locations on the screen, and better developed attentional skills, including metacognitive competences enhanced by collective game play (sharing strategies, knowledge and resources). "However, there is no research that actually documents a link between video games playing, attentional skills, and success in academic performance or specific occupations" (p.30). So it is not surprising that while most teachers acknowledge the contribution of games to the development of a variety of skills, they show resistance to adopting them in their everyday practice. One reason is the time needed to become familiar enough with a game for a significant activity to be engaged. Another is that "it is difficult for teachers to identify how a particular game is relevant to some component of the curriculum, as well as the appropriateness of the content within the game" (p.35). This resonates with the remark that "game designers are not concerned with the accuracy of contents of the games and, on occasions, they are capable of producing contradictions or erroneous concepts with respect to the function of particular games used in learning activities" (p.36). This time, "design" means the computer science design of game-based learning environments.

The main concern running through this paper is the challenge of adapting computer games to school and curricula. I would suggest another challenge: a closer collaboration between researchers in computer science and in education to design learning games that are not only adapted to use in schools but also coherent with the game of knowledge.

Blog post after the reading of: Begoña Gros (2007) Digital Games in Education: The Design of Games-Based Learning Environments. Journal of Research on Technology in Education 40 (1) 23-38

mercredi 21 mars 2012

A conversation on "debriefing", a key stage in the use of learning games

Based on a post on the SOA Science corner blog, originally published on Tuesday, February 23rd, 2010 (18:58)

What may be the differences between games and simulations? A paper by Sara de Freitas and Martin Oliver [*] suggests that there are not many, and hence that it is quite natural that many of the learning issues relevant for simulations are also relevant for games. If there is one difference to mention, it comes mainly from the entertainment characteristic attached to games, and it is exactly this dimension which makes both of them appealing to education and difficult to use. This difficulty lies in the fact that "in educational contexts, there is a need not only to enter the 'other world' of the game or simulation, but also to be critical about that process in order to support reflective processes of learning as distinct from mere immersion in a virtual space" (p.255). The authors notice that the apparent mismatch between the game and the curriculum may be due to "the omission of a clear debriefing session" (p.260). Then comes the key question of evaluation: what should be the characteristics of a game (more generally, a simulation) so that the debriefing is made possible? This implies that we can tell what the game-simulation is vis-à-vis the knowledge at stake (i.e. the expected learning outcome). This dimension of the analysis, which is in my opinion a prerequisite, is not considered in the paper. Should we add it as a fifth dimension to the four already proposed (context, learner, internal representational world, processes of learning)? Or is it subsumed in a way that I didn't catch in my reading?

Martin Oliver responded (February 26th, 2010) that...
Sara de Freitas is certainly interested in the kinds of games that resemble simulations - she likes to use the portmanteau "gamesim" to denote this category.
Personally, I think that attempting to draw clear definitions that distinguish games from simulations would be problematic - my opinion is that what makes them useful or not is how they get used. A game can be treated as a simulation, and a simulation can be played with; it's a matter of convention which side of the definitional line they are placed on.
The discussion in relation to the "other world" experience of the game reflects that Higher Education (rather than, say, training) values the ability to reflect upon and critique experience, not just improve it. (Obviously that's a value statement, and not universal, but I'd refer people to Ron Barnett's work for a more general discussion of this kind of issue.)
The debriefing session is an example of a pedagogic technique intended to help bridge differences between play and curriculum performance - in some ways, this could be understood as just one more example of the classic problem of learning transfer. What is learnt from play is unlikely, in itself, to map neatly onto the goals of the curriculum; the debriefing simply recognises that a process of reinterpretation or renegotiation may be necessary. I don't think that "debriefing" describes a well-defined pedagogic interaction - more a class of conventions about asking people to make sense of the experience they have just had. To this extent, all that's required of a game (or simulation) is that people have an experience to reflect upon. We haven't tried to engage with what makes some debriefings better than others; this is, I'm sure, a fruitful area to consider but it's not one we looked at. Matching the game design to that debriefing is then an obvious and sensible approach - but again, it was outside the scope of this particular paper (which focused on evaluation rather than design).
And the conversation continued (Nicolas Balacheff, March 1st, 2010):

"Debriefing" is a concept worth discussing a bit further. To explain why I think so, I will start from the idea that inviting students or trainees to play a game always (I use this word on purpose) takes place in a teaching/training-learning context, with an agenda in mind. This agenda may be hidden from the learners, but it is a key reason for choosing a game and inviting them to play it. It can be described in terms of learning outcomes (from a piece of knowledge to a specific behaviour -- possibly at a meta level, as in problem solving or socialisation). Even if the game is successful, it is unlikely, because of the richness of the game play and the short time given for the genesis of whatever mental construct, that the learners will realize what was important, new, worth making explicit, putting in a certain form and keeping for further use. It is even more difficult to imagine that they will be able to relate any interesting outcome to knowledge socially or culturally shared by the community they will join after this teaching/training-learning period. So, from an educational perspective, debriefing is critical. Within the frame of the theory of didactical situations this phase is called "institutionalization". Indeed, this is even stronger than "debriefing": it means that the teacher-trainer has a special voice and responsibility in acknowledging the learning outcome and the value of a learning game.

A note after the reading of: de Freitas S., Oliver M. (2006) How can exploratory learning with games and simulations within the curriculum be most effectively evaluated? Computers and Education 46 (3) 249-264.