Monday, 27 February 2012

Teaching counts

  Retrieved from the TEL opinion blog, December 22nd, 2005

The reasons why learners, whether children or adults, need "teaching inputs" are very often hidden as a corollary of the emphasis on—and possibly the misunderstanding of—the constructivist principles of design of learning environments. I would like to suggest here that these needs are especially important in the case of modern environments, which are largely distributed and provide potential access to a huge amount of knowledge and information. The following questions illustrate some of the issues that learners may have to face when left on their own in the wild web of digital resources: "How do you look for something you don't know?", "How do you know that what you have found is what you were looking for?", "How do you know that you have learned?". These are some of the issues that a teaching assistant should help to address. Another crucial question is: "How will others know that you know?"

Solving problems is not enough for learners to realise that they have learned. Creative problem-solving, which is at the core of the constructivist approach, is so rich in new intellectual constructs that it is a problem in itself for the learner to realise what is worth remembering. Here again is a specific task for a teaching assistant. There is no general teaching model which could be implemented to equip a learning environment with the corresponding functionalities.

The nature of complex knowledge (as opposed to basic skills) is another reason to refocus the design of learning environments seriously on teaching issues. One of the main characteristics of such knowledge is, first, that mastering it requires mastering several different pieces of knowledge organised in the form of a system, and second, that its use depends on methods which are not mere algorithms. Such knowledge cannot be constructed spontaneously even when learners are provided with an adequate problem-situation, and in some cases such situations are not even known yet (e.g. linear algebra). As a result, complex knowledge requires specific learning environments and content-specific teaching strategies. The complexity of such knowledge also comes from the fact that the corresponding learners' conceptions (i.e. learners' cognitive constructs) can be very different from one another and rather complex to understand and to model. Current research on students' understanding of the concept of "function" in mathematics or of the concept of "energy" in physics attests to this complexity. The development of technological tools aimed at supporting the use of such knowledge (formal computation, simulation, etc.) further increases the difficulty by modifying, within a kind of systemic loop, the nature of the users' conceptions.

We cannot expect one single universal agent to be able to handle the complexity of supporting the learning process in the case of complex knowledge. On the contrary, there is a need for specialised agents, either artificial or human, able to cooperate and to coordinate their actions in order to provide the best support to the learner—indeed, one could remark at this point that the situation might not be so different for the so-called "basic skills"…

My claim is that: the educating function of a system is an emerging property of the interactions organised between its components, and not a functionality of one of its parts.

The fascination of research

Retrieved from the TEL opinion blog, December 17th, 2005

The word “Research” evokes the fascination of knowledge as well as the expectation of mastery of the unknown. Research outcomes are expected to be innovative by nature and reliable by construction. Everything works as if actions and decisions based on research were less risky than those based on any other grounds, namely opinions and beliefs. Indeed, “opinion” is a hazardous intellectual category, anything but reliable, while “belief” is as contingent as opinion, with the worse characteristic that, in the face of failure, it leaves little room for revision.

The strength of research results lies in their justification, whether ruled argumentation (proof) or systematic empirical evidence, and in their openness to revision under the pressure of refutation. Research results have the epistemic characteristic of knowledge: they are products of a human activity which transcend the historical and anecdotal context marking their origin. However, from a scientific perspective, a piece of knowledge is not a statement, but the complex “object” shaped by the relations between a statement, a proof and a theory—all framed by an accepted problématique that informs the relevance of a question. The return on investment in research is the reliability, universality and openness of its outcomes; its cost is theory, proofs and dealing with refutations. This has two meanings: (i) research is not about so-called “reality”, but about phenomena identified through the lenses of a problématique; (ii) the dialectic of proofs and refutations is not empirical but of a theoretical nature, possibly addressing not a result but its rationale or even its underlying problématique.

Nothing new there, but something to bring back to the fore when we question the role and the contribution of research to the development of TEL. Something which has been forgotten (or lost) with the emergence of “acadustry”!

One postulate, three refutations: a discussion on CSCL design

Retrieved from the TEL opinion blog, August 22nd, 2006

Pierre Dillenbourg, in his contribution to the book “Barriers and biases in computer-mediated knowledge communication”, chose an interesting angle to address design issues in CSCL research. The idea is to organise the discussion as the confrontation of a postulate with possible refutations. Strictly speaking, this misses the point that a postulate is not to be refuted, but to be accepted, rejected or replaced (it is in this way that the way to non-Euclidean geometry was opened). But it does not matter: what is gained from this organisation of the discussion is much more interesting than this nuance, and eventually it does not prevent the emergence of an alternative postulate which could open the way to a new research agenda. From this perspective I would claim that this text is worth reading for our PhD students.
 
The postulate discussed states that “the more a system would be able to reproduce face-to-face interaction features, the better it would be”. This is sometimes shortened into: the richer the interface, the better. The refutations presented by Dillenbourg (the commercial failure of WAP compared to the great success of SMS communication, the limited added value of video communication) show that enhancing computer-mediated communication (CMC) is no guarantee of either better adoption by users or greater efficiency; more interestingly, they suggest that CMC has its own specificity.
 
Here is the new key idea, the new postulate, to keep from Dillenbourg's contribution:
The purpose of CMC tools is not to perform better than face-to face interactions but to augment social interactions (in the sense of augmented reality)
Three examples of CMC specificities are presented to demonstrate that “computer-mediated communication is definitely less rich than face to face interaction but also possesses interesting feature worth exploring.” Projects from Dillenbourg's team illustrate these features, which are: persistency, context reification and mirroring of the group activity. Interestingly, it is emphasised that these features should not be considered as productive enhancements per se: “what augmentation means is of course specific to each task: what facilitates one task may not be useful for another.” However, it is difficult to see what here is specific to learning. So, let's investigate this apparent absence of our common problématique.
 
We are interested in activities which can stimulate, support and validate learning. Even if they may look like work or entertainment activities, learning activities still require some specific features, which come from the fact that performing them is not enough if the related learning outcome is not acknowledged in some form. This means that a learning activity is not “known” until we know the “learning what”. To go straight to the point: a learning activity is always content-specific, because the type of interactions (actions and feedback) needed depends on the type of knowing targeted.

Dillenbourg is right, then: our research agenda should be “to determine which interactions are desired and how they can be induced by the interface”. But to carry out such a programme, I do not agree that “the main bottleneck here is our imagination”; in my opinion the main obstacle is our lack of knowledge of the best conditions for the learning of a given content, be it academic or practical, elementary or complex.

Pierre Dillenbourg: Designing biases that augment socio-cognitive interactions. In: Reiner Bromme, Friedrich W. Hesse and Hans Spada (eds.), Barriers and biases in computer-mediated knowledge communication (pp. 243-261). Berlin: Springer.

PS: about the imitation bias, I would be very happy if you could have a look at the ornithopter video:


To make it short: it would be great if we could imitate good teachers, but this proves extremely difficult. So it may be better to search for another way; this does not mean, however, that the first one is wrong, silly or irrelevant. Another point (indeed Dillenbourg's point) is that ICT could allow us to explore other avenues, epistemologically more relevant. This is a challenge which is interesting in itself, and not merely as an alternative to an imitation that we cannot afford.

Continuity, a political condition for sustainability

  Retrieved from the TEL opinion blog, June 30th, 2006
 
Continuity! I think that the main challenge for the EC's technology-enhanced learning (TEL) research policy (though this may not be the case for TEL only) is ensuring a continuity of policy directly in line with the sustainability challenge that the Commission set for the new FP6 instruments. It is clear that if the policy does not have a long-lasting vision of the development of the field, researchers, because of their need for financial support, will just try to surf the wave of ever-changing priorities. As I suggested elsewhere, this will stimulate the development of the "Acadustry", a chimera of industry and academia that will indeed be sterile. On the contrary, a policy informed by a long-lasting vision of what I deem necessary for the development of the European research area will be a strong and productive support to research. Ahead of that, academic research and R&D have the responsibility for developing a research domain that is both scientifically robust and productive.
Among the priorities I see for us is the responsibility to organise the fight against reinventing the wheel, against developing technologies that are all but forgotten soon after their development by PhD students or projects. A stable EC policy would be a real incentive to make this effort. In particular, the challenge will be less a question of seeing the future twenty years ahead than one of understanding what we know, where the current problems and barriers are, and in which areas we can make real breakthroughs. I would like to suggest that if we engage in this direction, we will be more efficient in supporting the development of SMEs in the field, offering them real solutions and methods for the issues they have to face now, in today's market.

Kaleidoscope has already shaped elements to support the EU efforts to establish a productive TEL research area; a good example of this is the Kaleidoscope virtual doctoral school. Soon the Kaleidoscope open archive initiative will demonstrate the capacity of researchers to share and document their production properly and at an international level. However, there are difficulties that come from the fragmentation of TEL on a regional basis. The obstacle raised by this fragmentation is quite difficult to overcome because the research needs are not expressed in the same way by all the European nations, and the needs are not shared; learning is not yet a global market. This has an impact on relations with users and SMEs, whose markets are in general quite local and specific. However, by setting up European research teams on concrete and precise topics, Kaleidoscope has initiated a movement to build a European research force with a sustainable scientific agenda. Moreover, while the network was being built, a fragmentation of the research field itself appeared. We are now reducing it, though, with initiatives like the convergence workshop to be held next December to bridge research on collaborative, mobile, and inquiry learning.
 
Beate had some more questions; our discussion continues on elearningeuropa.info.

Saturday, 25 February 2012

When the space is the interface

  Retrieved from the TEL opinion blog, April 12th, 2008
 
Making phenomena accessible by means of simulations is one of the added values of computer-based learning environments. To put it simply, the key feature of a simulation is to have a good mathematical model plugged into an efficient visualisation of the targeted phenomena. However, such simulations are processed within the limited space of the computer screen over a short period of time. The development of virtual reality and of so-called full-scale simulations gives access to spaces beyond the limits of the screen, but one is still immersed in an artificial world with time and persistence constraints (notably, this is not the case in MMOs). The idea of embedded phenomena coined by Tom Moher opens smart ways to overcome several of these limitations. An embedded phenomenon is the emergent property of the behaviours of a set of "distributed media located around the classroom representing 'portals' into [the] phenomenon depicting local state information corresponding to [its mapping onto the physical space of the classroom]." The space of the classroom becomes the interface with the implemented model; but it is more than that, since a simulation can run "continuously over weeks and months, creating information channels that are temporally and physically interleaved with, but asynchronous with respect to, the regular flow of instruction." This approach opens significant new possibilities for the simulation of phenomena where space and time count. The migration of bugs, the movement of the planets or earthquakes find in embedded phenomena a much more relevant framework to challenge learners' modelling, requiring an effective conceptualisation of space and time.
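As a rough sketch of the architecture this describes (one shared model mapped onto the classroom's physical space, with each display showing only the local state of its own region), consider the following toy Python simulation. All names, regions and numbers are illustrative assumptions of mine, not Moher's implementation:

```python
import random

class EmbeddedPhenomenon:
    """Toy sketch of an 'embedded phenomenon': one shared simulation
    whose state is mapped onto classroom space, with each display
    acting as a portal onto the local state of its own region."""

    def __init__(self, width, height, n_bugs, seed=0):
        self.width, self.height = width, height
        self.rng = random.Random(seed)
        # Each simulated insect has a position in the shared space.
        self.bugs = [(self.rng.randrange(width), self.rng.randrange(height))
                     for _ in range(n_bugs)]

    def step(self):
        """Advance the phenomenon one tick: each bug drifts randomly."""
        self.bugs = [((x + self.rng.choice([-1, 0, 1])) % self.width,
                      (y + self.rng.choice([-1, 0, 1])) % self.height)
                     for x, y in self.bugs]

    def portal_view(self, region):
        """What one classroom display ('portal') shows: only the bugs
        currently inside its region of the mapped space."""
        x0, y0, x1, y1 = region
        return [(x, y) for x, y in self.bugs if x0 <= x < x1 and y0 <= y < y1]

# Four portals tile a 20x20 classroom-mapped space into quadrants.
sim = EmbeddedPhenomenon(20, 20, n_bugs=30)
portals = [(0, 0, 10, 10), (10, 0, 20, 10), (0, 10, 10, 20), (10, 10, 20, 20)]
for _ in range(50):          # the phenomenon runs continuously over time
    sim.step()
counts = [len(sim.portal_view(r)) for r in portals]
# Every bug is visible through exactly one portal.
assert sum(counts) == 30
```

The point of the sketch is that no single portal shows the whole phenomenon: students must coordinate observations across space and time to reconstruct it, which is exactly where the pedagogical interest lies.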
But the essential contribution may not be at the level of the acquisition of the concepts themselves, but at the level of the acquisition of the methodology and the organisation of scientific work. Students have to organise space and time to collect data, then gather and analyse what they have obtained individually in order to build collective knowledge. Given the role of time, the experiment cannot be replicated at will. Much as in field studies, observations have to be planned, perhaps showing more accurately the relation between observation and theory. Moher emphasises the positive effects of his approach: its "affective impact" (more emotional interest in the phenomena) and its impact on productive social interactions. And indeed one must recognise that this smart idea provides students with an unprecedented experience. However, there is not much conceptual analysis, and it is difficult to assess how far this will be manageable and robust enough under the classical practical constraints of school. It is said at the beginning of the paper that "the [embedded phenomena] framework does not prescribe an instructional design per se, nor does it provide any direct scaffolding to support learning", but a few lines later it is claimed that "phenomena are made accessible and responsive to the needs of learners through the novel uses of classroom time and space". How are the needs of the students, the specifications of the environment and the orchestration required from the teacher taken into account in the design and implementation of embedded phenomena? There is inherently a simplification in the design of this framework and, at the same time, a complexification of the teaching and learning context. How far does this count? How does it impact the learning outcome? And the teaching task? Embedded phenomena have a huge learning and teaching potential, but they also open the way to quite difficult and stimulating research questions.
No doubt we will be eager to discuss these questions with Tom Moher when he comes to the Learning Sciences conference next summer in Utrecht.

Moher, T. (2006). Embedded Phenomena: Supporting Science Learning with Classroom-sized Distributed Simulations. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2006), April 2006, Montreal, Canada, 691-700. (Best Paper award)

Wednesday, 22 February 2012

Scripts, games and situations

Retrieved from the TEL opinion blog, July 28th, 2006

Recently issued, the book entitled “Barriers and Biases in Computer-Mediated Knowledge Communication” contains, among several very stimulating chapters concerning CSCL, one about the design and evaluation of the use of scripts which seems to me rich in lessons for our research agenda. The design, implementation and use of scripts is a topic largely addressed in Kaleidoscope (see CoSSICLE or CAVICoLA), and understanding their benefits and limits is surely critical. More precisely, this chapter, written by a group of five leading researchers in the domain, addresses the case of social and epistemic scripts.
What stimulated my curiosity is that the results of the research presented show that while social scripts seem to have a positive effect, this is not the case for epistemic scripts, which have “no or negative effects on learning outcomes”.
The authors suggest that by decreasing the cognitive demand of the learning tasks, epistemic scripts may lower the level of knowledge construction. Actually, when looking at the detail of the epistemic script, one may think that not only is the level of cognitive demand (possibly) lower, but that it may be the task itself which is completely modified. Or, better said, the situation in which the students are involved is modified by the very possibility of getting hints to achieve the proposed task. Definitely, instead of “task”, “situation” may be the right word here to make sense of what is happening. While the social script to some extent forces mutual attention and learners' commitment without any reference to the content at stake, the epistemic scripts explicitly impact the content, reducing the learners' problem-solving space.
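To make the distinction concrete, here is a toy sketch of my own (not from the chapter) of the two kinds of script as data: the social script's prompts are content-free and structure who interacts how, while the epistemic script's prompts name the domain and thereby narrow the problem-solving space. All prompts and names are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Script:
    """A collaboration script as a named sequence of prompts."""
    name: str
    prompts: List[str] = field(default_factory=list)

# A social script: content-free turn-taking and commitment prompts.
social = Script("social", [
    "Summarise your partner's last contribution before replying.",
    "Ask your partner one question about their argument.",
])

# An epistemic script: content-specific hints that pre-structure the task
# (here an invented physics example), reducing the problem-solving space.
epistemic = Script("epistemic", [
    "First identify the forces acting on the object.",
    "Then apply Newton's second law to each force.",
])

def is_content_specific(script: Script, content_terms: List[str]) -> bool:
    """Crude check: does any prompt mention domain vocabulary?"""
    return any(term in p for p in script.prompts for term in content_terms)

terms = ["forces", "Newton"]
assert not is_content_specific(social, terms)   # structures interaction only
assert is_content_specific(epistemic, terms)    # defines the situation itself
```

The sketch only dramatises the argument above: an epistemic script is entangled with the content and therefore redefines the situation, whereas a social script leaves the situation untouched.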
 
In the end, the question which comes after this reading could be: what is the game played by the learners? The difference between the social and the epistemic scripts, then, is that the latter de facto define the situation (they specify what the game is about) while the former stimulate the learners independently of the characteristics of the situation.
 
Then the question becomes: What is the role of the scripts in framing this knowledge game?
 
In their introduction to this chapter, the book editors express a doubt as to whether we will still need scripts once CSCL has “become an every day occurrence, like group work in the classroom”. The question I raise above may show that the answer to this doubt is: yes, we will need them, and they will be one of the best features of CSCL environments. As we know, learning is not a natural characteristic of group work, and if there is any learning, there is no evidence of the relevance of the outcome. CSCL scripts may reduce the contingent nature of the learning outcome, especially epistemic scripts, as long as they are not viewed as tools to facilitate the achievement of a task, but as means to frame and stimulate the construction of the relevant learning game (situation) by the learners. The next step might be to characterise CSCL scripts that are epistemologically valid with respect to a given learning stake. Quite a challenge…
 
Armin Weinberger, Markus Reiser, Bernhard Ertl, Frank Fischer, Heinz Mandl: Facilitating collaborative knowledge construction in computer-mediated learning environments with cooperation scripts. In: Reiner Bromme, Friedrich W. Hesse and Hans Spada (eds.), Barriers and biases in computer-mediated knowledge communication (pp. 15-37). Berlin: Springer.

Bad news from the West!

Retrieved from the TEL opinion blog. Originally published on April 14th, 2007

The Washington Post has recently drawn our attention to the release of a Report to Congress by the US Department of Education which, in short, shows that educational software has no significant impact on student performance. This is good news, to some extent, since it seems that in general no one can demonstrate anything in our domain. Counter-examples are much more common than proofs, and here again some have found that “other research trials have proven that the technology works”… Indeed, this is not a response but at best one possible argument to fight the conclusions of the report. Other responses could be either that the quality of the software might be questioned, or that all this is due to the fact that “teachers were not prepared or properly trained to use the technology”. However, a look at the report itself sheds a different light on the results the Washington Post comments on. It appears that the use of the software represented about 10% of the instructional time, which is quite limited. It is also said that teachers felt sufficiently trained, that the software did not present major problems of use, and that teachers tended to support the students more than lecture, while students were more likely to engage in individual practice. By the way, some might also conclude that supporting students instead of lecturing, and engaging individually in learning, have no significant impact on student performance… that, of course, is a joke. The lessons to be learned may be more complex than those the media would jump on. The scope of a statement like “products did not affect test scores” is, in my opinion, limited if, first, we do not know what the use of these products was, and second, if we do not report how the scores relate to what the said products may impact. The first issue is not easy: how do we characterise the use of educational software and its impact?
And moreover, once we have such a characterisation, can we ensure that the impact can be reproduced? In other words: is the claim about the software's impact (or lack of impact) based on serious knowledge of the learning and teaching phenomena at stake? These questions are of a theoretical nature. Only once researchers have reached a consensus about them will we be able to engage in comparative studies and seriously address issues like those raised by the Report to Congress. Otherwise our discussions and arguments will only offer stories against stories (even academic stories, as we publish so many). For the time being, the report is serious and deserves attention. Its strength comes from the rigour of the tools used and the size of the sample, at least at first glance. Its weakness is that we do not know the meaning of what is measured, and we have every difficulty in discussing the claimed results. This is not a new situation. We have the same difficulty with many research reports we read in academic journals and hear at conferences. It is time to engage in the building of a common scientific quality reference in our field, so that (i) we can support our claims about technology-enhanced learning, (ii) we can properly report research results, (iii) we can reproduce them wherever they come from, and eventually (iv) we can critically and scientifically evaluate research outcomes. Until now we have more often been asserting than proving (validating). Before closing this short note, I would like to draw your attention to an article published by El País, the Spanish newspaper, reporting that thanks to the use of technology students (a sample of 1800) achieve much better results in geometry than without it. They score 25% higher…
The article's abstract (translated from the Spanish): This article presents a study of the introduction of the new information and communication technologies into mathematics classrooms at the ESO and Bachillerato levels. The methodology we followed consists in training the teachers with the same materials, and in the same way, as they would apply them in the classroom. The project, between the research phase and the generalisation phase, lasted six school years, from 2000 to the present, and involved more than 400 teachers and 15,000 students. The results of the study confirm an improvement of 11.2% in students' performance on “traditional” written examinations, with an overall improvement of 24.39%. Considering that the PISA 2003 report places Spain 23rd, with 485 points in mathematical competence, increasing that score by 11.2% would give 539 points, which would raise Spain to third place.
The complete version of the Report to Congress has been made available in the TeLearn Open Archive.