Wednesday, 22 February 2012

Bad news from the West!

Retrieved from the TEL opinion blog. Originally published on April 14th, 2007.

The Washington Post has recently drawn our attention to the release of a Report to Congress by the US Department of Education which, in short, shows that educational software has no significant impact on student performance. This is good news, to some extent, since it seems that in general no one can demonstrate anything in our domain. Counter-examples are much more common than proofs, and here again some have found that “other research trials have proven that the technology works”… Indeed, this is not a response but at best one possible argument against the conclusions of the report. Other responses could be either that the quality of the software might be questioned, or that all this is due to the fact that “teachers were not prepared or properly trained to use the technology”.

However, a look at the report itself sheds a different light on the results the Washington Post comments on. It appears that the use of the software represented about 10% of the instructional time, which is quite limited. It is also said that teachers felt sufficiently trained, that the software did not present major problems of use, and that teachers tended to support students rather than lecture, while students were more likely to engage in individual practice. By that logic, some might also conclude that supporting students instead of lecturing, and engaging individually in learning, have no significant impact on student performance… which is, of course, a joke. The lessons to be learned may be more complex than those the media would jump on.

The scope of a statement like “products did not affect test scores” is, in my opinion, limited if, first, we do not know how these products were used and, second, we do not explain how the scores relate to what these products may actually impact. The first issue is not easy: how do we characterize the use of a piece of educational software and its impact? And once we have such a characterization, can we ensure that the impact can be reproduced? In other words: is the claim about the software’s impact (or lack of impact) based on serious knowledge of the learning and teaching phenomena at stake? These questions are of a theoretical nature. Only once researchers have reached a consensus about them will we be able to engage in comparative studies and seriously address issues like those raised by the Report to Congress. Otherwise our discussions and arguments will only pit stories against stories (even academic stories, as we publish so many).

For the time being, the report is serious and deserves attention. Its strength comes from the rigour of the tools used and the size of the sample, at least at first glance. Its weakness is that we do not know the meaning of what is measured, and we have great difficulty discussing the claimed results. This is not a new situation: we have the same difficulty with many research reports we read in academic journals and hear at conferences. It is time to engage in building a common scientific quality reference in our field, so that (i) we can support our claims about technology enhanced learning, (ii) we can properly report research results, (iii) we can reproduce them wherever they come from, and eventually (iv) we can critically and scientifically evaluate research outcomes. Until now we have more often been asserting than proving (validating).
Before closing this short note, I would like to draw your attention to an article published by El País, the Spanish newspaper, reporting that, thanks to the use of technology, students (a sample of 1,800) achieve much better results in geometry than without it. They score 25% higher…
This article presents a study on the incorporation of the New Information and Communication Technologies into the mathematics classroom at the ESO and Bachillerato levels. The methodology we followed consists in training teachers with the same materials and in the same way they would apply them in the classroom. The project, including its research and generalisation phases, has lasted six school years, from 2000 to the current one, and has involved more than 400 teachers and 15,000 students. The results of the study confirm an improvement of 11.2% in students’ performance on “traditional” written exams, with an overall improvement of 24.39%. Given that the PISA 2003 report ranks Spain 23rd with 485 points in mathematical competence, increasing this score by 11.2% would give 539 points, which would raise us to third place.
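As a quick check of the arithmetic behind that last claim, taking the reported figures (485 PISA points, an 11.2% improvement) at face value:

\[
485 \times (1 + 0.112) = 485 + 54.32 = 539.32 \approx 539 \text{ points}
\]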
The complete version of the Report to Congress has been made available on the TeLearn Open Archive [click here].
