
Monday, August 08, 2016

A quiet life

Uphill struggle

"Students collude with academics in a ‘disengagement compact’, which is code for ‘I’ll leave you alone if you’ll leave me alone’... This promotes a cautious culture of business-as-usual. In most institutions, internal quality assurance departments prefer a quiet life and reinforce the status quo. In the end, the challenge of motivating students to undertake formative tasks surmounts the potential value of those tasks. The idea that well-executed formative assessment could revolutionise student learning has not yet taken hold."



The implications of programme assessment patterns for student learning. Assessment & Evaluation in Higher Education 02 Aug 2016 doi: 10.1080/02602938.2016.1217501
Evidence from 73 programmes in 14 UK universities sheds light on the typical student experience of assessment over a three-year undergraduate degree. A previous small-scale study in three universities characterised programme assessment environments using a similar method. The current study analyses data about assessment patterns using descriptive statistical methods, drawing on a large sample in a wider range of universities than the original study. Findings demonstrate a wide range of practice across programmes: from 12 summative assessments on one programme to 227 on another; from 87% assessed by examination on some programmes to none on others. While these variations cast doubt on the comparability of UK degrees, programme assessment patterns are complex. Further analysis distinguishes common assessment patterns across the sample. Typically, students encounter eight times as much summative as formative assessment, a dozen different types of assessment, and more than three quarters of assessment by coursework. The presence of high summative and low formative assessment diets is likely to compound students’ grade-orientation, reinforcing narrow and instrumental approaches to learning. A high variety of assessment types is a probable contributor to student confusion about goals and standards. Making systematic headway to improve student learning from assessment requires a programmatic and evidence-led approach to design, characterised by dialogue and social practice.


Friday, July 29, 2016

You are who you learn: authentic assessment and team analytics?

Forget facts - that's the learning of the past. The learning of the future is soft skills. But how do you assess them? Peter Williams argues that we can approach the problem through learning analytics. Along the way he also provides the best theoretical introduction to authentic assessment that I've read yet. He also flags Lombardi's definition of authentic learning:

  1. Real-world relevance: the need for authentic activities within a realistic context.
  2. Ill-defined problem: confronting challenges that may be open to multiple interpretations.
  3. Sustained investigation: undertaking complex tasks over a realistic period of time.
  4. Multiple sources and perspectives: employing a variety of perspectives to locate relevant and useful resources.
  5. Collaboration: achieving success through division of labour and teamworking.
  6. Reflection (metacognition): reflection upon individual and team decisions.
  7. Interdisciplinary perspective: encouraging the adoption of diverse roles and thinking.
  8. Integrated assessment: weaving feedback that reflects real-world evaluation into the learning process.
  9. Polished products: achieving real and complete outcomes rather than completing partial exercises.
  10. Multiple interpretations and outcomes: appreciating diverse interpretations and competing solutions.

So this paper is worth reading for the above reasons. But am I convinced by his pitch for learning analytics as the way forward? No - it's completely fanciful and unsupported by any evidence. Which makes me feel better - we agree on the problem, and it's not just me being thick because I can't quite figure out the solution.


Assessing collaborative learning: big data, analytics and university futures. Assessment & Evaluation in Higher Education 28 Jul 2016 doi: 10.1080/02602938.2016.1216084
Assessment in higher education has focused on the performance of individual students. This focus has been a practical as well as an epistemic one: methods of assessment are constrained by the technology of the day, and in the past they required the completion by individuals under controlled conditions of set-piece academic exercises. Recent advances in learning analytics, drawing upon vast sets of digitally stored student activity data, open new practical and epistemic possibilities for assessment, and carry the potential to transform higher education. It is becoming practicable to assess the individual and collective performance of team members working on complex projects that closely simulate the professional contexts that graduates will encounter. In addition to academic knowledge, this authentic assessment can include a diverse range of personal qualities and dispositions that are key to the computer-supported cooperative working of professionals in the knowledge economy. This paper explores the implications of such opportunities for the purpose and practices of assessment in higher education, as universities adapt their institutional missions to address twenty-first century needs. The paper concludes with a strong recommendation for university leaders to deploy analytics to support and evaluate the collaborative learning of students working in realistic contexts.


Tuesday, July 26, 2016

Teacher feedback or peer feedback - which is better?

Teacher feedback, obviously.


Fostering oral presentation performance: does the quality of feedback differ when provided by the teacher, peers or peers guided by tutor? Assessment & Evaluation in Higher Education 21 Jul 2016 doi: 10.1080/02602938.2016.1212984
Previous research revealed significant differences in the effectiveness of various feedback sources for encouraging students’ oral presentation performance. While former studies emphasised the superiority of teacher feedback, it remains unclear whether the quality of feedback actually differs between commonly used sources in higher education. Therefore, this study examines feedback processes conducted directly after 95 undergraduate students’ presentations in the following conditions: teacher feedback, peer feedback and peer feedback guided by tutor. All processes were videotaped and analysed using a coding scheme that included seven feedback quality criteria deduced from the literature. Results demonstrate that teacher feedback corresponds to the highest extent with the majority of the seven identified feedback quality criteria. For four criteria, peer feedback guided by tutor scores higher than peer feedback. Skills courses should incorporate strategies focused on discussing perceptions of feedback and practising providing feedback to increase the effectiveness of peer feedback.



Wednesday, July 13, 2016

Thanks but no-thanks for the feedback

A timely article which chimes with discussions yesterday in our departmental ELF (Enhancing Learning Forum) meeting. Forsythe and Johnson's paper leans on Carol Dweck's Growth Mindset theory, so it's immediately attractive to me. It confirms my biases by having bad things to say about anonymous marking:
"The push towards anonymous, online marking can mean that personal feedback sessions are an incompatible part of the assessment and feedback loop. Anonymous marking is disruptive to the process because it prevents the tutor from giving connected guidance to students on their progress..."

Worth a read then.


Thanks, but no-thanks for the feedback. Assessment & Evaluation in Higher Education 05 Jul 2016 doi: 10.1080/02602938.2016.1202190
Feedback is an emotional business in which personal disposition influences what is attended to, encoded, consolidated and eventually retrieved. Here, we examine the extent to which students’ perceptions of feedback and their personal dispositions can be used to predict whether they appreciate, engage with and act on the feedback that they receive. The study is framed in psychological theories of mindset, defensive behaviours and new psychometric measures of the psychological integration of assessment feedback. Results suggest that, in this university population, growth mindset students were in the minority. Generally, students are fostering self-defensive behaviours that fail to nurture remediation following feedback. Recommendations explore the implications for students who engage in self-deception, and the ways in which psychologists and academics may intercede to help students progress academically by increasing their self-awareness.


Tuesday, July 05, 2016

Dissecting the assessment treadmill

We over-assess students because it is difficult to motivate them to engage without frequent deadlines. But what are the true effects of frequent assessment? This new paper describes a well-conducted study of frequent assessment of Dutch engineering students (n = 219). Using principal component analysis, the authors identified and analyzed four elements of assessment (a sketch of this kind of analysis follows after the summary below):

  • Value - how much value students attribute to frequent assessments: assessment is popular with students (= "value for money"?)
  • Formative function - no evidence that frequent testing had any formative value!
  • Positive effects and Negative effects - no strong cohort-wide evidence for either of these (although they may affect individuals).

Summary: Assessment is popular with students but has no demonstrable educational value!
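
To make the method concrete, here is a minimal sketch of this kind of analysis in Python: a principal component analysis of Likert-scale questionnaire responses. The data, the number of items and the mapping onto the four factors are illustrative assumptions, not the authors' actual dataset or code.

    # Hypothetical example: PCA of a frequent-assessment questionnaire.
    # The response matrix is simulated; in the study it would hold the
    # Likert-scale answers of the 219 students.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    responses = rng.integers(1, 6, size=(219, 20)).astype(float)  # 1-5 Likert, 20 items

    # Standardise items so each contributes equally to the components.
    scaled = StandardScaler().fit_transform(responses)

    # Extract four components, one per hypothesised factor: value,
    # formative function, positive effects, negative effects.
    pca = PCA(n_components=4)
    scores = pca.fit_transform(scaled)

    print("Variance explained:", pca.explained_variance_ratio_.round(2))

    # Loadings indicate which questionnaire items define each component;
    # in practice these drive the labelling of the factors.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    for i in range(4):
        top_items = np.argsort(-np.abs(loadings[:, i]))[:3] + 1
        print(f"Component {i + 1}: strongest items {top_items.tolist()}")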


Students’ perception of frequent assessments and its relation to motivation and grades in a statistics course: a pilot study. Assessment & Evaluation in Higher Education 03 Jul 2016 doi: 10.1080/02602938.2016.1204532
This pilot study measures university students’ perceptions of graded frequent assessments in an obligatory statistics course using a novel questionnaire. Relations between perceptions of frequent assessments, intrinsic motivation and grades were also investigated. A factor analysis of the questionnaire revealed four factors, which were labelled value, formative function, positive effects and negative effects. The results showed that most students valued graded frequent assessments as a study motivator. A modest number of students experienced positive or negative effects from assessments and grades received. Less than half of the students used the results of frequent assessments in their learning process. The perception of negative effects (lower self-confidence and more stress) negatively mediated the relation between grades and intrinsic motivation. It is argued that communication with students regarding the purpose and benefits of frequent assessments could mitigate these negative effects.


Monday, July 04, 2016

Xenotransplantation and the future of human organ transplants

It's always a pleasure when a student working on a final year project produces a piece of work which is worthy of a wider audience beyond examiners. I was in this fortunate position this year. Unfortunately, I was not in the position of being able to spend months of my time on the tortuous process of negotiating a paper into a traditional journal, so we decided to go down the Open Access route. My first thought was to try the relatively new bioRxiv, but the paper was rejected by them because they do not publish theses (so only partly open access then). After that it was back to the trusty figshare:

Morgan, Owen; Cann, Alan J. (2016): Xenotransplantation and the future of human organ transplants. figshare. https://dx.doi.org/10.6084/m9.figshare.3467228.v1


Tuesday, June 21, 2016

Microsoft Forms


I'm still trying to unpack the multitude of components of Office 365 and map them onto a vision of a loosely-joined VLE. Last week we figured out that OneNote Class Notebook (an administrative tool for OneNote) is an LMS add-in rather than a freestanding product. It's not clear whether the new Microsoft Forms is the same or not, but I'd certainly like to get my hands on it and kick the tyres.


Microsoft Forms - a new formative assessment and survey tool in Office 365 Education