OBJECTIVES

1.  To develop a plan for the workplace deployment and evaluation of the developed electronic portfolio and LA system.
2.  To deploy work-based assessment with e-portfolios, including the just-in-time-feedback module, in practice in order to give students feedback and to monitor their progress.
3.  To deploy work-based assessment with e-portfolios, including the visualisation tools, in practice in order to give students feedback and to monitor their progress.
4.  To evaluate the quality and practicability of the e-portfolio and LA system as a whole.

DESCRIPTION OF WORK AND ROLE OF PARTNERS

Description of work: In accordance with the design-based research tradition, a cyclical approach is employed to deploy and evaluate the developed LA system. This WP is divided into three sets of deployment and evaluation activities. The first two sets of evaluation activities represent the first milestone of this WP; the third evaluation represents its second milestone. Firstly, the just-in-time-feedback module will be deployed during M18 – M23. In the course of this implementation cycle, a formative assessment of the just-in-time-feedback module will be carried out. Secondly, the visualisation tools will be deployed and formatively assessed during M25 – M36. Based on these cycles, the LA system will be further developed. Thirdly, deployment and evaluation activities of a summative character will be carried out: during this phase the whole LA system will be implemented and evaluated during M31 – M36, employing a quasi-experimental design. At the end of this WP, conclusions regarding the effectiveness of the LA system and recommendations for further implementation will be provided.

Tasks will mainly be carried out by the educational institutes in the fields of medical education, veterinary education and teacher education. UM and UR are involved for technical reasons. TU is the WP leader.

Task 6.1 General implementation and evaluation plan: In this task, a general deployment and evaluation plan will be developed (delivered in M15). In addition, specific plans per Partner will be developed. The general deployment and evaluation plan provides the methodological guide to the activities of this WP. Firstly, the precise research questions for all evaluation activities, focusing on testing the efficiency, quality and effects on trainees’ learning, will be formulated. Secondly, the detailed research designs and the data collection and analysis methods will be prepared. The validity and reliability of all data collection instruments and related coding procedures will be evaluated during piloting; only instruments with a high level of validity and reliability will be employed in the study.

A design-based research approach will be employed in the first two phases of implementation and evaluation. To evaluate the results, the first two of Kirkpatrick’s (1994) levels of evaluation will be used, in addition to the UI-REF methodology already deployed for the requirements engineering, which establishes effects-affects-side-effects matrices to arrive at a comprehensive evaluation. This will involve the establishment of the exact metrics and Key Performance Indicators for the WATCHME project. Usability, efficiency and perceived impact on learning outcomes will be measured by querying user satisfaction (first level of the model), employing self-reported questionnaires and interviews as well as nested video approaches for deeper analysis. Similarly, measurements of the effects on learning perceptions (second level of the model) will be collected with interviews and self-reported questionnaires. Data about the learners’ behaviour (third level) will be collected with think-aloud protocols, log files of the LA system and performance outcomes of EPAs.

A quasi-experimental design will be employed in the third phase of the evaluation. In seven of the Partner institutes, experimental and control groups of trainees will be formed following the matched-pair sampling procedure, which aims to form equivalent groups based on selected characteristics. In the current study, trainees’ grade point average, gender and age will be used to compose similar samples. Experimental groups will use the LA system and control groups will use the regular forms of instruction available at the Partner institutes. In order to evaluate the overall LA system, the focus will be on learners’ behaviour (third level of the above-mentioned model). Therefore, in the course of the experiment, think-aloud protocols, log file analysis and stimulated recall interviews will be employed. The first dependent variable will be a function of variables based on data collected through log files and think-aloud protocols about the learning activities. The second dependent variable will be a function of variables based on performance outcomes of EPAs. Exact measures will be defined based on the outcomes of WP2 and a literature review on work-based learning in medical education, teacher education and veterinary education.
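
As an illustrative aid only, the matched-pair group formation described above could be operationalised along the following lines. This is a minimal, hypothetical Python sketch: the Trainee fields, the adjacency-based pairing within gender and the random within-pair allocation are illustrative assumptions, not prescriptions of the plan.

    # Illustrative sketch only: field names and pairing strategy are hypothetical.
    import random
    from dataclasses import dataclass

    @dataclass
    class Trainee:
        trainee_id: str
        gpa: float     # grade point average (matching characteristic)
        gender: str    # matching characteristic
        age: int       # matching characteristic

    def matched_pair_assignment(trainees, seed=42):
        """Pair trainees with similar GPA and age within the same gender, then
        randomly place one member of each pair in the experimental (LA system)
        condition and the other in the control condition."""
        rng = random.Random(seed)
        experimental, control = [], []
        by_gender = {}
        for t in trainees:
            by_gender.setdefault(t.gender, []).append(t)
        for group in by_gender.values():
            # Sorting by GPA and age makes adjacent trainees the closest matches.
            group.sort(key=lambda t: (t.gpa, t.age))
            for i in range(0, len(group) - 1, 2):
                pair = [group[i], group[i + 1]]
                rng.shuffle(pair)  # random allocation within the pair
                experimental.append(pair[0])
                control.append(pair[1])
            # An unpaired trainee (odd group size) is left out of the comparison.
        return experimental, control

Pairing adjacent trainees after sorting on grade point average and age keeps the two conditions comparable on the matching characteristics, while the random allocation within each pair guards against systematic assignment bias.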

Task 6.2 Deployment and formative evaluation of work-based assessment with e-portfolios with the just-in-time-feedback module: In this task, firstly, a concrete and detailed deployment plan will be developed, which will provide the practical guide for action in this deployment and evaluation phase. The plan will define a concrete set of activities for deploying the just-in-time-feedback module in concrete settings of medical education, teacher education and veterinary education. In addition, the plan will incorporate corresponding mechanisms for monitoring and corrective activities for the purpose of formative assessment. The deployment activities will be centrally designed and coordinated, but carried out in close collaboration with the Institute Coordinators. Following the deployment plan, deployment and formative assessment activities will be carried out during M18 – M23 among around 200 trainees and 32 supervisors from medical education, teacher education and veterinary education at different Partner institutes. Based on this phase, suggestions for further development of the just-in-time-feedback module will be provided.

Task 6.3 Deployment and formative evaluation of visualisation tools: As in the first deployment phase, a concrete and detailed deployment plan will be developed, which will provide the practical guide for action in this deployment and evaluation phase. The plan will define a concrete set of activities for deploying the visualisation tools in concrete settings of medical education, teacher education and veterinary education. In addition, the plan will incorporate corresponding mechanisms for monitoring and corrective activities for the purposes of formative assessment. The deployment activities will be centrally designed and coordinated, but carried out in close collaboration with the Institute Coordinators. Following the deployment plan, deployment and formative assessment activities will be carried out during M25 – M36 among around 200 trainees and 32 supervisors from the medical education, teacher education and veterinary education departments of different Partner institutes. The Partner institutes vary in the number of participants they can provide to the project, as some Partners have a larger student population than others. For that reason, and to enhance the generalisability of the project, the Consortium will endeavour to maintain the highest possible average number of participants per professional field across the various testbed environments, with each Partner contributing as many trainees as its structural limits allow. For formative evaluation purposes, a total of at least 200 trainees and 32 supervisors will be targeted, divided as follows over the professional fields:

  • Medical education: at least 67 trainees and 11 supervisors
  • Veterinary education: at least 67 trainees and 11 supervisors
  • Teacher education: at least 66 trainees and 10 supervisors

Based on this phase, suggestions for further development of the visualisation tools will be provided.

Task 6.4 Summative evaluation: In order to assess the quality and practicability of the LA system as a whole, a summative assessment procedure will be carried out during M31 – M36. A quasi-experimental design will be employed to compare trainees’ learning activities and outcomes between the experimental (LA system) and control (alternative forms of instruction) conditions; a minimal sketch of such a group comparison is given after the participant targets below. In total, around 400 trainees and 64 supervisors from the medical education, teacher education and veterinary education departments of all Partner institutes will participate in the study. The Partner institutes vary in the number of participants they can provide to the project, as some Partners have a larger student population than others. For that reason, and to enhance the generalisability of the project, each testbed environment will work to maximise the number of trainees it contributes within its structural limits, so that a minimum average number of participants per professional field is maintained. For summative evaluation purposes, a total of around 400 trainees and 64 supervisors will be targeted, divided as follows over the professional fields:

  • Medical education: a target of 134 trainees and 22 supervisors
  • Veterinary education: a target of 134 trainees and 22 supervisors
  • Teacher education: a target of 132 trainees and 20 supervisors

Regarding the summative evaluation (WP6), a total of at least 400 trainees and 64 supervisors is the target, which the Consortium hopes to exceed.
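
The plan does not prescribe a particular statistical procedure for the comparison between the experimental and control conditions. Purely as a hypothetical illustration, an aggregated EPA outcome score per trainee could be compared across the two conditions using Welch’s t statistic and a Cohen’s d effect size, as in the following Python sketch; the outcome measure and the choice of statistics are assumptions, not requirements of the plan.

    # Illustrative only: the plan does not fix the outcome measure or the test.
    from math import sqrt
    from statistics import mean, stdev

    def welch_t(experimental_scores, control_scores):
        """Welch's t statistic for two independent samples of outcome scores."""
        m1, m2 = mean(experimental_scores), mean(control_scores)
        v1, v2 = stdev(experimental_scores) ** 2, stdev(control_scores) ** 2
        n1, n2 = len(experimental_scores), len(control_scores)
        return (m1 - m2) / sqrt(v1 / n1 + v2 / n2)

    def cohens_d(experimental_scores, control_scores):
        """Pooled-standard-deviation effect size for reporting practical relevance."""
        n1, n2 = len(experimental_scores), len(control_scores)
        s1, s2 = stdev(experimental_scores), stdev(control_scores)
        pooled = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
        return (mean(experimental_scores) - mean(control_scores)) / pooled

Reporting an effect size alongside the test statistic supports conclusions about the effectiveness of the LA system beyond mere statistical significance.
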
All testing activities will be centrally coordinated, but also managed locally by the Institute Coordinator, who is responsible for the local management and localisation of the project resources and activities.