Allow me to explain! Automatic and explainable interview training and feedback to support job seekers

Suppose you had access to affordable, scalable interview training that provides explainable, actionable feedback that is easy to incorporate into improving your performance. Would you be more motivated to prepare for job interviews?

Background and Motivation

Rising unemployment is a critical issue in many countries worldwide. The global economic downturn is one of the major causes, and the COVID-19 health crisis has worsened the problem. Unemployment has been linked to deficiencies in certain socio-emotional and interaction skills [1]. Prior research indicates that the mindset candidates bring into interviews can significantly affect selection outcomes: interview-specific self-efficacy, anxiety and motivation all have the potential to influence interview performance [2]. While interviews are one of the most critical steps out of unemployment, many employers are now moving them online [3][4]. Conducting job interviews online poses various obstacles to applicants’ job prospects [5]. Online interview training can therefore benefit job seekers’ interviewing skills and their chances on the job market.

While some existing studies address interview skills training [6], [7], they usually investigate only the effects of training on interview performance. Interview motivation could influence the extent to which interviewees are driven to succeed in their interviews. Given the widely held view that performance is a function of ability and motivation [8], the influence of interview training on motivation, and in turn on performance, warrants investigation. Furthermore, existing training methodologies are usually manual and involve a great deal of human effort. They can be costly and depend on the availability of trained practitioners. This approach does not scale: limited counsellor availability and scheduling conflicts make it difficult to get more (or any) training sessions. Automatic interview training and feedback with self-reflection mechanisms that require little human intervention can address these problems. Such a training platform can be built with techniques from machine learning, natural language processing and computer vision.

While some systems automatically assess different skills and provide feedback [9][10], that feedback is often unintuitive and difficult for participants to act on. Moreover, there is growing evidence that a lack of transparency in such complex systems can undermine users’ trust in AI decisions [11], which can also degrade the overall user experience. The study’s primary aim is therefore to provide candidates with explainable (and as a result actionable) feedback on interview performance, to investigate how automated feedback compares to human feedback in increasing interview-specific motivation and self-efficacy, and to explore the informational aspect of feedback through explainability [12].

Research Questions

We want to address the following research questions.

RQ1. How does automated feedback from a virtual agent compare to human feedback in increasing interview motivation?

RQ2. Does explained automatic feedback have a greater effect on interview motivation than unexplained feedback?


Study Design

To capture the effect of the online training platform, we would like to conduct a longitudinal study. We are interested in the impact of four months of exposure to the online training platform on job seekers’ interview motivation, self-efficacy, and overall interview performance. We will compare a control condition (job interview training with access to human feedback) against two experimental conditions using a video interview training platform, one receiving unexplained and one receiving explained feedback. All participants, in both control and experimental groups, will first give a pre-treatment mock interview to establish a baseline, along with a questionnaire measuring motivation and self-efficacy. Trained interviewers will assess these recorded interviews on various job attributes. A similar questionnaire and mock interview will be administered post-treatment and again rated by experts. Throughout the study, participants can practise interviews and receive feedback as often and whenever they like. All participants will also keep a diary for the study period to track their weekly job-interview-related tasks.

We will follow Prolific’s guidelines for longitudinal studies and conduct the study in multiple parts. The pre- and post-treatment questionnaires will be administered through two separate Prolific studies, while the interview training itself will be tracked in a custom-built online web app.

Sample Size

Our target population is job seekers who are actively looking for work. With a desired power of 0.90 and an effect size of 0.45 (based on the result in [13]), we calculated a required sample size of 65 using the F-test for a one-factor balanced ANOVA. Accounting for the inclusion criteria, incomplete data and drop-outs, we aim for 300 participants in total, 100 per condition.
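As a sanity check, the power analysis above can be reproduced with SciPy’s noncentral F distribution. This is only a sketch: it assumes alpha = .05 (not stated in the proposal) and Cohen’s f = 0.45 for k = 3 groups, and may differ by a few participants from tools such as G*Power.

```python
# Smallest balanced total N for a one-way ANOVA (k = 3 groups) reaching
# power 0.90 at alpha = .05 with Cohen's f = 0.45 (assumed parameters).
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, f_effect=0.45, k=3, alpha=0.05):
    df1, df2 = k - 1, n_total - k
    crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under the null
    lam = f_effect**2 * n_total              # noncentrality parameter
    return 1 - ncf.cdf(crit, df1, df2, lam)  # power = P(F > crit | lam)

n = 6
while anova_power(n) < 0.90:
    n += 3                                   # step by 3 to keep groups balanced
print(n)                                     # roughly 65 in total (~22 per group)
```

The loop steps in multiples of the group count so the design stays balanced; solving for n per group and multiplying by k would be equivalent.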

Benefits for the Study Participants

Along with monetary compensation, job seekers will receive free interview training for a period of four months. The actionable feedback from the training platform can help improve their job interview performance.

Study Costs

The pre- and post-questionnaires will take 15 minutes each, and interviews can take up to 25 minutes per session. We would like to offer £7.50/hr, as recommended, for the questionnaires and interview baselines, with bonus payments for the interviews depending on the group. For 300 participants, we would like to ask for £2,437.50 (base pay) + £4,500 (bonuses) + 33% service fee = £9,226.88.
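The budget arithmetic can be sketched in a few lines of Python. The 65 compensated minutes per participant is our reading of the task times above (it reproduces the 2,437.50 base figure), and the 33% fee is assumed to apply to the whole subtotal; these are this proposal’s assumptions, not Prolific’s published pricing.

```python
# Back-of-the-envelope budget check under the assumptions stated above.
participants = 300
base_minutes = 65          # questionnaires + interview baselines, per person
rate_per_hour = 7.50       # GBP/hr, the recommended rate
bonus_pool = 4500.00       # GBP, interview bonus payments

base_pay = participants * base_minutes * rate_per_hour / 60   # 2437.50
subtotal = base_pay + bonus_pool                              # 6937.50
total = subtotal * 1.33    # add the 33% service fee          # ~9226.88
print(f"base {base_pay:.2f} + bonus {bonus_pool:.2f} -> total {total:.2f}")
```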

Once we have written up all the hypotheses, the protocol, questionnaires and templates will be made available through the Open Science Framework (an example of our previous research can be found here). The custom automatic training and feedback platform itself will be made publicly available on GitHub. We have preregistered key aspects of our study here.

Thank you for your interest! Please consider voting!


[1] R. MacDonald, “Disconnected youth? Social exclusion, the ‘underclass’ and economic marginality,” Social Work and Society, 2009.

[2] A. Huffcutt, C. Van Iddekinge, and P. Roth, “Understanding applicant behavior in employment interviews: A theoretical model of interviewee performance,” Human Resource Management Review, vol. 21, pp. 353–367, 2011.

[3] R. Maurer, “Job interviews go virtual in response to COVID-19,” Mar. 2020.

[4] “Conducting interviews during the coronavirus pandemic.”

[5] A. Joshi, D. A. Bloom, A. Spencer, K. Gaetke-Udager, and R. H. Cohan, “Video interviewing: A review and recommendations for implementation in the era of COVID-19 and beyond,” Academic Radiology, 2020.

[6] C. S. Stocco, R. H. Thompson, J. M. Hart, and H. L. Soriano, “Improving the interview skills of college students using behavioral skills training,” Journal of Applied Behavior Analysis, vol. 50, no. 3, pp. 495–510, 2017.

[7] J. G. Hollandsworth, M. E. Dressel, and J. Stevens, “Use of behavioral versus traditional procedures for increasing job interview skills,” Journal of Counseling Psychology, vol. 24, no. 6, pp. 503–510, 1977.

[8] J. P. Campbell, “Modeling the performance prediction problem in industrial and organizational psychology,” 1990.

[9] M. E. Hoque, M. Courgeon, J.-C. Martin, B. Mutlu, and R. W. Picard, “MACH: My automated conversation coach,” in Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing. ACM, 2013, pp. 697–706.

[10] K. Anderson, E. André, T. Baur, S. Bernardini, M. Chollet, E. Chryssafidou, I. Damian, C. Ennis, A. Egges, P. Gebhard et al., “The TARDIS framework: Intelligent virtual agents for social coaching in job interviews,” in International Conference on Advances in Computer Entertainment Technology. Springer, 2013, pp. 476–491.

[11] D. Gunning, “Explainable artificial intelligence (XAI),” Defense Advanced Research Projects Agency (DARPA), nd Web, vol. 2, no. 2, 2017.

[12] E. L. Deci and R. M. Ryan, “The empirical exploration of intrinsic motivational processes,” in Advances in Experimental Social Psychology. Elsevier, 1980, vol. 13, pp. 39–80.

[13] “‘Let me explain!’: Exploring the potential of virtual agents in explainable AI interaction design,” Journal on Multimodal User Interfaces, vol. 15, no. 2, pp. 87–98, Jun. 2021.