If you're running a mentoring programme, or are involved in mentoring in any way, you're probably wondering whether AI mentoring, or even peer AI mentoring, is possible.
After all, the news constantly bombards us with stories about Artificial Intelligence (AI) and how it will affect many different areas of our lives.
Well, the quick answer is that AI mentoring lacks key ingredients to be effective. Even in its simplest form, AI produces 'best fit' answers that are most likely to be true, and it lacks the real-world experience needed to illustrate points and guide mentees. Almost as importantly, AI lacks the personal connections needed to make introductions within a workplace, specialism or industry, which can form a key part of the mentoring process. The same applies to peer mentoring, since so much of mentoring is relationship and experience based.
The PLD mentoring software is a form of AI according to the definition provided by the Stanford Encyclopedia of Philosophy: AI is "any kind of artificial computational system that shows intelligent behaviour, i.e. complex behaviour that is conducive to reaching goals". So will there be a time when AI actually delivers the mentoring itself?
Firstly, we will consider from an ethical perspective whether AI should deliver mentoring at all. In their article "The Ethics of Mentoring," Moberg and Velasquez (2004) set out a number of ethical responsibilities and obligations for mentors:
- Beneficence: to do good, specifically to provide knowledge, wisdom and support to the mentee
- Nonmaleficence: to avoid harming a mentee through the exercise of power
- Autonomy: to inform the mentee about the actions the mentor undertakes on their behalf and to ask for their consent
- Confidentiality: to keep information about the mentee confidential, to respect the mentee's right of privacy, and to give them control over their information
- Fairness: to avoid discrimination
- Loyalty: to avoid conflicts of interest
- Concern: to exercise a caring but fair partiality toward a mentee and their interests
So the question is: can AI demonstrate these ethical obligations? The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems has published a list of eight general principles for fair, transparent and human-centred AI. Their perspective addresses developers in the field of AI and names eight aspects to consider in the development of AI:
- Human Rights: AI shall be created and operated to respect, promote, and protect internationally recognised human rights.
- Well-being: AI creators shall adopt increased human well-being as a primary success criterion for development.
- Data Agency: AI creators shall empower individuals with the ability to access and securely share their data, to maintain people's capacity to have control over their identity.
- Effectiveness: AI creators and operators shall provide evidence of the effectiveness and fitness for purpose of AI.
- Transparency: The basis of a particular AI decision should always be discoverable.
- Accountability: AI shall be created and operated to provide an unambiguous rationale for all the decisions made.
- Awareness of Misuse: AI creators shall guard against all potential misuses and risks of AI in operation.
- Competence: AI creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
Although some of the language is different, the eight ethical principles for the development of AI mirror much of Moberg and Velasquez's ethical obligations for mentors. However, some elements are missing from the ethical principles for AI, namely loyalty and concern, both of which are truly human characteristics and a vital part of mentorship. In terms of concern, AI cannot show empathy to a mentee; it won't know when to push the mentee out of their comfort zone and when to take a gentler approach. AI is surely not capable of demonstrating such emotional intelligence.
Secondly, we will consider the skills which a mentor should have:
- Actively listening
- Encouraging
- Identifying goals and current reality
- Instructing/developing capabilities
- Providing corrective feedback
- Managing risks
- Opening doors
- Inspiring
- Building trust
AI may be able to demonstrate a number of these skills. However, as discussed above, it seems inconceivable that AI mentorship in its current form can demonstrate at least three of them: opening doors, inspiring and building trust. For a mentor to open doors for a mentee, they need to hold a range of relationships; AI doesn't hold relationships, it holds data. For a mentor to be inspiring, they need to have achieved what the mentee is looking to achieve, so by definition the mentor needs to be human. Finally, in terms of building trust, we may trust AI to undertake data-driven tasks, but do we trust AI to understand our goals, our motivations and our fears well enough to support our development and growth? That has to come from a human mentor, and from spending time with them to develop a strong relationship.
So, in answer to the question "can AI replace a mentor?", we say a firm "no". Humans still need humans.