Leading a team or organization in the era of artificial intelligence doesn't just call for technical chops; it also demands business sense, empathy, foresight, and ethics.
The key is to understand what AI means, and that doesn't mean AI-powered solutions dropped into organizations with expectations of overnight miracles. AI is "not humans plus machines, but humans multiplied by machines, augmenting and assisting our capabilities across the existing and emergent tasks performed in our organizations," according to Paul McDonagh-Smith, senior lecturer at MIT, quoted in MIT Sloan Management Review.
People outside the AI technical bubble need a solid understanding of how AI works and what is required to make it responsible and productive. That requires training and a forward-looking culture. People need to be brought up to speed on the latest technologies that are, in one way or another, redefining their jobs and their organizations. "AI skills are not overly abundant, so organizations need programs to upskill and train employees on technical skills and technical decision-making," says McDonagh-Smith. "Culture is a big part of the equation: organizations will need to create silo-busting cross-functional teams, make failure permissible to encourage creativity, and encourage innovative ways to combine human and machine capabilities in complementary systems."
Industry leaders from across the spectrum echo the sentiment highlighted in the Management Review article. Along with business savvy and a culture of innovation, ethics also needs to be top of mind. This calls for more diverse leadership of AI initiatives: business strategists, users, technologists, and people from the humanities side.
Essentially, we are experiencing a "once-in-a-decade moment," and the way we lead people to embrace this moment will shape the AI revolution for years to come, says Mark Surman, president and executive director of Mozilla Foundation.
The people running AI currently tend to be "project and product managers running delivery, data engineers and scientists building data pipelines and models, or DevOps and software engineers building digital infrastructure and dashboards," says Mike Krause, an AI startup founder and formerly director of data science for Beyond Limits.
The risks of confining AI to the technology side "include compromising the AI tools' ability to adapt to new scenarios, alignment with the companies' objectives and goals, accuracy of responses due to data hallucinations as well as ethical concerns including privacy," says Pari Natarajan, CEO of Zinnov. "But the risks go beyond data hallucinations or a lack of nuanced understanding. Technologist-only led AI initiatives risk optimizing for the wrong objectives, failing to align with human values like empathy and compassion, lacking checks and balances, and exacerbating bias."
Everyone's goal should be to develop and assure nothing short of trustworthy AI, Surman urges. This means creating technology "that prioritizes human well-being and gives users an understanding of how it works. For example, imagine a personal assistant that kept all your personal data locally, getting better as you use it but keeping you private. Or a social media app whose algorithms you could tune to help you improve your mental health rather than erode it. These things are possible."
This means collaborating closely with end users, customers, and anyone else who will be relying on the output of these systems. "Build out AI in a way that puts humans in the driving seat," Surman says. "This is paramount for circumventing the risks we are already seeing, such as consumers receiving bad medical advice from a chatbot, or algorithms sending technology job openings to men but not women."
Bringing non-technical, humanities-oriented players into the AI management mix should be organic and encouraged by the culture, not forced by management edict. In other words, it's a balancing act, and for organizations with rigid, hierarchical cultures, it could be an uphill climb.
Users, both inside and outside the organization, need to be leading this effort. Dr. Bruce Lieberthal, vice president and chief innovation officer at Henry Schein, sees the risk of inadequate user collaboration within the healthcare sector. "Creators of technology for use by health care professionals and their patients sometimes work in a black box, making decisions that don't factor in the user adequately," he warns.
"AI is best developed collaboratively; software teams need to meet with users and those most affected by the product to stay honest," Lieberthal adds. "The software should constantly be vetted with the same people to make sure that what was imagined by users hits the mark when developed and deployed."
On the humanities side, there are efforts to "bring ethicists into the conversation at some level," says Krause. "But it's often not clear what authority or role they actually play." The challenge, he continues, is that "imposing a philosopher into a technical delivery team is not going to be well received or lead to a net positive outcome for the business or user."
Such humanities-oriented roles would need to be advisory rather than decision-making, Krause suggests.
Still, he points out, highly diverse AI teams will add safeguards that may spare organizations headaches or wasted investments. "Analyzing a problem from an ethical or philosophical perspective would bring a lot of questions a purely technical team would likely not have asked," he says. "More importantly, technical teams are not empowered, asked, or expected to think of the implications of what they're building; rather, they are expected to focus on and execute what they're tasked with. There is risk in not exploring unintended side effects of how AI is ultimately used."
The growing democratization of AI, thanks to offerings such as GPT and Google, will help open AI decision-making to more diverse leadership. "We must be wary of dilettantism; interdisciplinary and multidisciplinary collaboration is needed," says Natarajan. "People with backgrounds in ethics, cognition, sociology, decision theory, and other humanities or social sciences fields can provide vital diverse perspectives, along with subject matter experts to ground AI in practical, real-world context."
New roles that may emerge "signify a new maturity and caution in integrating AI, learning from problems like algorithmic bias," Natarajan observes. "They help retain human oversight and control as AI grows more capable and autonomous." Such roles may include chief AI ethics officers, AI ombudsmen, AI compliance officers, AI auditors, and AI UX designers or curators.