Introducing transformative technology to an organization’s established processes requires careful consideration. The organization’s leadership needs to ask not only what types of new technology should be incorporated but also how the technology will be positioned and precisely integrated.
Leaders should also ask: what specific challenges do we anticipate solving with this technology? Based on those answers, a strategy must be chosen, parameters and policies for use set, staff trained, concerns and fears addressed, and, perhaps most importantly, a strong governance plan put into place. With technology as powerful as AI, each of these considerations becomes increasingly important to address in order to achieve the desired return on investment.
Should AI Be Part of Your Business Strategy?
A recent Forbes article discussing the benefits of hiring a chief artificial intelligence officer (CAIO) put it bluntly, saying that “AI is rapidly transforming businesses of all sizes and industries” and that “AI is so important that it could be disastrous not to recognize it as the core business strategy of every organization.” While this may well be the case in the long term, the pace and dynamic nature of AI today, its nascent regulations, and the growing pressure to adopt it mean business leaders face tough decisions.
Who should lead in this area, and are they able to ask the right questions?
It’s clear that a successful AI strategy requires input and collaboration from team members across the organization — including IT/information, business development, and legal. But who has their finger on the pulse of other stakeholders, including your investors, customers, and employees?
As CEO of a professional association serving corporate counsel, I’ve long been an advocate of embracing innovation and, more recently, generative AI. However, while legal teams champion the technological advances that increase productivity, they do so cautiously. This is supported by the 2024 ACC Chief Legal Officers Survey, which found that 67 percent of CLOs believe AI will have a “mostly” or “somewhat” positive impact on the profession.
Caution from the legal team is warranted, given the unpredictability of generative AI and its ever-evolving body of regulations. Yes, there is a risk of not embracing AI fully and falling behind competitors. But there is also the risk of embracing a constantly evolving technology that has the potential to cause significant reputational and operational damage if poorly adopted. Clearly defining the business problem AI is meant to help solve, and seeking input from across the organization to limit surprises, can help crystallize the return on investment (ROI).
Defining the Future of Our Teams and AI Integration
When we talk about strategy, AI governance must be at its center, both internally and at the board level. Developing the approach for the integration and automation of AI demands a holistic and strategic analysis from every corner of the organization, spearheaded by key leaders across various departments.
Among these roles, the Chief Human Resources Officer (CHRO) stands out as a central figure in facilitating this transition, alongside the Chief Technology Officer (CTO) or Chief Information Officer (CIO), among other department heads. The CHRO, with their deep-seated trust and credibility among the staff, can address the workforce’s apprehension about AI, fostering an environment of open communication and reassurance.
According to a December 2023 Gallup survey, 22 percent of workers based in the United States admit to the fear of becoming obsolete (FOBO). This has grown more in the last two years (concurrent with advancements like generative AI) than at any other time since 2017.
The concern is valid — the World Economic Forum predicts new technologies will disrupt 85 million jobs globally between 2020 and 2025. The institution also predicts the creation of 97 million new jobs. Further, executives surveyed in an IBM Institute for Business Value report examining AI’s effect on how we work and HR’s role estimate that 40 percent of their workforce will need to reskill as a result of implementing AI and automation over the next three years. Emphasizing the importance of communication and clearly establishing the extent and manner of AI implementation is critical.
By setting clear guidelines for AI use, you ensure precise implementation and help staff understand what to expect. Lead with the opportunities. We know some jobs may eventually be phased out, so be honest. Use words like “adaptation,” and offer options to upskill or reskill potentially affected staff.
Championing professional development demonstrates that staff are valued. This article from the HR Exchange Network offers additional ways that HR can help the people within our organizations transition. Staff fears, coupled with already high employee burnout rates, require organizations to take a measured approach to communicating and involving staff in any AI integration plans.
Finally, does the organization’s current leadership have the expertise and bandwidth needed to sufficiently chart a path forward? With the Chief AI Officer (CAIO) role emerging in corporations like Intel, eBay, UnitedHealth Group, and even the United Nations, AI’s impact is undeniable. It may require new perspectives and perhaps an entirely new organizational structure.
Ethical Considerations and Public Perception
As organizations eagerly integrate AI into their operations, acknowledging the ethical implications is critical. Ensuring ethical AI use means crafting and applying technologies that honor human rights, privacy, transparency, and fairness.
It’s essential that organizations involve their legal teams from the outset to establish ethical guidelines for AI use that reflect their core organizational values and the expectations of their stakeholders. The legal department, guided by the general counsel or chief legal officer, should play a pivotal role in crafting these guidelines. This includes conducting rigorous tests (that may include third-party audits) for bias in AI algorithms, ensuring data privacy, and being transparent with users about AI technologies’ applications.
The public’s perception of AI can dramatically affect an organization’s reputation and success. Hence, it’s crucial for organizations to engage in open dialogues with their stakeholders, clarifying the advantages of AI, addressing concerns, and illustrating how AI initiatives are in line with broader social values.
This engagement should build trust and demystify AI for the general public. Have plans been developed to ensure transparency about AI’s failures and to communicate lessons learned, reflecting the organization’s dedication to responsible AI use and ongoing improvement?
With the fervor around generative AI and pressure to act, organizational leaders must carefully consider the specific use cases, ROI, governance, implementation, and communications roles. While AI presents vast opportunities, it also carries a considerable risk of causing significant internal and external harm if mismanaged.
That’s why within our enterprises, it’s vital to draft the necessary policies, establish a governance plan — shaped by input from staff, organizational leaders, and, crucially, the legal department — and develop an AI strategy that addresses our own unique situations and helps propel us into the future.
Next, I will discuss how legal teams can serve as pivotal links between the C-suite and the board, ensuring AI implementation aligns with both strategic vision and regulatory compliance.