The Chief Analytics Officer at JPMorgan Chase, Derek Waldron, suggests that “digital employees” is a useful model for those of us in business thinking about how AI tools will be used. Meanwhile, Bank of New York Mellon now employs dozens of AI “digital employees” that have company logins so they can work alongside its human staff.
Digital Employees Are Here
In James Boyle’s fascinating book “The Line”, he argues that a debate over the personhood of AI agents is inevitable and suggests that our existing legal frameworks will come under strain trying to resolve the complex social, political and legal issues that attend such a debate. He is right, of course, and recent developments must make us think more carefully about what these developments mean for the evolution of the enterprise.
A digital employee is an agent designed to autonomously perform tasks that are traditionally performed by trained people. Unlike traditional rule-based software that follows simple instructions, agents can learn, adapt to a complex environment and make decisions in ways similar to humans. As enterprises begin to deploy these kinds of systems, we will have to resolve issues such as:
- Do digital employees remain property (like software tools), or do they eventually become legal persons?
- If they are not legal persons, will we need some intermediate category such as “electronic agents” with defined but limited rights and liabilities?
- Will there be legal requirements for ethical training for digital employees, as there are for human workers?
The issues are not new. I remember reading Jerry Kaplan’s “Humans Need Not Apply” a few years ago, and it helped me to develop some of my own thoughts about personhood (including the ability to own assets) for AIs at a time when I’d been thinking about issues around reputation management (including the management of reputation in the context of punishing AIs for misbehaving). This naturally led me to ask myself whether what Kaplan calls “forged laborers” would need digital identities linked to legal personhood, or whether they would be the property (in some way I can’t think through, because I’m not a lawyer) of real-world legal entities of one form or another.
I rather thought then that they would in the future have to have some kind of digital identity. My reasoning was that interactions in the virtual world are interactions between virtual identities (personas) and, in my specific worldview, virtual identities need underlying digital identities. Whether the underlying digital identities of robots need to be bound to real-world legal entities is then a regulatory issue, not a technical one, and either possibility could work. As of now, I tend to think that we will have both solutions in place, but on different timescales.
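That layered model of identity can be made concrete with a small sketch. This is purely illustrative (the class and field names are my own invention, not any real identity standard): a virtual persona is backed by a digital identity, and whether that digital identity must be bound to a real-world legal entity is a policy switch, not a technical constraint.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class LegalEntity:
    """A real-world legal person (a company or an individual)."""
    name: str

@dataclass(frozen=True)
class DigitalIdentity:
    """The underlying digital identity; binding to a legal entity is optional."""
    identifier: str
    bound_to: Optional[LegalEntity] = None

@dataclass(frozen=True)
class VirtualPersona:
    """The persona an agent presents in the virtual world."""
    display_name: str
    identity: DigitalIdentity

def is_accountable(persona: VirtualPersona) -> bool:
    # Under a "must be bound" regulatory regime, only personas whose
    # digital identity is anchored to a legal entity carry accountability.
    return persona.identity.bound_to is not None

# Both regimes "work" technically: bound and unbound identities coexist.
bank = LegalEntity("Example Bank NA")
bound = VirtualPersona("code-cleaner-7", DigitalIdentity("did:example:123", bank))
unbound = VirtualPersona("free-agent", DigitalIdentity("did:example:456"))
```

The point of the sketch is simply that the binding is a single optional attribute: regulators could mandate it, forbid it, or leave it to the market, without changing the underlying architecture.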
BNY’s Chief Information Officer Leigh-Ann Russell says that their digital employees have their own logins and can directly access the same apps as human employees, so that they can work autonomously. Soon they will have their own email accounts and may even be able to communicate with colleagues in other ways, such as through Microsoft Teams (I wonder if they will be required to keep their cameras on during calls!). Each “persona” (a digital employee with a specific job, such as cleaning up code) can exist in a few dozen instances, and each instance is assigned to work narrowly within a particular team.
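The persona/instance split described above can be sketched in a few lines. To be clear, this is my own hypothetical rendering of the model, not BNY’s actual system: one persona defines the job, and each deployed instance gets its own login and a single-team assignment.

```python
from dataclasses import dataclass, field
from itertools import count

# Monotonic counter so every instance gets a distinct login suffix.
_instance_ids = count(1)

@dataclass
class EmployeeInstance:
    persona_job: str
    team: str      # each instance works narrowly within one team
    login: str     # its own credential, like a human employee's

@dataclass
class Persona:
    job: str                                   # e.g. "code-cleanup"
    instances: list = field(default_factory=list)

    def deploy(self, team: str) -> EmployeeInstance:
        """Spin up one instance of this persona, scoped to a single team."""
        n = next(_instance_ids)
        inst = EmployeeInstance(self.job, team, login=f"{self.job}-{n:03d}")
        self.instances.append(inst)
        return inst

# One persona, several narrowly-scoped instances (team names are invented).
code_cleaner = Persona(job="code-cleanup")
for team in ["payments", "treasury", "settlement"]:
    code_cleaner.deploy(team)
```

Notice that governance questions attach at the instance level (who is this login, which team does it serve?) while capability questions attach at the persona level, which is exactly the separation the article describes.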
These digital employees are property—tools operated by or on behalf of the business entity that deploys them—and the legal and economic relationships around output quality, liability and so on are all assigned to the company.
I asked Jo Levy, a partner at The Norton Law Firm in California and Chair of the Alliance for Responsible Data Collection (ARDC), who was speaking in Geneva this week at the UN’s “AI for Good” Global Summit, about this. She agreed that legal entity status for AI agents and robot-workers is inevitable but explained that the property vs. personhood issue is not really a dichotomy: AI agents can be “owned” and be persons under the law, just as corporations are owned by shareholders but can sue and be sued, own property and be held criminally liable for crimes.
She also pointed out that agency law already exists, and that hundreds of years of jurisprudence could readily be applied to digital employees. Since most jurisdictions already have well-developed laws governing agents acting on another’s behalf, it seems to me there is no obvious reason that these could not be extended or adapted to AI agents relatively quickly.
Agents as agents, so to speak, is clearly the right way to think about digital employees now, but as these agents (or, more likely, networks of agents) make more complex decisions, assigning liability, to choose an obvious hard case, might become even more complex. This is why, in the longer term, some kind of limited legal personhood, like the “electronic agent” status mentioned above, might simplify accountability (and insurance) so that these digital employees could become parties to litigation.
Digital Employees Have Rights
The 2017 European Parliament report on “Civil Law Rules on Robotics” made just such a suggestion about creating a specific legal status of “electronic persons” as a pragmatic legal tool to address the liability gaps that could arise when highly autonomous systems act without direct human control. The suggestion was not taken up at the time (the European Commission was not obliged to act on the recommendations) but I think it has some merit. Limited personhood for advanced AIs as a means of assigning liability, but not full legal personhood, seems a good next step. Levy pointed out that such a new legal status could be used to provide disclosures to others about the parameters of the AI agent’s behavior: a digital employee might be required to act in the best interests of the corporation, to choose an obvious example.
This isn’t only about AI, by the way. Similar approaches have been suggested for “smart contracts” as well: not because of any moral presumptions about the persistent script’s immortal soul, but for economic expedience when constructing distributed autonomous organisations (DAOs). Wyoming has already passed laws giving limited liability company (LLC)-like status to DAOs, and some people see this as a step towards more distinct recognition of AI-driven collectives.
(Both real corporations and DAOs might be required to pay registration fees for digital employees that are assigned legal identities. I had not thought about the idea of a digital payroll tax, but it seems a suggestion that deserves consideration.)
I cannot help but agree with Alon Jackson, CEO and Co-Founder of Astrix Security, when he says that companies at the forefront of the new era of agentic business will be those that understand that managing digital employees is not “a niche IT issue” but “a strategic board-level imperative”. Personally, I suspect that the Non-Human Resources Department will find life much easier than the Human Resources Department, because the digital bankers can be programmed not to get drunk, not to take stupid risks and not to engage in insider trading or interest-rate fixing!