“Will salaries that accrued to junior managers now accrue to a select few?”
Speculation about what AI will do to the workforce continues unabated. I am not qualified to talk about what AI will do to coding jobs and the like, but I can say something useful about the domains I am familiar with: valuation, fundamental analysis, scenario planning around future financial risks, integrating sustainability considerations into financial statements and valuation, and corporate governance.
Bob Eccles and I are in the middle of writing a book on integrating sustainability considerations into financial statements. The working title is “Integrated Reporting 2.0: Here’s How to Make Integrated Reporting Real.” We rely heavily on AI, both for writing the book and for giving users AI tools to work with. These tools are being used to access all of our relevant work from 2019 to the present. Not to boast, but this is a lot. The tools are very helpful in concisely pulling together all the work we have done. And that’s just for the first two chapters, which set the stage for the main content of the book, as I’ve discussed here and Bob has talked about here.
Bob has written a 100-plus-page list of prompts for Claude, used in a kind of “one fell swoop,” black-box analysis, whereas I have 25-odd pages of prompts for ChatGPT5, used in a step-by-step, more human/AI-engaged style. Claude is a great writer and suits Bob’s narrative style. ChatGPT5, at least in finance and accounting, is much more forensic and evidence-based and suits my style better. Bob uses ChatGPT5 more for editorial input. We also find Perplexity is brilliant at checking all our facts, figures, and citations. Chapters are going through multiple iterations involving multiple agents, and we will do the same once we have a complete draft of the book. In short, this is a complex multi-agent exercise driven by two human co-authors. The boundaries of who is writing what have become blurry.
We are discovering some amazing things along the way. For instance, we intend to self-publish this book via Kindle Publishing. Bob has Claude working as a highly motivated but underpaid editor who reads the chapters I write and then suggests places where I could liven up the prose a bit and get less technical. Claude is also great at suggesting how to seamlessly align my tone and voice with Bob’s, since he wrote most of the introductory chapters that set the stage for the technical work to follow.
In essence, without these 125-odd pages of prompts, writing the book would be impossible. But the analyst’s job is not done; it perhaps begins in earnest after the AI produces its output. To give you an example, many of the outputs involve scenario planning that converts KPIs related to carbon taxes or human capital turnover into dollar estimates of the impact on both future earnings and stock price.
GPT5 can run Monte Carlo simulations spanning 10,000 draws of some of these assumed parameters without breaking a sweat. But the human in the loop has to work out whether the resulting range of dollar estimates makes sense. Hence, skills for the future might center around two questions:
- Does the executive have the domain knowledge to write an appropriate set of prompts to get AI to do what she wants; and
- Once the AI delivers the output, does the executive have the “sense-making” skills to quality-check what the AI produces or misses?
The middle tasks of finding the source documents, hunting down the data points, inputting them into spreadsheets or running simulations with assumed distributions, compiling the results into a readable table, and writing a narrative around those results are now done by the machine in the blink of the proverbial eye.
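To make that middle task concrete, here is a minimal sketch in Python of the kind of simulation being handed off to the machine. Every parameter, distribution, and dollar figure below is an illustrative assumption for a hypothetical company, not a number from our book; the point is that the mechanical run is trivial, while judging whether the resulting range is plausible is the part that still needs a human.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 10_000  # number of Monte Carlo draws

# Illustrative assumptions (hypothetical company, not figures from the book):
# ~2 million tCO2e of annual emissions, an uncertain future carbon tax,
# and baseline pre-tax earnings of $500 million.
baseline_earnings = 500e6                     # dollars
emissions = rng.normal(2.0e6, 0.2e6, N)       # tCO2e, uncertain measurement/growth
carbon_tax = rng.triangular(20, 50, 130, N)   # $/tCO2e, wide policy uncertainty
pass_through = rng.uniform(0.2, 0.6, N)       # share of the tax passed on to customers

# Net carbon cost borne by the firm, and the resulting earnings distribution
tax_cost = emissions * carbon_tax * (1 - pass_through)
adjusted_earnings = baseline_earnings - tax_cost

# The machine compiles the distribution; the human judges whether it makes sense.
p5, p50, p95 = np.percentile(adjusted_earnings, [5, 50, 95]) / 1e6
print(f"Earnings after carbon tax ($m): P5={p5:.0f}, P50={p50:.0f}, P95={p95:.0f}")
```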
The larger question revolves around how an entry-level executive, just out of school, will suddenly develop the ability to write detailed prompts or be effective at quality control. Bob and I have a cumulative 80-odd years of experience in these areas. Despite that, we are finding it difficult to cope, at least with the quality-control aspect of the job. How will an entry-level MBA, with barely months of experience, be effective at quality control?
I suspect that entry-level MBAs in the future will simply be handed a set of prompts when they join investment banks and consulting firms. This is not that different from being handed template spreadsheets when you joined such institutions in the past. But the “sense-making” skills that were learned by osmosis, from all the hours of grinding through source documents, going down blind alleys for days, learning from a stray comment in the hallway, or making a connection that was not obvious before while sifting through mountains of data and facts, are potentially going to disappear with AI-generated outputs.
How should schools of the future be designed? Perhaps we need intense bootcamps to somehow teach entering MBAs sense-making skills. Can these skills even be taught in a classroom? Do we need a new type of classroom environment or a new type of internship to teach applied, domain-specific skills rapidly in a year or two? Should the MBA become a three-year degree: the first year to learn the basics and vocabulary of business, the second to simulate the grind of the real world, and the third to learn how to run quality control in one’s domain, combining real-world experience with the academic frameworks learned in the first year?
In essence, how does one teach the future entering workforce the tacit knowledge that workers like Bob and me learned through the hard grind and quasi-apprenticeships of the first five or seven years of our careers?
I’ll let Venky Nagar, my friend from the University of Michigan, have the last word: “I’ve always thought the manager’s job since time immemorial has been to convert an unpromptable problem, such as ‘make Starbucks profitable,’ into a series of promptable problems.

“In the past, fresh MBAs started with promptable problems because someone had to get those tasks done. Now AI can do those tasks. Even in the past, experience or skill with promptable problems was no guarantee that a person could transition to unpromptable problems. Recall the Peter Principle, which states that people in a hierarchy are promoted until they reach their level of incompetence. This happens because promotions are based on success in a previous role, not on the skills needed for the new one. In other words, promotion to a leadership role was always a crapshoot. AI is just making all this obvious. Salaries that accrued to junior managers will accrue to a select few.”
That is a sobering thought to end this piece with. Constructive comments welcome as always.
