Machine Learning, Human Investing
The future, by the time it arrives, is seldom as compelling as when first envisioned. Artificial intelligence (AI), especially as a tool for investment, is advancing at remarkable speed, even as some would-be adopters hesitate. Part of that reluctance stems from cybersecurity concerns and from uncertainty about whether autonomous systems create greater data risk than current systems do. Likewise, a fund that generates alpha consistently sounds good to clients, but if it is AI-based, the “black box” nature of its decision making is likely to cause some investor discomfort.
The London-based hedge fund Man Group Plc is among the most prominent early adopters of AI and machine learning, yet its managers have fretted openly that their engineers can’t fully grasp or explain all those profitable trades the software has generated for one of Man’s largest funds.
Either way, as AI reshapes fund management and other areas of the capital markets, its appeal is broad; the trick is not to lose sight of AI’s transformative power over the long term.
This situation strikes a chord with Pippa Malmgren, an economist and
robotics-industry entrepreneur whose career has connected her closely to
scientific communities, both as a U.S. government policy analyst and
in her business forays.
“I go back 60 years to C.P. Snow’s famous
essay ‘The Two Cultures,’ and the dichotomy between science and humanities that
Snow warned about,” says Malmgren. “Academia was divided into people who could
quote Shakespeare and people who understood the Second Law of Thermodynamics,
but the two sides couldn’t talk to each other.”
The scientific specialization she refers to
now extends to financial investing, with data-crunching computers empowered to
steer vast sums of money from one asset to another. The computers, however,
don’t have thousands of pension checks to send out each month, or lease
payments due on their G450s, so they’re off the hook when valuations tank.
Illustrating a partial solution, Malmgren
cites a friend who was hired by NASA with the job title of Chief Storyteller
and given the mission to make clear how scientific advancement leads to
progress for society. “The people at NASA realized it was no good if you can’t
explain it to non-science people,” Malmgren says. “We have an intense need for
storytellers, because that’s how we’ll continue the kind of human inventiveness
that spurs hope and imagination.” This is a pivotal moment, in her view, for anyone
who works with institutional investors and high-net-worth individuals – one
that calls for lots of non-financial skills.
“I’ve spent my whole life dealing with
asset managers, and they are beginning to realize how skillful they’ll need to
be as they interface with human beings in the AI age,” says Malmgren. It’s her
belief that the investor-manager relationship will become progressively more
layered. “They’ll want a multifaceted relationship with you,” she predicts.
“They’ll say, ‘Here’s the money, and of course you have to perform, but along
with that, what else will come out of this relationship? What will the fund
manager teach me? What networks will I gain access to through knowing a fund manager?’ Some of the answers will be AI-led and some will be human-led.”
Chris Duggan, Vice President, Dart
Enterprises, and Director, Kenneth B. Dart Foundation, agrees that human
managers will want to double down on the functions that can’t be performed by
some machine plugged into a wall. “There will always be a role for humans in
the investment management industry,” says Duggan. “Instead of spending
countless hours pulling data and building financial models, human traders and
analysts can focus their energy on more value-added activities, like meeting
with investors and building the business.”
Duggan believes that AI’s effect on the
investment game is already extensive, including its disruption of an old
barrier. “Where before, the giant firms could just outmuscle smaller players,
today the playing field has leveled,” Duggan asserts. “Any firm with a computer
and a sophisticated algorithm can compete.”
Especially regarding alternative assets, it is worth pondering the value of patience and discipline in money management going forward, compared with pre-AI times. Does non-human decision making about how best to deploy capital account for those qualities? Anthony Cowell, Head of
Alternative Investments for KPMG in the Cayman Islands,
appreciates the machine version of prudence and patience.
“Discipline will be built into machines,”
explains Cowell. “Bots are a first phase, but with time, what we’ve seen and
will continue to see is deep learning generating its own version of discipline.
A very nuanced balance of restraint and risk will be part of machine learning’s
contribution.” Going further, he sketches a machine hierarchy of sorts,
envisioning “the machine that will keep its head when around it all other
machines are losing theirs, so to speak, in events like flash crashes or
momentum trading that keeps accelerating.”
Naturally, that type of breakdown will be
traceable to human error in a machine’s strategic architecture, according to
many AI experts. For Cowell, it’s no stretch to imagine how “a machine will
copy a machine that will copy a machine, so that all arbitrage is suddenly gone
from a market, at least at a certain given moment.”
Tom Chatfield, the British author and tech philosopher, believes AI could, and indeed should, bring far greater clarity to humans’ emotional life and behavior. As machines swiftly
improve their human-like powers of reason, logic, and judgment in the name of
utilitarian productivity, the distinctly human characteristics that won’t ever
be built into software—what Chatfield calls our “broiling biological pot of
emotion, sensation, bias, and belief”—will be exposed to a completely new
method and level of analysis.
Partly with the help of AI, he believes,
humans can “start talking far more richly about the qualities of our
relationships,” and study “how precisely our thoughts and feelings and biases
operate.” An intriguing paradox emerges from Chatfield’s analysis: When the
artificial form of intelligence is fully developed, humanity’s chance to
perfect the natural version may finally arrive.
This article was originally published on InstitutionalInvestor.com