High above the New York City streets, Princeton alumni and guests heard from an expert panel on how artificial intelligence and alt data are playing an increasingly large role in fintech during Princeton Entrepreneurship Council’s TigerTalks in the City: AI, Alt Data and the FinTech Frontier on May 9, co-sponsored by Princeton’s Bendheim Center for Finance.
At IEX’s offices on the 58th floor of 3 World Trade Center, Hamid Biglari *87, Ben Connault *15, Kathleen DeRose ’83, Laura Kornhauser ’03, and moderator Margaret Holen *95 discussed a myriad of issues around this landscape. (The following has been condensed and edited for this article. Complete audio is available below.)
Holen, Lecturer in the Operations Research and Financial Engineering department at Princeton: Big data and AI are transforming sectors from retail to transportation. From a modeling perspective, what are the challenges that are unique to finance?
Connault, from the IEX Group Quantitative Research team: There have been great successes with machine learning and AI techniques in areas like computer vision, image processing, and game playing. Those are good reasons to be excited, and people want to bring those techniques to finance. But there are specific challenges to applying them in finance. One example is that the thing people most want to forecast is prices. Will the price go up? Will the price go down? There are very good theoretical reasons why we cannot do that. The most interesting reason that machine learning has challenges in finance is that markets are about humans.
Humans are special, humans are forward-looking, they have expectations about the future and they bring these expectations to decision-making. Humans are strategic when it comes to markets – they spend a lot of time thinking about what other people in the markets are going to do and try to take advantage of that. It is very challenging to account for how people react to incentives.
Holen: How much do bias and transparency impact how AI is being deployed?
Kornhauser, CEO and Co-Founder of “explainable AI” startup Stratyfy: These problems of bias and of not understanding what's going on in the algorithm are really huge and can lead to some pretty disastrous, and very costly, impacts on the bottom line for many folks out there. You don't need to look farther than the recent news out of nice, small technology companies like Facebook and Amazon and the huge hits they've taken recently for not understanding exactly how their algorithms work, and, as a result, getting biased results from those algorithms. And that's in cases where they're not even making financial predictions, where the stakes are even higher.
It's very hard to find a complete data set for the problem you are trying to solve. At the very least, the data you're training on is often biased by past human decisions. And even though algorithms were many times introduced to try to help cure these biases, human biases that we all know have been out there for a long, long time, if the data reflects that same human decision-making, chances are the data is also biased. That doesn't mean there's nothing that can be done to solve these problems; it just means we have to be extraordinarily thoughtful when using any sort of algorithmic approach, recognizing that the algorithm is, at best, going to be only as accurate as the data you train it on.
Holen: What are the challenges that you see in capturing the promise of machine learning?
Biglari, Managing Director and Global Co-Head, Central Liquidity Group at Point72 Asset Management: Let me try to narrow it to what is often thought of as the holy grail of hedge funds and investment management, which is the search for alpha: returns in excess of the market for a certain asset class. The promise of machine learning is that by analyzing millions of pieces of data, these techniques will uncover new signals that have heretofore not been identified, and therefore provide new sources of alpha generation. That’s much easier said than done.
Price movements in financial markets are based on changes in human behavior. And in that sense, financial markets make it very difficult to apply machine learning. One reason is that financial markets are self-referential, meaning that market participants change their behavior when they see models of market behavior, thus negating the whole explanatory power of that model. If you’ve studied physics, you’ve seen it in Heisenberg’s Uncertainty Principle, where the act of observation changes the measurement itself.
Holen: Across industry, academia, startups and more established firms in the finance sector, how do you see the trends?
DeRose, Clinical Associate Professor of Finance at NYU Stern School of Business and Director of the FinTech Initiative at the NYU Stern Fubon Center for Technology, Business, and Innovation: In startup land, this question is all about what to automate. And the things you want to automate are the things where, if you're wrong, nobody dies. And then where does finance fall on the predictability spectrum? We also like to automate things that are relatively predictable.
What’s interesting is that startups are now starting to combine lots of different types of data. I was talking to an insurance fintech entrepreneur yesterday who said, “We’re already looking at social data to make decisions about how to price insurance products. Well, we look at your Facebook profile. And we'd like to know if you own a gun.” And I said, “Ooh, what else? What else are you looking at?” He said, “Do you have a fast car, are you hang gliding, are you parachute jumping, etc., etc.” One thing to be cautious about, or interested in, is that financial decisions about how you might price an insurance policy or get a loan are no longer based solely on your FICO score or these old-fashioned traditional elements, but now also on social data and who you’re hanging out with. This bumps up against the limits of whether these are things we really, really want to do.
An audience member asked, “How do we keep the data process rigorous? Is it just a hiring process to get all these different perspectives on the table and think about these different biases?”
Holen: Looking at data and using it widely, and technologies and modeling tools that allow us to incorporate data into more types of decision-making, should in some ways be unquestionably instruments of progress. On the other hand, there’s no data set that captures everything in the world. It’s something Laura put very eloquently when she came to speak to my class (at ORFE): “Your data is not your world.” When Jeannie Tarkenton (’92, founder of Funding U) wants to lend to a student no other lender will touch, a student who has never had mom co-sign a loan, she has to make an assumption and infer from experience that doesn’t exist yet. She has to posit something from the world that isn’t yet captured in data.
It’s about thinking rigorously about data, and also understanding what its limits can be. You really need to respect your data and learn from it. You need to put a lot of thought into your process.
* * *
Princeton Entrepreneurship Council extends its thanks to Laurence Latimer *01 and staff at IEX for the generous use of IEX’s event space. PEC also thanks Bendheim Center for Finance for their support of TigerTalks in the City.
The complete audio of the panel discussion is below: