2020 AI Finance Year in Review

David Hall
8 min read · Jan 14, 2021


After another year of learning, challenge, and growth in the AI finance world, I am taking the time to recap some of the big trends and think about overall progress during a tumultuous year.

With overall IT investment growth slowing during the pandemic, COVID-19 has caused budgets to shift toward the bread and toilet paper of the tech world. 2020 investment has gone primarily into cloud infrastructure to enable work-from-home scenarios, higher online traffic, and increased security and compliance needs. However, AI has still played a large role during 2020 as customers use more and more AI to help IT operators predict and avoid outages and ensure resilience. We are still seeing investment behind the increasing benefits of AI blending into the background of every app. In an uncertain market, AI improves the service for customers, but more importantly, AI integration into products has HELPED PEOPLE during a time of need. Below I highlight some of those scenarios as well as some of the interesting trends I have noticed in Finance for AI.

Finally, on top of the product and human wins, it’s been a year of continued progress in AI research, which will have larger financial impacts into 2021 and beyond.

Here’s a recap of trends in AI finance throughout the year.

AI helps during COVID-19 times

First, vaccine makers were enabled and accelerated by machine learning (ML) and AI. Two of the top vaccine developers, Moderna and BioNTech (which partnered with Pfizer), cite their reliance on machine learning tools to model the virus and accelerate vaccine production. These companies are very AI and tech focused (Moderna has invested $100M in AI/ML tech), and that has enabled a fast development and deployment timeline for the vaccine. Investments in this space will only accelerate as results from integrating deep learning and bioscience continue to show tremendous human benefit.

Here are two good write-ups of AI’s impact on virus innovation:

A perfect example of the AI + Biotech trend is the success of AlphaFold, a deep learning system from Google’s DeepMind, which won the CASP protein-structure-prediction competition by predicting the shape of proteins with state-of-the-art accuracy. This could reduce the time and cost of protein and disease research. Hopefully, progress and investment in this field will be spurred on by the pandemic, and larger impacts can be made as AI + Biotech integration accelerates.

Second, there were a number of AI-powered apps and dashboards that enabled better information and prediction of the virus’s impact. These gave health organizations and local governments the data needed to make better decisions and save lives.

While 2020 was a devastating year, it’s meaningful to see the human benefit of AI in saving and improving lives by using data for good. Hopefully, the AI investments made in this emergency will yield benefits that accrue value over the long term.

AI fades away

Just as ML-based prediction and modeling has become a core tool at drug companies, AI integration into everyday tools is a continuing trend. A good measure of AI adoption and success is how little it sticks out. As the technology advances and integrates into every product, AI solutions will fade into the background, unnoticed.

This year, some notable examples include:

  1. Nearly all Google searches are now run through their transformer model (BERT), which improves the relevance of search results (Google Blog)
  2. TikTok’s recommendation algorithm draws you in and serves you more and more interesting content based on your behavior from the moment you open the app (here’s a really good podcast from A16Z on this)
  3. My favorite — PowerPoint slide designer, which suggests slide designs for you, has improved along with other ML-based features across Office, including next-word prediction, background blurring in Teams, and an improved Outlook search bar.

These are individual examples of a larger trend of AI and ML blending into the background. Financially, this is great because it is speeding up the rate of overall improvement and productivity. Having AI features as table stakes is also great because it will require a diffusion of AI talent throughout different industries (not just tech). Higher ed, automotive, and financial services are already seeing a large increase in demand for technological skills and AI talent (according to LinkedIn’s 2020 jobs report).

This trend will continue as AI features become more accessible to citizen developers and entry-level professionals. Therefore, there is still tremendous value in improving your tech skills via YouTube, edX, LinkedIn Learning, Coursera, and other free online sources. You do not need to become a fully certified engineer, but understanding these tools will make you better at the things you are passionate about and improve your ability to make an impact.

This Apple Watch commercial always makes me think of the “AI infusion” point.

Labeled data is out. Unsupervised is in.

Unlike a good quarantine holiday Zoom with family, which involves an increasing number of peppermint schnapps shots, AI researchers are looking for their models to take fewer and fewer “shots” this year. A significant trend this year has been the rise of unsupervised and “few-shot” learning. This is a methodology in which a model is trained on less labeled data (e.g., those CAPTCHA pictures where you need to mark which picture has a lamp post) and more unlabeled data (e.g., scraping all Wikipedia pages) that requires no human input. Newer AI models coming out of the large research labs are seeing benefits from using less labeled data and massive amounts of unlabeled data. The new models train themselves on large amounts of data and then need only one, two, or a few labeled examples (“shots”) to make a prediction on a new task. State-of-the-art accuracy and improved model efficiency have come out of this methodology as more (general) data wins out over less (but carefully labeled) data.
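To make the idea concrete, here is a minimal sketch (my addition, not from the research referenced in this post) of the general pattern using the open-source Hugging Face transformers library: a model pretrained on large, mostly unlabeled corpora classifies text into labels it was never explicitly trained on. The example sentence and candidate labels are made up for illustration.

```python
from transformers import pipeline

# A model pretrained on large, mostly unlabeled text corpora can classify new
# text into labels it was never explicitly trained on ("zero-shot"), so no
# hand-labeled training set is needed for this particular task.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "Moderna reported strong efficacy results from its vaccine trial.",
    candidate_labels=["healthcare", "finance", "sports"],  # made-up labels for illustration
)
print(result["labels"][0])  # highest-scoring label, e.g. "healthcare"
```

No one had to hand-label thousands of healthcare vs. finance vs. sports examples; the heavy lifting is done by the general pretraining, which is exactly the cost shift described above.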

This is great because it pushes the state of the art forward, but it also makes AI cheaper to use. Labeled data is expensive, so the data cost of unsupervised models is reduced, and while models are getting larger and more expensive to train, the few-shot approach should be more cost effective and cheaper to operate long-term. Also, these large models continue to get more accurate with more data, so there will be additional payout as they scale up.

In addition to cost effectiveness, the added benefit of these models is that they are more general. As the field shifts from training task-specific models toward more general ones, the economics scale better: the payout of one translation model covering 100 languages at 90% accuracy is better than training one language pair at a time to 95%+.

Examples of the research trend:

  • The best paper award at NeurIPS (a big AI conference) this year went to OpenAI’s GPT-3, a self-supervised model: “Language Models are Few-Shot Learners”.
  • Facebook’s M2M-100 “zero-shot” model, which translates directly between 100 different languages (write-up here); a minimal usage sketch follows below.
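Here is that usage sketch (my addition, not from Facebook’s write-up), assuming the publicly released m2m100_418M checkpoint on the Hugging Face hub. It illustrates the “one model, many languages” economics from the paragraph above.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# One set of weights covers translation between 100 languages; changing the
# target language is a parameter change, not a new model to train and host.
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"
encoded = tokenizer("AI is fading into the background of every app.", return_tensors="pt")

for target in ["fr", "de", "zh"]:  # same model, three different target languages
    generated = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id(target))
    print(target, tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

That is the cost-scale argument in practice: one model to train, serve, and maintain instead of thousands of language-pair-specific ones.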

Coolest thing I saw this year

Finally, one of these large unsupervised models, OpenAI’s GPT-3, generated one of the coolest practical applications of AI I have seen: self-writing code. This was presented at a Microsoft event this year, and there are broad financial implications to training an AI model that reduces the complexity of writing code. I think this type of feature could simplify and improve the developer experience for the people building the future.
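For a feel of what “self-writing code” looks like, here is a rough sketch of the general idea (my illustration, not the actual demo from the Microsoft event), assuming the GPT-3 Completion API as it existed at the time; the engine name, prompt, and parameters are assumptions for illustration only.

```python
import openai  # the 0.x-era OpenAI client; requires an API key and paid access

openai.api_key = "YOUR_API_KEY"  # placeholder

# Describe the desired behavior in plain English plus a function signature,
# and let the language model complete the implementation.
prompt = """# Python
# Return the n largest values in a list, sorted in descending order.
def n_largest(values, n):
"""

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine available at the time
    prompt=prompt,
    max_tokens=64,
    temperature=0,      # low randomness for code completion
    stop=["\n\n"],      # stop once the function body ends
)
print(prompt + response["choices"][0]["text"])
```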

And some fun

After a year of at-home workout playlists (Taylor Swift) and work-from-home mood music (heavy metal), there is an AI-based app that will analyze your Spotify history and tell you how bad it was. How Bad Is Your Spotify? (pudding.cool)

I scored 17/100. Is your music worse than mine?

Finally, key trends to follow for 2021

Models will get larger — this one is a given, but it deserves a call-out. Given the unsupervised revolution, improved GPU hardware, and investments by large research organizations, AI model size and accuracy will significantly increase in 2021. At the time of writing, the largest model is GPT-3 at 175B parameters (roughly a 10x increase vs. the largest models of 2019).

20/21 Vision: Vision systems continue to get more interesting — while natural language continues to be the key focus of large models, vision systems continue their slow march to the front of the line. Because of the huge investment in, and huge potential payout from, autonomous driving, there is a lot of money going into high-quality computer vision systems that recognize movement and avoid people and objects. This tech investment will spill over into a continued slew of startups (“X + computer vision”.ai) which will do cool and helpful things:

  • Recognize if people are not wearing masks (here)
  • Smart workout mirrors recognize your form and help with fitness (here)

You find out your boss is a chatbot — I’m not saying that we’ve passed the Turing test, but chatbots have gotten a lot better with the help of improvements in the natural language field (large models referenced above). So, if you haven’t seen your boss in months, and all they ever reply to you with on Microsoft Teams are basic greetings and “what’s the long-term strategy here?”… they’re a bot. Stay on the lookout.

Wishing you a safe and productive 2021. I will keep posting about topics and trends I find interesting, but let me know if there are any you would like a deeper dive into. Also, for a nostalgic look back, here is my 2019 summary with 2020 predictions.

Finance for AI Blog: 2019 Year in Review | LinkedIn


David Hall

Miami University Alum. Microsoft - Finance & Accounting.