The State of AI in 2021
We take a deep dive into the key issues of the annual State of AI report and the winners and challenges for the next year.
We know that technology tends to move in hype cycles, where certain topics and terminology are bandied around in press releases, news articles, research, and funding announcements. These include Blockchain, ICO, IoT, as well as AI, Machine Learning and Deep Learning. But what exists beneath the hype and how do we separate predictions from reality?
The State of AI report is a yearly analysis by UK investors Nathan Benaich and Ian Hogarth which offers a 177-slide deep dive into all things AI, based on invited contributions from well-known and up-and-coming companies and research groups. It’s a massive document, rich in rabbit-hole-worthy topics and definitely worth a read. We’re taking a look at some of the key findings.
Did the State of AI’s predictions come true in 2019?
2019 Predictions:
1. New natural language processing companies raise $100M in 12 months.
Yes: Gong.io ($200M), Chorus.ai ($45M), Ironscales ($23M), ComplyAdvantage ($50M), Rasa ($26M), HyperScience ($60M), ASAPP ($185M), Cresta ($21M), Eigen ($37M), K Health ($48M), Signal ($25M), and many more!
2. No autonomous driving company drives >15M miles in 2019.
Yes: Waymo (1.45M miles), Cruise (831k miles), Baidu (108k miles).
3. Privacy-preserving ML adopted by a F2000 company other than GAFAM (Google, Apple, Facebook, Amazon, Microsoft).
Yes: the MachinE Learning Ledger Orchestration for Drug DiscoverY (MELLODDY) research consortium, involving large pharmaceutical companies and startups including GlaxoSmithKline, Merck and Novartis.
4. Unis build de novo undergrad AI degrees.
Yes: CMU graduates first cohort of AI undergrads, Singapore’s SUTD launches undergrad degree in design and AI, NYU launches data science major, Abu Dhabi builds an AI university.
5. Google has a major quantum breakthrough and 5 new startups focused on quantum ML are formed.
Sort of: Google demonstrated quantum supremacy in October 2019! Many new quantum startups were launched in 2019 but only Cambridge Quantum, Rahko, Xanadu.ai, and QCWare are explicitly working on quantum ML.
6. Governance of AI becomes a key issue and one major AI company makes a substantial governance model change.
No: business as usual.
Research
AI research is less open than you think: only 15% of papers publish their code.
The report notes that academic groups are traditionally more likely to publish their code than industry groups such as OpenAI and DeepMind, which have released only a limited amount of theirs.
“For the biggest tech companies, their code is usually intertwined with proprietary scaling infrastructure that cannot be released. This points to a centralization of AI talent and computation as a huge problem.”
What tools and resources are researchers using?
Benaich and Hogarth used conference papers to determine the most-used frameworks. Of the papers that specify which framework they use, 75% cite PyTorch but not TensorFlow. However, TensorFlow, Caffe and Caffe2 remain the workhorses of production AI. PyTorch offers greater flexibility and a dynamic computational graph that makes experimentation easier. JAX is a Google framework that is more math-friendly and favoured for work outside of convolutional models and transformers.
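To make the contrast concrete, here is a minimal sketch, our own rather than the report’s, assuming both libraries are installed: PyTorch builds its graph dynamically as Python executes, while JAX composes pure functions with transformations such as grad.

```python
import torch
import jax

# PyTorch: eager execution with a dynamic graph, built line by line as
# Python runs, which makes it easy to inspect and debug mid-computation.
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x      # the graph for y is constructed on the fly
y.backward()            # autograd walks that graph to fill in x.grad
print(x.grad)           # tensor(8.)

# JAX: pure functions plus composable transforms such as grad and jit,
# which reads much closer to the underlying mathematics.
def f(x):
    return x ** 2 + 2 * x

df = jax.grad(f)        # grad returns a new function computing df/dx
print(df(3.0))          # 8.0
```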
They note that performance has traditionally been driven by bigger models, datasets and compute budgets, especially in big enterprises. For example, they estimate OpenAI’s 175B-parameter GPT-3 could have cost tens of millions to train, with experts suggesting a likely budget of around $10M.
However, Benaich and Hogarth also contend that for some use cases, like dialogue, small, data-efficient models can trump large ones. An example is PolyAI, a London-based conversational AI company, which open-sourced its ConveRT model (a pre-trained contextual re-ranker based on transformers). The model outperforms Google’s BERT in conversational applications, especially in low-data regimes, suggesting BERT is far from a silver bullet for all NLP tasks.
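As a rough illustration of what a contextual re-ranker does, the sketch below scores candidate responses against the conversational context and picks the best match. It uses a generic open-source sentence encoder purely for illustration; it is not PolyAI’s ConveRT model or its API.

```python
# Purely illustrative: ranking candidate responses against a dialogue
# context with a generic sentence encoder (the `sentence-transformers`
# package and the `all-MiniLM-L6-v2` checkpoint are our assumptions here,
# not components of ConveRT).
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

context = "My card was charged twice for the same order."
candidates = [
    "I can help you with that refund right away.",
    "Our opening hours are 9am to 5pm.",
    "Have you tried turning it off and on again?",
]

# Score every candidate response against the context and pick the best.
scores = util.cos_sim(encoder.encode(context), encoder.encode(candidates))[0]
print(candidates[int(scores.argmax())])
```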
A new generation of transformer language models are unlocking new NLP use-cases
GPT-3, T5 and BART are driving a drastic improvement in the performance of transformer models for text-to-text tasks like translation, summarization, text generation and text-to-code. For example, an unsupervised machine translation model trained on GitHub projects with 1,000 parallel functions can translate 90% of these functions from C++ to Java, and 57% of Python functions into C++, while successfully passing unit tests. No expert knowledge is required, but there is also no guarantee that the model didn’t simply memorize the functions.
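To give a sense of how accessible these text-to-text models have become, here is a minimal sketch, our example rather than one from the report, using the open-source Hugging Face Transformers pipeline API for summarization.

```python
# Minimal sketch using Hugging Face's open-source `transformers` library.
# pipeline("summarization") downloads a pre-trained BART-style checkpoint
# on first use; a T5 or other model can be swapped in via the `model=` arg.
from transformers import pipeline

summarizer = pipeline("summarization")

text = (
    "Biology is experiencing its AI moment, with over 21,000 papers in 2020 "
    "alone applying AI methods such as deep learning, NLP and computer vision "
    "to problems ranging from drug screening to mammography."
)

print(summarizer(text, max_length=30, min_length=5)[0]["summary_text"])
```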
Biology and health sciences are reaping the benefits
Biology is experiencing its “AI moment” with over 21,000 papers in 2020 alone involving AI methods (e.g. deep learning, NLP, computer vision). Papers published since 2019 account for 25% of all output since 2000.
Deep learning on cellular microscopy is accelerating biological discovery through drug screens. An AI-based system for mammography screening has also reduced false positives and false negatives in two large, clinically representative datasets from the US and UK.
The COVID Moonshot project
Drug discovery has gone open source to tackle COVID-19. Benaich and Hogarth assert this is a rare example of AI being actively used on a clearly defined problem that is part of the COVID-19 response. An international team of scientists is working pro bono, with no IP claims, on a project called COVID Moonshot to crowdsource a COVID antiviral.
PostEra’s synthesis technology allowed the consortium to design ‘recipes’ for 2,000 molecules in under 48 hours. Human chemists would have taken 3–4 weeks to achieve the same task.
Moonshot has received over 10,000 submissions from 365 contributors around the world, testing almost 900 compounds and identifying 3 lead series.
Federated learning is booming
Kicked off by Google in 2016, federated learning research has boomed: the number of papers mentioning federated learning grew almost 5x from 2018 to 2019, and more have been published in the first half of 2020 than in all of 2019.
OpenMined is an open-source community whose goal is to make the world more privacy-preserving by lowering the barrier to entry to private AI technologies. They extend major machine learning frameworks (PyTorch, TensorFlow, etc.) with privacy techniques such as federated learning, homomorphic encryption, secure multi-party computation and differential privacy. They also build integrations that allow training across cloud, mobile OSs such as Android and iOS, CPU, GPU, IoT and JavaScript (web) technologies. This year they demonstrated the first open-source federated learning platform for web, mobile, server and IoT.
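For readers unfamiliar with the underlying idea, the sketch below shows plain federated averaging with PyTorch. It is a toy illustration of the pattern only, not OpenMined’s actual API, which layers privacy and transport machinery on top of it.

```python
# Toy sketch of federated averaging in plain PyTorch. OpenMined's libraries
# wrap this basic pattern with privacy (e.g. differential privacy, secure
# aggregation) and transport layers; none of that is shown here.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, lr=0.1):
    """Each client trains a copy of the global model on its private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.MSELoss()(model(data), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    """The server averages client weights without ever seeing client data."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Toy setup: one shared linear model, three clients with private data.
global_model = nn.Linear(2, 1)
clients = [(torch.randn(8, 2), torch.randn(8, 1)) for _ in range(3)]

for _ in range(5):  # five rounds of federated training
    updates = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(federated_average(updates))
```

Only the averaged weights ever leave a client; the raw data stays local, which is the core privacy benefit federated learning offers.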
Improved validity in AI research
A review of 20,000 recent AI-based medical imaging studies found that less than 1% of them had sufficiently high-quality design and reporting. Studies suffer from a lack of external validation by independent research groups, poor generalizability to new datasets, and dubious data quality. This year new international guidelines were drafted for clinical trial protocols and reports that involve AI systems, in a bid to improve both quality and transparency. New requirements include:
- “State which version of the AI algorithm will be used.”
- “How was input data acquired and selected.”
- “How was poor quality or unavailable input data assessed and handled.”
- “Was there human-AI interaction in the handling of the input data, and what level of expertise was required?”
- “Describe the onsite and offsite requirements needed to integrate the AI intervention into the trial setting.”
- “How can the AI intervention and/or its code be accessed, including any restrictions to access or re-use.”
AI and Machine learning have gone mainstream
Benaich and Hogarth offer compelling evidence as to the mainstreaming of AI ideas and technologies. They cite the example of the rise of MLOps (DevOps for ML) as a signal of an industry shift from technology R&D (how to build models) to operations (how to run models).
RPA and computer vision are the most commonly deployed techniques in enterprise, while speech, natural language generation and physical robots are the least common.
The BERT language model has come out on top, having been used to upgrade query understanding in both Google Search and Microsoft’s Bing.
Rasa’s libraries and tools have clocked more than 2 million downloads and attracted 400+ open-source contributors.
Ethics on the front line
AI ethics remains a hot topic in research, media and social commentary. 50% of the world currently allows the use of facial recognition. Only three countries (Belgium, Luxembourg and Morocco) have partial bans that permit the technology only in specific cases.
There are two (known) examples of wrongful arrests due to erroneous use of facial recognition algorithms. In response to criticism, Microsoft deleted its database of 10 million faces. Meanwhile, Apple was asked by New York’s MTA to enable Face ID for passengers wearing masks, to avoid the spread of COVID-19. Legal cases in China and the UK are creating precedent for legally challenging the use of facial recognition technology by law enforcement and in public spaces.
Benaich and Hogarth also examine military investment in AI and the evolution of AI use cases in defence.
8 AI predictions for the next 12 months
Before closing this article, here are the 8 most significant predictions we should expect to see in 2021:
- The race to build larger language models continues and we see the first 10 trillion parameter model.
- Attention-based neural networks move from NLP to computer vision, achieving state-of-the-art results.
- A major corporate AI lab shuts down as its parent company changes strategy.
- In response to US DoD activity and investment in US-based military AI startups, a wave of Chinese and European defence-focused AI startups collectively raise over $100M in the next 12 months.
- One of the leading AI-first drug discovery startups (e.g. Recursion, Exscientia) either IPOs or is acquired for over $1B.
- DeepMind makes a major breakthrough in structural biology and drug discovery beyond AlphaFold.
- Facebook makes a major breakthrough in augmented and virtual reality with 3D computer vision.
- NVIDIA does not end up completing its acquisition of Arm.
You can read the original version of this article at Codemotion.com, where you will find more related content. https://www.codemotion.com/magazine/dev-hub/machine-learning-dev/state-of-ai-2021/