

https://estadosia.files.wordpress.com/2020/08/the-deep-learning-revolution-and-its-implications-for-computer-architecture-and-chip-design.jpg

🔘 Paper page: arxiv.org/abs/1911.05289

Abstract

The past decade has seen a remarkable series of advances in machine learning, and in particular deep learning approaches based on artificial neural networks, to improve our abilities to build more accurate systems across a broad range of areas, including computer vision, speech recognition, language translation, and natural language understanding tasks. This paper is a companion paper to a keynote talk at the 2020 International Solid-State Circuits Conference (ISSCC) discussing some of the advances in machine learning, and their implications for the kinds of computational devices we need to build, especially in the post-Moore’s Law era. It also discusses some of the ways that machine learning may be able to help with some aspects of the circuit design process. Finally, it provides a sketch of at least one interesting direction towards much larger-scale multi-task models that are sparsely activated and employ much more dynamic, example- and task-based routing than the machine learning models of today.
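To make the last idea concrete: "sparsely activated" with "example-based routing" means each input is sent to only a few of a model's many expert sub-networks, so compute per example stays small even as total capacity grows. The sketch below is purely illustrative (it is not code from the paper); the layer sizes, the linear "experts", and the `sparse_layer` routing function are all assumptions chosen for brevity, in the spirit of top-k mixture-of-experts gating.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
n_experts, d_in, d_out, k = 4, 8, 8, 2

# Each "expert" here is just a small linear map; real experts are sub-networks.
experts = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(n_experts)]
gate_w = rng.standard_normal((d_in, n_experts)) * 0.1  # a learned router in practice

def sparse_layer(x):
    """Route one example x (shape [d_in]) through only its top-k experts."""
    logits = x @ gate_w
    topk = np.argsort(logits)[-k:]          # indices of the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                # softmax over just the selected experts
    # Only the chosen experts run -- the other n_experts - k stay idle for this input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, topk))

y = sparse_layer(rng.standard_normal(d_in))
```

Because the routing decision depends on the input itself, different examples (or tasks) can exercise different experts, which is the kind of dynamic, per-example pathway the abstract points toward.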


Author

Jeffrey Dean is a Google Senior Fellow in the Research Group, where he leads the Google Brain project. His areas of interest include large-scale distributed systems, performance monitoring, compression techniques, information retrieval, application of machine learning to search and other related problems, microprocessor architecture, compiler optimizations, and development of new products that organize existing information in new and interesting ways. (Source: research.google/people/jeff/)