Yale University
abhishek at cs.yale.edu
I build computer systems that balance machine programmability and efficiency via innovations in computer architecture, operating systems, compilers, and chip design. I am currently building these machines for data center servers and brain-computer interfaces.
My group has led the way in calling attention to the rising overheads of memory address translation and has pioneered optimizations to mitigate them. AMD has shipped over a billion Zen CPU cores using coalesced TLBs. NVIDIA has shipped millions of GPUs, and RISC-V vendors have shipped millions of CPU cores, with support for translation contiguity. The Linux kernel, running on billions of devices, integrates our large page migration code and supports folios, motivated by our translation contiguity work. Our work on memory tiering has influenced Meta's server deployments. This work, and more, is summarized in my book on virtual memory and in my appendix to the classic Hennessy & Patterson textbook.
My group is also at the forefront of imbuing brain interfaces with the computational capabilities needed to effectively treat neurological disorders and shed light on brain function. Through our HALO and SCALO systems, we are taping out low-power, flexible chips for brain interfaces. Check out my ASPLOS '23 keynote to learn more. I am also leading a CCC Visioning Workshop on this topic in April 2025.
I received the 2023 ACM SIGARCH Maurice Wilkes Award "for contributions to memory address translation used in widely available commercial microprocessors and operating systems". My research has been recognized with six Top Picks selections and two honorable mentions, a Best Paper Award at ISCA '23, a Distinguished Paper Award at ASPLOS '23, a visiting C.V. Starr Fellowship at the Princeton Neuroscience Institute, and more. My teaching and mentoring have been recognized with the Yale SEAS Ackerman Award.
Appendix L in "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson