Beyond big data with Graphics Processing Units




Many listeners have heard of NVIDIA and associate the company with fast graphics, typically for gamers.

Ian Buck is the vice president of Accelerated Computing at NVIDIA, and he joins host John Gilroy on this week’s Federal Tech Talk to discuss how speed impacts continuous integration and deployment.

Buck earned his Ph.D. from Stanford for research on new ways to speed up processing. During the discussion, you will hear him explain how the CPU has reached its limits: Moore’s law has reached the end of the road.

Ian Buck, VP of Accelerated Computing, NVIDIA

The solution to gaining more speed in analyzing big data is to take advantage of the power of graphics processing units (GPUs). Kind of hard to argue with Buck, especially since he studied under Dr. Pat Hanrahan of Pixar fame.

In the interview, Buck argues persuasively that this new approach will impact artificial intelligence (AI).

Unstructured data provided by millions of new sensors can only be handled with new approaches, especially ones that unite combinations of GPUs. Today, some will argue that the GPU has become the heart of AI and machine learning.
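As a rough illustration of the kind of data-parallel work Buck describes, and not something taken from the interview, the sketch below runs the same reduction over simulated sensor readings once on the CPU with NumPy and once on the GPU with CuPy. The library choice, array size and variable names are assumptions made for the example.

```python
# Minimal sketch: the same reduction on CPU (NumPy) and GPU (CuPy).
# Assumes the cupy package is installed and a CUDA-capable GPU is available.
import numpy as np
import cupy as cp

# Simulated "sensor" readings: 100 million float32 samples.
cpu_data = np.random.rand(100_000_000).astype(np.float32)

gpu_data = cp.asarray(cpu_data)      # copy the array into GPU memory

cpu_mean = cpu_data.mean()           # reduction runs on CPU cores
gpu_mean = float(gpu_data.mean())    # same reduction spread across thousands of GPU threads

print(cpu_mean, gpu_mean)
```

The point of the sketch is only that the operation is identical on both sides; the GPU version differs in how many arithmetic units attack the array at once, which is where the speedup on large, unstructured datasets comes from.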

For example, Buck talks about the speed of response an autonomous vehicle needs. Decisions must be made in thousandths of a second.

Buck further elaborates on speed’s impact on continuous integration and deployment.

