AI Needs Resource Efficiency



As we build out AI infrastructure and applications, we need resource efficiency; continuously buying more horsepower cannot go on forever. This episode of the Tech Field Day podcast features Pete Welcher, Gina Rosenthal, Andy Banta, and Alastair Cooke hoping for a more efficient AI future. Large language models are trained on massive farms of GPUs and massive amounts of Internet data, so we expect to use large GPU farms and unstructured data to run those LLMs. Those large farms have led to GPU scarcity, and now to RAM price increases, that are impeding businesses building their own large AI infrastructure. Task-specific AIs, built on smaller and more efficient models, should be the future of Agentic AI and of AI embedded in applications. More efficient and targeted AI may be the only way to get business value from the investment, especially in resource-constrained edge environments. Does every AI problem need a twenty-billion-parameter model? More mature use of LLMs and AI will focus on reducing the cost of delivering inference to applications, your staff, and your customers.

Panelists

Alastair Cooke is a Tech Field Day event lead at The Futurum Group, specializing in Cloud, DevOps, and Edge.

Andy Banta – Storage Janitor and seasoned technology professional

Gina Rosenthal – Product Marketing leader who knows how to turn complex technology into stories that inspire and connect.

Pete Welcher – early CCIE with broad knowledge of IT topics
