Nvidia banking on TensorRT to expand generative AI dominance

Illustration by Alex Castro / The Verge

Nvidia looks to build a bigger presence outside GPU sales as it puts its AI-specific software development kit into more applications.

Nvidia announced that it’s bringing its TensorRT-LLM SDK to Windows and to models like Stable Diffusion. The company said in a blog post that the goal is to make large language models (LLMs) and related tools run faster.

TensorRT speeds up inference, the process of running a pretrained model on new input and calculating probabilities to produce a result, like a newly generated Stable Diffusion image. With this software, Nvidia wants to play a bigger part in the inference side of generative AI.
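For a concrete picture of what inference means here, the minimal sketch below generates a single next token from a pretrained model by turning its raw scores into probabilities. It uses Hugging Face Transformers and GPT-2 purely as stand-ins (neither is named in the article); toolkits like TensorRT aim to make this same probability-calculation step run faster on Nvidia hardware.

```python
# Rough illustration of inference: a pretrained model scores possible outputs,
# and we turn those scores into probabilities to pick a result.
# GPT-2 and Transformers are stand-ins here, not tools named in the article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small pretrained LLM used only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Generative AI is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # raw scores from the pretrained weights
probs = torch.softmax(logits[0, -1], dim=-1)     # probabilities for the next token
next_token_id = int(torch.argmax(probs))         # pick the most likely continuation
print(tokenizer.decode([next_token_id]))
```

Inference engines speed up the forward pass that produces those scores, which is where most of the compute in this loop goes.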

Its TensorRT-LLM breaks down LLMs and lets them run faster on Nvidia’s H100 GPUs. It works with LLMs like...

