NVIDIA-Certified Associate: Generative AI LLMs

Question 1 of 30
Core Machine Learning and AI Knowledge
In the context of Transformer-based Large Language Models, what is the primary purpose of Key-Value (KV) caching during inference?
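For context before answering, a minimal NumPy sketch of what KV caching avoids: during autoregressive decoding, only the new token's query, key, and value are projected at each step, while the keys and values of all earlier tokens are reused from a cache instead of being recomputed. The head dimension, projection matrices, and decode_step helper below are hypothetical illustrations for a single attention head, not any particular framework's API.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

d = 8  # hypothetical head dimension
rng = np.random.default_rng(0)

# Hypothetical fixed Q/K/V projection matrices for one attention head.
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x):
    """Attention output for one new token vector x of shape (d,).

    Only x's projections are computed here; K and V for all earlier
    positions come from the cache rather than being recomputed,
    which is the saving KV caching provides at inference time.
    """
    q = x @ W_q
    k_cache.append(x @ W_k)
    v_cache.append(x @ W_v)
    K = np.stack(k_cache)                    # (t, d): cached + new keys
    V = np.stack(v_cache)                    # (t, d): cached + new values
    scores = softmax(q @ K.T / np.sqrt(d))   # attend over the full prefix
    return scores @ V                        # output for the new token only

for _ in range(5):                           # simulate generating 5 tokens
    out = decode_step(rng.standard_normal(d))
print(out.shape)                             # (8,): one vector per step
```

Without the cache, each step would re-project and re-attend over the entire prefix, so the per-step cost grows with sequence length; with it, each step adds only one K/V pair, trading memory for compute.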