GroqAI Blog

Groq: The King of Performance for AI Inference!

AI chip startup Groq has launched the LPU (Language Processing Unit), a dedicated ASIC built specifically for large language model inference, and bills it as the fastest inference chip available. Groq was founded by former members of Google's original TPU team.