Liquid Foundation Model 40B running at high speed on Cerebras.
API Usage
```shell
curl -X POST https://api.vincony.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cerebras/lfm-40b",
    "messages": [
      { "role": "user", "content": "Hello, LFM 40B!" }
    ]
  }'
```
Replace YOUR_API_KEY with your Vincony API key. The endpoint is OpenAI-compatible, so it works with any OpenAI SDK.
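The same request can be made from Python. A minimal sketch, assuming only that the endpoint follows the OpenAI chat-completions schema shown in the curl example above (the URL, model name, and `YOUR_API_KEY` placeholder are taken directly from it):

```python
# Build the same chat-completions request in plain Python (stdlib only).
# The live call is left commented out so the sketch runs without a real key.
import json
import urllib.request

API_URL = "https://api.vincony.com/v1/chat/completions"

def build_request(api_key: str, model: str, user_message: str) -> urllib.request.Request:
    """Construct a POST request matching the curl example above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("YOUR_API_KEY", "cerebras/lfm-40b", "Hello, LFM 40B!")
# urllib.request.urlopen(req) would send the request; with a valid key the
# JSON response carries the reply under choices[0]["message"]["content"].
print(req.get_full_url())
```

Because the endpoint is OpenAI-compatible, the official OpenAI SDK also works by pointing its `base_url` at `https://api.vincony.com/v1` and passing `model="cerebras/lfm-40b"`.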
Compare with Another Model
Frequently Asked Questions
Try LFM 40B now
Start using LFM 40B instantly — 100 free credits, no credit card required. Access 801+ AI models through one platform.
More from Cerebras
LFM2-24B (Text): Liquid Foundation Model running on Cerebras hardware for fast inference.
Llama 3.1 70B (Cerebras) (Text): Ultra-fast Llama 3.1 70B inference on Cerebras silicon.
Llama 3.3 70B (Cerebras) (Text): High-speed Llama 3.3 70B on Cerebras wafer-scale engine.
Llama 3.2 3B (Cerebras) (Text): Ultra-fast tiny Llama 3.2 3B inference on Cerebras silicon.