VLM Run
VLM Run is a first-of-its-kind API dedicated to running Vision Language Models on documents, images, and video. We're building a stack from the bottom up for "visual" applications of language models, which we believe will make up more than 90% of inference needs in the next five years.
Open Jobs - 2
Archived Jobs - 6
Archived: Founding CV / ML Engineer
VLM Run
Date Archived: 04 July, 2025
Tags: CV, ML, AI, Infrastructure, VLMs
Location: Santa Clara, CA
Archived: Developer Relations
VLM Run
Date Archived: 04 July, 2025
Tags: Developer Relations, Remote, VLMs
Location: Remote
Archived: Member of Technical Staff, ML Systems
VLM Run
Date Archived: 04 May, 2025
Tags: Vision Language Models, LLMs, Temporal Models, Video Models, Model Training, Evaluation, Versioning, W&B, Hugging Face, Python, PyTorch, Pydantic, CUDA, torch.compile, GitHub CI, Docker, Conda, API Billing, Monitoring
Location: Hybrid
Archived: Member of Technical Staff, ML Systems, Developer Relations
VLM Run
Date Archived: 04 April, 2025
Tags: Vision Language Models, LLMs, Temporal Models, Video Models, Python, PyTorch, Pydantic, CUDA, torch.compile, GitHub CI, Docker, Conda
Location: Remote
Archived: Founding Engineer
VLM Run
Date Archived: 04 April, 2025
Tags: ML Systems, Vision Language Models, AI
Location: Unknown
Archived: Developer Relations
VLM Run
Date Archived: 04 April, 2025
Tags: Developer Relations, ML Systems, Vision Language Models, AI
Location: Unknown