vlmrun
dinesh-vlmrun
4 followers · 1 following
AI & ML interests
None yet
Recent Activity
Replied to spillai's post 4 days ago
mm-ctx — fast, multimodal context for agents.

LLM-based agents handle text incredibly well, but images, videos, or PDFs with visual content are hard to interpret. mm-ctx gives your CLI agent multi-modal skills.

Try it interactively in Spaces: https://huggingface.co/spaces/vlm-run/mm-ctx
Readme: https://vlm-run.github.io/mm/
PyPI: https://pypi.org/project/mm-ctx
SKILL.md: https://github.com/vlm-run/skills/blob/main/skills/mm-cli-skill/SKILL.md

mm-ctx is meant to feel familiar: the UNIX tools we already love (find/cat/grep/wc), rebuilt for file types LLMs can't read natively and designed to work with agents via the CLI.

- mm grep "invoice #1234" ~/Downloads searches across PDFs and returns line-numbered matches
- mm cat <document>.pdf returns a metadata description of the file
- mm cat <photo>.jpg returns a caption of the photo
- mm cat <video>.mp4 returns a caption of the video

A few things we obsessed over:

- Speed: Rust core for the hot paths
- Local-first, BYO model: uses any OpenAI-compatible endpoint (Ollama, vLLM/SGLang, LMStudio) with any multimodal LLM (Gemma4, Qwen3.5, GLM-4.6V)
- Composable: stdin + structured outputs
- Drops into any agent via mm-cli-skills: Claude Code, Codex, Gemini CLI, OpenClaw

We'd love to hear your feedback! Especially on the CLI and what file types and workflows you would like to see next.
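The "BYO model" point above hinges on the OpenAI-compatible chat-completions format that Ollama, vLLM, and SGLang all expose. As a minimal sketch, here is the kind of request payload such a tool has to construct to get an image captioned by a local multimodal model — the model name, prompt text, and placeholder image bytes are illustrative assumptions, not mm-ctx internals:

```python
import base64
import json


def build_caption_request(image_bytes: bytes, model: str = "qwen2.5-vl") -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload asking a
    multimodal model to caption an image (the shape a local endpoint
    such as Ollama or vLLM accepts). Model name is a placeholder."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    # Images travel inline as a base64 data URL in the
                    # image_url content part of the user message.
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }


# Placeholder bytes stand in for a real JPEG; POST the result to the
# endpoint's /v1/chat/completions route to get a caption back.
payload = build_caption_request(b"\xff\xd8\xff")
print(json.dumps(payload)[:60])
```

Keeping the request in this standard shape is what lets the endpoint be swapped freely between local servers without changing the tool.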
Reacted to spillai's post with 🔥 4 days ago
Authored a paper 6 months ago
Reconstructing Animatable Categories from Videos
Organizations
dinesh-vlmrun's models
None public yet