tahnik@portfolio:~$ projects
A curated collection of MLOps, AI infrastructure, and open-source projects. From production inference engines to developer tools and cloud-native platforms — built with a focus on performance, scalability, and real-world impact.
SentinelOps - SRE CLI with Agentic Workflow and eBPF
A CLI-based AI SRE tool that combines eBPF telemetry with agent workflows to monitor, diagnose, and remediate issues in Kubernetes and cloud-native systems.
PUKU CLI
A terminal-based AI coding agent for Poridhi learners—plan, edit, refactor, and ship code from the CLI, with built-in DevOps/SRE workflows and guided learning tasks.
PUKU Editor
PUKU Editor is an AI-powered fork of VS Code that accelerates coding with intelligent predictions, semantic understanding, and context-aware suggestions, guiding Poridhi.io learners through vibe coding and platform engineering.
Tensorcode
Tensorcode (tensorcode.poridhi.io) is a hands-on learning platform for core AI/ML and performance engineering. It features a GPU-backed CUDA/Triton playground, interactive exercises (LeetCode-style problems for AI/ML and GPU programming), and guided content for mastering foundational tools like PyTorch and NumPy, all packaged with interactive docs and game-like practice to build real implementation skill.
Poridhi AI Studio
Cloud-native AI development and learning platform (ai.poridhi.io) enabling developers to train, fine-tune, and deploy models with GPU-accelerated workspaces, integrated notebooks, and one-click model serving.
MicroCell - MicroVMs for AI Agents
A lightweight MicroVM runtime provisioning isolated sandboxes on-demand for AI agent code execution — sub-second cold starts, secure isolation, streaming log output, and clean artifact export. Designed for safe tool-use in agentic workflows.
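The execute-and-export pattern looks roughly like the sketch below. This uses a plain subprocess in a scratch directory purely for illustration; it does not reproduce MicroCell's MicroVM isolation, and the function name is hypothetical.

```python
# Minimal sketch of the run-code, capture-logs, export-artifacts pattern.
# A subprocess is NOT a security boundary; MicroCell's isolation comes
# from MicroVMs, which this illustration does not attempt to replicate.
import subprocess
import sys
import tempfile
from pathlib import Path


def run_sandboxed(code: str, timeout_s: float = 5.0) -> dict:
    """Run agent-generated code in a scratch dir; return logs and artifacts."""
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "task.py"
        script.write_text(code)
        proc = subprocess.run(
            [sys.executable, str(script)],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        # "Artifact export": collect files the code wrote, minus the script.
        artifacts = {
            p.name: p.read_text()
            for p in Path(workdir).iterdir()
            if p.is_file() and p.name != "task.py"
        }
        return {
            "stdout": proc.stdout,
            "returncode": proc.returncode,
            "artifacts": artifacts,
        }
```

The same interface shape (submit code, stream logs, collect artifacts, tear down the workspace) carries over when the execution backend is a MicroVM instead of a subprocess.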
ServeLoop - Minimal-Scale LLM Inference Engine
A compact open-source LLM serving engine implementing continuous batching, paged KV-cache, and GPU scheduling optimization with mixed precision (BF16/FP8) support — built to study scheduler behavior, memory efficiency, and tail-latency trade-offs in inference workloads.
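The core idea of continuous batching can be shown in a few lines: at every decode step, finished sequences leave the batch and queued requests join, instead of the whole batch draining before new work is admitted. The scheduler below is a toy with a fake decode step, not ServeLoop's real implementation.

```python
# Toy continuous-batching scheduler. With max_batch=2 and requests needing
# 1, 3, and 2 tokens, slots are refilled as sequences finish, so all three
# complete in 3 decode steps (a static batch would take 5).
from collections import deque
from dataclasses import dataclass, field


@dataclass
class Request:
    rid: str
    tokens_needed: int
    generated: int = 0
    output: list = field(default_factory=list)


def serve(requests, max_batch=2):
    queue = deque(requests)
    batch, done, step = [], [], 0
    while queue or batch:
        # Admit queued requests whenever a slot frees up (continuous batching).
        while queue and len(batch) < max_batch:
            batch.append(queue.popleft())
        for req in batch:
            req.output.append(f"t{step}")  # stand-in for one decode step
            req.generated += 1
        step += 1
        done.extend(r for r in batch if r.generated >= r.tokens_needed)
        batch = [r for r in batch if r.generated < r.tokens_needed]
    return done, step
```

A real engine layers paged KV-cache allocation and preemption on top of this loop, which is exactly where the memory-efficiency and tail-latency trade-offs show up.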
FastParser - Efficient Indexing Engine
A high-performance context indexing and retrieval engine for large codebases — semantic chunking that respects code boundaries, embedding-based search, and intelligent context window packing for AI-assisted development tools.
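Chunking that respects code boundaries can be sketched with Python's `ast` module: split a file at top-level function and class definitions instead of fixed-size windows that cut definitions in half. This is an illustration only, not FastParser's actual chunker, which also handles embedding-based search and window packing.

```python
# Boundary-aware chunking sketch: each top-level function or class in a
# Python source file becomes one chunk, keeping definitions intact.
import ast


def semantic_chunks(source: str) -> list[str]:
    lines = source.splitlines()
    chunks = []
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno are 1-based; slice out the whole definition.
            chunks.append("\n".join(lines[node.lineno - 1 : node.end_lineno]))
    return chunks
```

Chunks cut at these boundaries embed more coherently than fixed windows, since each one is a complete semantic unit rather than an arbitrary slice of text.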