tahnik@portfolio:~$ projects

A curated collection of MLOps, AI infrastructure, and open-source projects. From production inference engines to developer tools and cloud-native platforms — built with a focus on performance, scalability, and real-world impact.

$ cat ~/projects/selected.json
AI Engineering | Engineering Lead

SentinelOps - SRE CLI with Agentic Workflow and eBPF

A CLI-based AI SRE tool that combines eBPF telemetry with agent workflows to monitor, diagnose, and remediate issues in Kubernetes and cloud-native systems.
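The monitor → diagnose → remediate loop such a tool runs can be sketched in a few lines. In this illustration the eBPF event stream is stubbed as plain records, and every name and threshold is a hypothetical stand-in, not SentinelOps' actual design:

```python
# Sketch of a diagnose step over kernel telemetry. Real eBPF collection
# (ring buffers, per-event decoding) is replaced by plain dataclasses.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:          # stand-in for a decoded eBPF ring-buffer record
    kind: str         # e.g. "oom_kill", "tcp_retransmit"
    pod: str
    value: float

@dataclass(frozen=True)
class Finding:
    pod: str
    diagnosis: str
    remediation: str  # action an agent would propose, gated behind approval

def diagnose(events: list[Event]) -> list[Finding]:
    """Map raw telemetry events to findings with suggested remediations."""
    findings = []
    for ev in events:
        if ev.kind == "oom_kill":
            findings.append(Finding(ev.pod, "container OOM-killed",
                                    f"raise memory limit on {ev.pod}"))
        elif ev.kind == "tcp_retransmit" and ev.value > 0.05:
            findings.append(Finding(ev.pod, "high TCP retransmit rate",
                                    f"inspect network policy / node NIC for {ev.pod}"))
    return findings
```

An agent workflow would sit on top of this: it consumes findings, gathers more context, and either proposes or executes the remediation.
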

AI Engineering | Product Owner

PUKU CLI

A terminal-based AI coding agent for Poridhi learners—plan, edit, refactor, and ship code from the CLI, with built-in DevOps/SRE workflows and guided learning tasks.

AI Engineering | Product Engineer

PUKU Editor

PUKU Editor is an AI-powered fork of VS Code that accelerates coding with intelligent predictions, semantic understanding, and context-aware suggestions, guiding Poridhi.io learners in vibe coding and platform engineering.

AI Engineering | Product Owner

Tensorcode

Tensorcode (tensorcode.poridhi.io) is a hands-on learning platform for core AI/ML and performance engineering. It features a GPU-backed CUDA/Triton playground, interactive LeetCode-style exercises for AI/ML and GPU programming, and guided content for mastering foundational tools like PyTorch and NumPy, all packaged with interactive docs and game-like practice to build real implementation skill.

MLOps | Product Owner

Poridhi AI Studio

Cloud-native AI development and learning platform (ai.poridhi.io) enabling developers to train, fine-tune, and deploy models with GPU-accelerated workspaces, integrated notebooks, and one-click model serving.

GPU-accelerated AI workspaces · One-click model deployment · Integrated notebook environment

MLOps | Engineering Lead

MicroCell - MicroVMs for AI Agents

A lightweight MicroVM runtime provisioning isolated sandboxes on-demand for AI agent code execution — sub-second cold starts, secure isolation, streaming log output, and clean artifact export. Designed for safe tool-use in agentic workflows.

Sub-second cold starts · Secure MicroVM isolation · Streaming logs & artifact export

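The sandbox lifecycle this implies — provision, execute, stream logs, export artifacts, always tear down — can be sketched with the VM backend stubbed in-process. Every name here is illustrative, not MicroCell's actual interface:

```python
# Hypothetical sandbox lifecycle. A real backend would drive a MicroVM
# (Firecracker-style); here it is faked so the flow stays self-contained.
from contextlib import contextmanager
from typing import Iterator

class FakeMicroVM:
    """Stand-in for an isolated MicroVM guest."""
    def __init__(self) -> None:
        self.alive = True
        self.artifacts: dict[str, bytes] = {}

    def exec(self, code: str) -> Iterator[str]:
        # Yield log lines as the "guest" produces them (streaming output).
        for i, line in enumerate(code.strip().splitlines(), 1):
            yield f"[guest] step {i}: {line}"
        self.artifacts["result.txt"] = b"ok"

    def destroy(self) -> None:
        self.alive = False

@contextmanager
def sandbox() -> Iterator[FakeMicroVM]:
    vm = FakeMicroVM()   # sub-second provisioning in the real runtime
    try:
        yield vm
    finally:
        vm.destroy()     # teardown is unconditional: no leaked sandboxes

def run_tool_call(code: str) -> tuple[list[str], dict[str, bytes]]:
    """Run untrusted agent code in a sandbox; return logs and artifacts."""
    with sandbox() as vm:
        logs = list(vm.exec(code))          # stream (here: collect) log lines
        return logs, dict(vm.artifacts)     # clean artifact export
```

The context manager is the point of the design: the sandbox is destroyed on every exit path, so a crashing tool call cannot leave a VM behind.
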
AI Engineering | Engineering Lead

ServeLoop - Minimal-Scale LLM Inference Engine

A compact open-source LLM serving engine implementing continuous batching, paged KV-cache, and GPU scheduling optimization with mixed precision (BF16/FP8) support — built to study scheduler behavior, memory efficiency, and tail-latency trade-offs in inference workloads.

Continuous batching & paged KV-cache · GPU scheduling optimization layer · Mixed precision (BF16/FP8)

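The two core ideas — a paged KV-cache and continuous batching — fit in a small sketch. This assumes fixed-size blocks and fakes decoding as a per-sequence countdown; none of the names are ServeLoop's actual API:

```python
# Minimal paged KV-cache allocator plus a continuous-batching scheduler step.
from dataclasses import dataclass, field

BLOCK_SIZE = 4  # tokens per KV-cache block (illustrative)

class BlockAllocator:
    """Hands out fixed-size cache blocks from a bounded pool."""
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))

    def alloc(self) -> int:
        if not self.free:
            raise MemoryError("KV cache exhausted")
        return self.free.pop()

    def release(self, blocks: list[int]) -> None:
        self.free.extend(blocks)

@dataclass
class Sequence:
    sid: int
    tokens_left: int           # decode steps remaining (stand-in for sampling)
    length: int = 0            # tokens generated so far
    blocks: list[int] = field(default_factory=list)

def step(running: list[Sequence], waiting: list[Sequence],
         alloc: BlockAllocator, max_batch: int) -> list[int]:
    """One scheduler iteration: admit, decode one token each, retire finished."""
    while waiting and len(running) < max_batch:
        running.append(waiting.pop(0))      # continuous admission mid-flight
    finished = []
    for seq in running:
        if seq.length % BLOCK_SIZE == 0:    # current block full: page in a new one
            seq.blocks.append(alloc.alloc())
        seq.length += 1                     # "decode" one token
        seq.tokens_left -= 1
    for seq in [s for s in running if s.tokens_left == 0]:
        alloc.release(seq.blocks)           # blocks return to the pool immediately
        running.remove(seq)
        finished.append(seq.sid)
    return finished
```

Sequences never reserve their maximum length up front: blocks are allocated on demand and recycled the moment a sequence finishes, which is what keeps memory tight and batch occupancy high.
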
MLOps | Engineering Lead

FastParser - Efficient Indexing Engine

A high-performance context indexing and retrieval engine for large codebases — semantic chunking that respects code boundaries, embedding-based search, and intelligent context window packing for AI-assisted development tools.

AST-aware semantic chunking · Embedding-based code search · Context window packing
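Context window packing reduces to a budgeted selection problem. A minimal sketch, assuming chunks already scored by embedding search; the names and the crude token estimate are assumptions, not FastParser's real API:

```python
# Greedy context packing: relevance decides what goes into the window,
# original source order decides how it reads.
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    position: int   # original order in the file/codebase
    text: str
    score: float    # relevance from embedding-based search

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())

def pack_context(chunks: list[Chunk], budget: int) -> list[Chunk]:
    """Pick the highest-scoring chunks that fit, then restore source order."""
    picked, used = [], 0
    for chunk in sorted(chunks, key=lambda c: c.score, reverse=True):
        cost = estimate_tokens(chunk.text)
        if used + cost <= budget:
            picked.append(chunk)
            used += cost
    return sorted(picked, key=lambda c: c.position)
```

The final re-sort matters: a model reads packed code far better when fragments appear in the order they occur in the source, even though selection was driven purely by relevance.
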

© Tahnik Ahmed | 2026