tahnik@portfolio:~$ project-detail

FastParser - Efficient Indexing Engine

A high-performance context indexing and retrieval engine for large codebases — semantic chunking that respects code boundaries, embedding-based search, and intelligent context window packing for AI-assisted development tools.

Project Category

MLOps

Efficient Indexing Engine is a retrieval system designed for the specific problem AI coding tools face at scale: given a large codebase (tens of thousands of files, millions of lines), how do you find and assemble the most relevant context for a given task — fast enough for interactive use, and precise enough that the LLM gets signal rather than noise?

Most naive approaches to code retrieval treat source files as flat text: split on token count, embed chunks, and do nearest-neighbor search. This works for small repos but breaks down at scale. Function boundaries get split mid-definition, import context gets separated from usage, and the retrieval ranking mixes structurally important code with incidental matches. The result is context windows filled with partially relevant fragments that waste tokens and confuse the model.

Semantic Chunking

The indexing engine uses Abstract Syntax Tree (AST)-aware chunking that respects code structure. Instead of splitting on arbitrary token boundaries, it parses source files and creates chunks aligned to meaningful units: functions, classes, methods, module-level blocks, and their associated documentation. This means a retrieved chunk is always a complete, self-contained piece of code — not a fragment that starts mid-function or cuts off before a return statement.
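The idea can be sketched with Python's standard `ast` module. This is a minimal illustration, not the engine's actual implementation; the sample source and the `chunk_source` helper are hypothetical, and a production chunker would also handle nested definitions and non-Python languages.

```python
import ast

# Hypothetical sample source, purely for illustration.
SOURCE = '''\
def greet(name):
    """Return a greeting."""
    return f"Hello, {name}!"

class Parser:
    def parse(self, text):
        return text.split()
'''

def chunk_source(source: str) -> list[dict]:
    """Split Python source into chunks aligned to top-level functions/classes."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # end_lineno spans the whole definition, so the chunk never starts
            # mid-function or cuts off before a return statement.
            text = "\n".join(lines[node.lineno - 1 : node.end_lineno])
            chunks.append({"name": node.name, "kind": type(node).__name__, "text": text})
    return chunks

chunks = chunk_source(SOURCE)
```

Each resulting chunk carries its own name and kind, which later stages can use for ranking and dependency resolution.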

The chunker handles cross-references by tracking import/export relationships and type dependencies, so when a function is retrieved, the engine knows which other definitions it depends on and can optionally pull those into context as well.
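One simple way to track such dependencies is to intersect the names an individual function references with the module's imported names. The sketch below is an assumption about how this could work, not the engine's code; it only covers top-level imports with simple (non-dotted) names.

```python
import ast

# Hypothetical sample source, purely for illustration.
SOURCE = '''\
import os
from json import loads

def read_config(path):
    return loads(os.environ.get(path, "{}"))

def helper():
    return 42
'''

def function_dependencies(source: str) -> dict[str, set[str]]:
    """Map each top-level function to the imported names it references."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            # use the local binding (asname) when the import is aliased
            imported.update(alias.asname or alias.name for alias in node.names)
    deps = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            used = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            deps[node.name] = used & imported
    return deps

deps = function_dependencies(SOURCE)
```

With this map, retrieving `read_config` tells the engine that `loads` and `os` should be considered for inclusion alongside it.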

Embedding & Search

Each chunk is embedded using a code-optimized embedding model that captures semantic similarity beyond lexical matching. The search layer supports hybrid retrieval: dense vector search for semantic queries ("find the authentication middleware") combined with sparse/keyword matching for exact identifiers ("where is validateJWT called?"). Results are ranked by a combination of semantic relevance, structural importance (e.g., exported functions rank higher than internal helpers), and recency of modification.
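A toy version of that scoring might blend cosine similarity over embeddings with exact-identifier overlap, then apply a structural boost. The weights (`alpha`, the `1.1` export boost), the tiny 2-dimensional vectors, and the chunk records are all illustrative assumptions, not the engine's actual parameters.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query_vec, query_terms, chunk, alpha=0.7):
    """Blend dense (semantic) similarity with sparse identifier overlap."""
    dense = cosine(query_vec, chunk["vec"])
    sparse = len(query_terms & chunk["identifiers"]) / len(query_terms) if query_terms else 0.0
    boost = 1.1 if chunk["exported"] else 1.0  # exported code ranks higher
    return (alpha * dense + (1 - alpha) * sparse) * boost

# Hypothetical chunk records with tiny 2-d "embeddings" for illustration.
chunks = [
    {"id": "validateJWT", "vec": [1.0, 0.0], "identifiers": {"validateJWT"}, "exported": True},
    {"id": "helper",      "vec": [0.9, 0.1], "identifiers": {"parseHeader"}, "exported": False},
]
query_vec, query_terms = [1.0, 0.0], {"validateJWT"}
ranked = sorted(chunks, key=lambda c: hybrid_score(query_vec, query_terms, c), reverse=True)
```

The exact-identifier term is what lets a query like "where is validateJWT called?" outrank semantically similar but lexically unrelated code.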

Context Window Packing

Retrieval alone isn't enough — the engine also handles context assembly. Given a token budget (e.g., 8K or 16K tokens for the context window), the packer selects and orders retrieved chunks to maximize relevance per token. This includes deduplication (avoiding near-identical chunks from similar functions), dependency resolution (including type definitions needed to understand retrieved code), and priority ordering (most relevant chunks first, with graceful degradation as the budget fills).
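The packing step above can be sketched as a greedy loop over a relevance-ranked list. This is a simplified assumption about the approach: the identifier-set "signature" is a crude stand-in for real near-duplicate detection, and the chunk records are invented for the example.

```python
def pack_context(ranked_chunks: list[dict], budget: int):
    """Greedily fill a token budget from a most-relevant-first chunk list."""
    packed, used, seen = [], 0, set()
    for chunk in ranked_chunks:
        signature = frozenset(chunk["identifiers"])  # crude near-duplicate key
        if signature in seen:
            continue  # deduplication: skip near-identical chunks
        if used + chunk["tokens"] > budget:
            continue  # graceful degradation: keep trying smaller chunks
        packed.append(chunk)
        used += chunk["tokens"]
        seen.add(signature)
    return packed, used

# Hypothetical ranked results, best first.
ranked = [
    {"name": "auth_middleware", "tokens": 600, "identifiers": {"auth", "jwt"}},
    {"name": "auth_copy",       "tokens": 580, "identifiers": {"auth", "jwt"}},  # near-duplicate
    {"name": "token_types",     "tokens": 300, "identifiers": {"Token"}},
    {"name": "big_module",      "tokens": 900, "identifiers": {"misc"}},
]
packed, used = pack_context(ranked, budget=1000)
```

Because the loop keeps scanning after a chunk fails to fit, a smaller lower-ranked chunk can still use the remaining budget rather than leaving it empty.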

Performance

The engine is optimized for interactive latency: indexing runs incrementally on file changes (no full re-index on every edit), and search returns results in single-digit milliseconds for repos with 100K+ chunks. The index is stored efficiently with quantized vectors and compressed metadata to minimize the memory footprint.
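As a rough illustration of vector quantization's memory savings, the sketch below does symmetric int8 scalar quantization with a per-vector scale: one byte per dimension instead of four for float32, at the cost of bounded rounding error. The function names and sample vector are illustrative; the engine's actual quantization scheme is not specified here.

```python
def quantize_int8(vec: list[float]) -> tuple[list[int], float]:
    """Scalar-quantize a float vector to int8 values plus a per-vector scale."""
    scale = max(abs(x) for x in vec) / 127 or 1.0  # avoid div-by-zero for all-zero vectors
    return [round(x / scale) for x in vec], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover an approximation of the original vector."""
    return [x * scale for x in q]

vec = [0.12, -0.53, 0.08, 0.91]
q, scale = quantize_int8(vec)
approx = dequantize(q, scale)
# rounding error per component is at most scale / 2
```

Storing `q` as int8 cuts per-vector memory roughly 4x versus float32, which is what makes 100K+ chunk indexes practical in RAM.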



The Efficient Indexing Engine powers context retrieval for several projects currently in development. The GitHub repository will be published soon.

SentinelOps - SRE CLI with Agentic Workflow and eBPF

A CLI-based AI SRE tool that combines eBPF telemetry with agent workflows to monitor, diagnose, and remediate issues in Kubernetes and cloud-native systems.

Project Category

AI Engineering

Tech Stacks:

PyTorch

Kubernetes

FastAPI

TypeScript

© Tahnik Ahmed | 2026