Since my school years, I’ve been passionate about building technology that people truly enjoy using. With over 20 years of experience in IT, I’ve grown from hands-on engineering into technical leadership, guiding teams and projects at the intersection of cloud infrastructure, DevSecOps, and AI systems.

In recent years, my focus has shifted toward Generative AI, LLM deployment, and AI infrastructure engineering. I help organizations design, implement, and operate private, secure, and scalable AI platforms—bridging the gap between cutting-edge AI research and reliable, production-grade systems.

My expertise spans the full lifecycle of delivering AI-driven solutions:

  • AI/LLM Engineering: Private deployments of open-source and proprietary LLMs (LLaMA, Mistral, GPT, etc.), fine-tuning, and Retrieval-Augmented Generation (RAG).
  • AI Infrastructure: GPU-enabled clusters on AWS, Kubernetes-based orchestration, and automation with Terraform, Helm, and ArgoCD.
  • MLOps / AIOps: Integrating model training, deployment, monitoring, and governance into enterprise SDLC pipelines.
  • Security & Compliance: Building trustworthy AI systems with robust access controls, secrets management, and compliance validation.
  • Scalability & Reliability: Architecting self-healing, high-availability AI services for enterprise environments.
  • Leadership: Guiding cross-functional teams, mentoring engineers, and aligning AI strategy with business goals.

Technically, I combine a deep background in cloud platforms (AWS, OpenStack), Kubernetes, Terraform, and automation with hands-on experience in AI/ML frameworks, Python-based pipelines, and distributed systems.

My mission is to make AI practical, secure, and impactful, whether that means deploying a multimodal LLM on edge devices, scaling private GPTs in the cloud, or enabling businesses to adopt AI responsibly and effectively.