2025: Thoughts and Reflections | A Year of MCP, AI, Agents and Introspection
My 2025 began with something I hadn't done in over a decade — I stopped. After being a frontend engineer since 2011 and leading teams since 2020, I took nearly six months off to evaluate what I truly wanted to do next. Content creation had been calling me for a couple of years, and I finally gave myself the space to listen.
During this break, I reconnected with my family, took on some freelance work — primarily React-based projects building AI wrappers (a script writer, a legal AI advisor, among others) — and did some brief consulting engagements. But the real work happening during this time was internal: introspection, strategising for the next chapter of my life!
The AI Transformation: From Prompts to Architecture
The second half of 2025 changed everything about how I work with AI.
I started extensively researching MCP (Model Context Protocol), Claude Code, Claude Skills and how I could use my paid Claude subscription to actually vibe-code — not just generate snippets, but architect solutions. While giving LLMs more flexibility, I realised something fundamental: I needed to provide the right amount of contextual information for them to be genuinely useful.
OpenAI's GPT-5 felt underwhelming, to say the least. I was much happier when Anthropic released Claude Sonnet 4 around May 2025, which started yielding noticeably better results. That's when I decided to upgrade to Claude's paid Pro plan.
The big shifts in models over just the last six months completely changed the way I use AI.
I went from being a prompt programmer — someone who uses prompts to structure content and code — to someone who strategises, gives layered context to LLMs, and architects the direction of prompts. The better context I gave, the better the output became. It sounds obvious when you say it out loud, but experiencing it over months of iteration was a different thing altogether. My frustrations with AI hallucinations eased considerably.
In the last six months of 2025 alone, I changed from being a prompt engineer to a context-aware engineer to what I'd call an agentic engineer: three distinct phases between May and December 2025, against a very different starting point:
Before 2025, my workflow involved a separate window for my code editor, another window for Claude, and a lot of copy-pasting code between the two. That was it. That was the entire "AI-assisted development" experience.
May to August 2025 was when I started using Claude Projects and contextual prompting. Instead of repeating myself in every prompt, I could give Claude persistent context about a project. I also started my first experiments with MCP.
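For the curious, those first MCP experiments were less exotic than the term "agentic" suggests: mostly wiring local servers into Claude Desktop's configuration file. Here's a minimal sketch using the reference filesystem server; the project path is illustrative, not my actual setup (JSON doesn't allow comments, so take the whole block as a hedged example):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/freelance/legal-ai-advisor"
      ]
    }
  }
}
```

With something like this in place, Claude can read and reason about the files in that directory directly — exactly the kind of friction-removal the next phase was all about.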
September to December 2025 is when things got properly interesting. Claude Integrations completely changed how I interacted with Claude and GitHub. I no longer needed to copy-paste code from my editor to the Claude website. I started giving context directly through the user interface — adding repositories, files, and code right within the conversation. The friction disappeared, and the quality of output went up dramatically.
CodeRabbit and AI-Powered PR Reviews
If you had told me in 2024 that I would use an AI tool for pull request reviews, I would have been highly sceptical. I've always believed code reviews need human judgement — understanding business context, team conventions, the "why" behind decisions.
But since the launch of more powerful models (especially from March 2025 onwards), PR review tools like CodeRabbit have become surprisingly capable. For the first time, I used an AI tool for PR reviews in one of my freelance projects. It wasn't flawless, but it caught important issues time and again and made the review process much simpler. That's worth something.
NotebookLM: Research Without Watching Hours of Video
I had known about NotebookLM since January 2025 but hadn't used it extensively until September 2025, when I started researching for my YouTube channels. I needed to find specific information buried in long YouTube videos, and watching them end to end wasn't practical. NotebookLM was super helpful in summarising long videos and giving me enough context to judge whether a video was worth watching.
Instead of sitting through entire videos, I could ask for specific information within a long recording, and NotebookLM would surface the relevant parts. It became my go-to research tool almost overnight.
But the real breakthrough came when I combined it with Claude.
I started recording my freelance client calls (with their permission, of course), creating contextually accurate transcripts through NotebookLM, and then feeding those transcripts into Claude's Project feature to document plans and create project roadmaps. I was genuinely impressed with how Claude Sonnet 4.5 delivered contextually relevant information through the knowledge base we built from Zoom call transcripts. What used to take hours of manual note-taking and documentation became a streamlined pipeline: Record → Transcribe → Contextualise → Document.
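I did all of this through the Claude UI and Projects rather than code, but the "Contextualise → Document" step could just as easily be scripted. Here's a minimal sketch using Anthropic's official TypeScript SDK; the file name, prompt wording, and model alias are my assumptions, not the exact setup I used:

```typescript
// Sketch only: turn a call transcript into project documentation.
// Assumes ANTHROPIC_API_KEY is set in the environment and that
// "call-transcript.txt" (a hypothetical file) holds the NotebookLM output.
import Anthropic from "@anthropic-ai/sdk";
import { readFileSync } from "node:fs";

const client = new Anthropic();
const transcript = readFileSync("call-transcript.txt", "utf-8");

const message = await client.messages.create({
  model: "claude-sonnet-4-5", // model alias; check Anthropic's docs for current IDs
  max_tokens: 2048,
  system:
    "You document freelance client calls. Produce a project plan with " +
    "decisions made, open questions, and a rough roadmap.",
  messages: [{ role: "user", content: transcript }],
});

// The response is a list of content blocks; print the text ones.
for (const block of message.content) {
  if (block.type === "text") console.log(block.text);
}
```

The point isn't the code itself; it's that once transcripts are just text files, the whole documentation step becomes repeatable instead of manual.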
Where AI Still Falls Short: The Design-Code Gap
Towards the end of the year, I found myself reviewing more code than writing it. I was interpreting, evaluating, and analysing code from first principles rather than blindly agreeing with whatever an LLM had written. While all of my work revolved around React, Next.js and the broader React ecosystem, there was one domain where I wasn't entirely satisfied: the link between design and code.
LLMs understand code. They write excellent code. They review code well — provided you give them context-aware prompts. Hallucinations went down, false assumptions went down, and context windows grew throughout 2025. For documentation, life journalling, co-coding, and architectural decision-making, AI became my go-to tool.
But in terms of design-to-code integration — Figma-to-code workflows, design system generation, SVG animations — I found AI underwhelming. The gap between what a designer envisions and what an LLM can translate into functional, production-ready code is still significant. These are areas where understanding both visual design principles and code logic matters, and current models struggle to bridge that divide convincingly.
This is also where I see a growing demand in the industry. Developers who understand design — who can think visually and translate that thinking into code — are going to be increasingly valuable. Not because AI can't write code (it clearly can), but because the nuance of translating design intent into logic, maintaining visual consistency across systems, and making aesthetic decisions that code alone cannot capture... that's still a deeply human skill. As AI handles more of the routine coding work, the professionals who combine design sensibility with engineering rigour will become the ones organisations fight to hire.
Claude Skills: The Evolution Beyond Projects
When I first bought my Claude account, I created Projects to avoid repeating contextual information in every prompt. It worked well enough — I could set up persistent instructions and reference documents for different work-streams.
Then, in December 2025, everything changed when I created my first Claude Skill.
The difference is subtle but significant. Projects were about giving Claude additional context for a specific set of conversations. Skills are about teaching Claude how to do something — a repeatable set of instructions that retains the "how-to" across any conversation, not just within a single project.
I created a Skill for my YouTube scripts that instructs Claude exactly how I want my video scripts structured — the tone, pacing, sections, B-roll markers, everything. Prior to December 2025, I would have created a Project and updated repeated instructions in the instructions tab. Now, for every workflow where I needed Claude to follow a consistent process, I converted it into a Skill. It's a small shift in how the feature works, but a massive shift in how I think about working with AI: I'm no longer just prompting, I'm programming behaviour.
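To make that concrete, a Skill is essentially a folder containing a SKILL.md file: YAML frontmatter telling Claude when to use it, followed by the instructions themselves. My actual script Skill is longer, but the shape is roughly this — the specific sections and rules below are illustrative, not my real Skill:

```markdown
---
name: youtube-script
description: Write video scripts in my structure and tone. Use whenever I ask for a YouTube script or outline.
---

# YouTube Script Skill

## Structure
1. Cold open: hook in the first 15 seconds, no greetings.
2. Context: why this topic matters, in two or three sentences.
3. Main content: three to five sections, each with a one-line takeaway.
4. Outro: recap plus a single call to action.

## Tone and pacing
- Conversational, first person, short sentences.
- No filler phrases; every section must earn its place.

## B-roll
Mark suggested B-roll inline as [B-ROLL: short description].
```

Because the description tells Claude when to invoke it, the Skill travels with every conversation instead of living inside a single Project.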
As I write this in early 2026, I'm already working on a Skill for Principal Frontend Engineering — essentially encoding how I would answer technical questions and review code, removing my dependence on a writing style I created back in May 2025.
Content Creation: Building the Infrastructure
2025 wasn't just about AI and code. A significant part of my break was spent building a strategy for my YouTube channels.
I purchased the DJI Mic 2 in May 2025 and made an unboxing video for it.
More recently, I picked up the Insta360 Go 3S Ultra — both pieces of equipment chosen specifically for the kind of content I want to create: mobile, travel-friendly, and high quality without bulky setups.
By the end of the year, I had accumulated over 7 hours of toy store and collection footage and more than 3 hours of travel content. All of it sitting on my hard drive, waiting to be edited.
And that's where DaVinci Resolve came in. I spent the latter part of 2025 learning basic video editing through DaVinci Resolve — a professional-grade tool that felt like learning a new programming language. Currently, I am using only the free version to edit the videos. Surprisingly, I have also edited some of my Toy Channel videos using iMovie because of how simple, reliable and easy it is.
The timeline is my new DOM, colour grading is my new CSS, and fusion nodes are basically React component trees (I'm only half joking).
Event: Razorpay FTX
On February 20, 2025, I attended Razorpay FTX (Financial Technology X), an event at the intersection of finance and technology and a platform for discussing how AI and tech are reshaping payments and fintech (not just in India, but globally).
As with any good tech event, the real insights didn't come from the stage alone. Some of the most valuable conversations happened one-on-one with fellow attendees — exchanging perspectives on where fintech is headed and how AI is quietly transforming the payments infrastructure we all rely on.
Plans for 2026: Four Channels, One Creator
After researching extensively for almost six months, I've made a decision: 2026 is the year I become a full-time content creator. Through all that exploring, I realised my core strength was always documentation and teaching. YouTube was the natural next step, not a random pivot.
Knowledge, when shared, becomes wisdom. I've always wanted to share what I've learnt with the world, and now I plan to document everything: as a traveller, as a developer, and as a toy collector, along with the lessons that have made me a more mindful and aware person.
I have plans for four YouTube channels:
@prashantsaniofficial — my main channel, focused on Tech, AI, and Frontend Development. I've always loved creating documentation for teams, and this will be my playground to share the knowledge I've earned over 14+ years. While I briefly posted some videos on this channel before, I'm restarting it with a focus on the latest AI skills and tools — not rehashing things that are already outdated. I also plan to include my learnings from conducting 250+ technical interviews as a panellist.
Toy Channel — focused on sharing POV journeys through toy stores, unboxing experiences, reviews, and collection showcase videos. If you've ever wondered what it's like to hunt for that one specific Lego or Hot Wheels or Pop Mart across multiple stores, this channel will make sense to you.
Tech Gallivanter — my travel channel, documenting gallivanting journeys as a digital nomad. Food, places, cultures, and the occasional "working from a café in a new city" vlogs.
Gyani Sani — a spiritual channel about mindfulness, spirituality, and leading a happy and fulfilling life. This one is personal. I want to inspire the younger version of myself through lessons I've learnt in becoming a more mindful and aware soul.
Four channels. One person. A lot of content to create.
But if 2025 taught me anything, it's that the right tools, the right context, and a willingness to evolve can take you places you didn't think were possible six months ago. I went from copy-pasting code into a chat window to building agentic workflows with MCP servers and custom Skills. If that kind of transformation is possible in how I work with code, imagine what's possible when I apply the same mindset to content.
2026, let's go.
This is part of my annual Year in Review series. You can read previous editions: 2023 · 2020-21 · 2019 · 2018 · 2017