Hello fellow techies! 👋
In this week’s episode of Tech Unfiltered, I spoke with Shivani, a software engineer at Microsoft working in the world of security and cryptography. From battling legacy infrastructure to staying cool during critical production failures, Shivani gave us an inside look at what it means to secure systems at scale.
🔐 What It’s Like Working in Security at Microsoft
Security isn’t just a concern; it’s a mindset. Shivani’s current work focuses on ensuring systems remain secure at scale, even as older infrastructure and modern cryptographic requirements collide.
She explained what it’s like dealing with large-scale systems where small changes can have major ripple effects. One of the biggest lessons?
“It’s like changing the engine of an aeroplane while it’s flying.”
From backwards compatibility headaches to strict upgrade requirements, keeping systems secure without breaking functionality requires deep domain knowledge, long-term planning, and nerves of steel.
⚔️ AI Is Here… But So Are the Risks
While Shivani has long been interested in machine learning and AI, she’s recently taken a more hands-on approach: building projects, sharing her learnings online, and studying how best to use large language models (LLMs) in real-world scenarios.
But she’s quick to point out that not everything is rosy in the AI world:
“LLMs are just generative through and through, so whatever they know, they just repeat. So you have to ground it in reality.”
She cautioned against treating AI as a magic solution—especially in production environments where hallucinations, outdated APIs, and lack of personalisation can introduce serious issues. Security concerns, too, are front and centre:
“So I think with AI and LLMs coming into the space, it's more important to make sure that you are keeping in mind everything about security.”
💡 Lessons from the Field
Here are a few of Shivani’s key takeaways:
Don’t trust logs blindly – Secrets, tokens, and personal data should never end up in your logs. Seems basic, but it happens more than you’d think (see the first sketch after this list).
Security must be layered – Even if an attacker gains access to your system, your defences should limit the blast radius.
You can't fake maturity in legacy migrations – Updating cryptographic practices in legacy systems takes thoughtful planning, feature flags, and often, “two-way door” changes that allow rollbacks without disruption (see the second sketch after this list).
AI productivity tools need context – LLMs can boost productivity, but only if your environment (and your prompts) provide clear, complete input.
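To make the logging point concrete, here’s a minimal sketch in Python of one common approach: a logging filter that scrubs token-like values before they reach your log sinks. The patterns, logger name, and example message are illustrative assumptions on my part, not details from the episode.

```python
import logging
import re

# Hypothetical patterns; match these to your own secret and token formats.
SECRET_PATTERNS = [
    re.compile(r"(?i)(authorization:\s*Bearer\s+)\S+"),
    re.compile(r"(?i)(api[_-]?key\s*[=:]\s*)\S+"),
    re.compile(r"(?i)(password\s*[=:]\s*)\S+"),
]

class RedactSecretsFilter(logging.Filter):
    """Scrub token-like values from log messages before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub(r"\1[REDACTED]", message)
        # Replace the record's message with the redacted version.
        record.msg = message
        record.args = None
        return True  # keep the record, just with secrets scrubbed

logger = logging.getLogger("payments")  # hypothetical logger name
handler = logging.StreamHandler()
handler.addFilter(RedactSecretsFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("calling API with api_key=sk_live_abc123")  # logs: api_key=[REDACTED]
```

Redaction at the logging layer is a safety net, not a replacement for keeping secrets out of log statements in the first place.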
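And for the “two-way door” idea, here’s a small, hypothetical Python sketch of a flag-gated signing migration (the flag name, key handling, and the SHA-1 to SHA-256 example are my assumptions, not Shivani’s specifics): sign with the new algorithm only when the flag is on, and verify against both during the migration window, so flipping the flag back is a clean rollback.

```python
import hmac
import os

# Illustrative flag; in practice this would come from a feature-flag service.
USE_SHA256_SIGNATURES = os.environ.get("USE_SHA256_SIGNATURES", "false").lower() == "true"

KEY = b"example-key"  # in practice, fetched from a secrets manager

def sign(payload: bytes) -> str:
    """Sign with the new algorithm only when the flag is on.

    This is the two-way door: flipping the flag back restores the old
    behaviour without a deploy.
    """
    digest = "sha256" if USE_SHA256_SIGNATURES else "sha1"
    return hmac.new(KEY, payload, digest).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Accept signatures from both algorithms during the migration window,
    so callers that upgraded at different times keep working."""
    for digest in ("sha256", "sha1"):
        expected = hmac.new(KEY, payload, digest).hexdigest()
        if hmac.compare_digest(expected, signature):
            return True
    return False
```

Once everything signs with the new algorithm, the old branch (and eventually the flag itself) can be removed, which is what closes the door for good.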
🌱 Career & Content Creation
Shivani’s experience spans Adobe, Amazon, and now Microsoft. Security has taught her how deeply code is tied to the operating systems it runs on, and why debugging cryptographic issues often feels like solving puzzles with invisible pieces.
Beyond engineering, Shivani’s been building a community around applied AI and LLMs: sharing hands-on content, exploring production-ready patterns, and helping others understand complex systems. She’s currently creating learning resources and aims to launch a newsletter focused on real-world AI integration.
“Security is one area which is not going anywhere. If anything, it would be more important as we go forward.”
Whether it’s LLMs, cryptography, or system design at scale, Shivani is proof that combining curiosity with rigour can open doors in any domain.
🎧 Listen Now
This episode is for you if:
You’re working with legacy infrastructure
You want to build more secure systems
You’re exploring how AI fits into your dev workflow
You like learning from engineers who’ve solved high-stakes problems under pressure
You want to understand how other engineers have progressed in big companies
What’s the most overlooked security practice in your team? I’d love to hear your thoughts in the comments section.
Until next time,
Jade Wilson
Host, Tech Unfiltered