<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Bogdan Bujdea]]></title><description><![CDATA[Tech Lead with 10+ years in .NET, former Microsoft MVP. I train developers to use AI coding tools effectively, with production-ready results.]]></description><link>https://bogdanbujdea.dev</link><generator>RSS for Node</generator><lastBuildDate>Fri, 17 Apr 2026 02:16:19 GMT</lastBuildDate><atom:link href="https://bogdanbujdea.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[10 Rules for Writing Production-Ready Code with AI]]></title><description><![CDATA[Welcome to the second edition of The Copilot’s Log.
If you’re a developer experimenting with AI tools like Copilot, Cursor, or Claude, you’ve probably seen both ends of the spectrum: blazing-fast progress… and mysteriously broken code.
That's because...]]></description><link>https://bogdanbujdea.dev/10-rules-for-writing-production-ready-code-with-ai</link><guid isPermaLink="true">https://bogdanbujdea.dev/10-rules-for-writing-production-ready-code-with-ai</guid><category><![CDATA[vibe coding]]></category><category><![CDATA[AI-Assisted coding]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Wed, 06 Aug 2025 21:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756147868897/5616222f-f9c6-4c90-8480-23d086ae2e3e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the second edition of <a target="_blank" href="https://www.linkedin.com/newsletters/7355571647858229250/?displayConfirmation=true"><em>The Copilot’s Log</em></a>.</p>
<p>If you’re a developer experimenting with AI tools like Copilot, Cursor, or Claude, you’ve probably seen both ends of the spectrum: blazing-fast progress… and mysteriously broken code.</p>
<p>That's because AI can be an incredible force multiplier, but only if you use it the right way.</p>
<p>In this edition, I’m sharing 10 habits and principles I’ve picked up from building real products with AI tools. These aren’t abstract theories or copy-paste workflows, they’re lessons learned the hard way, after shipping features, breaking things, and figuring out what actually works.</p>
<p>Let’s dive in.</p>
<h3 id="heading-1-know-how-llms-think-because-they-dont"><strong>1. Know How LLMs Think (Because They Don't)</strong></h3>
<p>LLMs (Large Language Models) like GPT or Claude generate code by predicting the next most likely word or token—just like a hyper-advanced autocomplete trained on billions of code and text examples.</p>
<p>Imagine you type: “The quick brown fox jumps over...” The model fills in: “the lazy dog.” It’s not thinking, it’s guessing what’s most likely to come next based on patterns in its training data.</p>
<p>When you ask for code, it’s the same. The more details you give (context, requirements, style), the more likely you’ll get something useful and accurate. If you’re vague, you’ll get a generic or even wrong answer—because the model is just picking from what it’s seen before, not truly understanding your intent.</p>
<p>But the amount of context you can provide is limited by the “context window” of the model (how much text it can process at once). If you try to include your entire codebase, it will start forgetting details from earlier in the conversation.</p>
<p>The one thing you should understand here is that prompting is an important skill—you shouldn’t expect an LLM to provide good answers to vague requirements. It’s not magic; your results depend on how you ask.</p>
<p><strong>More details on how LLMs really work and how to master context will be in the next edition.</strong></p>
<hr />
<h3 id="heading-2-zero-trust-coding-always-review-the-ais-work"><strong>2. Zero Trust Coding: Always Review the AI’s Work</strong></h3>
<p>As I said earlier, the better the prompt, the more accurate the results. But how can you be sure your prompt was good enough to produce the best result? Worse, what if there is no correct answer at all, yet the LLM confidently provides an incorrect one anyway?</p>
<p>No matter how good the suggestion looks, always review every line, especially for anything beyond throwaway code.</p>
<p><strong>Why?</strong> As I shared last time: I once asked Copilot to remove a C# entity from a microservice. It “helped” by deleting a database table I didn’t mean to touch. We lost QA data, and over 50 people were blocked for hours. The AI did what I said (and more), because I didn’t review the changes closely enough.</p>
<p><strong>Practical Advice:</strong></p>
<ul>
<li><p>Treat every AI commit like a PR from an overeager junior dev.</p>
</li>
<li><p>Use git diff to inspect <em>all</em> changes, especially deletions and multi-file edits.</p>
</li>
</ul>
<hr />
<h3 id="heading-3-iterate-in-small-steps"><strong>3. Iterate in Small Steps</strong></h3>
<p>Don't try to generate an entire feature from one prompt. Break your work into multiple steps: focus on a small piece at a time, make sure it works, and <strong>always commit or stage</strong> before moving on. This way, you always have a clean, working state to return to. Plus, if you break something, it’s easy to pinpoint what changed or roll back to your last safe spot.</p>
<p><strong>Example:</strong> Let's say you use this prompt: <strong><em>“Refactor the OrderService class and fix performance issues”</em></strong>. You might end up spending hours talking with AI and getting nowhere, kind of like this guy:</p>
<p><img src="https://media.licdn.com/dms/image/v2/D4D12AQFMIrVq3kUYiA/article-inline_image-shrink_400_744/B4DZiFzAkUH8AY-/0/1754591403351?e=1761782400&amp;v=beta&amp;t=lN5zNuGcRwWVimyUcIyIeXKMVXvR7wiiT94L6D93LQY" alt /></p>
<p>Instead, you should do it like this:</p>
<p><strong>Step 1:</strong> Stage all your changes with git (<strong><em>git add .</em></strong>)</p>
<p><strong>Step 2:</strong> Use a focused, targeted prompt for just one small change. For example: <strong><em>"The OrderService class has many functions that use the same code for authentication. Move that code into a helper function and call it to prevent duplication.”</em></strong></p>
<p><strong>Step 3:</strong> Review the code. At this stage you have three options:</p>
<ul>
<li><p><strong>Code is flawless:</strong> continue to step 4</p>
</li>
<li><p><strong>Code needs small changes on your part:</strong> make the changes and continue</p>
</li>
<li><p><strong>The code has too many issues</strong>: just reset the changes and try again with a different prompt, this time making the necessary adjustments so that it doesn't end up in the same state. For example, if the duplicated code is removed but the helper function does something completely unrelated to what it was doing before, then say: <strong><em>"The OrderService class has many functions that use the same code for authentication. Move that code into a helper function and call it to prevent duplication. The helper function should keep the same logic as the duplicated code that we remove."</em></strong> Send the prompt and go back to Step 3. If you do this 2-3 times and it doesn't work, just do the refactoring yourself! Worst case: you’ve lost 30 minutes experimenting with AI instead of spending a day doing the refactoring manually, so it’s not a waste of time in my opinion.</p>
</li>
</ul>
<p><strong>Step 4:</strong> Once you have code that you want to keep, stage your changes. Sometimes the code might be 99% perfect, and you just want a small tweak (like updating the text of a button), but the AI updates every button in the app instead. If this happens, reverting manually could take ages, but with git you can instantly roll back and get back to your 99% working state. Staging your changes early and often is the safety net that saves you from these moments.</p>
<p><strong>Step 5:</strong> We'll now use this prompt: <strong>“The ReadOrders function takes too long and I know the query for retrieving the orders is the main issue, give me at least two ways it can be improved.”</strong></p>
<p><strong>Step 6:</strong> You should now have at least two options for improving the performance, choose one or continue the conversation until you find an acceptable solution. If the AI is unable to provide a solution, then maybe you need to give it more details. For example, the LINQ query might be perfect so the AI can't give any more solutions, but if you provide the database structure in the context it can suggest creating an index. That's why it's important to give as many details as possible.</p>
<p><strong>Step 7:</strong> Continue this cycle for each sub-task. With this approach, something that used to take a day or two might be done in 30 minutes (on a good day!).</p>
<p>The key is to work incrementally, stage your changes often, and never be afraid to reset and try again. This habit saves you hours of debugging and gives you confidence in every step.</p>
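<p>The stage, review, reset loop above can be demoed end to end in a throwaway repository. This is only a sketch: the file name and contents are illustrative, and in real work the "AI edit" step is your coding assistant changing files rather than an echo command.</p>
<pre><code class="lang-shell"># Create a disposable repo with a known-good commit
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo "original line" | tee OrderService.cs
git add -A
git commit -qm "known-good state"

# Simulate an AI edit you are not yet sure about
echo "ai-generated line" | tee -a OrderService.cs

# Review exactly what changed before accepting it
git diff --stat
git diff

# Option A: keep it as the new restore point (run: git add -A)
# Option B: reject it and roll back to the known-good state
git checkout -- OrderService.cs
cat OrderService.cs
</code></pre>
<p>If you accept the change instead, staging it immediately makes that state the new safety net for the next prompt.</p>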
<hr />
<h3 id="heading-4-play-to-ais-strengths"><strong>4. Play to AI’s Strengths</strong></h3>
<p>LLMs truly shine when you use them for what they do best: generating and summarizing text. Their sweet spot includes:</p>
<ul>
<li><p><strong>Writing Documentation:</strong> Give the AI a few bullet points or a rough outline and it can produce a professional, typo-free README or even detailed tickets for Jira. You’ll be surprised how much time you save and how much clearer your docs become.</p>
</li>
<li><p><strong>Generating Scripts:</strong> Whether you need a bash script, a one-off migration, or a quick automation, AI is fast, reliable, and generally accurate for these bite-sized, isolated tasks. Scripts are usually just a single file, without hundreds of dependencies spread across a codebase, making LLMs ideal for generating/changing scripts quickly and safely.</p>
</li>
<li><p><strong>Brainstorming:</strong> Stuck on naming, architectural choices, or edge cases? Use the LLM to quickly list pros/cons, generate alternatives, or unblock your thinking, then refine the results with your own expertise.</p>
</li>
</ul>
<p>LLMs were designed to generate natural language, so let them handle the boilerplate and wordsmithing while you focus on building and reviewing.</p>
<hr />
<h3 id="heading-5-know-your-tool-inside-out"><strong>5. Know Your Tool Inside Out</strong></h3>
<p>Each AI coding tool has its own advanced features. It’s like moving from Notepad++ to Visual Studio but only ever using the Visual Studio text editor: you're missing out on 95% of its capabilities!</p>
<p>Most devs only scratch the surface of what these AI tools can do, treating them like simple "chat with AI" tools, and end up missing out on real productivity gains. For example:</p>
<ul>
<li><p><strong>Cursor:</strong> Earlier I mentioned how I use Git to stage changes between prompts, but did you know that Cursor <a target="_blank" href="https://docs.cursor.com/en/agent/chat/checkpoints"><strong>lets you instantly undo your last set of changes done by AI</strong></a>?</p>
</li>
<li><p><strong>Copilot:</strong> You can add a <em>copilot-instructions.md</em> file to your repo to give additional context for the work it does in that repository. Say your team uses XUnit and NSubstitute for tests, but each time you ask it to write a new test class it uses MSTest and Moq. Instead of mentioning these libraries in each prompt, just update your <strong>copilot-instructions.md</strong> file with this: <em>“My unit tests are written with XUnit and I use NSubstitute for mocking.”</em> Copilot will then use your stack by default, letting you focus prompts on <em>what</em> to test.</p>
</li>
<li><p>PS: Cursor has "rules" that work the same way as copilot-instructions, and are a bit more advanced, but I'll talk about this in a later edition.</p>
</li>
</ul>
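<p>As an illustration, a minimal <em>copilot-instructions.md</em> (Copilot picks it up from the <em>.github</em> folder of your repo) could look like the sketch below. The conventions listed are hypothetical; the file should describe <em>your</em> stack and rules:</p>
<pre><code class="lang-markdown"># Project conventions for Copilot

- My unit tests are written with XUnit and I use NSubstitute for mocking.
- Code is organized by feature, not by layer.
- Treat compiler warnings as errors; prefer async/await over blocking calls.
</code></pre>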
<p>It’s worth taking a few minutes to learn these features. You’ll save yourself hours (and frustration) in the long run, and your AI-generated code will be much more accurate.</p>
<hr />
<h3 id="heading-6-why-mainstream-stacks-work-best-with-ai"><strong>6. Why Mainstream Stacks Work Best with AI</strong></h3>
<p>AI models are only as good as their training data. This doesn't mean you should switch to React just because it's more popular than Blazor. On the contrary, you should use the technology where you're most experienced, so you can catch issues faster and review the AI’s output with confidence.</p>
<p>However, if you sometimes struggle to generate good code for a less popular technology, now you know why: there just isn’t as much high-quality example code for the model to draw from. You can still get good results, but you might need to invest more time in better prompts and context.</p>
<p>On the other hand, if you’re a CTO or tech lead deciding on the stack for a new project, consider that your team will generally get better AI-assisted results with more popular technologies. If fast onboarding and high-quality AI suggestions are a priority, investing in a mainstream stack pays off, not just for you, but for anyone using these tools on your codebase.</p>
<hr />
<h3 id="heading-7-enforce-restrictions-early"><strong>7. Enforce Restrictions Early</strong></h3>
<p>Turn on strict modes in your language. For example:</p>
<ul>
<li><p><strong>C#</strong>: In your .csproj, set TreatWarningsAsErrors to true.</p>
</li>
<li><p><strong>TypeScript</strong>: Enable strict mode.</p>
</li>
<li><p><strong>And so on...</strong></p>
</li>
</ul>
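<p>Both switches are one-line config changes. As a sketch for C# (the property group in your <em>.csproj</em> may look different; <em>Nullable</em> is optional but pairs well with strict warnings):</p>
<pre><code class="lang-xml">&lt;PropertyGroup&gt;
  &lt;TreatWarningsAsErrors&gt;true&lt;/TreatWarningsAsErrors&gt;
  &lt;Nullable&gt;enable&lt;/Nullable&gt;
&lt;/PropertyGroup&gt;
</code></pre>
<p>And for TypeScript, in <em>tsconfig.json</em>:</p>
<pre><code class="lang-json">{
  "compilerOptions": {
    "strict": true
  }
}
</code></pre>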
<p><strong>Why?</strong> These guardrails force both you and the AI to write safer, more robust code, and make it much easier to spot potential issues. For example, in C# I often see warnings about a possible NullReferenceException, and they usually turn out to be real. Some devs ignore these, but I always double-check: is this warning legitimate? By doing this, I’ve almost eliminated these errors in my projects, when before they were my most common runtime bug.</p>
<p>Another “restriction” is writing unit tests. If you ask the AI to modify code that’s covered by tests, you can immediately run them to see if it works—no guessing, just feedback.</p>
<hr />
<h3 id="heading-8-use-mcp-servers"><strong>8. Use MCP servers</strong></h3>
<p>I love using MCP servers, they’ve become essential in my workflow. To give you one example, in my current project I use Azure Boards, and the MCP server made onboarding ridiculously easy. For example, if I’m assigned a ticket with ID #1234, I just use a prompt like:</p>
<blockquote>
<p>“Read ticket #1234, analyze the codebase based on the description, and determine what changes need to be made. Then provide a summary.”</p>
</blockquote>
<p>Remember how long it used to take to do even the simplest task on a new project? You’d have to hunt for the right files, piece together context, and hope you didn’t miss anything. Now, AI can instantly show you where to start and what to look for. You still need to review and understand the code yourself, but with AI guiding you, you get a huge head start.</p>
<p>Although it's a new concept, MCP servers are very popular, and it's now very easy to find (or create) one for basically anything: Azure, GitHub, Todoist, etc.</p>
<p>Here are some places where you can look for MCP servers:</p>
<p><a target="_blank" href="https://docs.cursor.com/en/tools/mcp"><strong>https://docs.cursor.com/en/tools/mcp</strong></a></p>
<p><a target="_blank" href="https://mcpservers.org/"><strong>https://mcpservers.org/</strong></a></p>
<p><a target="_blank" href="https://mcp.so/"><strong>https://mcp.so/</strong></a></p>
<hr />
<h3 id="heading-9-structure-your-codebase-for-ai-agents"><strong>9. Structure Your Codebase for AI Agents</strong></h3>
<p>If you want the best suggestions from AI tools, make it easy for them (and your teammates) to find the right files and understand your project structure. The more predictable and well-organized your codebase, the better AI agents can navigate and make smart recommendations.</p>
<p><strong>Best practices include:</strong></p>
<ul>
<li><p>Use clear, consistent naming conventions for files, classes, and functions. (e.g., OrderService.cs vs. svc1.cs)</p>
</li>
<li><p>Organize code by feature or domain, not just by layer or type.</p>
</li>
<li><p>Keep related code together, and avoid giant “miscellaneous” folders.</p>
</li>
<li><p>Maintain up-to-date README files and project documentation at the root.</p>
</li>
<li><p>Use standard folder names (src, tests, docs, etc.), so AI agents instantly know where to look.</p>
</li>
</ul>
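<p>For example, a feature-organized layout following these practices might look like this (the names are purely illustrative):</p>
<pre><code class="lang-plaintext">src/
  Orders/
    OrderService.cs
    OrderRepository.cs
  Billing/
    InvoiceService.cs
tests/
  Orders.Tests/
  Billing.Tests/
docs/
README.md
</code></pre>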
<p>This isn’t just for the AI: future you (and your team) will thank you, too. With an organized repo, AI tools can connect the dots more easily, suggest relevant changes, and avoid confusion.</p>
<hr />
<h3 id="heading-10-stay-in-the-loop"><strong>10. Stay in the Loop</strong></h3>
<p>AI tools evolve fast... sometimes too fast.</p>
<p>I've bookmarked new tools or articles only to find them outdated a month later. What used to take years to change now happens in weeks. Keeping up isn’t optional if you want to use these tools effectively... but let’s be honest, it’s also exhausting.</p>
<p>You already have a full-time job. Staying current often means using your own time, and most of what’s out there is either noise or hype. A lot of popular content leans hard into vibe-coding and “AI will replace devs” takes. Not because it’s helpful, but because it gets clicks.</p>
<p>That’s why I write this newsletter: to offer a more grounded, practical perspective. No hype. No fear. Just real workflows that work in production.</p>
<p><strong>How to stay up to date (without burning out):</strong></p>
<ul>
<li><p>Follow tool changelogs and release notes</p>
</li>
<li><p>Subscribe to developer-first newsletters (like this one)</p>
</li>
<li><p>Join communities around the tools you actually use</p>
</li>
</ul>
<p>Stay curious, but filter hard!</p>
<hr />
<p>That’s a wrap for the second edition of <em>The Copilot’s Log</em>. I aimed to keep it concise while giving you enough to build a solid foundation. In upcoming issues, I’ll dive deeper into each of these practices, with real examples and workflows you can try.</p>
<p>If this was helpful, there’s more where it came from—subscribe to <em>The Copilot’s Log</em> and follow me on LinkedIn for weekly tips, walkthroughs, and lessons from the field.</p>
<hr />
<p><strong>PS.</strong> Beyond writing this newsletter and my day-to-day job, I help dev teams level up their AI workflows through hands-on training. No buzzwords, no slides, just practical sessions focused on using tools like Copilot, Cursor, Claude Code or Windsurf effectively <strong>in production environments.</strong></p>
<p>If your team is adopting AI coding tools and wants to get it right from the start, reach out. I’d be happy to help.</p>
]]></content:encoded></item><item><title><![CDATA[The Copilot's Log #1: Vibe-Coding vs AI-Assisted Coding]]></title><description><![CDATA[Welcome to the first edition of The Copilot's Log, a weekly newsletter about AI-assisted software development, from a developer who's using these tools in the real world.
No hype. No fear. Just honest stories and practical takeaways.
To kick things o...]]></description><link>https://bogdanbujdea.dev/the-copilots-log-1-vibe-coding-vs-ai-assisted-coding</link><guid isPermaLink="true">https://bogdanbujdea.dev/the-copilots-log-1-vibe-coding-vs-ai-assisted-coding</guid><category><![CDATA[vibe coding]]></category><category><![CDATA[AI-Assisted coding]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Sun, 03 Aug 2025 15:09:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756150067349/c4d75b7f-e8df-49dc-82d5-feb507cf1a90.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the first edition of <a target="_blank" href="https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7355571647858229250"><em>The Copilot's Log</em></a>, a weekly newsletter about AI-assisted software development, from a developer who's using these tools in the real world.</p>
<p>No hype. No fear. Just honest stories and practical takeaways.</p>
<p>To kick things off, I want to share two short stories from the past 9 months. These moments will give you a feel for what this newsletter is all about.</p>
<h2 id="heading-story-1"><strong>Story #1</strong></h2>
<p><a target="_blank" href="https://todoist.com/"><strong>Todoist</strong></a>'s CEO just teased a new feature on his Twitter page. My app, <a target="_blank" href="https://www.linkedin.com/company/task-analytics-for-todoist/"><strong>Task Analytics for Todoist</strong></a>, is a power-up that adds features their platform doesn't offer. So I thought it would be cool to try to build the same feature before they launched it. I fired up Cursor, crafted the initial prompt, and in less than two minutes I had a basic version of the calendar that was 100% functional from the start, without requiring any changes on my part. I pushed it to prod, recorded a video about it, and replied with that video in the same thread.</p>
<p>The result? <strong>21 new users and over $300 for 5 minutes of vibe-coding.</strong></p>
<p>Part of me was thrilled. Another part thought: if AI can do this so easily… what happens to my career as a developer?</p>
<p><img src="https://media.licdn.com/dms/image/v2/D4D12AQGx3L5s2NZycQ/article-inline_image-shrink_1500_2232/B4DZhhqs2RHwAU-/0/1753985242018?e=1759968000&amp;v=beta&amp;t=Q-2p05XbJmjw8Eb3qIvfYPgLSpuqCQMukFsyanQYo-A" alt="Article content" /></p>
<h3 id="heading-story-2"><strong>Story #2</strong></h3>
<p>November 2024, I was tasked with removing endpoints related to an entity from a microservice. The job seemed simple enough for AI, so I opened VS Code and asked Copilot to remove all the code that used that C# entity (services, controllers, repositories, etc.)</p>
<p>Less than a minute later, over 50 files had been changed. I scanned the diffs in GitKraken, but not too carefully. Everything looked clean. The service still ran, smoke tests passed, and I pushed the code. A colleague gave it a quick review, and the PR was merged.</p>
<p>Two days later, a table got deleted from our QA database. I thought it might be related to my changes, so I started going through the latest pull requests. Indeed, someone had created a PR with a migration that contained a "DROP TABLE" command, but they insisted they hadn't removed that entity. I then looked at my PR and found that I was the one who removed it. It turns out Copilot decided to delete a little more than I requested, and I missed it because I hadn't reviewed the changes carefully.</p>
<p>That’s how I learned a hard lesson: AI can make decisions you didn’t ask for, in places you least expect, and if you’re not watching, those decisions can have severe consequences.</p>
<p>Luckily for me, this didn’t reach production. But it blocked more than 50 people from working on that environment for over two hours while the issue was found and the data was restored.</p>
<h3 id="heading-why-im-writing-this-newsletter"><strong>Why I'm Writing This Newsletter</strong></h3>
<p>Over the past few years, I’ve used AI tools extensively to boost my productivity, and it’s made a real difference in my career. AI has helped me deliver more in less time, stay steady in a volatile job market, and even launch my own app, where it filled in as a kind of co-founder... handling front-end work, marketing copy, and more.</p>
<p>But for every success, I’ve had failures too. And I want to share both. Openly.</p>
<p>I’m pragmatic about AI. I don’t believe it’s a silver bullet, and I don’t think we should fear it either. That probably means I won’t attract as many readers as those hyping vibe coding or those warning that AI is killing our craft. But I’m okay with that.</p>
<p>This newsletter is for devs who want to get better (not just faster) with AI. So, in this first edition, I want to talk about exactly that: <strong>Vibe coding vs AI-assisted development.</strong></p>
<h2 id="heading-vibe-coding"><strong>Vibe-coding</strong></h2>
<blockquote>
<p>Vibe coding is <strong>a software development approach where developers heavily rely on large language models (LLMs) to generate code from natural language prompts</strong>. It involves giving general, high-level instructions to the LLM, which then produces the working code. This method aims to accelerate development and make app building more accessible, especially for those with limited programming experience.</p>
</blockquote>
<p>Does this definition sound too good to be true? That's because it really is too good to be true, at least at the moment I'm writing this newsletter.</p>
<p>This approach has its pros and cons, and that’s why I wanted to share both success and failure stories. Relying on AI to write code without truly understanding or reviewing it can lead to serious consequences. I’m not gatekeeping here—vibe coding can be a great way to dip your toes into programming. But using it as your only tool to build and ship products? That’s where things get risky.</p>
<h3 id="heading-why-vibe-coding-can-be-risky"><strong>Why Vibe Coding Can Be Risky</strong></h3>
<p>Vibe coding is becoming popular among folks with little to no dev background. While I admire the entrepreneurial mindset, this can be dangerous when taken too far.</p>
<p>When I used AI to implement that feature from story #1, <strong>that was vibe-coding</strong>. The term didn’t exist back then, we just called it "AI-generated code." But let’s be honest, "vibe coding" sounds better.</p>
<p>Think of it like this: being a vibe-coder without foundational knowledge is like trying to be an electrician without proper training. At some point, you’ll get shocked... or start a fire.</p>
<p>That doesn’t mean you need a license or formal degree to be a developer. You can absolutely start with vibe coding. But to build stable, safe, and scalable software, you need more than prompts. You need to learn the basics: clean code, version control, security, system design. That’s what turns quick wins into long-term success.</p>
<p>If you don't have basic programming skills, you risk introducing issues like:</p>
<ul>
<li><p><strong>Security vulnerabilities</strong> that could expose user data and lead to legal trouble</p>
</li>
<li><p><strong>Bugs</strong> that frustrate users and drive customers away</p>
</li>
<li><p><strong>Performance issues</strong> that inflate cloud costs or degrade the user experience</p>
</li>
</ul>
<h2 id="heading-when-vibe-coding-does-work"><strong>When Vibe Coding Does Work</strong></h2>
<h3 id="heading-proof-of-concepts"><strong>Proof of Concepts</strong></h3>
<p>Vibe coding is great for prototyping. Need to integrate a new library to see if it’s worth paying for? AI can spin up a working demo in minutes. Just point it at the docs and let it work while you drink coffee. For internal demos, it doesn’t need to be secure or pretty—just functional.</p>
<h3 id="heading-when-failure-is-an-option"><strong>When Failure Is an Option</strong></h3>
<p>I’m not a frontend dev. That used to stop me from launching products. Now, with AI, I’ve built and sold a SaaS with frontend code that works but would make a real frontend dev cry. And that’s fine. For a small project, where UI bugs aren’t dealbreakers, vibe coding works. But this would never fly in a mature product with real users and expectations.</p>
<p>Vibe coding is fine when you <em>can afford to fail</em>. But you'll never see NASA vibe-coding the navigation system for a spacecraft that puts astronauts on the ISS.</p>
<h2 id="heading-what-is-ai-assisted-coding"><strong>What Is AI-Assisted Coding?</strong></h2>
<p>AI-assisted coding is different. You don’t trust the AI blindly. You use it like a junior dev or a very fast pair programmer: you guide it, verify its output, and think critically at every step. Sure, it might take longer than vibe coding, but the goal is to produce results that match or even exceed your own expertise.</p>
<p>And it’s not just about writing code. AI can assist with all the unglamorous but essential parts of the job:</p>
<ul>
<li><p>Writing and updating documentation</p>
</li>
<li><p>Generating tests</p>
</li>
<li><p>Reviewing pull requests</p>
</li>
<li><p>Drafting technical emails and status updates</p>
</li>
<li><p>Exploring unfamiliar APIs or libraries</p>
</li>
</ul>
<p>You can also use it to simulate pair programming: bouncing ideas off the model, thinking through architectural decisions, and even catching edge cases during reviews.</p>
<p>Used thoughtfully, AI becomes a productivity multiplier, not just a code generator.</p>
<p>That’s what this newsletter is all about: giving you real-world ways to use AI that make you a better, faster, and more reliable developer, <strong><em>without putting your job or product at risk.</em></strong></p>
<p>This first edition was about setting the stage and explaining my take on vibe coding vs. AI-assisted development. In the next issue, <strong>I’ll share</strong> <strong>actionable tips</strong> <strong>I’ve tested myself</strong>, stuff you can use right away to boost your workflow.</p>
<p>But what about you? What’s been your biggest <em>aha</em> moment with AI? Or your most frustrating failure? I’d love to hear your stories!</p>
<p>Thanks for reading and stay tuned!</p>
]]></content:encoded></item><item><title><![CDATA[From CLU to Semantic Kernel: Building a Secure, Intelligent Teams Bot]]></title><description><![CDATA[In the AI and natural language processing world, building a solid chat interface can be tricky. When we started with Azure's Conversational Language Understanding (CLU), we wanted to make a system that could really get what users were saying and resp...]]></description><link>https://bogdanbujdea.dev/from-clu-to-semantic-kernel-building-a-secure-intelligent-teams-bot</link><guid isPermaLink="true">https://bogdanbujdea.dev/from-clu-to-semantic-kernel-building-a-secure-intelligent-teams-bot</guid><category><![CDATA[C#]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[semantic kernel]]></category><category><![CDATA[llm]]></category><category><![CDATA[Azure]]></category><category><![CDATA[azure cognitive services]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Mon, 24 Mar 2025 11:06:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1742814205957/4ab0d630-2eed-4f52-93db-72897109698d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the AI and natural language processing world, building a solid chat interface can be tricky. When we started with Azure's Conversational Language Understanding (CLU), we wanted to make a system that could really get what users were saying and respond well. We did this by linking the predicted "TopIntent" directly to certain handler methods. This way, we made it easier to figure out what users wanted and do the right thing, making the interaction smooth.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ProcessIntentAsync</span>(<span class="hljs-params">ITurnContext turnContext, CancellationToken cancellationToken</span>)</span>
{
    <span class="hljs-keyword">var</span> query = turnContext.Activity.Text; <span class="hljs-comment">// "What's the busiest queue today?"</span>
    <span class="hljs-keyword">var</span> userId = turnContext.Activity.From.Id;
    <span class="hljs-keyword">var</span> tenantId = <span class="hljs-keyword">await</span> _tenantResolver.GetTenantIdForUser(userId);

    <span class="hljs-comment">// Get intent from Azure CLU</span>
    <span class="hljs-keyword">var</span> cluResponse = <span class="hljs-keyword">await</span> _conversationsClient.AnalyzeConversationAsync(query);

    <span class="hljs-comment">// Check if user has permission for this intent</span>
    <span class="hljs-keyword">if</span> (!<span class="hljs-keyword">await</span> _permissionChecker.HasPermissionForIntent(userId, tenantId, cluResponse.TopIntent))
    {
        <span class="hljs-keyword">await</span> turnContext.SendActivityAsync(<span class="hljs-string">$"You don't have permission to access <span class="hljs-subst">{cluResponse.TopIntent}</span>."</span>);
        <span class="hljs-keyword">return</span>;
    }

    <span class="hljs-comment">// Process the intent using a factory pattern</span>
    <span class="hljs-keyword">var</span> intentProcessor = _intentFactory.GetProcessor(cluResponse.TopIntent);
    <span class="hljs-keyword">if</span> (intentProcessor != <span class="hljs-literal">null</span>)
    {
        <span class="hljs-keyword">await</span> intentProcessor.ProcessAsync(turnContext, cluResponse.Entities);
    }
    <span class="hljs-keyword">else</span>
    {
        <span class="hljs-keyword">await</span> turnContext.SendActivityAsync(<span class="hljs-string">"I'm not sure how to help with that."</span>);
    }
}
</code></pre>
<h2 id="heading-azure-clu-limitations-why-we-needed-more">Azure CLU Limitations: Why We Needed More</h2>
<p>While this implementation worked, we quickly encountered several limitations:</p>
<h3 id="heading-rigid-parameter-extraction">Rigid Parameter Extraction</h3>
<p>We had to define explicit entities for each parameter and handle missing entities manually:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleAgentPerformanceIntent</span>(<span class="hljs-params">ITurnContext turnContext, List&lt;Entity&gt; entities</span>)</span>
{
    <span class="hljs-comment">// Required entity extraction</span>
    <span class="hljs-keyword">var</span> agentEntity = entities.FirstOrDefault(e =&gt; e.Category == <span class="hljs-string">"AgentName"</span>);
    <span class="hljs-keyword">if</span> (agentEntity == <span class="hljs-literal">null</span>)
    {
        <span class="hljs-keyword">await</span> turnContext.SendActivityAsync(<span class="hljs-string">"Please specify an agent name."</span>);
        <span class="hljs-keyword">return</span>;
    }

    <span class="hljs-keyword">var</span> dateRangeEntity = entities.FirstOrDefault(e =&gt; e.Category == <span class="hljs-string">"DateRange"</span>);
    <span class="hljs-keyword">if</span> (dateRangeEntity == <span class="hljs-literal">null</span>)
    {
        <span class="hljs-keyword">await</span> turnContext.SendActivityAsync(<span class="hljs-string">"Please specify a time period."</span>);
        <span class="hljs-keyword">return</span>;
    }

    <span class="hljs-comment">// Continue with handler...</span>
}
</code></pre>
<h3 id="heading-maintenance-burden">Maintenance Burden</h3>
<p>Each new capability required extensive changes across multiple systems:</p>
<ol>
<li><p>Creating and training a new CLU intent with dozens of examples</p>
</li>
<li><p>Adding entity definitions for any parameters</p>
</li>
<li><p>Adding a new case to our switch statement</p>
</li>
<li><p>Implementing a new handler method with parameter extraction</p>
</li>
<li><p>Adding permission checks specific to that intent</p>
</li>
</ol>
<h3 id="heading-contextual-amnesia">Contextual Amnesia</h3>
<p>The bot couldn't maintain conversation context or understand follow-up questions:</p>
<pre><code class="lang-plaintext">First query: "What was my busiest queue last month?"
    Bot processes the GetBusiestQueue intent successfully

Follow-up query: "And what about December?"
    CLU has no context from the previous query, so this likely fails or matches a different intent entirely
</code></pre>
<h2 id="heading-semantic-kernel-approach-dynamic-function-selection">Semantic Kernel Approach: Dynamic Function Selection</h2>
<p>With Semantic Kernel, we eliminated hard-coded intent matching in favor of AI-powered function selection:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QueueAnalyticsPlugin</span>
{
    [<span class="hljs-meta">KernelFunction, Description(<span class="hljs-meta-string">"Gets statistics for the busiest queue within a specified time period."</span>)</span>]
    [<span class="hljs-meta">Parameter(<span class="hljs-meta-string">"dateRange"</span>, <span class="hljs-meta-string">"The date range to analyze (e.g., 'last month', 'yesterday', etc.)"</span>)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">GetBusiestQueue</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> dateRange, KernelArguments arguments</span>)</span>
    {
        <span class="hljs-comment">// Security and implementation...</span>
    }

    [<span class="hljs-meta">KernelFunction, Description(<span class="hljs-meta-string">"Compares call volumes between two queues over a time period."</span>)</span>]
    [<span class="hljs-meta">Parameter(<span class="hljs-meta-string">"queue1"</span>, <span class="hljs-meta-string">"The first queue to compare"</span>)</span>]
    [<span class="hljs-meta">Parameter(<span class="hljs-meta-string">"queue2"</span>, <span class="hljs-meta-string">"The second queue to compare"</span>)</span>]
    [<span class="hljs-meta">Parameter(<span class="hljs-meta-string">"dateRange"</span>, <span class="hljs-meta-string">"The date range to analyze"</span>)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">CompareQueues</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> queue1, <span class="hljs-keyword">string</span> queue2, <span class="hljs-keyword">string</span> dateRange, KernelArguments arguments</span>)</span>
    {
        <span class="hljs-comment">// Security and implementation...</span>
    }
}
</code></pre>
<p>This approach allows the LLM to understand the user's intent and select the most appropriate function, even with varied phrasing and follow-up questions, while maintaining the same security guarantees.</p>
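<p>To make the selection step concrete, the wiring can be sketched as below. This is a hedged illustration rather than the project’s actual setup: the deployment name, endpoint, and environment variable are placeholders, and it assumes a recent Semantic Kernel release where <code>FunctionChoiceBehavior.Auto()</code> lets the model pick among registered plugin functions.</p>

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Build a kernel backed by Azure OpenAI (placeholder deployment/endpoint).
var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",
    endpoint: "https://example.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
var kernel = builder.Build();

// Register the plugin; the LLM sees each [KernelFunction] description.
kernel.Plugins.AddFromObject(new QueueAnalyticsPlugin(), "QueueAnalytics");

// Let the model decide which function (if any) answers the query.
var settings = new OpenAIPromptExecutionSettings
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "Which queue was busiest last month?",
    new KernelArguments(settings));
Console.WriteLine(result);
```

<p>With a setup like this, varied phrasings such as “busiest queue in January” or “which queue received the most calls yesterday?” all route to <code>GetBusiestQueue</code> without any intent training.</p>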
<h2 id="heading-the-semantic-kernel-plugin-architecture">The Semantic Kernel Plugin Architecture</h2>
<p>Semantic Kernel allowed us to create a more powerful bot by organizing functionality into plugins that the LLM can intelligently select based on user intent.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">SemanticKernelBot</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Kernel _kernel;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IMediator _mediator;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IPermissionService _permissionService;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">SemanticKernelBot</span>(<span class="hljs-params">Kernel kernel, IMediator mediator, IPermissionService permissionService</span>)</span>
    {
        _kernel = kernel;
        _mediator = mediator;
        _permissionService = permissionService;

        <span class="hljs-comment">// Register all plugins</span>
        RegisterPlugins();
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title">RegisterPlugins</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-comment">// Register our domain-specific plugins; each one needs the mediator for data access and the permission service for security checks</span>
        _kernel.Plugins.AddFromObject(<span class="hljs-keyword">new</span> QueueAnalyticsPlugin(_mediator, _permissionService), <span class="hljs-string">"QueueAnalytics"</span>);
        _kernel.Plugins.AddFromObject(<span class="hljs-keyword">new</span> AgentPerformancePlugin(_mediator, _permissionService), <span class="hljs-string">"AgentPerformance"</span>);
        _kernel.Plugins.AddFromObject(<span class="hljs-keyword">new</span> CallStatisticsPlugin(_mediator, _permissionService), <span class="hljs-string">"CallStatistics"</span>);
    }
}
</code></pre>
<h3 id="heading-plugin-implementation-with-security-checks">Plugin Implementation with Security Checks</h3>
<p>Each plugin function automatically enforces security checks before accessing any data:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QueueAnalyticsPlugin</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IMediator _mediator;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IPermissionService _permissionService;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QueueAnalyticsPlugin</span>(<span class="hljs-params">IMediator mediator, IPermissionService permissionService</span>)</span>
    {
        _mediator = mediator;
        _permissionService = permissionService;
    }

    [<span class="hljs-meta">KernelFunction, Description(<span class="hljs-meta-string">"Gets statistics for the busiest queue within a specified time period."</span>)</span>]
    [<span class="hljs-meta">Parameter(<span class="hljs-meta-string">"dateRange"</span>, <span class="hljs-meta-string">"The date range to analyze (e.g., 'last month', 'yesterday', etc.)"</span>)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">GetBusiestQueue</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> dateRange, KernelArguments arguments</span>)</span>
    {
        <span class="hljs-comment">// Extract tenant ID and user ID from context</span>
        <span class="hljs-keyword">var</span> tenantId = arguments[<span class="hljs-string">"tenantId"</span>] <span class="hljs-keyword">as</span> <span class="hljs-keyword">string</span>;
        <span class="hljs-keyword">var</span> userId = arguments[<span class="hljs-string">"userId"</span>] <span class="hljs-keyword">as</span> <span class="hljs-keyword">string</span>;

        <span class="hljs-comment">// Check permissions - this is the security boundary</span>
        <span class="hljs-keyword">if</span> (!<span class="hljs-keyword">await</span> _permissionService.HasPermission(userId, tenantId, Permission.ViewQueueAnalytics))
        {
            <span class="hljs-keyword">return</span> <span class="hljs-string">"You don't have permission to view queue analytics. Please contact your administrator."</span>;
        }

        <span class="hljs-comment">// Convert natural language date range to DateTime values</span>
        <span class="hljs-keyword">var</span> (startDate, endDate) = ParseDateRange(dateRange);

        <span class="hljs-comment">// Reuse existing mediator query</span>
        <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> _mediator.Send(<span class="hljs-keyword">new</span> GetBusiestQueueQuery(tenantId, startDate, endDate));

        <span class="hljs-keyword">return</span> <span class="hljs-string">$"The busiest queue between <span class="hljs-subst">{startDate:d}</span> and <span class="hljs-subst">{endDate:d}</span> was <span class="hljs-subst">{result.QueueName}</span> with <span class="hljs-subst">{result.CallCount}</span> calls."</span>;
    }
}
</code></pre>
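<p>The <code>ParseDateRange</code> helper called above is not shown in the original snippet. A minimal sketch of what it might do, assuming only a handful of common phrases need to be supported (a real implementation would cover far more, or delegate parsing to the LLM itself):</p>

```csharp
using System;

static class DateRangeParser
{
    // Hypothetical helper: maps a few natural-language phrases to a concrete
    // (start, end) pair. Anything unrecognized falls back to the last 7 days.
    public static (DateTime Start, DateTime End) ParseDateRange(string dateRange)
    {
        var today = DateTime.Today;
        switch (dateRange.Trim().ToLowerInvariant())
        {
            case "today":
                return (today, today.AddDays(1));
            case "yesterday":
                return (today.AddDays(-1), today);
            case "last month":
                var firstOfThisMonth = new DateTime(today.Year, today.Month, 1);
                return (firstOfThisMonth.AddMonths(-1), firstOfThisMonth);
            default:
                return (today.AddDays(-7), today); // fallback: last 7 days
        }
    }
}
```

<p>Failing soft to a default window keeps the bot responsive even when the phrase is unrecognized, at the cost of occasionally answering for a period the user didn’t intend; a stricter version could return an error and ask the user to rephrase.</p>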
<h2 id="heading-reusing-existing-business-logic">Reusing Existing Business Logic</h2>
<p>A key advantage is that each plugin simply calls our existing mediator-based business logic:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// Inside our plugin</span>
<span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> _mediator.Send(<span class="hljs-keyword">new</span> GetBusiestQueueQuery(tenantId, startDate, endDate));
</code></pre>
<p>This means:</p>
<ol>
<li><p>No duplication of business logic</p>
</li>
<li><p>Existing security checks remain in place</p>
</li>
<li><p>All tenant isolation guarantees are maintained</p>
</li>
</ol>
<h2 id="heading-natural-language-understanding-with-context">Natural Language Understanding with Context</h2>
<p>Semantic Kernel enables the bot to maintain context across the conversation:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleMessageAsync</span>(<span class="hljs-params">ITurnContext turnContext</span>)</span>
{
    <span class="hljs-keyword">var</span> userQuery = turnContext.Activity.Text;
    <span class="hljs-keyword">var</span> userId = turnContext.Activity.From.Id;

    <span class="hljs-comment">// Get chat history for context</span>
    <span class="hljs-keyword">var</span> chatHistory = <span class="hljs-keyword">await</span> _chatHistoryRepository.GetForUser(userId);

    <span class="hljs-comment">// Execute query through Semantic Kernel with context</span>
    <span class="hljs-keyword">var</span> kernelArguments = <span class="hljs-keyword">new</span> KernelArguments
    {
        [<span class="hljs-meta"><span class="hljs-meta-string">"userId"</span></span>] = userId,
        [<span class="hljs-meta"><span class="hljs-meta-string">"tenantId"</span></span>] = <span class="hljs-keyword">await</span> _tenantResolver.GetTenantIdForUser(userId),
        [<span class="hljs-meta"><span class="hljs-meta-string">"history"</span></span>] = chatHistory.ToString()
    };

    <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> _kernel.InvokePromptAsync(userQuery, kernelArguments);
    <span class="hljs-keyword">await</span> turnContext.SendActivityAsync(result.ToString());
}
</code></pre>
<p>This allows natural follow-up questions:</p>
<p><strong>User</strong>: "What was my busiest queue last month?"<br /><strong>Bot</strong>: "The busiest queue in January was Support with 1,245 calls."<br /><strong>User</strong>: "What about December?"<br /><strong>Bot</strong>: "In December, the busiest queue was Sales with 982 calls."</p>
<h2 id="heading-the-best-of-both-worlds-hybrid-approach">The Best of Both Worlds: Hybrid Approach</h2>
<p>While Semantic Kernel excels at conversation, CLU is still better for specific structured outputs like Adaptive Cards:</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleMessageAsync</span>(<span class="hljs-params">ITurnContext turnContext</span>)</span>
{
    <span class="hljs-keyword">var</span> userQuery = turnContext.Activity.Text;

    <span class="hljs-comment">// First check if this is a request for an Adaptive Card</span>
    <span class="hljs-keyword">var</span> cluResult = <span class="hljs-keyword">await</span> _languageClient.PredictAsync(userQuery);

    <span class="hljs-keyword">if</span> (cluResult.TopIntent == <span class="hljs-string">"MostBusyAgent"</span> &amp;&amp; cluResult.TopScore &gt; <span class="hljs-number">0.7</span>)
    {
        <span class="hljs-comment">// Use deterministic card generator</span>
        <span class="hljs-keyword">var</span> card = <span class="hljs-keyword">await</span> _cardGenerator.CreateMostBusyAgentAdaptiveCard(cluResult.Entities);
        <span class="hljs-keyword">await</span> turnContext.SendActivityAsync(MessageFactory.Attachment(card));
        <span class="hljs-keyword">return</span>;
    }

    <span class="hljs-comment">// Otherwise, use Semantic Kernel for natural language handling</span>
    <span class="hljs-keyword">await</span> HandleWithSemanticKernel(turnContext);
}
</code></pre>
<h2 id="heading-multi-tenancy-and-security-never-compromised">Multi-tenancy and Security: Never Compromised</h2>
<p>Every function implements permission checks before accessing any data:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">KernelFunction</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">GetAgentPerformance</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> agentName, <span class="hljs-keyword">string</span> dateRange, KernelArguments arguments</span>)</span>
{
    <span class="hljs-keyword">var</span> tenantId = arguments[<span class="hljs-string">"tenantId"</span>] <span class="hljs-keyword">as</span> <span class="hljs-keyword">string</span>;
    <span class="hljs-keyword">var</span> userId = arguments[<span class="hljs-string">"userId"</span>] <span class="hljs-keyword">as</span> <span class="hljs-keyword">string</span>;

    <span class="hljs-comment">// Security check #1: User permission</span>
    <span class="hljs-keyword">if</span> (!<span class="hljs-keyword">await</span> _permissionService.HasPermission(userId, tenantId, Permission.ViewAgentData))
    {
        <span class="hljs-keyword">return</span> <span class="hljs-string">"You don't have permission to view agent performance data."</span>;
    }

    <span class="hljs-comment">// Security check #2: Data boundary - ensure agent belongs to tenant</span>
    <span class="hljs-keyword">if</span> (!<span class="hljs-keyword">await</span> _agentRepository.BelongsToTenant(agentName, tenantId))
    {
        <span class="hljs-keyword">return</span> <span class="hljs-string">$"No agent named '<span class="hljs-subst">{agentName}</span>' was found in your organization."</span>;
    }

    <span class="hljs-comment">// Proceed with data retrieval...</span>
}
</code></pre>
<p>These checks ensure:</p>
<ul>
<li><p>Users can only access data they're authorized to view</p>
</li>
<li><p>No cross-tenant data leakage can occur</p>
</li>
<li><p>All requests are properly scoped to the user's tenant</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Transitioning to Semantic Kernel transformed the Clobba Teams Bot into a more powerful, context-aware assistant while maintaining strict security guarantees.</p>
<p>The key advantages:</p>
<ol>
<li><p><strong>Powerful natural language understanding</strong> that handles variations in phrasing</p>
</li>
<li><p><strong>Contextual awareness</strong> that maintains conversation state</p>
</li>
<li><p><strong>Security-first approach</strong> with permission checks in every function</p>
</li>
<li><p><strong>Tenant isolation guarantees</strong> to prevent data leakage</p>
</li>
<li><p><strong>Simplified development</strong> through plugin architecture</p>
</li>
</ol>
<p>By combining the strengths of Semantic Kernel and Azure CLU, we've created a bot that's both more powerful and more secure—giving users a modern AI experience without compromising on security.</p>
]]></content:encoded></item><item><title><![CDATA[Reinventing the Clobba Teams Bot: How Semantic Kernel Changed the Game]]></title><description><![CDATA[In my previous article, I shared why Semantic Kernel makes it straightforward to integrate AI into software without compromising security or reliability. This time, I’ll show how those principles worked in a real multi-tenant environment: Clobba Flex...]]></description><link>https://bogdanbujdea.dev/reinventing-the-clobba-teams-bot-how-semantic-kernel-changed-the-game</link><guid isPermaLink="true">https://bogdanbujdea.dev/reinventing-the-clobba-teams-bot-how-semantic-kernel-changed-the-game</guid><category><![CDATA[semantic kernel]]></category><category><![CDATA[openai]]></category><category><![CDATA[llm]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Wed, 19 Feb 2025 07:49:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739952454153/a431f229-80e2-4b57-b7e9-17e2b4119fec.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my previous article, I shared why <strong>Semantic Kernel</strong> makes it straightforward to integrate AI into software without compromising security or reliability. This time, I’ll show how those principles worked in a <strong>real multi-tenant environment</strong>: <strong>Clobba Flex</strong>, a platform developed by <a target="_blank" href="https://www.codesoftware.net/">Code Software</a>.</p>
<p>Below, I’ll outline our high-level approach for turning Clobba’s Teams bot into a secure, AI-driven experience. In the next post, I’ll share code samples that detail how <strong>Azure Conversational Language Understanding (CLU)</strong> and <a target="_blank" href="https://learn.microsoft.com/en-us/semantic-kernel/overview/"><strong>Semantic Kernel</strong></a> come together under the hood.</p>
<hr />
<h2 id="heading-background-clobba-flexs-multi-tenant-role-based-setup">Background: Clobba Flex’s Multi-Tenant, Role-Based Setup</h2>
<p><a target="_blank" href="https://www.codesoftware.net/clobba-flex/">Clobba Flex</a> provides dashboards, call analytics, and license usage reports, among other features, for multiple customers on the same infrastructure. Each user may have different permissions, and no one should ever see data belonging to another tenant. Needless to say, they take security seriously and have even obtained an ISO 27001 certification, highlighting their commitment to data security. This means that while adding an AI chatbot was a logical improvement for analyzing data with natural language, we couldn't just plug in a large language model (LLM) without addressing these challenges:</p>
<ol>
<li><p><strong>Cross-Tenant Data Isolation</strong>: One user’s query must never show another tenant’s information.</p>
</li>
<li><p><strong>Secure Handling of Customer Data</strong>: In a world where companies still block access to AI due to fears of data leaks and misuse for training AI models (which is understandable), we must assure our customers that their data will not leave our infrastructure.</p>
</li>
<li><p><strong>Accurate Reporting</strong>: LLMs are not known for their precision; they can sometimes make things up, which can cause confusion or lead to incorrect decisions for our customers.</p>
</li>
</ol>
<hr />
<h2 id="heading-first-attempt-direct-llm-access">First Attempt: Direct LLM Access</h2>
<p>A simple approach would have been to connect an LLM directly to Clobba’s database or a smaller set of data and let it extract information from there. However, this option doesn't address any of the three important points mentioned above, so it's not a viable solution.</p>
<hr />
<h2 id="heading-moving-to-azure-clu">Moving to Azure CLU</h2>
<p>Our initial solution was <a target="_blank" href="https://learn.microsoft.com/en-us/azure/ai-services/language-service/conversational-language-understanding/overview"><strong>Azure Conversational Language Understanding</strong></a> <strong>(Azure CLU)</strong> because it provided us with the security and precision we needed. Here’s how it worked:</p>
<ol>
<li><p><strong>Define Intents</strong>: For each query type—like “Most Busy Queue”—we trained an intent with around 30 example utterances, such as “What queue had the most calls this month” or “Identify the queue with the most calls today”. Azure CLU can also extract entities; for example, we can instruct it to detect queue names or date ranges.</p>
</li>
<li><p><strong>Check Permissions</strong>: If CLU recognizes an intent, the system verifies the user’s role. Valid users get their data; everyone else is refused with a nice message indicating the permissions they’re missing.</p>
</li>
<li><p><strong>Add Q&amp;A</strong>: We supplemented with <a target="_blank" href="https://azure.microsoft.com/en-us/products/ai-services/question-answering"><strong>Azure Question Answering</strong></a> so users can ask about Clobba’s features and functionality.</p>
</li>
</ol>
<p>You can get a better idea of this flow by looking at the picture below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739526744944/2d4e0d58-7354-4330-9c40-5a0231d3b481.png" alt="Azure CLU intents" class="image--center mx-auto" /></p>
<p>When a user asks a question, we send it to Azure CLU, which returns the identified intent along with a confidence score. We have a mapping between the intents and the required permissions needed to process that intent. If the user lacks the necessary permissions, we send a friendly message informing them to request those specific permissions and try again.</p>
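<p>Conceptually, that mapping is just a lookup table consulted before any handler runs. A hypothetical sketch (the intent names, permission values, and <code>_permissionService</code> wiring here are invented for illustration):</p>

```csharp
// Hypothetical sketch: each CLU intent maps to the permission required to run it.
private static readonly Dictionary<string, Permission> IntentPermissions = new()
{
    ["MostBusyQueue"] = Permission.ViewQueueAnalytics,
    ["MostBusyAgent"] = Permission.ViewAgentData,
    ["LicenseUsage"]  = Permission.ViewLicenseReports,
};

private async Task<bool> CanProcessIntentAsync(string userId, string tenantId, string topIntent)
{
    // Fail closed: intents without a mapping are rejected outright.
    if (!IntentPermissions.TryGetValue(topIntent, out var required))
        return false;

    return await _permissionService.HasPermission(userId, tenantId, required);
}
```

<p>The important property is that the check happens before any data is fetched, so an unauthorized user never triggers a query at all.</p>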
<h3 id="heading-pros-of-using-azure-clu">Pros of using Azure CLU</h3>
<ul>
<li><p><strong>It’s fast!</strong> Each time you add or change intents, you have to retrain the model, which takes a few minutes, but that’s not done very often. When you send a query to Azure CLU, however, the intent is detected in 0.2-0.5 seconds.</p>
</li>
<li><p><strong>It makes ensuring data security easy.</strong> This method keeps data secure by using Clobba’s existing role checks. Although it uses AI to train the model with those utterances, it doesn’t access customer data. Only the utterances and user questions are sent to Azure CLU, while the information from the database goes directly to the user, not to Azure CLU. This approach reassured customers about their data privacy concerns when using AI.</p>
</li>
<li><p><strong>The cost is predictable</strong>. With Azure CLU, a Search Service is required, and while the Free Tier is available, it does not support Advanced Model Training for intents. This increases the risk of similar intents being confused, reducing accuracy.</p>
<p>  To ensure higher precision, we use the Basic Tier of Azure AI Search with one instance, costing around $75/month. This enables Advanced Model Training in Azure CLU, allowing us to refine intent recognition and improve accuracy.</p>
</li>
</ul>
<h3 id="heading-cons-of-using-azure-clu">Cons of using Azure CLU</h3>
<p>It was predictable but limited—adding new features meant retraining more intents, which was time-consuming.</p>
<p>This meant that the bot was pretty limited because the conversation looked like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739526506134/8c078e7e-ce49-4b6b-bca5-3f12c0402f06.png" alt class="image--center mx-auto" /></p>
<p>As you can see, the user asks a question and the Clobba Teams bot responds straight to the point. The problem is that it's not 2015 anymore; we're in 2025. Nowadays, when people talk with a bot, they expect a ChatGPT-level conversation where they can ask for more details or have the bot explain it in a way that they can understand. How can we achieve this while providing accurate responses without risking a security breach?</p>
<hr />
<h2 id="heading-integrating-semantic-kernel-and-azure-openai">Integrating Semantic Kernel and Azure OpenAI</h2>
<p>To expand the bot’s intelligence, we introduced <strong>Semantic Kernel</strong> with <a target="_blank" href="https://learn.microsoft.com/en-us/azure/ai-services/openai/overview"><strong>Azure OpenAI</strong></a>:</p>
<ol>
<li><p><strong>Plugin-Based Structure</strong></p>
<ul>
<li><p>Each plugin handles a specific task (e.g., “Retrieve Most Busy Queue”, “Retrieve most busy agent”, “Retrieve all queues”, etc. ).</p>
</li>
<li><p>The LLM parses user queries and calls the correct plugin.</p>
</li>
<li><p>Inside that plugin, we check if the user is allowed to access that data using the same security model.</p>
</li>
</ul>
</li>
<li><p><strong>Enterprise Data Protection</strong></p>
<ul>
<li>Azure OpenAI ensures that Code Software's customer data is not used for model training, aligning with ISO 27001 standards for secure data handling. The inputs, outputs, embeddings, and training data are not shared with other customers or OpenAI, nor are they used to improve models without our consent. Because the Azure OpenAI Service runs entirely within Microsoft's Azure environment, it carries the same security and compliance guarantees as other Azure offerings.</li>
</ul>
</li>
<li><p><strong>Flexibility for Customers -</strong> we allow them to select how powerful the bot is</p>
<ul>
<li><p><strong>No Bot</strong>: Some clients opt out entirely.</p>
</li>
<li><p><strong>Azure CLU Only</strong>: Deterministic approach with minimal overhead.</p>
</li>
<li><p><strong>Azure CLU + Azure OpenAI</strong>: A more conversational experience under tight security constraints.</p>
</li>
</ul>
</li>
</ol>
<p>Here’s what a conversation looks like now:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739528177731/dd39f832-963f-4309-bc67-52e381257d8f.png" alt class="image--center mx-auto" /></p>
<p>As you can see, the conversation is more natural. What’s more, the user can now ask difficult questions that Azure CLU could not interpret. An LLM excels here because it can infer the context of the question and fill in the missing pieces without forcing the user to spell them out.</p>
<p>For example, an Azure CLU intent that requires a date range will not work unless the user specifies it.</p>
<p>E.g. <em>“Show me the busiest queue for</em> <strong><em>this month”</em></strong></p>
<p>However, an LLM has the history of the conversation and knows that earlier I talked about a specific date range. Thus, the query will use that information - “January 2024”, “January 2025”.</p>
<h3 id="heading-pros-of-using-semantic-kernel">Pros of using Semantic Kernel</h3>
<ul>
<li><p><strong>Strict Security</strong>: Plugins enforce permission checks before any data retrieval, ensuring a user never sees another tenant’s info.</p>
</li>
<li><p><strong>Maintained Precision</strong>: Rather than letting the LLM guess how to build SQL queries, each plugin executes vetted code paths.</p>
</li>
<li><p><strong>Easy Feature Expansion</strong>: No massive retraining each time—just add a new plugin.</p>
</li>
<li><p><strong>Natural conversation</strong>: It makes the conversation feel more natural, allowing users to speak as if they are talking to another person who understands the context.</p>
</li>
</ul>
<hr />
<h3 id="heading-cons-of-using-semantic-kernel">Cons of using Semantic Kernel</h3>
<ul>
<li><p><strong>It's slower than Azure CLU</strong>. Behind the scenes, Semantic Kernel first sends the request to Azure OpenAI for processing. The model decides that plugin A must be used, and then the result from the plugin goes back to OpenAI, which returns the final response. This round trip can take between 1 and 5 seconds, depending on the complexity of the query, which is longer than Azure CLU. However, in my opinion, the results are worth the wait.</p>
</li>
<li><p><strong>For debugging, it’s inherently unpredictable</strong>. Unlike compiled code, where you can reliably reproduce bugs and patch them, the non-deterministic nature of LLMs means there’s no guarantee you’ll even see the same issue twice. If an incorrect plugin call occurs, it might not happen the same way again, so you often have to accept it rather than fix it in the usual way.</p>
</li>
<li><p><strong>Unpredictable costs.</strong> With Azure CLU, we generally know the monthly cost, but Azure OpenAI uses a pay-per-use model. Costs depend on the number of tokens processed for both inputs and outputs. This means pricing increases with usage, which can make it more expensive than CLU, but also more flexible. In a future article, I'll share strategies to save costs for both Azure OpenAI and CLU, including tips to optimize search usage and cut unnecessary spending.</p>
</li>
</ul>
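<p>To make that round trip concrete, here’s a minimal sketch of a request flowing through Semantic Kernel with automatic function calling. This is illustrative only: the deployment name, endpoint, and <code>TaskPlugin</code> are placeholders, and depending on your Semantic Kernel version the auto-invocation setting may be <code>ToolCallBehavior</code> instead of <code>FunctionChoiceBehavior</code>.</p>
<pre><code class="lang-csharp">var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "gpt-4o",                          // placeholder
    endpoint: "https://my-resource.openai.azure.com/", // placeholder
    apiKey: Environment.GetEnvironmentVariable("AOAI_KEY")!);
builder.Plugins.AddFromType&lt;TaskPlugin&gt;();       // hypothetical plugin
var kernel = builder.Build();

var settings = new OpenAIPromptExecutionSettings
{
    // Let the model pick and invoke plugins on its own
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

// This single call hides two model round trips: one where the model selects
// the plugin, and one where it turns the plugin result into a final answer.
// That second hop is where the extra 1-5 seconds of latency comes from.
var answer = await kernel.InvokePromptAsync(
    "Complete my 'write blog post' task", new KernelArguments(settings));
Console.WriteLine(answer);
</code></pre>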
<h2 id="heading-next-steps">Next Steps</h2>
<p>In the upcoming article, I’ll share <strong>code samples</strong> that show how we connected Azure CLU, Semantic Kernel, and Azure OpenAI behind the scenes. You’ll learn how we enforce permissions and tackle the challenge of creating prompts that ensure precision.</p>
]]></content:encoded></item><item><title><![CDATA[From Hype to Reality: How Semantic Kernel Powers Smarter Apps]]></title><description><![CDATA[Introduction
I’ve been working with Semantic Kernel for the past year, and it’s completely changed the way I integrate AI into my applications. In this new blog series, I’ll walk through how to use Semantic Kernel in real-world projects, sharing the ...]]></description><link>https://bogdanbujdea.dev/from-hype-to-reality-how-semantic-kernel-powers-smarter-apps</link><guid isPermaLink="true">https://bogdanbujdea.dev/from-hype-to-reality-how-semantic-kernel-powers-smarter-apps</guid><category><![CDATA[clobba]]></category><category><![CDATA[semantic kernel]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Fri, 07 Feb 2025 11:41:38 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>I’ve been working with <a target="_blank" href="https://github.com/microsoft/semantic-kernel">Semantic Kernel</a> for the past year, and it’s completely changed the way I integrate AI into my applications. In this new blog series, I’ll walk through how to use Semantic Kernel in real-world projects, sharing the lessons I learned while building two apps:</p>
<ul>
<li><p><a target="_blank" href="https://task-analytics.com/"><strong>Task Analytics</strong></a>, my own SaaS for Todoist task management and insights</p>
</li>
<li><p><a target="_blank" href="https://www.codesoftware.net/clobba/"><strong>Clobba</strong></a>, a Teams bot that enforces robust security rules in a multi-tenant environment</p>
</li>
</ul>
<p>Before we get into specific demos, let’s talk about why I believe Semantic Kernel is one of the best tools for incorporating AI without losing control, reliability, or security.</p>
<hr />
<h2 id="heading-why-semantic-kernel">Why Semantic Kernel?</h2>
<h3 id="heading-1-enhance-your-app-with-little-effort">1. Enhance Your App with Little Effort</h3>
<p>Traditional AI integrations can be time-consuming. Even a seemingly simple feature, like controlling tasks with voice commands, can turn into a huge coding exercise—parsing user input, mapping it to database operations, and handling edge cases. With Semantic Kernel:</p>
<ul>
<li><p>You <strong>teach it a few skills</strong>—in my case, marking tasks as complete or changing due dates.</p>
</li>
<li><p>You let users talk in <strong>plain language</strong>.</p>
</li>
<li><p>Semantic Kernel <strong>interprets the request</strong> and <strong>executes the right plugin</strong>.</p>
</li>
</ul>
<p>That’s exactly how I approached <strong>Task Analytics</strong>. Instead of writing a large chunk of custom code, I leaned on Semantic Kernel to handle the complex language parsing. It became straightforward to let users say “Complete this task” or “Move my meeting to tomorrow,” and let the AI handle the rest.</p>
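<p>A Semantic Kernel skill is just an annotated method: the descriptions are what the model reads to decide which function matches the user’s request. Here’s a rough sketch of what a Task Analytics-style plugin could look like (<code>ITodoistClient</code> and its methods are hypothetical names for illustration, not the actual implementation):</p>
<pre><code class="lang-csharp">using System.ComponentModel;
using Microsoft.SemanticKernel;

public class TaskPlugin
{
    private readonly ITodoistClient _todoist; // hypothetical Todoist API wrapper

    public TaskPlugin(ITodoistClient todoist) =&gt; _todoist = todoist;

    [KernelFunction, Description("Marks a task as complete, given its name.")]
    public async Task&lt;string&gt; CompleteTaskAsync(
        [Description("The name of the task to complete")] string taskName)
    {
        var task = await _todoist.FindTaskAsync(taskName);
        if (task is null)
            return $"I couldn't find a task named '{taskName}'.";

        await _todoist.CompleteAsync(task.Id);
        return $"Done: '{task.Name}' is now complete.";
    }
}
</code></pre>
<p>Registering it is one line (<code>builder.Plugins.AddFromType&lt;TaskPlugin&gt;();</code>), and from then on “Complete this task” resolves to a vetted code path instead of free-form generated code.</p>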
<h3 id="heading-2-enforce-role-based-security">2. Enforce Role-Based Security</h3>
<p>Connecting an LLM directly to a database sounds tempting—libraries exist that can convert natural language into SQL queries automatically. But what happens if you have multiple customers, each with different permissions? Or different departments that shouldn’t see each other’s data? This is where Semantic Kernel’s fine-grained security controls shine.</p>
<p>Consider <strong>Clobba</strong>, a bot I built that interacts with data in a multi-tenant environment. If the AI just fed plain-text prompts into SQL, it would be easy for a manager in one department to accidentally (or maliciously) query sensitive information from another department. Semantic Kernel solves this by:</p>
<ul>
<li><p>Allowing you to define <strong>clear rules</strong> on what the AI can or cannot access.</p>
</li>
<li><p>Plugging into your existing <strong>role-based authentication</strong> so that each user only sees the data they’re authorized to see.</p>
</li>
</ul>
<p>The result? A powerful bot that can answer a range of questions while respecting strict security boundaries—a must-have in enterprise settings.</p>
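<p>In practice, the permission check lives inside the plugin itself, so it runs no matter how the model phrases the call. A sketch of the pattern (all type and method names here are hypothetical, shown only to illustrate the shape):</p>
<pre><code class="lang-csharp">[KernelFunction, Description("Returns call statistics for a department.")]
public async Task&lt;string&gt; GetDepartmentStatsAsync(
    [Description("The department name")] string department)
{
    // The current user comes from the app's own auth context, never from the
    // prompt, so the model cannot impersonate anyone.
    var user = _currentUser.GetAuthenticatedUser();
    if (!user.CanAccessDepartment(department))
        return "You don't have permission to view this department's data.";

    // A vetted, parameterized query path: the model never writes SQL itself,
    // and the tenant filter is applied unconditionally.
    var stats = await _reportService.GetStatsAsync(user.TenantId, department);
    return stats.ToSummary();
}
</code></pre>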
<h3 id="heading-conclusion-balancing-ai-freedom-and-control"><strong>Conclusion: Balancing AI Freedom and Control</strong></h3>
<p>Semantic Kernel provides exactly the right balance: you can let AI handle labor-intensive tasks—like language parsing, user-friendly interfaces, or data summaries—while still defining what it can and can’t touch. With role-based security and flexible skill creation, you maintain complete oversight, ensuring users don’t overstep their privileges or gain unintended access.</p>
<hr />
<h2 id="heading-looking-ahead-peakit-007-and-upcoming-posts">Looking Ahead: PeakIt #007 and Upcoming Posts</h2>
<p>On April 3rd, I’ll be speaking at the <a target="_blank" href="https://peakit.ro/">PeakIt #007 conference</a> in Brasov, sharing more about my journey with Semantic Kernel. The talk focuses on how developers can <strong>integrate AI in their apps in a responsible and useful way</strong>. If you’re curious about real-world AI adoption, I’d love to see you there.</p>
<p>Over the next few blog posts, I’ll show:</p>
<ul>
<li><p><strong>How to build secure, multi-tenant apps with Semantic Kernel</strong> using Clobba as an example, ensuring each user only has access to their data.</p>
</li>
<li><p><strong>How to create user-friendly and useful interactions</strong> like I did in Task Analytics, letting users talk naturally to manage tasks without heavy custom code.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Configuring Azure Application Insights to Prevent High Costs]]></title><description><![CDATA[What Went Wrong
Recently, I encountered a costly issue with my application. A misconfigured Polly retry policy caused over 80 million retries to a 3rd-party API within just two days. Each retry failure generated logs in the AppEvents and AppException...]]></description><link>https://bogdanbujdea.dev/configuring-azure-application-insights-to-prevent-high-costs</link><guid isPermaLink="true">https://bogdanbujdea.dev/configuring-azure-application-insights-to-prevent-high-costs</guid><category><![CDATA[Azure]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Tue, 17 Dec 2024 10:36:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1734432179328/b28782dd-c39a-4a2b-ab4d-b731d6aed02a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-went-wrong">What Went Wrong</h3>
<p>Recently, I encountered a costly issue with my application. A misconfigured <strong>Polly retry policy</strong> caused over <strong>80 million retries</strong> to a 3rd-party API within just two days. Each retry failure generated logs in the <strong>AppEvents</strong> and <strong>AppExceptions</strong> tables of the Azure Log Analytics workspace linked to my Application Insights. The result was a sharp increase in <strong>data ingestion</strong> and an unexpected rise in my Azure costs: normally I pay about 70 EUR per month, but this month I had to pay 150 EUR, effectively doubling my bill.</p>
<p>The main problem was that the Polly retry logic failed to handle the rate limit headers correctly because the API, for some reason, sent a <code>RetryAfter</code> value of 0 instead of providing an appropriate wait time. This led to requests being retried too frequently, despite the API's advice to pause, which in turn flooded the logging system with telemetry data.</p>
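<p>One defensive pattern would have prevented the flood: never trust the server’s <code>Retry-After</code> blindly, and enforce a minimum backoff when it comes back as zero. A sketch with Polly (the retry counts and delays are illustrative, not my production values):</p>
<pre><code class="lang-csharp">using Polly;

var minDelay = TimeSpan.FromSeconds(2);

var retryPolicy = Policy
    .Handle&lt;HttpRequestException&gt;()
    .OrResult&lt;HttpResponseMessage&gt;(r =&gt; (int)r.StatusCode == 429)
    .WaitAndRetryAsync(
        retryCount: 5,
        sleepDurationProvider: (attempt, outcome, _) =&gt;
        {
            var retryAfter = outcome.Result?.Headers.RetryAfter?.Delta
                             ?? TimeSpan.Zero;
            // Floor the server-provided value: a RetryAfter of 0 falls back
            // to exponential backoff instead of an immediate retry.
            var backoff = TimeSpan.FromSeconds(Math.Pow(2, attempt));
            return retryAfter &gt; minDelay ? retryAfter : backoff;
        },
        onRetryAsync: (_, _, _, _) =&gt; Task.CompletedTask);
</code></pre>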
<hr />
<h3 id="heading-steps-taken-to-prevent-this-in-the-future">Steps Taken to Prevent This in the Future</h3>
<p>To address this issue and avoid a similar situation in the future, I implemented several measures, including <strong>daily ingestion caps</strong>, <strong>telemetry sampling</strong>, <strong>alerts</strong>, and <strong>data purging</strong>. Here’s how you can configure these features in Application Insights to ensure cost-effective monitoring and logging.</p>
<hr />
<h3 id="heading-1-configure-daily-caps-in-log-analytics">1. Configure Daily Caps in Log Analytics</h3>
<p>Daily caps are an effective way to prevent runaway data ingestion costs. If your application unexpectedly generates excessive telemetry, the daily cap will stop data ingestion once the cap is reached, protecting you from unexpected billing spikes.</p>
<h4 id="heading-how-to-set-a-daily-cap">How to Set a Daily Cap:</h4>
<ol>
<li><p>Go to your <strong>Log Analytics Workspace</strong> in the Azure portal.</p>
</li>
<li><p>Navigate to <strong>Usage and estimated costs</strong> in the left menu.</p>
</li>
<li><p>Select the <strong>Daily cap</strong> tab.</p>
</li>
<li><p>Set the desired cap in GB (e.g., <code>2 GB/day</code>).</p>
</li>
<li><p>Save the settings.</p>
</li>
</ol>
<p>Azure will stop data ingestion for the day once the cap is reached. Note that ingestion resumes the next day.</p>
<hr />
<h3 id="heading-2-implement-telemetry-sampling-in-application-insights">2. Implement Telemetry Sampling in Application Insights</h3>
<p>Telemetry sampling reduces the volume of data sent to Application Insights by collecting a representative subset of telemetry data. This ensures you retain insights into application behavior while keeping ingestion volumes under control.</p>
<h4 id="heading-how-to-enable-telemetry-sampling">How to Enable Telemetry Sampling:</h4>
<ol>
<li><p>Open your <strong>Application Insights</strong> resource in the Azure portal.</p>
</li>
<li><p>Navigate to <strong>Sampling</strong> under the <strong>Configure</strong> section.</p>
</li>
<li><p>Enable <strong>Adaptive Sampling</strong>:</p>
<ul>
<li>Adaptive Sampling dynamically adjusts the sampling rate based on the volume of incoming telemetry.</li>
</ul>
</li>
<li><p>Adjust sampling rates for specific telemetry types (e.g., reduce detailed logs while keeping critical exceptions).</p>
</li>
</ol>
<p>This ensures that only essential logs are sent to the workspace, reducing data ingestion without losing valuable insights.</p>
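<p>If you configure Application Insights in code rather than the portal, the same setting can be made explicit. A minimal ASP.NET Core sketch, assuming the <code>Microsoft.ApplicationInsights.AspNetCore</code> package:</p>
<pre><code class="lang-csharp">var builder = WebApplication.CreateBuilder(args);

// Adaptive sampling is on by default in the ASP.NET Core SDK; stating it
// explicitly documents the intent and gives you a single switch to flip
// when you need full-fidelity telemetry while debugging.
builder.Services.AddApplicationInsightsTelemetry(options =&gt;
{
    options.EnableAdaptiveSampling = true;
});

var app = builder.Build();
app.Run();
</code></pre>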
<hr />
<h3 id="heading-3-configure-alerts-to-monitor-unexpected-behavior">3. Configure Alerts to Monitor Unexpected Behavior</h3>
<p>Alerts help you detect spikes in telemetry data before they escalate into high costs. By monitoring key metrics, such as data ingestion rates or exception counts, you can act quickly to resolve the underlying issue.</p>
<h4 id="heading-how-to-set-up-alerts">How to Set Up Alerts:</h4>
<ol>
<li><p>In your <strong>Application Insights</strong> resource, go to <strong>Alerts</strong> in the left menu.</p>
</li>
<li><p>Click <strong>+ New alert rule</strong>.</p>
</li>
<li><p>Configure the alert:</p>
<ul>
<li><p><strong>Scope</strong>: Select your Application Insights resource.</p>
</li>
<li><p><strong>Condition</strong>: Choose a signal like <code>Exceptions &gt; Count</code> or <code>Data Ingestion &gt; Volume</code>.</p>
</li>
<li><p><strong>Threshold</strong>: Set a threshold (e.g., trigger when exceptions exceed <code>1000</code> in 5 minutes).</p>
</li>
<li><p><strong>Action Group</strong>: Define an email or SMS notification for the alert.</p>
</li>
</ul>
</li>
<li><p>Save and enable the alert.</p>
</li>
</ol>
<p>This allows you to receive notifications when unusual patterns emerge, such as a sudden increase in exceptions or retries.</p>
<hr />
<h3 id="heading-4-purge-unnecessary-data-with-the-rest-api">4. Purge Unnecessary Data with the REST API</h3>
<p>After identifying the source of the excessive logs, I wanted to remove the unnecessary data to clean up my workspace, hoping it would also lower my bill. It didn't: you are billed for data at ingestion time, so purging it two days later had no effect on my invoice. Purging is still useful for keeping the workspace clean, though, so here's how I did it step-by-step:</p>
<h4 id="heading-step-1-assign-the-data-purger-role">Step 1: Assign the Data Purger Role</h4>
<ol>
<li><p>Go to your <strong>Log Analytics Workspace</strong> in the Azure portal.</p>
</li>
<li><p>Navigate to <strong>Access Control (IAM)</strong>.</p>
</li>
<li><p>Add a role assignment:</p>
<ul>
<li><p>Role: <strong>Data Purger</strong></p>
</li>
<li><p>Assign access to: Your Application Insights or service principal.</p>
</li>
</ul>
</li>
</ol>
<h4 id="heading-step-2-generate-an-access-token">Step 2: Generate an Access Token</h4>
<ol>
<li><p>Use Azure CLI or Postman to generate a token:</p>
<pre><code class="lang-bash"> az account get-access-token --resource https://management.azure.com/
</code></pre>
<p> Use the token in the <code>Authorization</code> header of your requests.</p>
</li>
</ol>
<h4 id="heading-step-3-send-a-purge-request">Step 3: Send a Purge Request</h4>
<ol>
<li><p>Use the following POST endpoint:</p>
<pre><code class="lang-bash"> POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/purge?api-version=2023-09-01
</code></pre>
</li>
<li><p>Include the following request body (set the value to whatever you need to purge):</p>
<pre><code class="lang-json"> {
   <span class="hljs-attr">"table"</span>: <span class="hljs-string">"AppExceptions"</span>,
   <span class="hljs-attr">"filters"</span>: [
     {
       <span class="hljs-attr">"column"</span>: <span class="hljs-string">"Type"</span>,
       <span class="hljs-attr">"operator"</span>: <span class="hljs-string">"="</span>,
       <span class="hljs-attr">"value"</span>: <span class="hljs-string">"Polly.RateLimit.RateLimitRejectedException"</span>
     }
   ]
 }
</code></pre>
</li>
<li><p>Monitor the operation with the returned <code>operationId</code>. In my case it took more than 24 hours for the purge to execute.</p>
</li>
</ol>
<hr />
<h3 id="heading-conclusion-why-configuring-application-insights-is-essential">Conclusion: Why Configuring Application Insights is Essential</h3>
<p>This experience highlighted the importance of properly configuring <strong>Application Insights</strong> to prevent high costs and ensure effective telemetry management. By implementing daily caps, telemetry sampling, alerts, and data purging, you can:</p>
<ul>
<li><p>Prevent runaway data ingestion costs.</p>
</li>
<li><p>Reduce unnecessary logs while retaining critical insights.</p>
</li>
<li><p>Detect and respond to unusual application behavior early.</p>
</li>
<li><p>Maintain a clean and manageable telemetry dataset.</p>
</li>
</ul>
<p>By taking these steps proactively, you can safeguard your monitoring budget and ensure your application remains performant and cost-effective.</p>
]]></content:encoded></item><item><title><![CDATA[The Hidden Costs of Selling SaaS from Romania]]></title><description><![CDATA[It's nice to wake up with Stripe payment notifications! Then your accountant tells you all the implications of selling SaaS licenses from Romania and you can't sleep anymore 😅
A month ago, my biggest challenges were things like making sure a page wa...]]></description><link>https://bogdanbujdea.dev/the-hidden-costs-of-selling-saas-from-romania</link><guid isPermaLink="true">https://bogdanbujdea.dev/the-hidden-costs-of-selling-saas-from-romania</guid><category><![CDATA[SaaS]]></category><category><![CDATA[romania]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Fri, 08 Nov 2024 11:33:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731065056321/b8b7df39-88df-435c-a0a3-4f35b79c32af.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It's nice to wake up with Stripe payment notifications! Then your accountant tells you all the implications of selling SaaS licenses from Romania and you can't sleep anymore 😅</p>
<p>A month ago, my biggest challenges were things like making sure a page was responsive or optimizing a database query. But now? Navigating the Romanian tax system for international SaaS sales feels like a whole new level of complexity. It’s almost enough to make me consider shutting down this little experiment because of the headaches involved.</p>
<p><strong>Let me break down the numbers to give you a clear picture:</strong></p>
<p>Since enabling Stripe on October 17th, I’ve sold <strong>10 licenses</strong> for a total of <strong>€66</strong>. Stripe’s commission takes <strong>€4</strong> (I think there’s also a VAT charge here), so I’m left with <strong>€62</strong>.</p>
<p>Then, my accountant charges <strong>€30 + VAT</strong> for processing up to 100 invoices each month (this doesn’t include the <strong>€84/month</strong> I already pay them for managing my PFA—let’s leave that out for now).</p>
<p>So after the accountant’s fees, I’m left with ~<strong>€26</strong>. Of course, that price covers up to 90 more customers, but the point stands: a SaaS that barely makes any money can be killed by accounting costs alone.</p>
<p>There’s also the tax that I need to pay in Romania, so I’ll be left with less than <strong>€20-€25</strong> after this.</p>
<p><strong>What about hosting</strong>? Well, Azure costs me <strong>~70 EUR/month</strong> at the current usage. I can probably fit 100-200 more clients before I need to scale up, but at this revenue I'm already losing money.</p>
<p><strong>But it doesn’t end there...</strong></p>
<p>Stripe collects VAT for each sale but leaves the actual <strong>VAT remittance</strong> to me. Within the EU, I can handle this through Romania’s OSS system, which simplifies things. But for <strong>non-EU countries</strong> like the UK, I’d need to register and remit VAT separately in each country where I have sales. I have no idea how much that would cost, and I don’t intend to find out, so for now I refunded the customer and let them keep their license.</p>
<p>And there’s more: for Romanian customers, I also need to issue an <strong>e-factura</strong> (electronic invoice), which Stripe doesn’t support—so that’s on me too.</p>
<p>Given all this complexity, the simplest option might be to shut it down and refund the payments. But I’m not ready to give up yet! Instead, I’m looking into <strong>Paddle</strong> as an alternative.</p>
<h3 id="heading-why-paddle">Why Paddle?</h3>
<p>Paddle acts as a <strong>Merchant of Record (MoR)</strong>, essentially a reseller for my product. They handle all the VAT collection and remittance in each country, and they take care of compliance with local tax authorities. This means I’ll receive just <strong>one monthly invoice from Paddle</strong>, instead of needing to manage VAT and invoices for each individual sale. In theory, this should make everything much simpler—and hopefully let me get back to focusing on building features, not dealing with tax filings!</p>
<p>It’s a bit disappointing that I need to switch payment processors instead of working on new product features, but it’s a necessary step.</p>
<p>I'll keep you posted on the process once I’m done with the Paddle integration (still waiting for their approval).</p>
<p>📣 <strong>To my fellow Romanian SaaS creators</strong>—if you've faced these challenges and have any tips to share, I’d love to hear from you!</p>
]]></content:encoded></item><item><title><![CDATA[I Launched My First SaaS Product]]></title><description><![CDATA[I’ve had several ideas for software products over the years—things I thought could be useful, fun to build, or just a way to experiment with new tech. But like a lot of developers, I often found myself starting strong and then getting stuck in this l...]]></description><link>https://bogdanbujdea.dev/i-launched-my-first-saas-product</link><guid isPermaLink="true">https://bogdanbujdea.dev/i-launched-my-first-saas-product</guid><category><![CDATA[SaaS]]></category><category><![CDATA[SaaS application development services]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[backend]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Mon, 21 Oct 2024 10:23:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729505577725/e532a5f9-5db7-4a04-bae4-c2ac8d89926e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I’ve had several ideas for software products over the years—things I thought could be useful, fun to build, or just a way to experiment with new tech. But like a lot of developers, I often found myself starting strong and then getting stuck in this loop of second-guessing. I’d either lose motivation because someone else had built something similar, or I’d get overwhelmed by everything I thought needed to be perfect before launch.</p>
<p>Being a backend developer, I always felt more comfortable with the server-side stuff, but taking an idea and turning it into a product people could actually use—frontend and all—that’s where things got tricky. So, I finally decided to challenge myself: pick a simple idea, build it, and actually launch it. No overthinking, no polishing things until they’re perfect. Just build and ship.</p>
<p>This article is about how I made that happen, focusing on shipping something usable instead of waiting for perfection.</p>
<h2 id="heading-what-was-stopping-me-from-launching-a-saas">What was stopping me from launching a SaaS?</h2>
<p>If you're looking to launch a product and want to reach a large audience, my opinion is that building a web app is the best way to go. Better yet, a PWA can give your app a native feel, and you don’t have to worry about building apps for different platforms. But what happens when most of your experience is in backend development? Sure, any developer can put together a basic website, but it's 2024, not 1994—your app needs to do more than just work; it needs to stand out. That was the first hurdle I faced when I had an idea.</p>
<p>Another challenge I’ve always faced is discouraging myself early on. If I worked for a week or two and saw another similar app, I would convince myself there was no point in continuing—after all, why build something that's already out there? Or, worse, if I heard someone dislike a similar app, I would assume everyone would hate mine too. Over the years, this led to several ideas being abandoned, many of which others eventually built.</p>
<p>But last month, I decided to break that cycle. I challenged myself to build a simple app and launch it.</p>
<h2 id="heading-the-idea">The Idea</h2>
<p>The goal of this challenge was simple: to launch the product. Success wasn’t tied to any financial targets or milestones—it was purely about shipping something quickly. That’s why I decided to build on something I’d already created a few years ago, a small Blazor app I used personally to analyze my <a target="_blank" href="https://todoist.com/">Todoist</a> tasks and plan long-term. Todoist is a simple app, great for planning your tasks for today or this week, but it’s not that good for thinking far ahead, planning more complex tasks (goals), or showing statistics about your productivity. I figured that if it’s helpful for me and I already have some of the functionality done, it should be useful for others as well, and it wouldn’t take long to rewrite it as a SaaS product. I was wrong about the last part: what I thought would take one week took almost two, and funnily enough, it wasn’t the development that required the most work.</p>
<p>For example, I spent a lot of time obsessing over a name and domain, until I realized I was wasting more time on this than on the actual implementation. In the end, I went with <a target="_blank" href="https://task-analytics.com">https://task-analytics.com</a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729503242020/31217d19-7eb8-4ef2-8828-e0491999d867.png" alt="Task Analytics landing page" class="image--center mx-auto" /></p>
<h2 id="heading-writing-frontend-code-with-ai">Writing frontend code with AI</h2>
<p>Now let’s get back to my challenge, the key was to develop something simple and usable. I didn’t want to spend months developing the app, knowing that the longer I worked on it, the greater the risk of losing momentum and quitting. However, there’s still the frontend issue!</p>
<p>As a .NET developer, it’s easier for me to pick up Blazor than to dive into React or Vue. Still, Blazor doesn’t make the frontend magically easy—knowledge of HTML/CSS is still required, and most of my frontend experience consists of admin interfaces (functional, but not responsive) or predefined component libraries (Radzen, Syncfusion, etc.) that are easy to use but not “beautiful” by default. So, should I start learning frontend development? Honestly, life is too short for that, especially when AI tools can step in. At the end of August I heard that <a target="_blank" href="https://www.cursor.com/">Cursor AI</a> was generating some good impressions, so I thought it was time to try it as well. It’s a tool that integrates AI right into your code editor. If you’re familiar with VS Code, Cursor AI feels like home since it's a fork of VS Code, but with AI built into it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729502579584/e81a46ba-4e8d-4cea-93c0-9939c5a13a3b.png" alt="Cursor AI" class="image--center mx-auto" /></p>
<p>Here’s an example: I can create an empty <code>Settings.razor</code> page, then give it a prompt like the one below:</p>
<p><em>Implement the settings page UI and functionality. Make sure it’s consistent with the Home page. The settings page should include a profile picture, user name, join date, and a sign-out button. Call the user info endpoint when the page initializes, and prompt the user on sign-out to confirm. If they say yes, clear the cache and redirect to the home page.</em></p>
<p>In response, Cursor generates the code in less than 30 seconds. If it looks good, I hit a button and apply the changes. If not, I can keep chatting with the AI to tweak it.</p>
<p>An experienced developer could probably write the functionality in minutes, but it’s the UI that’s tricky. In my opinion, the hardest part of frontend work isn't writing the code—it’s the design. Without a clear design, I spend more time tweaking layouts and colors than actually writing new functionality. With AI, I can give broad directions like “make it consistent with the Home page,” and it ensures the pages have the same structure and color scheme. If I don’t like the look, I can prompt, “make it more modern,” and the AI adjusts the design. Of course, that’s a pretty bad prompt; I use it only when I don’t know what I want, just that I don’t like what I see 😅. It probably won’t get great results, but the more descriptive I get, the better the output.</p>
<p>Another benefit is handling responsiveness. I can simply prompt, “make it responsive,” and then test it on different screen sizes. If something’s off, I can describe what needs to change, or upload screenshots and have the AI fix the UI. Sometimes I’ll even combine ChatGPT and Cursor AI: for example, I give a screenshot of my page to ChatGPT, ask it to roast the design and summarize its criticism as requirements, then hand those requirements to Cursor AI.</p>
<p>I’ll make a YouTube video this week going through some examples of where AI handles the task very well, but also where it screws things up and you end up spending more time improving the prompts than writing the actual code.</p>
<h2 id="heading-balancing-perfection-and-practicality">Balancing Perfection and Practicality</h2>
<p>Now, Cursor AI isn’t perfect—it’s far from it. It still makes mistakes, and sometimes it takes longer to generate code than it would to write it manually, as I said above. But with practice, you learn which prompts work best and which to avoid. For me, it made frontend development feasible, even though I hadn’t touched CSS in over a decade. I’m not a frontend developer, but even I can tell that the Lighthouse score in the screenshot below is pretty bad, and it proves that AI doesn’t produce flawless code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729500507805/e6c4cc5e-db37-41ae-89ce-353a143caf6a.png" alt="Lighthouse score" class="image--center mx-auto" /></p>
<p>If the app proves successful, I'll definitely invest time into optimizing it or even bringing in a designer. But, as I mentioned before, the goal was to launch something usable, not perfect. Sometimes, that approach clashed with my usual instincts, especially on the backend. I love squeezing every bit of performance out of my code, but this wasn’t the time for that. For instance, I started writing Azure Functions for handling heavy operations, only to realize I was about to spend 1-2 days on something unnecessary at this stage. Instead, I opted for a simpler solution that I could write in 30 minutes. It was tough to take this route, knowing all too well the risks of technical debt. But in the end, I’m more satisfied with having launched something practical rather than perfect.</p>
<p>The point wasn’t to create an open-source project showcasing flawless .NET code—it was about sticking to the core goals and priorities. That said, my app isn’t slow or expensive to run: I pay around 50-60 EUR/month for Azure, and I would probably only need to scale up (or make some serious performance improvements) once I go past 100-200 users. Until then, I can focus on building new functionality.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729505760679/2cf97001-a544-4711-bb29-bc07e0b3309c.png" alt="Azure costs" class="image--center mx-auto" /></p>
<p>Just to be clear, I’m not advocating for cutting corners to the point where it harms the product. I’ve seen indie developers justify junk code in paid apps that barely function, and I have no intention of going down that road.</p>
<h2 id="heading-writing-backend-code-with-ai">Writing backend code with AI</h2>
<p>I also used Cursor AI on the backend when things got repetitive or too simple to bother coding manually.</p>
<p>Even though I made some exceptions on the performance and quality, there were still some things that I wasn’t willing to give up on. For example, implementing a good CI/CD pipeline, not having any warnings, writing some unit tests for critical areas of the business logic and writing clean code overall. Even though I have a decent amount of backend code generated by AI, I made sure it respects basic principles of clean code. Sometimes I actually had AI refactor my code by giving it some similar classes and asking it to remove the code duplication, which it handled pretty well. <em>If you want more tips on using AI in your code, I have a</em> <a target="_blank" href="https://www.linkedin.com/feed/update/urn:li:activity:7249726589628805120/"><em>LinkedIn post</em></a> <em>about this you can read.</em></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>If I were to develop this app in an enterprise setting, it would easily take a year. First, you’d spend 2-3 months planning the app, followed by another few months just getting approvals. Then comes the implementation phase, where a significant amount of time would be spent reviewing code and debating things like whether to use microservices or a monolithic architecture 😅.</p>
<p>Now that my app is live (and it even has paying users), I can consider this challenge complete. But this is just the beginning. As the title says, this is my <strong>first</strong> SaaS app, I’m not stopping here. I have more ideas in the pipeline, and I plan to build in public to show how you can build your own product as a .NET developer, even if your expertise lies mostly in the backend. I’ll follow up with more blog posts and YouTube videos that will have more details about my journey, so make sure you follow me if you want to know more!</p>
]]></content:encoded></item><item><title><![CDATA[Upgrading to .NET 8]]></title><description><![CDATA[I am currently working on several projects, and I plan to migrate all of them to .NET 8. That's why I thought it would be helpful to write an article detailing the various challenges I encountered during this process.

Port 80 is no longer the defaul...]]></description><link>https://bogdanbujdea.dev/upgrading-to-net-8</link><guid isPermaLink="true">https://bogdanbujdea.dev/upgrading-to-net-8</guid><category><![CDATA[.NET]]></category><category><![CDATA[#dotnet8]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Thu, 16 Nov 2023 01:17:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1700123795535/9db136d1-5198-4d18-af06-cbf29257ffc9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am currently working on several projects, and I plan to migrate all of them to .NET 8. That's why I thought it would be helpful to write an article detailing the various challenges I encountered during this process.</p>
<ol>
<li><h2 id="heading-port-80-is-no-longer-the-default">Port 80 is no longer the default</h2>
</li>
</ol>
<p>This project is a fresh ASP.NET Web API on .NET 7, only a few days old, but I still ran into issues upgrading it to .NET 8. When I deployed the project to Azure Container Apps I got this error:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">upstream connect error or disconnect/reset before headers. retried and the latest reset reason:</span> <span class="hljs-string">remote</span> <span class="hljs-string">connection</span> <span class="hljs-string">failure,</span> <span class="hljs-attr">transport failure reason: delayed connect error:</span> <span class="hljs-number">111</span>
</code></pre>
<p>After ~30 minutes I figured out that .NET 8 listens on port 8080 instead of 80 by default, and a quick search confirmed that this is a documented breaking change. Here's the link with more details:</p>
<p><a target="_blank" href="https://learn.microsoft.com/en-us/dotnet/core/compatibility/containers/8.0/aspnet-port">https://learn.microsoft.com/en-us/dotnet/core/compatibility/containers/8.0/aspnet-port</a></p>
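<p>There are two ways to deal with this: either point your Container App's ingress at port 8080, or restore the old behavior in the container image. A minimal Dockerfile sketch of the second option (the base image tag is an assumption; <code>ASPNETCORE_HTTP_PORTS</code> is the environment variable .NET 8 uses to configure the listening port):</p>

```dockerfile
# Restore the pre-.NET 8 behavior: listen on port 80 again
FROM mcr.microsoft.com/dotnet/aspnet:8.0
ENV ASPNETCORE_HTTP_PORTS=80
EXPOSE 80
```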
<p>I'll update this blog post as I run into more issues during the migration.</p>
]]></content:encoded></item><item><title><![CDATA[Deploying Azure SQL Server using Bicep]]></title><description><![CDATA[In this article, part of our ongoing series on Azure DevOps, we're diving into how you can deploy Azure SQL Server using Bicep. This process allows you to deploy an Azure SQL Server, generate its connection string, and then pass this connection strin...]]></description><link>https://bogdanbujdea.dev/deploying-azure-sql-server-using-bicep</link><guid isPermaLink="true">https://bogdanbujdea.dev/deploying-azure-sql-server-using-bicep</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure]]></category><category><![CDATA[SQL Server]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Sat, 14 Oct 2023 11:02:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697280707485/857605bc-dfd0-4715-8910-35b8aca10c06.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, part of our ongoing series on Azure DevOps, we're diving into how you can deploy Azure SQL Server using Bicep. This process allows you to deploy an Azure SQL Server, generate its connection string, and then pass this connection string into another Bicep file that sets up an App Service in Azure.</p>
<p>Before we get started, I'd like to point out that I'll only show certain parts of the Bicep files to keep this article as short as possible, but you can find the repository link at the end of the article.</p>
<h3 id="heading-how-the-bicep-files-will-work">How The Bicep Files Will Work</h3>
<ol>
<li><p><strong>Variables</strong>: We will get the <code>DB_ADMIN_USERNAME</code> and <code>DB_ADMIN_PASSWORD</code> from our Azure DevOps pipeline variables.</p>
</li>
<li><p><strong>PowerShell Script</strong>: These variables will then be fed into our PowerShell script, which will trigger the Bicep deployment.</p>
</li>
<li><p><strong>SQL Server Module</strong>: A separate Bicep module will handle the creation of the SQL Server, generating a connection string as an output.</p>
</li>
<li><p><strong>App Service Module</strong>: This connection string will then be used as a parameter in another Bicep module that creates the App Service.</p>
</li>
<li><p><strong>Configuration</strong>: The connection string will be added to the App Service configuration under the connection strings instance.</p>
</li>
<li><p><strong>SKU and Tier</strong>: We also have a JSON configuration file that specifies the SKU and tier of the database, which for this example will be Basic.</p>
</li>
</ol>
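<p>The PowerShell script itself isn't shown in full in this article, but a minimal sketch of what such a <code>deploy.ps1</code> could look like is below (the parameter names and file paths are assumptions for illustration, not the repository's actual script):</p>

```powershell
# deploy.ps1 -- triggers the Bicep deployment (illustrative sketch)
param($resourceGroupName, $location, $configFileName)

# Make sure the resource group exists
az group create --name $resourceGroupName --location $location

# Deploy main.bicep with the environment-specific parameter file,
# injecting the admin credentials from the pipeline variables
az deployment group create `
    --resource-group $resourceGroupName `
    --template-file ./main.bicep `
    --parameters "@$configFileName" `
    --parameters databaseAdminUserName=$env:DB_ADMIN_USERNAME `
                 databaseAdminPassword=$env:DB_ADMIN_PASSWORD
```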
<h3 id="heading-bicep-code-explained">Bicep Code Explained</h3>
<p>The <code>main.bicep</code> file combines the modules for the SQL Server and the App. The SQL Server module gets its parameters like <code>databaseSku</code>, <code>databaseTier</code>, <code>databaseAdminUserName</code>, and <code>databaseAdminPassword</code> from the main Bicep parameters.</p>
<p>Here is a snippet from <code>main.bicep</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-string">module</span> <span class="hljs-string">sqlServer</span> <span class="hljs-string">'sqlserver.bicep'</span> <span class="hljs-string">=</span> {
  <span class="hljs-attr">name:</span> <span class="hljs-string">'sqlserver'</span>
  <span class="hljs-attr">params:</span> {
    <span class="hljs-attr">prefix:</span> <span class="hljs-string">prefix</span>
    <span class="hljs-attr">location:</span> <span class="hljs-string">location</span>
    <span class="hljs-attr">sku:</span> <span class="hljs-string">databaseSku</span>
    <span class="hljs-attr">tier:</span> <span class="hljs-string">databaseTier</span>
    <span class="hljs-attr">administratorLogin:</span> <span class="hljs-string">databaseAdminUserName</span>
    <span class="hljs-attr">administratorLoginPassword:</span> <span class="hljs-string">databaseAdminPassword</span>
  }
}
</code></pre>
<p>The <code>sqlserver.bicep</code> module is responsible for creating the SQL Server. It also generates a connection string which will be outputted and used by the <code>app.bicep</code> module.</p>
<pre><code class="lang-yaml"><span class="hljs-string">output</span> <span class="hljs-string">dbConnectionString</span> <span class="hljs-string">string</span> <span class="hljs-string">=</span> <span class="hljs-string">connectionString</span>
</code></pre>
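<p>The article doesn't show how <code>connectionString</code> itself is composed inside <code>sqlserver.bicep</code>; a minimal sketch could look like this (the resource symbol names <code>sqlServer</code> and <code>database</code> are assumptions):</p>

```bicep
// Compose the ADO.NET connection string from the deployed resources
var connectionString = 'Server=tcp:${sqlServer.properties.fullyQualifiedDomainName},1433;Initial Catalog=${database.name};User ID=${administratorLogin};Password=${administratorLoginPassword};Encrypt=True;'

output dbConnectionString string = connectionString
```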
<p>The <code>app.bicep</code> module creates the App Service and attaches the SQL database to it via the connection string.</p>
<pre><code class="lang-yaml"><span class="hljs-string">param</span> <span class="hljs-string">connectionString</span> <span class="hljs-string">string</span>
<span class="hljs-string">resource</span> <span class="hljs-string">webApp</span> <span class="hljs-string">'Microsoft.Web/sites@2022-03-01'</span> <span class="hljs-string">=</span> {
  <span class="hljs-string">//</span> <span class="hljs-string">...</span> <span class="hljs-string">(other</span> <span class="hljs-string">properties)</span>
  <span class="hljs-attr">properties:</span> {    
    <span class="hljs-string">//</span> <span class="hljs-string">...</span>
    <span class="hljs-attr">siteConfig:</span> {
      <span class="hljs-attr">connectionStrings:</span> [
        {
          <span class="hljs-attr">connectionString:</span> <span class="hljs-string">connectionString</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">'DefaultConnection'</span>
          <span class="hljs-attr">type:</span> <span class="hljs-string">'SQLAzure'</span>
        }
      ]
    }
  }
}
</code></pre>
<p>The <code>prod.json/qa.json</code> configuration files will specify database details such as its SKU and tier. We can use different values according to our environment. In this case, I'll use Basic for both, but in real-world applications, you might use a lower tier for non-prod environments and a more powerful one for the production environment.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"parameters"</span>: {
    <span class="hljs-comment">// ...</span>
    <span class="hljs-attr">"databaseTier"</span>: {
      <span class="hljs-attr">"value"</span>: <span class="hljs-string">"Basic"</span>
    },
    <span class="hljs-attr">"databaseSku"</span>: {
      <span class="hljs-attr">"value"</span>: <span class="hljs-string">"Basic"</span>
    }
  }
}
</code></pre>
<p>The full source code is available on this Azure DevOps repository:<br /><a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline</a></p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>In a real-world application you would also configure database backups using Bicep, allow managed identity authentication, and so on... but for this tutorial, I wanted to keep things simple and show you how easy it is to spin up an Azure SQL Server instance using Bicep. If you would like me to expand on this topic just drop a comment below 👇</p>
]]></content:encoded></item><item><title><![CDATA[Conditional Bicep Deployment in Azure DevOps Using Git]]></title><description><![CDATA[In a previous article, we've explored how Bicep makes deploying infrastructure to Azure both efficient and streamlined. This approach shines particularly when you need to spin up new environments rapidly. However, you've likely noticed a snag: our Az...]]></description><link>https://bogdanbujdea.dev/conditional-bicep-deployment-in-azure-devops-using-git</link><guid isPermaLink="true">https://bogdanbujdea.dev/conditional-bicep-deployment-in-azure-devops-using-git</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Bicep]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Azure Pipelines]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Mon, 02 Oct 2023 15:05:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1696258754179/9c27dd42-9eb1-4a5d-af04-6c32c77c8260.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In <a target="_blank" href="https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines">a previous article</a>, we've explored how Bicep makes deploying infrastructure to Azure both efficient and streamlined. This approach shines particularly when you need to <a target="_blank" href="https://bogdanbujdea.dev/provisioning-new-environments-with-bicep-and-azure-devops-yaml-pipelines">spin up new environments rapidly</a>. However, you've likely noticed a snag: our Azure DevOps pipeline reruns Bicep deployments with every execution, regardless of whether the Bicep files have changed. While Azure is intelligent enough to assess file changes against the existing resource group setup, this comparison isn't instantaneous; it can take up valuable minutes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696257515237/9d024ff7-0a1a-489f-a05f-c18dd853d87f.png" alt class="image--center mx-auto" /></p>
<p>As you can see in the screenshot above, it takes ~1m 15s to run a deployment that doesn't have any infrastructure changes.</p>
<p>As a solution, you might consider disabling the Bicep deployment altogether, triggering it manually only when required. But that approach has its pitfalls: What if you deploy a new feature requiring infrastructure modifications but forget that critical manual step? You risk breaking your application, causing downtime for your users until you update the infrastructure and redeploy.</p>
<p>To sidestep these issues, we'll employ a more surgical method: using Git commands to detect changes in our Bicep files, triggering the infrastructure deployment only when necessary. This not only streamlines the process but also minimizes the risk of human error.</p>
<h2 id="heading-using-git-to-detect-changes-in-bicep-files">Using Git to detect changes in Bicep files</h2>
<p>Azure DevOps doesn't offer a straightforward mechanism to conditionally run a pipeline step based on changes to specific files or folders. To work around this limitation, we'll leverage Git, specifically the <code>git diff</code> command, to achieve this granular control.</p>
<p>Before we can execute any git commands we have to fetch our repository, so I'll use this command:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">checkout:</span> <span class="hljs-string">self</span>
    <span class="hljs-attr">fetchDepth:</span> <span class="hljs-number">0</span>
</code></pre>
<p>Now we are going to add a <code>script</code> step in our pipeline which will check if there are any .bicep files modified and put this result into a variable. Our script looks like this:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">|</span>
     <span class="hljs-string">echo</span> <span class="hljs-string">"##vso[task.setvariable variable=RunBicepDeployment]$(git diff --quiet HEAD HEAD~1 **/*.bicep; echo $?)"</span>
</code></pre>
<p>I know it looks very messy, so let's break it down into two parts:</p>
<ol>
<li>Checking if Bicep files have been modified in the latest commit</li>
<li>Setting a variable in our pipeline with the result of the previous command</li>
</ol>
<p>To check if any .bicep files have been modified we use this command:</p>
<pre><code class="lang-yaml"><span class="hljs-string">git</span> <span class="hljs-string">diff</span> <span class="hljs-string">--quiet</span> <span class="hljs-string">HEAD</span> <span class="hljs-string">HEAD~1</span> <span class="hljs-string">**/*.bicep;</span> <span class="hljs-string">echo</span> <span class="hljs-string">$?</span>
</code></pre>
<p>Here's a breakdown of each component of this command:</p>
<ul>
<li><p><code>git diff</code>: The basic command to show differences between two points in your Git history.</p>
</li>
<li><p><code>--quiet</code>: This flag suppresses output and focuses solely on the exit status. The exit status tells you whether or not there are differences.</p>
</li>
<li><p><code>HEAD</code>: Represents the latest commit in the current branch.</p>
</li>
<li><p><code>HEAD~1</code>: Represents the commit just before the latest commit in the current branch.</p>
</li>
<li><p><code>**/*.bicep</code>: A glob pattern indicating that we're interested in any <code>.bicep</code> files, regardless of their location in the directory structure.</p>
</li>
<li><p><code>echo $?</code>: Outputs the exit status of the last command. If the exit status is 0, it means there's no difference; otherwise, it'll be 1, indicating a difference.</p>
</li>
</ul>
<p>Putting it all together, this command compares the current and previous commits, checks if any <code>.bicep</code> files have changed, and then echoes the result. An exit code of 0 means no change, and 1 means there is a change.</p>
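<p>You can verify this exit-code behavior locally in a throwaway repository. The sketch below uses an explicit <code>-- '*.bicep'</code> pathspec (a slight variation on the pipeline's <code>**/*.bicep</code> glob) so it works regardless of shell globbing settings:</p>

```shell
# Create a scratch repo and commit a .bicep file, then an unrelated commit
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "base"
echo "param location string" > main.bicep
git add main.bicep
git -c user.email=ci@example.com -c user.name=ci commit -q -m "add bicep file"

# Latest commit touched a .bicep file -> git diff --quiet exits with 1
git diff --quiet HEAD HEAD~1 -- '*.bicep'; echo $?   # prints 1

git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "no infra change"

# Latest commit didn't touch any .bicep file -> exits with 0
git diff --quiet HEAD HEAD~1 -- '*.bicep'; echo $?   # prints 0
```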
<p>The next step is to set a variable with this exit code so we can reuse it in another task:</p>
<pre><code class="lang-yaml"><span class="hljs-string">echo</span> <span class="hljs-string">"##vso[task.setvariable variable=RunBicepDeployment]$(git diff --quiet HEAD HEAD~1 **/*.bicep; echo $?)"</span>
</code></pre>
<p><code>echo "##vso[task.setvariable variable=RunBicepDeployment]</code>: This is Azure DevOps specific syntax to set a pipeline variable. It sets a variable named <code>RunBicepDeployment</code> which we'll reuse in the next step:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
  <span class="hljs-attr">inputs:</span>
    <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
    <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
    <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
    <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/deploy.ps1'</span>                
    <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(resourceGroupName) $(location) $(configFileName)'</span>
  <span class="hljs-attr">condition:</span> <span class="hljs-string">eq(variables.RunBicepDeployment,</span> <span class="hljs-string">'1'</span><span class="hljs-string">)</span>
</code></pre>
<p>Our <code>Deploy Bicep Infrastructure</code> task has only one change: the <code>condition</code> added at the end, which allows the task to run only if the <code>RunBicepDeployment</code> variable equals <code>1</code>.</p>
<p>Our entire stage looks like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">QA</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To QA Env'</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">Test</span>
  <span class="hljs-attr">variables:</span>
    <span class="hljs-attr">location:</span> <span class="hljs-string">'westeurope'</span>
    <span class="hljs-attr">configFileName:</span> <span class="hljs-string">'qa.json'</span>
    <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">'azure-devops-yaml-pipeline-qa'</span>
    <span class="hljs-attr">stagingAppUrl:</span> <span class="hljs-string">'https://bogdan-todo-qa-app-staging.azurewebsites.net/health'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">checkout:</span> <span class="hljs-string">self</span>
          <span class="hljs-attr">fetchDepth:</span> <span class="hljs-number">0</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">script:</span> <span class="hljs-string">|
            echo "##vso[task.setvariable variable=RunBicepDeployment]$(git diff --quiet HEAD HEAD~1 **/*.bicep; echo $?)"
</span>          <span class="hljs-attr">displayName:</span> <span class="hljs-string">Check</span> <span class="hljs-string">Bicep</span> <span class="hljs-string">Changes</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">setRunBicepDeployment</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/deploy.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(resourceGroupName) $(location) $(configFileName)'</span>
          <span class="hljs-attr">condition:</span> <span class="hljs-string">eq(variables.RunBicepDeployment,</span> <span class="hljs-string">'1'</span><span class="hljs-string">)</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
      <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
            <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
            <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to QA'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
            <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-qa-app'</span>
            <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
            <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
            <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
            <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>
</code></pre>
<p>If we push this change, we notice that the Bicep deployment is skipped:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1696258325162/178b9282-0f73-4c4a-890f-0c08cceb5f7b.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By incorporating this Git-based conditional logic into our pipeline, we've made it substantially more efficient. Now, the Bicep deployment step takes less than a second when there are no changes to the <code>.bicep</code> files. While saving a minute may not seem significant, it's important to recognize that this is a simplified example featuring only an app service and an app service plan. In a more complex, real-world application, the Bicep files would likely be far more intricate, potentially taking several minutes for each unnecessary deployment. Over time, these saved minutes add up, accelerating the time-to-market for new features and bug fixes. Moreover, for pipelines that run frequently, these efficiencies can also translate into cost savings.</p>
<p>The code and the pipeline I'm using to showcase these articles can be accessed here:</p>
<p><a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline</a></p>
<p><a target="_blank" href="https://dev.azure.com/bujdea/AzureDevopsYamlPipeline/_build?definitionId=16">https://dev.azure.com/bujdea/AzureDevopsYamlPipeline/_build?definitionId=16</a></p>
<p>Feel free to subscribe to the newsletter below or <a target="_blank" href="https://twitter.com/BogdanBujdea"><strong>follow me on Twitter</strong></a> if you'd like to be notified as soon as possible when I post my next article!</p>
]]></content:encoded></item><item><title><![CDATA[Conditional deployment in Bicep]]></title><description><![CDATA[In the Bicep infrastructure article I demonstrated how to write Bicep files and deploy them on Azure each time our pipeline runs. This enabled us to have our resources defined in code and in a later article we easily deployed a new environment using ...]]></description><link>https://bogdanbujdea.dev/conditional-deployment-in-bicep</link><guid isPermaLink="true">https://bogdanbujdea.dev/conditional-deployment-in-bicep</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure Pipelines]]></category><category><![CDATA[Bicep]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Mon, 25 Sep 2023 08:53:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695631698381/5d5d5c77-41f9-4804-a667-58cd1728e2dd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the <a target="_blank" href="https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines">Bicep infrastructure article</a> I demonstrated how to write Bicep files and deploy them on Azure each time our pipeline runs. This enabled us to have our resources defined in code and in <a target="_blank" href="https://bogdanbujdea.dev/provisioning-new-environments-with-bicep-and-azure-devops-yaml-pipelines">a later article</a> we easily deployed a new environment using the same Bicep files.</p>
<p>Our QA environment was a mirror image of our production environment and that's okay for now, but this isn't always the case in real-life applications.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695630507682/247b5008-ba50-4d3d-bf03-8aad63c44de4.png" alt class="image--center mx-auto" /></p>
<p>As you can see, we have a staging slot in our QA resource group, but do we really need it?</p>
<p><strong>To Slot or Not to Slot on QA</strong>:</p>
<p>The immediate consequence of omitting the staging slot in our QA environment is minor downtime during deployments. However, considering it's the QA environment, a brief pause is acceptable. Why? Let's break it down:</p>
<ol>
<li><p><strong>Cost Efficiency</strong>: By doing away with the staging slot, we can downgrade from the S1 tier to the B1 tier for our app service plan specifically for the QA environment. This transition alone saves approximately 50 EUR/month.</p>
</li>
<li><p><strong>Resource Optimization</strong>: By aligning resources more appropriately to their intended purpose (QA vs. Production), we ensure that we're not over-provisioning for a testing environment.</p>
</li>
</ol>
<p>Now that we've decided to remove it, you may be tempted to just go into Azure, delete the deployment slot, and manually downgrade the App Service Plan tier. However, if you do that, the slot will be recreated on the next pipeline run and the App Service Plan will be scaled back up to the S1 tier.</p>
<p>At first glance, it might seem tedious to modify Bicep files for every infrastructure change. However, consider this: using Bicep ensures consistent, automated, and error-free updates. And, realistically, how frequently do you actually alter your infrastructure?</p>
<p>To optimize our QA environment, we'll adapt our Bicep configurations to exclude the staging slot and shift the app service plan from S1 to B1. This approach is practical, cost-efficient, and tailored to our specific deployment needs. Let's do this in 3 steps:</p>
<ol>
<li><p>Introducing a parameter in our configuration files.</p>
</li>
<li><p>Passing this parameter into our Bicep definitions.</p>
</li>
<li><p>Deploying conditionally based on the parameter value.</p>
</li>
</ol>
<hr />
<h1 id="heading-creating-a-parameter-in-configuration-files">Creating a parameter in configuration files</h1>
<p>We introduce a <code>hasStagingSlot</code> parameter in our configuration files to determine whether a staging slot should be deployed.</p>
<p>For the Production environment, <code>hasStagingSlot</code> is set to <code>true</code>:</p>
<pre><code class="lang-yaml">{
    <span class="hljs-string">"$schema"</span><span class="hljs-string">:</span> <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#"</span>,
    <span class="hljs-attr">"contentVersion":</span> <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"parameters":</span> {
      <span class="hljs-attr">"location":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"westeurope"</span>
      },
      <span class="hljs-attr">"appName":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"bogdan-todo"</span>
      },
      <span class="hljs-attr">"appSku":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"S1"</span>
      },
      <span class="hljs-attr">"hasStagingSlot":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-literal">true</span> <span class="hljs-string">//</span> <span class="hljs-string">on</span> <span class="hljs-string">prod</span> <span class="hljs-string">environment</span> <span class="hljs-string">we</span> <span class="hljs-string">want</span> <span class="hljs-string">the</span> <span class="hljs-string">slot</span> <span class="hljs-string">to</span> <span class="hljs-string">be</span> <span class="hljs-string">deployed</span>
      }
    }
  }
</code></pre>
<p>For the QA environment, <code>hasStagingSlot</code> is set to <code>false</code>:</p>
<pre><code class="lang-yaml">{
    <span class="hljs-string">"$schema"</span><span class="hljs-string">:</span> <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#"</span>,
    <span class="hljs-attr">"contentVersion":</span> <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"parameters":</span> {
      <span class="hljs-attr">"location":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"westeurope"</span>
      },
      <span class="hljs-attr">"appName":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"bogdan-todo-qa"</span>
      },
      <span class="hljs-attr">"appSku":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"B1"</span>
      },
      <span class="hljs-attr">"hasStagingSlot":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-literal">false</span> <span class="hljs-string">//</span> <span class="hljs-string">on</span> <span class="hljs-string">qa</span> <span class="hljs-string">environment</span> <span class="hljs-string">we</span> <span class="hljs-string">don't</span> <span class="hljs-string">want</span> <span class="hljs-string">the</span> <span class="hljs-string">slot</span> <span class="hljs-string">to</span> <span class="hljs-string">be</span> <span class="hljs-string">deployed</span>
      }
    }
  }
</code></pre>
<p>We also change the <code>appSku</code> value to B1.</p>
<h1 id="heading-integrating-with-bicep">Integrating with Bicep</h1>
<p>The <code>main.bicep</code> file receives the <code>hasStagingSlot</code> parameter and passes it on to <code>app.bicep</code>.</p>
<pre><code class="lang-yaml"><span class="hljs-string">...</span>
<span class="hljs-string">param</span> <span class="hljs-string">hasStagingSlot</span> <span class="hljs-string">bool</span> <span class="hljs-string">=</span> <span class="hljs-literal">false</span> <span class="hljs-string">//</span> <span class="hljs-string">we</span> <span class="hljs-string">can</span> <span class="hljs-string">add</span> <span class="hljs-string">a</span> <span class="hljs-string">default</span> <span class="hljs-string">value</span> <span class="hljs-string">if</span> <span class="hljs-string">we</span> <span class="hljs-string">want</span>

<span class="hljs-string">module</span> <span class="hljs-string">app</span> <span class="hljs-string">'app.bicep'</span>  <span class="hljs-string">=</span> {
  <span class="hljs-attr">name:</span> <span class="hljs-string">'bogdan-todo-app'</span>
  <span class="hljs-string">params:</span>{
    <span class="hljs-attr">location:</span> <span class="hljs-string">location</span>
    <span class="hljs-attr">appName:</span> <span class="hljs-string">appName</span>
    <span class="hljs-attr">appSku:</span> <span class="hljs-string">appSku</span>
    <span class="hljs-attr">hasStagingSlot:</span> <span class="hljs-string">hasStagingSlot</span> <span class="hljs-comment">// we send this parameter to our app.bicep module</span>
  }
}
</code></pre>
<h1 id="heading-conditional-deployment-based-on-the-parameter-value">Conditional deployment based on the parameter value</h1>
<p>In <code>app.bicep</code>, the deployment of the staging slot depends on the value of <code>hasStagingSlot</code>.</p>
<p><code>app.bicep</code></p>
<pre><code class="lang-yaml"><span class="hljs-string">...</span>
<span class="hljs-string">param</span> <span class="hljs-string">hasStagingSlot</span> <span class="hljs-string">bool</span> <span class="hljs-comment">// this parameter decides if we're creating the slot or not</span>
<span class="hljs-string">...</span>

<span class="hljs-comment">// using the if statement we will deploy the slot</span>
<span class="hljs-comment">// only if hasStagingSlot is true</span>
<span class="hljs-string">resource</span> <span class="hljs-string">webAppSlot</span> <span class="hljs-string">'Microsoft.Web/sites/slots@2022-03-01'</span> <span class="hljs-string">=</span> <span class="hljs-string">if</span> <span class="hljs-string">(hasStagingSlot)</span> {
  <span class="hljs-attr">name:</span> <span class="hljs-string">'staging'</span>
  <span class="hljs-attr">location:</span> <span class="hljs-string">location</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">'app,linux'</span>
  <span class="hljs-attr">parent:</span> <span class="hljs-string">webApp</span>
  <span class="hljs-string">properties:</span>{
    <span class="hljs-attr">enabled:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">httpsOnly:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">siteConfig:</span> {
      <span class="hljs-attr">httpLoggingEnabled:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">linuxFxVersion:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
    }
  }
}
</code></pre>
<p>This segment essentially states that the staging slot will only be deployed if <code>hasStagingSlot</code> is <code>true</code>.</p>
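<p>One caveat worth knowing about conditional resources: if you reference a conditionally deployed resource elsewhere (for example in an <code>output</code>), guard that reference with the same condition, otherwise the deployment fails whenever the condition is <code>false</code>. A short sketch using the names from our <code>app.bicep</code>:</p>
<pre><code class="lang-bicep">// safe: the ternary only touches webAppSlot when it was actually deployed
output stagingSlotName string = hasStagingSlot ? webAppSlot.name : ''

// unsafe: this breaks the deployment when hasStagingSlot is false
// output stagingSlotName string = webAppSlot.name
</code></pre>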
<p>One more thing remains: we need to remove the slot deployment from our <code>qa.yaml</code> stage:</p>
<pre><code class="lang-yaml"><span class="hljs-string">...</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">UpdateAzureResources</span>
  <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
        <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
        <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
      <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to QA'</span>
      <span class="hljs-attr">inputs:</span>
        <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
        <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
        <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-qa-app'</span>
        <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
        <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
        <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>
</code></pre>
<p>Now let's deploy the changes and see what happens!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695631158123/a1198b00-2152-4d63-b69d-31a85dbc55cc.png" alt class="image--center mx-auto" /></p>
<p>Our pipeline was deployed successfully, but when we look at our QA environment we still see the staging slot:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695631176377/5ad4eef9-8a76-46b8-8187-375afd41d6be.png" alt class="image--center mx-auto" /></p>
<p>When a resource is missing from a Bicep file but exists in Azure, Bicep doesn't automatically delete it. Here's why:</p>
<ol>
<li><p><strong>Safety</strong>: Deleting resources can be destructive. Imagine accidentally removing a database with valuable data. By not auto-deleting, Bicep ensures safety.</p>
</li>
<li><p><strong>State Management</strong>: Bicep and ARM templates are declarative. This means you declare your desired state, and Azure ensures that the actual state matches. However, they don't maintain a history of previous states. So, when a resource is omitted from the Bicep file, Bicep doesn't necessarily recognize it as a directive to delete; it just sees it as "not part of the current directive."</p>
</li>
<li><p><strong>Explicitness</strong>: The philosophy behind tools like Bicep is that operations, especially destructive ones, should be explicit. Removing a resource from a Bicep file is implicit. If you want to delete a resource, it's generally better to do it explicitly, ensuring you're aware of the ramifications.</p>
</li>
</ol>
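<p>For completeness: ARM does offer a <code>Complete</code> deployment mode that deletes resources missing from the template, but it operates on the entire resource group and is easy to get wrong, which is why the default mode (and what our pipeline uses) is <code>Incremental</code>. If you ever want that behavior, it's a flag on the CLI — a sketch, with the file path assumed from our repo layout:</p>
<pre><code class="lang-powershell"># Careful: Complete mode deletes every resource in the group
# that is not declared in the template.
az deployment group create `
  --resource-group azure-devops-yaml-pipeline-qa `
  --template-file ./infrastructure/main.bicep `
  --mode Complete
</code></pre>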
<p>This means we have to delete the staging slot ourselves, manually. With the Bicep files updated, the staging slot will from now on be deployed only to the production environment.</p>
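<p>You can delete the leftover slot from the Azure portal, or with a one-off Azure CLI command like this (using the names from our QA environment):</p>
<pre><code class="lang-powershell"># removes only the staging slot, not the web app itself
az webapp deployment slot delete `
  --name bogdan-todo-qa-app `
  --resource-group azure-devops-yaml-pipeline-qa `
  --slot staging
</code></pre>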
<h1 id="heading-takeaways"><strong>Takeaways</strong></h1>
<h3 id="heading-when-to-use-conditional-deployment"><strong>When to Use Conditional Deployment</strong>:</h3>
<ol>
<li><p>You need to maintain different configurations for dev/test vs. production.</p>
</li>
<li><p>There are cost constraints and you want to deploy certain resources only when absolutely necessary.</p>
</li>
</ol>
<h3 id="heading-how-to-do-conditional-deployments"><strong>How to do conditional deployments</strong>:</h3>
<ol>
<li><p>Use configuration files with different values per environment</p>
</li>
<li><p>Use the if statement for conditional deployments based on your conditions</p>
</li>
</ol>
<p><strong>Conclusion</strong>:</p>
<p>Conditional deployments provide precise control over our infrastructure configuration. They optimize resource usage and can significantly minimize deployment errors. However, like all tools, they have their appropriate applications. Assess your needs, understand the circumstances, and employ them wisely.</p>
<p>The full source code can be downloaded from here: <a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline</a></p>
<h1 id="heading-whats-next"><strong>What's next</strong></h1>
<p>In the next article, I will demonstrate how to add application settings using Bicep. Feel free to subscribe to the newsletter below or <a target="_blank" href="https://twitter.com/BogdanBujdea"><strong>follow me on Twitter</strong></a> if you'd like to be notified as soon as possible!</p>
]]></content:encoded></item><item><title><![CDATA[Provisioning new environments with Bicep and Azure DevOps YAML Pipelines]]></title><description><![CDATA[In this article, we continue working on our sample application and I'm going to walk you through setting up a new QA environment, neatly tucked into its own resource group. The best part? We'll automate the entire process using Azure Pipelines and Bi...]]></description><link>https://bogdanbujdea.dev/provisioning-new-environments-with-bicep-and-azure-devops-yaml-pipelines</link><guid isPermaLink="true">https://bogdanbujdea.dev/provisioning-new-environments-with-bicep-and-azure-devops-yaml-pipelines</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[DevOps CI/CD pipeline]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[asp.net core]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Fri, 22 Sep 2023 12:25:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695383257156/56eddd84-561d-47c8-b932-7dcc3cbb970c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, we continue working on our <a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">sample application</a> and I'm going to walk you through setting up a new QA environment, neatly tucked into its own resource group. The best part? We'll automate the entire process using Azure Pipelines and Bicep!</p>
<p>Before diving in, I highly recommend reading these two articles first:</p>
<p><a target="_blank" href="https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines?source=more_series_bottom_blogs"><em>Bicep Infrastructure Deployment from Azure DevOps YAML Pipelines</em></a></p>
<p>and</p>
<p><a target="_blank" href="https://bogdanbujdea.dev/azure-devops-best-practices-breaking-down-the-monolithic-yaml?source=more_series_bottom_blogs"><em>Azure DevOps Best Practices: Breaking Down the Monolithic YAML</em></a></p>
<p>After reading those articles, you'll understand how to deploy an app using Azure DevOps pipelines and Bicep, and why I split my infrastructure code into multiple files.</p>
<p>If you have already done so or already have this knowledge, feel free to proceed with the rest of the article!</p>
<h2 id="heading-provisioning-a-new-environment-using-bicep">Provisioning a New Environment Using Bicep</h2>
<p>There are various ways to separate environments, but my usual preference is to have one resource group for each environment, which is what I'll demonstrate in this guide.</p>
<p>The beauty of this approach is its ability to logically isolate environments. By separating resources into distinct resource groups according to their environment, management and monitoring become significantly more effortless.</p>
<p>If you remember, the primary configuration for our environment was stored in a <code>prod.json</code> file. This file contains crucial details, like the tier of the App Service, its SKU, and environment-specific prefixes (e.g., "prod"). Here is the file that we already have in our project:</p>
<pre><code class="lang-yaml">{
    <span class="hljs-string">"$schema"</span><span class="hljs-string">:</span> <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#"</span>,
    <span class="hljs-attr">"contentVersion":</span> <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"parameters":</span> {
      <span class="hljs-attr">"location":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"westeurope"</span>
      },
      <span class="hljs-attr">"appName":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"bogdan-todo"</span>
      },
      <span class="hljs-attr">"appSku":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"S1"</span>
      }
    }
}
</code></pre>
<p>If we want another environment, the first step is to add another file to the configurations folder, so let's do that!</p>
<hr />
<h2 id="heading-step-1-create-a-new-configuration-file"><strong>Step 1:</strong> Create a New Configuration File</h2>
<p>Start by adding a new file named <code>qa.json</code> in your configurations directory. This file will define settings tailored to the QA environment, setting it apart from production or any other environments you might have.</p>
<pre><code class="lang-yaml">{
    <span class="hljs-string">"$schema"</span><span class="hljs-string">:</span> <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#"</span>,
    <span class="hljs-attr">"contentVersion":</span> <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"parameters":</span> {
      <span class="hljs-attr">"location":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"westeurope"</span>
      },
      <span class="hljs-attr">"appName":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"bogdan-todo-qa"</span>
      },
      <span class="hljs-attr">"appSku":</span> {
        <span class="hljs-attr">"value":</span> <span class="hljs-string">"S1"</span>
      }
    }
  }
</code></pre>
<h2 id="heading-step-2-adjust-environment-specific-variables"><strong>Step 2:</strong> Adjust Environment-Specific Variables</h2>
<p>Inside <code>qa.json</code>, alter variables to reflect the QA environment's attributes. This could involve tweaking parameters like the App Service tier, resource names, or any environment-specific metadata. In my file, I made just one change: the <code>appName</code> parameter is now <code>bogdan-todo-qa</code>.</p>
<p>I did this because we can't have two identical subdomains under the same domain (azurewebsites.net). Since I already have <a target="_blank" href="https://bogdan-todo-app.azurewebsites.net">bogdan-todo-app.azurewebsites.net</a>, I need a different <code>appName</code>; in our case, the QA app will live at <a target="_blank" href="https://bogdan-todo-qa-app.azurewebsites.net">bogdan-todo-qa-app.azurewebsites.net</a>.</p>
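<p>As a side note, if you'd rather not hand-pick a unique suffix for every environment, Bicep can derive one for you — a sketch of the option, not what we do in this series:</p>
<pre><code class="lang-bicep">// uniqueString() is deterministic per input, so each environment's
// resource group yields a different, stable suffix
param appName string = 'bogdan-todo-${uniqueString(resourceGroup().id)}'
</code></pre>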
<h2 id="heading-step-3-create-the-pipeline-stage-for-this-environment"><strong>Step 3:</strong> Create the pipeline stage for this environment</h2>
<p>I think it's a good idea to break your pipeline into multiple components, so that's why I have each environment with its own stage. For the production stage we already have the file <code>production.yaml</code>, so let's create one for QA as well.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">QA</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To QA Env'</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">Test</span>
  <span class="hljs-attr">variables:</span>
    <span class="hljs-attr">location:</span> <span class="hljs-string">'westeurope'</span>
    <span class="hljs-attr">configFileName:</span> <span class="hljs-string">'qa.json'</span>
    <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">'azure-devops-yaml-pipeline-qa'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/deploy.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(resourceGroupName) $(location) $(configFileName)'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
      <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
            <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
            <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to QA app service'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
            <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-qa-app'</span>
            <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
            <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
            <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
            <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>
</code></pre>
<p>I copy-pasted my <code>production.yaml</code> to <code>qa.yaml</code>, but I still had to make some changes:</p>
<ul>
<li><p>the name of the stage was changed to <code>QA</code>, and the <code>displayName</code> was updated</p>
</li>
<li><p>We need a new resource group for this environment, so I changed the name of the <code>resourceGroupName</code> variable to <code>'azure-devops-yaml-pipeline-qa'</code></p>
</li>
</ul>
<p>We also have to update our <code>production.yaml</code> so that it depends on the new <code>QA</code> stage instead of <code>Test</code>. This way, a failed QA deployment blocks the production deployment.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Production</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To Production Env'</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">QA</span> <span class="hljs-comment"># this was changed from Test to QA</span>
<span class="hljs-string">....</span>
</code></pre>
<h2 id="heading-step-4-include-the-new-stage-in-your-main-pipeline-file"><strong>Step 4:</strong> Include the new stage in your main pipeline file</h2>
<p>I will now add the <code>qa.yaml</code> reference to <code>azure-pipelines.yaml:</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">vmImage:</span> <span class="hljs-string">'ubuntu-latest'</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-attr">solution:</span> <span class="hljs-string">'**/*.sln'</span>
  <span class="hljs-attr">buildPlatform:</span> <span class="hljs-string">'Any CPU'</span>
  <span class="hljs-attr">buildConfiguration:</span> <span class="hljs-string">'Release'</span>


<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/build.yaml</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/test.yaml</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/qa.yaml</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/production.yaml</span>
</code></pre>
<h2 id="heading-step-5-deploy-and-watch-bicep-work-its-magic"><strong>Step 5:</strong> Deploy and Watch Bicep Work Its Magic</h2>
<p>Initiate the deployment process, just as you did previously. As Bicep processes the <code>qa.json</code> configuration, it will provision a brand-new resource group labeled for the QA environment.</p>
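<p>As a reminder, <code>deploy.ps1</code> receives the resource group name, location, and config file name, creates the resource group if it's missing, and runs the Bicep deployment. Roughly, it looks like this sketch (the exact script is in the repo; the paths here are assumptions):</p>
<pre><code class="lang-powershell">param(
    [string]$resourceGroupName,
    [string]$location,
    [string]$configFileName
)

# idempotent: creates the group, or leaves it unchanged if it already exists
az group create --name $resourceGroupName --location $location

# deploy main.bicep with the per-environment parameters file
az deployment group create `
  --resource-group $resourceGroupName `
  --template-file ./infrastructure/main.bicep `
  --parameters "./infrastructure/configurations/$configFileName"
</code></pre>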
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695391107048/d0ede17e-5ec0-4424-b79d-fb1c4c35aa7e.png" alt="Provision environment using Bicep and Azure DevOps YAML pipelines" class="image--center mx-auto" /></p>
<p>Once the QA stage is complete the new environment will be up and running in Azure!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695391166674/930516f7-da6c-4fe2-aa49-1ded4aac45d3.png" alt="Azure resource group provisioned using Bicep and Azure DevOps YAML pipelines" class="image--center mx-auto" /></p>
<h2 id="heading-wrapping-up">Wrapping Up</h2>
<p>With a few simple configurations and Bicep's robust capabilities, you can dynamically create multiple environments tailored to specific needs. The era of tedious, manual provisioning is behind us!</p>
<h2 id="heading-whats-next">What's next</h2>
<p>In the next article, I will remove the staging slot from the QA environment to demonstrate how to add conditions in Bicep files. Feel free to subscribe to the newsletter below or <a target="_blank" href="https://twitter.com/BogdanBujdea">follow me on Twitter</a> if you'd like to be notified as soon as possible!</p>
]]></content:encoded></item><item><title><![CDATA[Azure DevOps Best Practices: Breaking Down the Monolithic YAML]]></title><description><![CDATA[Creating a multi-stage YAML pipeline in Azure DevOps for .NET projects

Running tests with code coverage in Azure DevOps YAML pipelines

Static code analysis with NDepend in Azure Pipelines

Running e2e tests with Playwright in Azure YAML Pipelines

...]]></description><link>https://bogdanbujdea.dev/azure-devops-best-practices-breaking-down-the-monolithic-yaml</link><guid isPermaLink="true">https://bogdanbujdea.dev/azure-devops-best-practices-breaking-down-the-monolithic-yaml</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure Pipelines]]></category><category><![CDATA[.NET]]></category><category><![CDATA[asp.net core]]></category><category><![CDATA[ASP.NET]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Wed, 20 Sep 2023 09:45:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695202686669/c4ee2c62-58f5-4341-b107-53002c237c0d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ol>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/creating-a-multi-stage-yaml-pipeline-in-azure-devops-for-net-projects?source=more_series_bottom_blogs">Creating a multi-stage YAML pipeline in Azure DevOps for .NET projects</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/running-tests-with-code-coverage-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Running tests with code coverage in Azure DevOps YAML pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/static-code-analysis-with-ndepend-in-azure-pipelines?source=more_series_bottom_blogs">Static code analysis with NDepend in Azure Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/running-e2e-tests-with-playwright-in-azure-yaml-pipelines?source=more_series_bottom_blogs">Running e2e tests with Playwright in Azure YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/publishing-playwright-report-as-an-artifact-in-azure-devops?source=more_series_bottom_blogs">Publishing Playwright report as an artifact in Azure DevOps</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Bicep Infrastructure Deployment from Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/blue-green-deployments-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Blue-green Deployments in Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/pre-deployment-health-checks-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Pre-Deployment Health Checks in Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/azure-devops-best-practices-breaking-down-the-monolithic-yaml?source=more_series_bottom_blogs">Azure DevOps Best Practices: Breaking Down the Monolithic YAML</a></p>
</li>
</ol>
<hr />
<p>In Azure DevOps, managing infrastructure through YAML files has become increasingly common. While this shift offers immense power, it can lead to lengthy, complex <code>azure-pipelines.yaml</code> files. But there is a more structured, readable approach.</p>
<p>Today, we'll break a monolithic <code>azure-pipelines.yaml</code> down into distinct files, each representing its own stage.</p>
<h4 id="heading-why-break-down-your-yaml"><strong>Why Break Down Your YAML?</strong></h4>
<ul>
<li><p><strong>Readability:</strong> Smaller, function-specific files are easier to read, understand, and maintain, just like your code.</p>
</li>
<li><p><strong>Collaboration:</strong> Different teams can own different stages without stepping on each other's toes.</p>
</li>
<li><p><strong>Error Management:</strong> Isolating stages helps in pinpointing issues quickly.</p>
</li>
</ul>
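<p>A preview of where this leads: stage templates can also take parameters, so a single file can serve several environments. A hedged sketch — hypothetical file names, not the exact templates from this series:</p>
<pre><code class="lang-yaml"># stages/deploy.yaml - a parameterized stage template
parameters:
- name: environmentName
  type: string
- name: configFileName
  type: string

stages:
- stage: ${{ parameters.environmentName }}
  displayName: 'Deploy To ${{ parameters.environmentName }}'
  variables:
    configFileName: ${{ parameters.configFileName }}
  jobs: []   # deployment jobs go here
</code></pre>
<p>The main pipeline would then reference it with <code>- template: stages/deploy.yaml</code> plus a <code>parameters:</code> block per environment.</p>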
<h1 id="heading-step-by-step-guide-to-splitting-your-yaml"><strong>Step-by-step Guide to Splitting Your YAML</strong></h1>
<h2 id="heading-1-understand-your-existing-structure"><strong>1. Understand Your Existing Structure</strong></h2>
<ul>
<li>First, take stock of your current <code>azure-pipelines.yaml</code>. How is it structured? What stages exist, and what does each stage entail? I'll use the YAML from our sample project as an example:</li>
</ul>
<pre><code class="lang-yaml"><span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">vmImage:</span> <span class="hljs-string">'ubuntu-latest'</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-attr">solution:</span> <span class="hljs-string">'**/*.sln'</span>
  <span class="hljs-attr">buildPlatform:</span> <span class="hljs-string">'Any CPU'</span>
  <span class="hljs-attr">buildConfiguration:</span> <span class="hljs-string">'Release'</span>
  <span class="hljs-attr">location:</span> <span class="hljs-string">'westeurope'</span>
  <span class="hljs-attr">configFileName:</span> <span class="hljs-string">'prod.json'</span>
  <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">'azure-devops-yaml-pipeline'</span>
  <span class="hljs-attr">stagingAppUrl:</span> <span class="hljs-string">'https://bogdan-todo-app-staging.azurewebsites.net/health'</span>


<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">build</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">BuildSolution</span>
      <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DotNetCoreCLI@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Build .NET solution'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'build'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DotNetCoreCLI@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Create publish artifact'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'publish'</span>
          <span class="hljs-attr">publishWebProjects:</span> <span class="hljs-literal">false</span>
          <span class="hljs-attr">arguments:</span> <span class="hljs-string">'-o $(build.artifactStagingDirectory)'</span>
          <span class="hljs-attr">zipAfterPublish:</span> <span class="hljs-literal">false</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishPipelineArtifact@1</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Publish artifact'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">targetPath:</span> <span class="hljs-string">$(build.artifactStagingDirectory)</span>
          <span class="hljs-attr">artifact:</span> <span class="hljs-string">'drop'</span>
          <span class="hljs-attr">publishLocation:</span> <span class="hljs-string">'pipeline'</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">test</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">RunUnitTests</span>
      <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DotNetCoreCLI@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Run unit tests'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'test'</span>
          <span class="hljs-attr">projects:</span> <span class="hljs-string">'**/*[Tt]est*/*.csproj'</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">update_infrastructure</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/deploy.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(resourceGroupName) $(location) $(configFileName)'</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy_app</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To App Service'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">deploy</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
            <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
            <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to staging slot'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
            <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
            <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
            <span class="hljs-attr">deployToSlotOrASE:</span> <span class="hljs-literal">true</span>
            <span class="hljs-attr">slotName:</span> <span class="hljs-string">'staging'</span>
            <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
            <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
            <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>      
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">retryCountOnTaskFailure:</span> <span class="hljs-number">3</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Check API health before swapping slots'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/healthcheck.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(stagingAppUrl)'</span>   
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureAppServiceManage@0</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Swap slot'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">Action:</span> <span class="hljs-string">'Swap Slots'</span>
            <span class="hljs-attr">WebAppName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
            <span class="hljs-attr">ResourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
            <span class="hljs-attr">SourceSlot:</span> <span class="hljs-string">'staging'</span>
</code></pre>
<h2 id="heading-2-create-distinct-yamls-for-each-stage"><strong>2. Create Distinct YAMLs for Each Stage</strong></h2>
<p>The YAML file above has four stages, but the last two are both part of the production deployment, so we can split the pipeline into these three files:</p>
<ul>
<li><p><code>build.yaml</code> - for building the .NET solution</p>
</li>
<li><p><code>test.yaml</code> - for running the tests</p>
</li>
<li><p><code>production.yaml</code> - for deploying the production infrastructure using Bicep and then deploying the app</p>
</li>
</ul>
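<p>For reference, these files will live in a <code>stages</code> folder next to the main pipeline file, so the repository layout ends up looking roughly like this (matching the paths referenced in <code>azure-pipelines.yaml</code> later in this article):</p>
<pre><code class="lang-plaintext">.
├── azure-pipelines.yaml
└── stages
    ├── build.yaml
    ├── test.yaml
    └── production.yaml
</code></pre>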
<p>Let's create them one by one, and we'll start with the <code>build.yaml</code> file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Build</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">BuildSolution</span>
      <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DotNetCoreCLI@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Build .NET solution'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'build'</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DotNetCoreCLI@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Create publish artifact'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'publish'</span>
          <span class="hljs-attr">publishWebProjects:</span> <span class="hljs-literal">false</span>
          <span class="hljs-attr">arguments:</span> <span class="hljs-string">'-o $(build.artifactStagingDirectory)'</span>
          <span class="hljs-attr">zipAfterPublish:</span> <span class="hljs-literal">false</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">PublishPipelineArtifact@1</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Publish artifact'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">targetPath:</span> <span class="hljs-string">$(build.artifactStagingDirectory)</span>
          <span class="hljs-attr">artifact:</span> <span class="hljs-string">'drop'</span>
          <span class="hljs-attr">publishLocation:</span> <span class="hljs-string">'pipeline'</span>
</code></pre>
<p>In the <code>build.yaml</code> file we build the project and then we publish it as an artifact that will be used in the later stages.</p>
<p>The <code>test.yaml</code> file looks like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>        
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Test</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">Build</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">RunUnitTests</span>
      <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DotNetCoreCLI@2</span>
        <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Run unit tests'</span>
        <span class="hljs-attr">inputs:</span>
          <span class="hljs-attr">command:</span> <span class="hljs-string">'test'</span>
          <span class="hljs-attr">projects:</span> <span class="hljs-string">'**/*[Tt]est*/*.csproj'</span>
</code></pre>
<p>In this stage we are just running the unit tests. At the beginning of the file, you'll observe the use of the <code>dependsOn</code> keyword. This instructs the pipeline to wait for the completion of the <code>Build</code> stage before proceeding.</p>
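<p>As a side note, <code>dependsOn</code> also accepts a list when a stage has to wait for more than one predecessor. A minimal sketch (the stage names are illustrative):</p>
<pre><code class="lang-yaml">stages:
- stage: Deploy
  dependsOn:    # waits for both stages to succeed
  - Build
  - Test
</code></pre>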
<p>The last one is <code>production.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">Production</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To Production Env'</span>
  <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">Test</span>
  <span class="hljs-attr">variables:</span>
    <span class="hljs-attr">location:</span> <span class="hljs-string">'westeurope'</span>
    <span class="hljs-attr">configFileName:</span> <span class="hljs-string">'prod.json'</span>
    <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">'azure-devops-yaml-pipeline'</span>
    <span class="hljs-attr">stagingAppUrl:</span> <span class="hljs-string">'https://bogdan-todo-app-staging.azurewebsites.net/health'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/deploy.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(resourceGroupName) $(location) $(configFileName)'</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">Deploy</span>
      <span class="hljs-attr">dependsOn:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
            <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
            <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to staging slot'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
            <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
            <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
            <span class="hljs-attr">deployToSlotOrASE:</span> <span class="hljs-literal">true</span>
            <span class="hljs-attr">slotName:</span> <span class="hljs-string">'staging'</span>
            <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
            <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
            <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>      
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">retryCountOnTaskFailure:</span> <span class="hljs-number">3</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Check API health before swapping slots'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/healthcheck.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(stagingAppUrl)'</span>   
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureAppServiceManage@0</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Swap slot'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">Action:</span> <span class="hljs-string">'Swap Slots'</span>
            <span class="hljs-attr">WebAppName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
            <span class="hljs-attr">ResourceGroupName:</span> <span class="hljs-string">$(resourceGroupName)</span>
            <span class="hljs-attr">SourceSlot:</span> <span class="hljs-string">'staging'</span>
</code></pre>
<p>In this file, we have combined two previous stages: infrastructure deployment and deployment to the App Service. The <code>Production</code> stage depends on the <code>Test</code> stage, so if the tests fail, the code won't be deployed to the production environment.</p>
<p>We have also relocated some variables from <code>azure-pipelines.yaml</code> to this file, making them accessible to all the jobs within the <code>Production</code> stage.</p>
<p>One last thing worth mentioning: jobs run in parallel by default, so I added a <code>dependsOn</code> clause to the <code>Deploy</code> job. This means the app is only deployed after the infrastructure deployment has finished successfully.</p>
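<p>To illustrate the default parallel behavior of jobs, here is a minimal sketch (the job names are illustrative):</p>
<pre><code class="lang-yaml">jobs:
- job: A
- job: B          # A and B start in parallel
- job: C
  dependsOn: B    # C starts only after B succeeds
</code></pre>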
<h2 id="heading-3-reference-these-files-within-the-main-yaml"><strong>3. Reference these files within the main YAML</strong></h2>
<p>I will now move these files to a folder called <code>stages</code> and reference them like this in the <code>azure-pipelines.yaml</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">vmImage:</span> <span class="hljs-string">'ubuntu-latest'</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-attr">solution:</span> <span class="hljs-string">'**/*.sln'</span>
  <span class="hljs-attr">buildPlatform:</span> <span class="hljs-string">'Any CPU'</span>
  <span class="hljs-attr">buildConfiguration:</span> <span class="hljs-string">'Release'</span>


<span class="hljs-attr">stages:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/build.yaml</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/test.yaml</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">template:</span> <span class="hljs-string">stages/production.yaml</span>
</code></pre>
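<p>If you later need to reuse one of these stage files with different settings, templates can also accept parameters. A hedged sketch (the parameter wiring shown here is hypothetical, not part of the pipeline above):</p>
<pre><code class="lang-yaml"># stages/production.yaml
parameters:
- name: resourceGroupName
  type: string

stages:
- stage: Production
  variables:
    resourceGroupName: ${{ parameters.resourceGroupName }}

# azure-pipelines.yaml
stages:
- template: stages/production.yaml
  parameters:
    resourceGroupName: 'azure-devops-yaml-pipeline'
</code></pre>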
<p>That's it for now! We've successfully split our YAML into individual stages, making it cleaner and easier to read and navigate.</p>
<p>The code is public and you can find it here: <a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline</a></p>
<h1 id="heading-conclusion"><strong>Conclusion</strong></h1>
<p>Just as we emphasize writing clean, concise, and readable code, the same principles should apply to the YAML files in our Azure DevOps pipelines. Keeping YAML configurations compact and straightforward improves maintainability, reduces the potential for errors, and makes collaboration easier. Let's treat our infrastructure definitions with the same care and attention to detail as our application code, striving for clarity and simplicity in every line.</p>
<h2 id="heading-whats-next">What's next</h2>
<p><a target="_blank" href="https://bogdanbujdea.dev/provisioning-new-environments-with-bicep-and-azure-devops-yaml-pipelines">In the next article</a>, I will add a new environment to our sample application in 5 simple steps. Feel free to subscribe to the newsletter below or follow me on Twitter if you'd like to be notified as soon as possible!</p>
]]></content:encoded></item><item><title><![CDATA[Pre-Deployment Health Checks in Azure DevOps YAML Pipelines]]></title><description><![CDATA[Creating a multi-stage YAML pipeline in Azure DevOps for .NET projects

Running tests with code coverage in Azure DevOps YAML pipelines

Static code analysis with NDepend in Azure Pipelines

Running e2e tests with Playwright in Azure YAML Pipelines

...]]></description><link>https://bogdanbujdea.dev/pre-deployment-health-checks-in-azure-devops-yaml-pipelines</link><guid isPermaLink="true">https://bogdanbujdea.dev/pre-deployment-health-checks-in-azure-devops-yaml-pipelines</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Devops]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Fri, 15 Sep 2023 07:38:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694763412277/ec7f37ae-6590-4b76-a77a-9a7d1d3dfdd8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ol>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/creating-a-multi-stage-yaml-pipeline-in-azure-devops-for-net-projects?source=more_series_bottom_blogs">Creating a multi-stage YAML pipeline in Azure DevOps for .NET projects</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/running-tests-with-code-coverage-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Running tests with code coverage in Azure DevOps YAML pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/static-code-analysis-with-ndepend-in-azure-pipelines?source=more_series_bottom_blogs">Static code analysis with NDepend in Azure Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/running-e2e-tests-with-playwright-in-azure-yaml-pipelines?source=more_series_bottom_blogs">Running e2e tests with Playwright in Azure YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/publishing-playwright-report-as-an-artifact-in-azure-devops?source=more_series_bottom_blogs">Publishing Playwright report as an artifact in Azure DevOps</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Bicep Infrastructure Deployment from Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/blue-green-deployments-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Blue-green Deployments in Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/pre-deployment-health-checks-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Pre-Deployment Health Checks in Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/azure-devops-best-practices-breaking-down-the-monolithic-yaml?source=more_series_bottom_blogs">Azure DevOps Best Practices: Breaking Down the Monolithic YAML</a></p>
</li>
</ol>
<hr />
<p>In our last article, we explored the slot-swapping technique in Azure Pipelines. While swapping provides us with seamless deployments, how do we ensure that the newly deployed code in the staging slot is healthy and ready for production? The answer: Health Checks. This article uncovers the importance and implementation of health checks in .NET applications, ensuring that we can confidently swap our slots without unexpected hiccups.</p>
<h1 id="heading-why-health-checks"><strong>Why Health Checks?</strong></h1>
<p>Imagine deploying a new release to your staging environment. You swap the slots, and suddenly, users are facing issues. You'd wish there were a mechanism to catch these glitches before the switch. That's where health checks come in.</p>
<p>At its core, health checks provide an automated way to verify if your application is functioning correctly after a new deployment. They can test various parts of your system and ensure that they're not just alive but well.</p>
<h1 id="heading-creating-a-custom-health-check"><strong>Creating a custom Health Check</strong></h1>
<p>Before diving deep, let's start with a simple custom health check in a .NET application. This check will randomly return a health status:</p>
<pre><code class="lang-csharp">public class ToDoAppHealthCheck : IHealthCheck
{
    public Task&lt;HealthCheckResult&gt; CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken = new CancellationToken())
    {
        try
        {
            var todoService = new ToDoService();
            if (todoService.IsUpAndRunning())
            {
                return Task.FromResult(HealthCheckResult.Healthy("The ToDo service is up and running"));
            }

            return Task.FromResult(HealthCheckResult.Degraded("The ToDo service has issues"));
        }
        catch (Exception e)
        {
            return Task.FromResult(HealthCheckResult.Unhealthy("The ToDo service is having issues", e));
        }
    }
}
...
// this is the method from the ToDoService which returns a random status
public bool IsUpAndRunning()
{
    var status = DateTime.UtcNow.Millisecond % 3;
    switch (status)
    {
        case 0:
            return true;
        case 1:
            return false;
        default:
            throw new Exception("Service is down");
    }
}
</code></pre>
<p>This is not a practical health check but serves as a simple example to get us started.</p>
<p>To enable health checks in our app we have to add these lines to <code>Program.cs</code>:</p>
<pre><code class="lang-csharp">builder.Services.AddHealthChecks()
    .AddCheck&lt;ToDoAppHealthCheck&gt;("ToDoApp");
...
app.MapHealthChecks("/health");
</code></pre>
<p>If you start your app and go to the <code>/health</code> endpoint, you'll notice that the response is a string with one of the following values: "Healthy", "Degraded" or "Unhealthy". This is fine, but I like more details, so I'm returning a JSON response instead. For this, you need to pass an options parameter to the <code>MapHealthChecks</code> call like this:</p>
<pre><code class="lang-csharp">app.MapHealthChecks(<span class="hljs-string">"/health"</span>, <span class="hljs-keyword">new</span> HealthCheckOptions
{
    ResponseWriter = <span class="hljs-keyword">async</span> (context, report) =&gt;
    {
        <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">new</span>
        {
            status = report.Status.ToString(),
            entries = report.Entries.Select(e =&gt; <span class="hljs-keyword">new</span>
            {
                key = e.Key,
                status = e.Value.Status.ToString(),
                description = e.Value.Description,
                duration = e.Value.Duration,
                exception = e.Value.Exception?.Message
            })
        };

        <span class="hljs-keyword">await</span> context.Response.WriteAsJsonAsync(result);
    }
});
</code></pre>
<p>Now the response will look like this:</p>
<pre><code class="lang-json">{
    "status": "Unhealthy",
    "entries": [
        {
            "key": "ToDoApp",
            "status": "Unhealthy",
            "description": "The ToDo service is having issues",
            "duration": "00:00:00.0002569",
            "exception": "Service is down"
        }
    ]
}
</code></pre>
<h1 id="heading-azure-pipeline-triggering-health-check"><strong>Azure Pipeline: Triggering Health Check</strong></h1>
<p>For our Azure Pipeline, we'll implement a YAML step that runs a PowerShell script. This script will make a web request to the <code>/health</code> endpoint and verify the application's health status.</p>
<p>This is the YAML code for calling the script:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
  <span class="hljs-attr">retryCountOnTaskFailure:</span> <span class="hljs-number">3</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Check App health before swapping slots'</span>
  <span class="hljs-attr">inputs:</span>
    <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
    <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
    <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
    <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/healthcheck.ps1'</span>                
    <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(stagingAppUrl)'</span>
</code></pre>
<p>This task will invoke the script below, retrying up to 3 times on failure. If all attempts fail (the initial run plus 3 retries), the pipeline won't proceed and our staging slot won't be swapped with the production one.</p>
<pre><code class="lang-powershell">param (
    $stagingApiUrl
)
Start-Sleep -Seconds 5

Write-Host "Health check for $stagingApiUrl"

$response = Invoke-WebRequest -Uri $stagingApiUrl -UseBasicParsing -ErrorAction SilentlyContinue
if ($null -eq $response) {
    $statusCode = $Error[0].Exception.Response.StatusCode.Value__
    $reader = New-Object System.IO.StreamReader($Error[0].Exception.Response.GetResponseStream())
    $content = $reader.ReadToEnd() | ConvertFrom-Json
} else {
    $statusCode = $response.StatusCode
    $content = $response.Content | ConvertFrom-Json
}

if ($statusCode -eq 200 -and $content.status -eq "Healthy") {
    Write-Host $response.Content
} else {
    Write-Host "API health check failed with status code: $statusCode and health status: $($content.status)"
    exit 1
}
</code></pre>
<p>This script could have been much shorter, but it isn't too complicated. Let me explain what it does.</p>
<p>First, the script waits for 5 seconds at startup. This is because just prior to this task, we deployed our code to the staging slot, and it might take some time for the staging slot to be ready to serve requests. Until that occurs, the app will return a 502 error, so we need to account for this. The time it takes to be ready can vary depending on the tier of your app service and what you have deployed. For instance, if you're running migrations at startup, the code is deployed, but the app won't serve requests until the migrations have been applied. As a result, it will return a 502 error for every request until the process is complete, like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694775536197/9fb85542-6ba5-4603-9609-7934b318d6cb.png" alt="Azure App Service Health Check, status 502 after deployment" class="image--center mx-auto" /></p>
<p>After the 5-second interval, the script sends a request to our staging slot's /health endpoint, which in our case is <a target="_blank" href="https://bogdan-todo-app-staging.azurewebsites.net/health">https://bogdan-todo-app-staging.azurewebsites.net/health</a>.</p>
<p>We then read the status code and content of the response. To proceed with the swap, the status code must be 200 and the health status must be <code>Healthy</code>. You can of course loosen this, for example if you're comfortable swapping on a <code>Degraded</code> status as well.</p>
<p>The <code>/health</code> endpoint returns JSON that looks like this:</p>
<pre><code class="lang-json">{
    "status": "Unhealthy",
    "entries": [
        {
            "key": "ToDoApp",
            "status": "Unhealthy",
            "description": "The ToDo service is having issues",
            "duration": "00:00:00.0002569",
            "exception": "Service is down"
        },
        {
            "key": "ExternalAPI",
            "status": "Healthy",
            "description": null,
            "duration": "00:00:00.0000015",
            "exception": null
        }
    ]
}
</code></pre>
<p>There's an overall status, and each individual health check also reports its own. That's why it's up to you whether to gate on the overall status or only on specific services. For instance, I might ignore a Google Analytics health check because it's not critical for my app, but I really need the Stripe API to be up and running, so I'll base my decision on that.</p>
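<p>If you decide to gate only on critical dependencies, one option is to tag your health checks and expose a second endpoint that filters by tag. Here's a sketch of that approach; the <code>critical</code> tag and the check names are illustrative, not taken from the sample project:</p>
<pre><code class="lang-csharp">using Microsoft.AspNetCore.Diagnostics.HealthChecks;

// Tag the checks you consider deployment blockers; names and tags are illustrative
services.AddHealthChecks()
        .AddCheck&lt;StripeApiHealthCheck&gt;("StripeAPI", tags: new[] { "critical" })
        .AddCheck&lt;GoogleAnalyticsHealthCheck&gt;("GoogleAnalytics"); // nice to know, not blocking

// /health reports everything, /health/critical only the deployment blockers
app.MapHealthChecks("/health");
app.MapHealthChecks("/health/critical", new HealthCheckOptions
{
    Predicate = registration =&gt; registration.Tags.Contains("critical")
});
</code></pre>
<p>The pipeline script would then call the filtered endpoint before swapping, while monitoring keeps watching the full <code>/health</code> endpoint.</p>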
<h1 id="heading-health-checks-in-real-world-applications">Health checks in real-world applications</h1>
<p>The example above is a basic introduction, so let's look at how health checks earn their keep in real-world applications. In my current project, I use a mix of custom and predefined health checks. For instance, I wrote a custom health check for an external API. We have unit and integration tests around that integration, but neither talks to the real API: calling an external service from unit tests isn't justifiable, and our integration tests only cover the contract, verifying that the data we send has the correct format and that we handle the expected response. That still isn't foolproof coverage, and this is exactly the gap health checks fill.</p>
<p>Imagine a newly committed change unintentionally introduces a bug, say in how the API credentials are retrieved from Azure Key Vault. The problem could easily go unnoticed until deployment. Fortunately, our health checks are designed to flag such issues before the staging slot is swapped with production.</p>
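<p>For reference, a custom check like the external API one is just an implementation of the <code>IHealthCheck</code> interface. Here's a minimal sketch of what such a check can look like; the <code>/ping</code> endpoint and the class names are hypothetical, not the actual project code:</p>
<pre><code class="lang-csharp">using Microsoft.Extensions.Diagnostics.HealthChecks;

public class ExternalApiHealthCheck : IHealthCheck
{
    private readonly HttpClient _httpClient;

    // HttpClient is assumed to be registered with the external API's base address
    public ExternalApiHealthCheck(HttpClient httpClient) =&gt; _httpClient = httpClient;

    public async Task&lt;HealthCheckResult&gt; CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        try
        {
            // '/ping' is a hypothetical lightweight endpoint on the external API
            var response = await _httpClient.GetAsync("/ping", cancellationToken);
            return response.IsSuccessStatusCode
                ? HealthCheckResult.Healthy()
                : HealthCheckResult.Unhealthy($"External API returned {(int)response.StatusCode}");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("External API is unreachable", ex);
        }
    }
}

// Registered in Program.cs:
// services.AddHealthChecks().AddCheck&lt;ExternalApiHealthCheck&gt;("ExternalAPI");
</code></pre>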
<h2 id="heading-configuring-health-check-monitoring-in-azure">Configuring health check monitoring in Azure</h2>
<p>Azure App Services can monitor the /health endpoint continuously at a configurable interval, with the option to set up alerts for anomalies. Consider a situation where, post-deployment, someone modifies the credentials in the Key Vault: an immediate alert lets you remediate quickly. Similarly, if the external API faces downtime for reasons beyond your control, it's beneficial to be aware. This aids in timely communication with stakeholders and reassures them of your system's integrity.</p>
<p>This is how you configure health check monitoring in Azure:</p>
<ol>
<li><p>Open the app service in Azure and go to <strong>Monitoring</strong> -&gt; <strong>Health check</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694762006492/6c8c4d44-c784-458e-a758-b3da55a1c8b2.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Select <strong>Enable</strong> and provide a valid URL path, in our case it's <code>/health</code></p>
</li>
<li><p>Select <strong>Save</strong>.</p>
</li>
</ol>
<p>To define an alert, click on <strong>Metrics</strong> on the same screen, then click on <strong>New alert rule</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694762178890/21a946d4-28e4-44ac-8865-070cb53e8724.png" alt class="image--center mx-auto" /></p>
<p>I won't delve into the details, as that could warrant a separate blog post, which I might write later. For now, the key takeaway is that enabling health checks lets you stop issues from reaching production, or at the very least get notified when something goes wrong.</p>
<h2 id="heading-using-predefined-health-checks">Using predefined health checks</h2>
<p>If you're using certain Azure services, there are predefined health checks for them. For example, if you're using SQL Server with Entity Framework, install the package <code>Microsoft.Extensions.Diagnostics.HealthChecks.EntityFrameworkCore</code> and add these lines to your Program.cs:</p>
<pre><code class="lang-csharp">services.AddHealthChecks()
        .AddDbContextCheck&lt;MyDbContext&gt;();
</code></pre>
<p>This health check tests database connectivity, and you can even pass a custom test query to be executed.</p>
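<p>The <code>AddDbContextCheck</code> method exposes an optional <code>customTestQuery</code> delegate for this. A sketch, assuming the sample app's <code>MyDbContext</code> exposes a <code>ToDos</code> set (the set name is an assumption):</p>
<pre><code class="lang-csharp">using Microsoft.EntityFrameworkCore;

services.AddHealthChecks()
        .AddDbContextCheck&lt;MyDbContext&gt;(
            name: "Database",
            customTestQuery: async (db, cancellationToken) =&gt;
            {
                // Runs a real query instead of only opening a connection;
                // the ToDos DbSet is assumed from the sample app
                await db.ToDos.FirstOrDefaultAsync(cancellationToken);
                return true;
            });
</code></pre>
<p>If the query throws, the health check service catches the exception and reports the context as unhealthy.</p>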
<p>There's a predefined health check for Azure Key Vault as well; you just need the NuGet package <code>AspNetCore.HealthChecks.AzureKeyVault</code>. Here's how we use it:</p>
<pre><code class="lang-csharp">services.AddHealthChecks()
        .AddAzureKeyVault(
            new Uri(configuration["KeyVaultUri"]),
            new DefaultAzureCredential(new DefaultAzureCredentialOptions()),
            setup =&gt; { setup.AddSecret("externalApiClientSecret"); });
</code></pre>
<p>The AddAzureKeyVault method accepts an action that allows interaction with the Key Vault. In our case, we not only test the connection to the Key Vault but also verify if our API can add a secret. If someone accesses the Key Vault and revokes the permission to create secrets, our health check will alert us to this issue. Pretty cool, right?</p>
<h1 id="heading-best-practices-for-health-checks"><strong>Best Practices for Health Checks</strong></h1>
<ol>
<li><p><strong>Broad Coverage</strong>: Health checks should be comprehensive. It's not just about the application being up, but whether all its dependencies (like databases, external services) are responsive.</p>
</li>
<li><p><strong>Speed Matters</strong>: Health checks should be quick. Lengthy checks can delay deployments and might lead to timeouts.</p>
</li>
<li><p><strong>Isolation</strong>: Ensure health checks don't affect the system's state. They should be idempotent.</p>
</li>
<li><p><strong>Detailed Reporting</strong>: While a simple 'Healthy' or 'Unhealthy' might be okay for basic checks, having detailed reports can be invaluable during failures to trace back the issue.</p>
</li>
<li><p><strong>Regular Monitoring</strong>: Instead of only using them during deployments, integrate health checks into monitoring tools for continuous feedback.</p>
</li>
</ol>
<h2 id="heading-wrapping-up"><strong>Wrapping Up</strong></h2>
<p>I think we can agree that health checks play a vital role in ensuring our applications are not only up but functioning correctly. By integrating these checks into our pipeline, we get an extra layer of validation before the swap, leading to more robust deployments.</p>
<p>The code for this app can be found here: <a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline</a></p>
<p>In the next article, we will add another layer of complexity and robustness by introducing another environment using Bicep. There, we'll focus on running end-to-end tests on this new environment while keeping our production environment's checks concise and efficient. Stay tuned and keep deploying with confidence!</p>
]]></content:encoded></item><item><title><![CDATA[Blue-green Deployments in Azure DevOps YAML Pipelines]]></title><description><![CDATA[Creating a multi-stage YAML pipeline in Azure DevOps for .NET projects

Running tests with code coverage in Azure DevOps YAML pipelines

Static code analysis with NDepend in Azure Pipelines

Running e2e tests with Playwright in Azure YAML Pipelines

...]]></description><link>https://bogdanbujdea.dev/blue-green-deployments-in-azure-devops-yaml-pipelines</link><guid isPermaLink="true">https://bogdanbujdea.dev/blue-green-deployments-in-azure-devops-yaml-pipelines</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Blue/Green deployment]]></category><category><![CDATA[Azure Pipelines]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Wed, 13 Sep 2023 09:31:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694596727770/39942c64-8c06-4fe8-88e4-bda5abdd08f4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ol>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/creating-a-multi-stage-yaml-pipeline-in-azure-devops-for-net-projects?source=more_series_bottom_blogs">Creating a multi-stage YAML pipeline in Azure DevOps for .NET projects</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/running-tests-with-code-coverage-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Running tests with code coverage in Azure DevOps YAML pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/static-code-analysis-with-ndepend-in-azure-pipelines?source=more_series_bottom_blogs">Static code analysis with NDepend in Azure Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/running-e2e-tests-with-playwright-in-azure-yaml-pipelines?source=more_series_bottom_blogs">Running e2e tests with Playwright in Azure YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/publishing-playwright-report-as-an-artifact-in-azure-devops?source=more_series_bottom_blogs">Publishing Playwright report as an artifact in Azure DevOps</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Bicep Infrastructure Deployment from Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/blue-green-deployments-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Blue-green Deployments in Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/pre-deployment-health-checks-in-azure-devops-yaml-pipelines?source=more_series_bottom_blogs">Pre-Deployment Health Checks in Azure DevOps YAML Pipelines</a></p>
</li>
<li><p><a target="_blank" href="https://bogdanbujdea.dev/azure-devops-best-practices-breaking-down-the-monolithic-yaml?source=more_series_bottom_blogs">Azure DevOps Best Practices: Breaking Down the Monolithic YAML</a></p>
</li>
</ol>
<hr />
<p>Production issues causing downtime are a nightmare for developers, yet few consider the downtime resulting from new releases. Typically, companies attempt to identify periods with the lowest customer activity, send advance notifications about the upcoming release and its inevitable downtime, and consider it normal. However, I prefer utilizing blue-green deployment to achieve zero downtime. At its core, it's about having two production environments: one that's live ('blue') and one that's idle ('green'). When we want to release a new version of our app, we deploy it to the idle environment, test it out, and once confident, switch traffic to this new environment, either all at once or only to a few customers.</p>
<p>This way, even during peak traffic hours, customers never notice that a moment ago they were browsing the old version and are now on the new one. Sounds promising, right? Let's explore how we can implement this using App Service Slots.</p>
<h2 id="heading-blue-green-deployment-using-azure-app-service-slots"><strong>Blue-green deployment using Azure App Service Slots</strong></h2>
<p>Azure App Service slots allow us to host multiple versions of a web app. Think of them like parallel universes for your app. For instance, in my ASP .NET Web API project, when I push changes, I deploy them to a staging slot first. It mirrors my production environment, ensuring everything works perfectly. Then, with the mere click of a button (or a few lines in the YAML pipeline), I swap the staging slot with the production one.</p>
<p>Let's look at some code to understand this better. When you're deploying directly to an App Service, this is what the YAML looks like:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy_app</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To App Service'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">deploy</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
            <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
            <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to app service'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
            <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
            <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
            <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>      
            <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
</code></pre>
<p>The second task handles the deployment; however, during blue-green deployments, we aim to deploy to a staging slot first and then swap it with the production slot. To achieve this, we must initially create a slot for our app service.</p>
<h2 id="heading-creating-a-deployment-slot-in-azure">Creating a deployment slot in Azure</h2>
<p>Not all app service plans support deployment slots. If you have a plan lower than S1, you will encounter this message when clicking on "Deployment Slots" within your App Service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694446683909/81996996-e0ee-4f96-acfd-1bb025f6b5ff.png" alt="upgrading to standard or premium plan to add deployment slots in Azure" class="image--center mx-auto" /></p>
<p>Either click on upgrade or go to the "Scale up" section and ensure that you're on a tier that has deployment slots.</p>
<p>Once you've done that, go to "Deployment slots" and click on "Add slot":</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694595203401/efbdd344-4782-4ecb-bba3-666e04f23b71.png" alt="adding a deployment slot in Azure App Service" class="image--center mx-auto" /></p>
<p>You will notice that there's already a slot called "PRODUCTION", so we will add our "STAGING" slot now. I'll call my slot "staging" and click on "Add" at the bottom of the drawer:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694595118744/ba118a36-fe80-46cd-a446-12d45220a541.png" alt="creating a staging slot in Azure" class="image--center mx-auto" /></p>
<p>That's easy, but in the last post we went over Bicep infrastructure deployment, so I want to show you how to do it with Bicep as well:</p>
<h2 id="heading-creating-a-deployment-slot-using-bicep">Creating a deployment slot using Bicep</h2>
<p>It's actually pretty simple, we just need to add a <code>webAppSlot</code> resource to our <code>app.bicep</code> file:</p>
<pre><code class="lang-bicep">resource webAppSlot 'Microsoft.Web/sites/slots@2022-03-01' = {
  name: 'staging'
  location: location
  kind: 'app,linux'
  parent: webApp
  properties: {
    enabled: true
    httpsOnly: true
    siteConfig: {
      httpLoggingEnabled: true
      linuxFxVersion: 'DOTNETCORE|7.0'
    }
  }
}
</code></pre>
<p>If we push this then the pipeline will take care of applying the infrastructure updates and our slot will be created.</p>
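<p>If you want to validate the template locally before pushing, you can run the same deployment with the Azure CLI. A sketch; the resource group name is a placeholder and the template path assumes the repository layout used in this series:</p>
<pre><code class="lang-shell"># Deploy the Bicep template directly; 'my-resource-group' is a placeholder
az deployment group create \
  --resource-group my-resource-group \
  --template-file infrastructure/main.bicep \
  --parameters appName=bogdan-todo-app appSku=S1
</code></pre>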
<p>When the staging slot is created you will have two instances of your API running in parallel. In my case, one is <a target="_blank" href="https://bogdan-todo-app.azurewebsites.net">bogdan-todo-app.azurewebsites.net</a> (production) and the other one is <a target="_blank" href="https://bogdan-todo-app-staging.azurewebsites.net">bogdan-todo-app-staging.azurewebsites.net</a> (staging)</p>
<p>Next, we'll deploy our app to the staging slot and then swap it with production.</p>
<h2 id="heading-deploy-to-a-staging-slot-from-azure-pipelines">Deploy to a staging slot from Azure Pipelines</h2>
<p>Let me first show you the entire deployment stage and then I'll talk about what's happening:</p>
<pre><code class="lang-yaml"><span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">deploy_app</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy To App Service'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">deploy</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">DownloadPipelineArtifact@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Download pipeline artifact'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">buildType:</span> <span class="hljs-string">'current'</span>
            <span class="hljs-attr">artifactName:</span> <span class="hljs-string">'drop'</span>
            <span class="hljs-attr">targetPath:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureWebApp@1</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy to staging slot'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">appType:</span> <span class="hljs-string">'webAppLinux'</span>
            <span class="hljs-attr">appName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
            <span class="hljs-attr">package:</span> <span class="hljs-string">'$(Pipeline.Workspace)/drop'</span>
            <span class="hljs-attr">deployToSlotOrASE:</span> <span class="hljs-literal">true</span>
            <span class="hljs-attr">slotName:</span> <span class="hljs-string">'staging'</span>
            <span class="hljs-attr">runtimeStack:</span> <span class="hljs-string">'DOTNETCORE|7.0'</span>
            <span class="hljs-attr">startUpCommand:</span> <span class="hljs-string">'dotnet ToDoApp.Server.dll'</span>      
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureAppServiceManage@0</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Swap slot'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">Action:</span> <span class="hljs-string">'Swap Slots'</span>
            <span class="hljs-attr">SourceSlot:</span> <span class="hljs-string">'staging'</span>
            <span class="hljs-attr">WebAppName:</span> <span class="hljs-string">'bogdan-todo-app'</span>
</code></pre>
<p>First, we're not deploying to the app service directly anymore, instead, we're deploying to the staging slot. To do this we've added these two lines:</p>
<pre><code class="lang-yaml">    <span class="hljs-attr">deployToSlotOrASE:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">slotName:</span> <span class="hljs-string">'staging'</span>
</code></pre>
<p>Then, once the deployment is done, we swap the slots: production traffic is routed to what was the staging slot, which becomes the new production. For this we use the 'Swap Slots' action like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Action:</span> <span class="hljs-string">'Swap Slots'</span>
<span class="hljs-attr">SourceSlot:</span> <span class="hljs-string">'staging'</span>
</code></pre>
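<p>As a side note, the same swap can be performed outside the pipeline with the Azure CLI, which is handy for testing the flow or for a manual rollback (the resource group name is a placeholder):</p>
<pre><code class="lang-shell"># Swap the staging slot into production; 'my-resource-group' is a placeholder
az webapp deployment slot swap \
  --resource-group my-resource-group \
  --name bogdan-todo-app \
  --slot staging \
  --target-slot production
</code></pre>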
<p>Now your app has been upgraded to a new version without any downtime! The full code is in this repository, where you can also access the pipeline: <a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline</a></p>
<h2 id="heading-wrapping-up"><strong>Wrapping Up</strong></h2>
<p>By utilizing blue-green deployments with App Service slots, we can achieve zero-downtime releases and ensure users experience seamless transitions between app versions.</p>
<p>Azure App Service slots, in conjunction with Azure DevOps YAML pipelines, have made blue-green deployments incredibly easy for my team. Naturally, this is just a basic demonstration of deployment slots, but in future articles, we will delve into more advanced scenarios such as health checks, deployment slot settings, and more.</p>
<h2 id="heading-whats-next"><strong>What's Next?</strong></h2>
<p>In the next article, I'll showcase how to incorporate health checks into our Azure DevOps pipeline. This ensures that we only swap slots when our deployment is successful, adding another layer of confidence to our release process.</p>
]]></content:encoded></item><item><title><![CDATA[Bicep Infrastructure Deployment from Azure DevOps YAML Pipelines]]></title><description><![CDATA[In this article, I will show you how to deploy your infrastructure using Bicep files from an Azure DevOps YAML pipeline. Instead of merely discussing theory, we will create a Blazor app and deploy it to Azure using Bicep. Before diving in, let's brie...]]></description><link>https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines</link><guid isPermaLink="true">https://bogdanbujdea.dev/bicep-infrastructure-deployment-from-azure-devops-yaml-pipelines</guid><category><![CDATA[azure-devops]]></category><category><![CDATA[Bicep]]></category><category><![CDATA[Infrastructure as code]]></category><category><![CDATA[Devops]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Mon, 11 Sep 2023 21:42:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694466741600/0cae9eb7-db20-430b-8b05-923c1cb2d7eb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this article, I will show you how to deploy your infrastructure using Bicep files from an Azure DevOps YAML pipeline. Instead of merely discussing theory, we will create a Blazor app and deploy it to Azure using Bicep. Before diving in, let's briefly discuss the advantages of adopting this approach.</p>
<h2 id="heading-why-deploy-infrastructure-using-bicep">Why deploy infrastructure using Bicep?</h2>
<p>I won't cover all the advantages of this approach, but I'll highlight the two most important ones for me. First, you can treat your infrastructure as code and store it in a repository, allowing you to approve changes and track its progress over time. Second, automation becomes possible, as you can easily deploy Bicep files instead of manually creating and configuring resources in Azure. For instance, if you need another environment for your project, you can quickly create it with minor adjustments to your Bicep code. We'll discuss all of this in detail throughout the article.</p>
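<p>To make the "extra environment with minor adjustments" point concrete, a common pattern is to thread an environment parameter through the template. A hypothetical sketch, not the actual files from this article:</p>
<pre><code class="lang-bicep">// Hypothetical: the same template can stamp out dev, staging or prod resources
param environment string = 'dev'
param appName string = 'bogdan-todo-app'

// Resource names carry the environment suffix, e.g. 'bogdan-todo-app-dev'
var fullAppName = '${appName}-${environment}'
</code></pre>
<p>Deploying with a different <code>environment</code> value then creates a parallel set of resources without touching the template itself.</p>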
<h2 id="heading-preparing-azure-devops-for-bicep-deployment">Preparing Azure DevOps for Bicep deployment</h2>
<p>Before our pipeline can make Azure deployments we first need to create a service connection to Azure. For this, go to project settings and then click on "Service connections":</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694455362422/e1fe6e80-5e77-4471-88d6-438361b34680.png" alt="creating a service connection in Azure DevOps" class="image--center mx-auto" /></p>
<p>Now, choose the option labeled "Azure Resource Manager" and click on "Service principal (automatic)." Proceed by clicking "Next," and upon completing the wizard, your service connection will be established.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694455372662/97f78f0e-0969-4b56-839d-40b1bace90e7.png" alt="creating a service connection in Azure DevOps with service principal" class="image--center mx-auto" /></p>
<h2 id="heading-writing-the-bicep-code">Writing the Bicep code</h2>
<p>Our next step is to write the Bicep code which will create our app service and app service plan.</p>
<p>I won't turn this blog post into a Bicep tutorial, but I will write another post to provide some tips for getting started. For now, I'll assume that you know how to write Bicep or at least understand some basic YAML, as we're not dealing with complicated tasks.</p>
<p>When we're working with Infrastructure as Code (IaC), especially with more complex setups, it's crucial to maintain clarity and organization. Bicep lends itself brilliantly to modular designs, so it's easy to see why our Bicep code is separated into two files: <code>main.bicep</code> and <code>app.bicep</code>.</p>
<p><code>main.bicep</code> can be seen as the entry point or the orchestrator of our infrastructure deployment. It's where we define which modules or sets of resources should be deployed. By referencing other Bicep files (like <code>app.bicep</code>), the main file allows us to break apart our infrastructure into manageable chunks or modules, giving us the freedom to combine, reuse, or extend them as needed.</p>
<pre><code class="lang-yaml"><span class="hljs-string">param</span> <span class="hljs-string">location</span> <span class="hljs-string">string</span> <span class="hljs-string">=</span> <span class="hljs-string">resourceGroup().location</span>
<span class="hljs-string">param</span> <span class="hljs-string">appName</span> <span class="hljs-string">string</span>
<span class="hljs-string">param</span> <span class="hljs-string">appSku</span> <span class="hljs-string">string</span>

<span class="hljs-string">module</span> <span class="hljs-string">app</span> <span class="hljs-string">'app.bicep'</span>  <span class="hljs-string">=</span> {
  <span class="hljs-attr">name:</span> <span class="hljs-string">'bogdan-todo-app'</span>
  <span class="hljs-string">params:</span>{
    <span class="hljs-attr">location:</span> <span class="hljs-string">location</span>
    <span class="hljs-attr">appName :</span> <span class="hljs-string">appName</span>
    <span class="hljs-attr">appSku:</span> <span class="hljs-string">appSku</span>
  }
}
</code></pre>
<p>The <code>app.bicep</code> file is dedicated to defining the resources specifically needed for the Blazor app component of our application. By keeping the definition of our app infrastructure separate in <code>app.bicep</code>, we achieve a few things:</p>
<ul>
<li><p><strong>Clarity</strong>: It's immediately clear what resources this file defines without mixing it up with other unrelated resources.</p>
</li>
<li><p><strong>Reusability</strong>: If we were to deploy another similar app or service in the future, having a dedicated file makes it much easier to replicate or make slight modifications without reinventing the wheel.</p>
</li>
<li><p><strong>Scalability</strong>: As our app's infrastructure grows in complexity, having a dedicated file allows us to manage that complexity without cluttering our main deployment script.</p>
</li>
</ul>
<p>This is the code for our app. It's pretty simple, but in future articles we're going to expand it by adding app settings, connection strings, slots, managed identities, etc.</p>
<pre><code class="lang-yaml"><span class="hljs-string">param</span> <span class="hljs-string">location</span> <span class="hljs-string">string</span> <span class="hljs-string">=</span> <span class="hljs-string">resourceGroup().location</span>
<span class="hljs-string">param</span> <span class="hljs-string">appName</span> <span class="hljs-string">string</span>
<span class="hljs-string">param</span> <span class="hljs-string">appSku</span> <span class="hljs-string">string</span> <span class="hljs-string">=</span> <span class="hljs-string">'S1'</span>

<span class="hljs-string">resource</span> <span class="hljs-string">asp</span> <span class="hljs-string">'Microsoft.Web/serverfarms@2022-03-01'</span> <span class="hljs-string">=</span> {
  <span class="hljs-attr">name:</span> <span class="hljs-string">'${appName}-app-plan'</span>
  <span class="hljs-attr">location:</span> <span class="hljs-string">location</span>
  <span class="hljs-attr">sku:</span> {
    <span class="hljs-attr">name:</span> <span class="hljs-string">appSku</span>
  }
  <span class="hljs-attr">kind:</span> <span class="hljs-string">'linux'</span>
  <span class="hljs-string">properties:</span>{
    <span class="hljs-attr">reserved:</span> <span class="hljs-literal">true</span>
  }
}

<span class="hljs-string">resource</span> <span class="hljs-string">webApp</span> <span class="hljs-string">'Microsoft.Web/sites@2022-03-01'</span> <span class="hljs-string">=</span> {
  <span class="hljs-attr">name:</span> <span class="hljs-string">'${appName}-app'</span>
  <span class="hljs-attr">location:</span> <span class="hljs-string">location</span>
  <span class="hljs-attr">identity:</span> {
    <span class="hljs-attr">type:</span> <span class="hljs-string">'SystemAssigned'</span>
  }
  <span class="hljs-attr">kind:</span> <span class="hljs-string">'app,linux'</span>
  <span class="hljs-attr">properties:</span> {    
    <span class="hljs-attr">serverFarmId:</span> <span class="hljs-string">asp.id</span>
    <span class="hljs-attr">httpsOnly:</span> <span class="hljs-literal">true</span>
    <span class="hljs-attr">siteConfig:</span> {
      <span class="hljs-attr">httpLoggingEnabled:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">linuxFxVersion:</span> <span class="hljs-string">'DOTNETCORE|6.0'</span> 
    }
  }
}
</code></pre>
<p>As you can see, the <code>app.bicep</code> file has 3 parameters: the location, the app name, and the SKU. I'm using the same location as the resource group for simplicity, and the <code>appName</code> and <code>appSku</code> come from the main module, which in turn receives them from a configuration file.</p>
<p>The configuration file (called <code>prod.json</code>) looks like this:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#"</span>,
    <span class="hljs-attr">"contentVersion"</span>: <span class="hljs-string">"1.0.0.0"</span>,
    <span class="hljs-attr">"parameters"</span>: {
      <span class="hljs-attr">"location"</span>: {
        <span class="hljs-attr">"value"</span>: <span class="hljs-string">"westeurope"</span>
      },
      <span class="hljs-attr">"appName"</span>: {
        <span class="hljs-attr">"value"</span>: <span class="hljs-string">"bogdan-todo"</span>
      },
      <span class="hljs-attr">"appSku"</span>: {
        <span class="hljs-attr">"value"</span>: <span class="hljs-string">"S1"</span>
      }
    }
  }
</code></pre>
<p>If I decide to create another environment in the future, I just need to add another JSON file. For example, to create a "dev" environment, I'd add a <code>dev.json</code> file where I simply change the <code>appName</code> to something like <code>bogdan-todo-dev</code> and choose a lower <code>appSku</code> to reduce costs for that environment. As you can see, creating new environments becomes quite easy once this is set up.</p>
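<p>For illustration, a hypothetical <code>dev.json</code> might look like this (the <code>B1</code> SKU is my assumption for a cheaper tier, not something taken from the repository):</p>
<pre><code class="lang-json">{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
      "location": { "value": "westeurope" },
      "appName": { "value": "bogdan-todo-dev" },
      "appSku": { "value": "B1" }
    }
  }
</code></pre>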
<h2 id="heading-running-the-bicep-deployment-from-a-yaml-pipeline">Running the Bicep deployment from a YAML pipeline</h2>
<p>To deploy these Bicep files, I'm going to write a PowerShell script that will be executed in the pipeline. The script looks like this:</p>
<pre><code class="lang-powershell"><span class="hljs-function">[<span class="hljs-type">CmdletBinding</span>()]</span>
<span class="hljs-keyword">param</span> (
    <span class="hljs-variable">$resourceGroupName</span>,
    <span class="hljs-variable">$location</span>,
    <span class="hljs-variable">$configFileName</span>
)

az <span class="hljs-built_in">group</span> create -<span class="hljs-literal">-name</span> <span class="hljs-variable">$resourceGroupName</span> -<span class="hljs-literal">-location</span> <span class="hljs-variable">$location</span>
az deployment <span class="hljs-built_in">group</span> create -<span class="hljs-literal">-resource</span><span class="hljs-literal">-group</span> <span class="hljs-variable">$resourceGroupName</span> -<span class="hljs-literal">-template</span><span class="hljs-operator">-file</span> ./infrastructure/main.bicep -<span class="hljs-literal">-parameters</span> ./infrastructure/environments/<span class="hljs-variable">$configFileName</span>
</code></pre>
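<p>Before wiring the script into the pipeline, you can run it locally to check that it works. This is just a sketch; it assumes you have the Azure CLI installed and have already signed in with <code>az login</code>:</p>
<pre><code class="lang-powershell"># Run from the repository root, after `az login`
# (the parameter values match the pipeline variables used in this article)
./infrastructure/deploy.ps1 `
    -resourceGroupName azure-devops-yaml-pipeline `
    -location westeurope `
    -configFileName prod.json
</code></pre>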
<p>Now let's show the pipeline code as well:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">trigger:</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">main</span>

<span class="hljs-attr">pool:</span>
  <span class="hljs-attr">vmImage:</span> <span class="hljs-string">'ubuntu-latest'</span>

<span class="hljs-attr">variables:</span>
  <span class="hljs-attr">solution:</span> <span class="hljs-string">'**/*.sln'</span>
  <span class="hljs-attr">buildPlatform:</span> <span class="hljs-string">'Any CPU'</span>
  <span class="hljs-attr">buildConfiguration:</span> <span class="hljs-string">'Release'</span>
  <span class="hljs-attr">location:</span> <span class="hljs-string">'westeurope'</span>
  <span class="hljs-attr">configFileName:</span> <span class="hljs-string">'prod.json'</span>
  <span class="hljs-attr">resourceGroupName:</span> <span class="hljs-string">'azure-devops-yaml-pipeline'</span>

<span class="hljs-attr">stages:</span>
<span class="hljs-comment"># build and test stage skipped for now</span>

<span class="hljs-bullet">-</span> <span class="hljs-attr">stage:</span> <span class="hljs-string">update_infrastructure</span>
  <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
  <span class="hljs-attr">jobs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">job:</span> <span class="hljs-string">UpdateAzureResources</span>
      <span class="hljs-attr">steps:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">task:</span> <span class="hljs-string">AzureCLI@2</span>
          <span class="hljs-attr">displayName:</span> <span class="hljs-string">'Deploy Bicep Infrastructure'</span>
          <span class="hljs-attr">inputs:</span>
            <span class="hljs-attr">azureSubscription:</span> <span class="hljs-string">'AzureConnection'</span>
            <span class="hljs-attr">scriptType:</span> <span class="hljs-string">'pscore'</span>
            <span class="hljs-attr">scriptLocation:</span> <span class="hljs-string">'scriptPath'</span>
            <span class="hljs-attr">scriptPath:</span> <span class="hljs-string">'./infrastructure/deploy.ps1'</span>                
            <span class="hljs-attr">arguments:</span> <span class="hljs-string">'$(resourceGroupName) $(location) $(configFileName)'</span>

<span class="hljs-comment"># app service deployment at the end</span>
</code></pre>
<p>This stage updates the infrastructure every time it runs. Since our script requires permission to interact with Azure, we must provide the name of our service connection in the <code>azureSubscription</code> parameter. Next, we specify the script's path and supply its arguments. With everything set up, let's push our code to Azure DevOps and watch the pipeline run.</p>
<p>One more step is needed: we have to allow this pipeline to use our service connection, so click on <code>View</code>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694458992812/84541793-4e84-426d-b93f-638c415cb400.png" alt="Azure DevOps pipeline" class="image--center mx-auto" /></p>
<p>And then click on <code>Permit</code>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1694459011938/2ac547b0-bb75-4833-a67a-dd016644ef65.png" alt="allow service principal permissions for azure devops pipeline" class="image--center mx-auto" /></p>
<p>When you first use Bicep to deploy your infrastructure, every resource defined in your Bicep file is new to Azure. As a result, Azure needs to provision each of these resources from scratch. Depending on the nature and number of resources you've defined, this can be an intricate process involving multiple steps, dependencies, and configurations. Naturally, this initial setup tends to take the longest as everything is created anew.</p>
<p>After your resources have been initially set up, any subsequent deployments or updates with Bicep are processed differently. Bicep is designed to be idempotent, meaning it aims to achieve a desired state without redundant operations. In other words, if a resource already exists in the desired state, Bicep won't waste time recreating it.</p>
<p>When you make changes to your Bicep files and deploy again, Bicep will first evaluate the current state of the resources in Azure. It then intelligently determines what has changed since the last deployment. Only these changes are applied, making the process faster and more efficient. This not only saves time but also reduces the potential for errors or disruptions since unchanged resources remain untouched.</p>
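<p>If you want to see what a deployment would change before actually running it, the Azure CLI offers a what-if mode. A quick sketch, assuming you're logged in with <code>az login</code> and running it from the repository root:</p>
<pre><code class="lang-powershell"># Preview the changes a deployment would make, without applying them
az deployment group what-if `
    --resource-group azure-devops-yaml-pipeline `
    --template-file ./infrastructure/main.bicep `
    --parameters ./infrastructure/environments/prod.json
</code></pre>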
<h2 id="heading-code-sample">Code sample</h2>
<p>The complete code and the pipeline can be found in a <a target="_blank" href="https://dev.azure.com/bujdea/_git/AzureDevopsYamlPipeline">public Git repository</a>. Feel free to experiment with it and deploy it to your own Azure subscription if you wish. Just make sure to update the service connection name and the app name.</p>
<h2 id="heading-wrapping-up"><strong>Wrapping Up</strong></h2>
<p>For me, it's amazing how easy it is to deploy our infrastructure from Azure DevOps using Bicep. The ease of setting up new environments stands out, and having most of the process automated saves a good chunk of time and cuts down on manual errors.</p>
<p>In the next article, we'll dive a bit deeper, exploring blue-green deployment with Azure App Service slots. So, stay tuned and happy coding!</p>
]]></content:encoded></item><item><title><![CDATA[How I landed my first job]]></title><description><![CDATA[This year, I am celebrating 10 years since I got my first job. In these 10 years, I'm happy to say that I was rarely bored at work. I worked with awesome people, built a lot of great memories during and outside work, learned a lot, and also taught so...]]></description><link>https://bogdanbujdea.dev/how-i-landed-my-first-job</link><guid isPermaLink="true">https://bogdanbujdea.dev/how-i-landed-my-first-job</guid><category><![CDATA[Career]]></category><category><![CDATA[tech careers]]></category><category><![CDATA[job search]]></category><category><![CDATA[beginnersguide]]></category><category><![CDATA[#beginners #learningtocode #100daysofcode]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Thu, 07 Sep 2023 16:07:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1694102497248/babca888-d5f7-4499-a9f1-741401059bc1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This year, I am celebrating 10 years since I got my first job. In these 10 years, I'm happy to say that I was rarely bored at work. I worked with awesome people, built a lot of great memories during and outside work, learned a lot, and also taught some of what I know. I would like to share some of my experiences through a series of blog posts, hoping that others can learn from both my successes and failures.</p>
<p>I'll discuss how I secured my first job, the reasons behind quitting my jobs, the successes I achieved, and the failures I encountered.</p>
<p>Another motivation for doing this is that I enjoy hearing about other people's experiences firsthand. While reading about soft skills theory is informative, I find that I prefer engaging stories, so I'd like to share a few of my own!</p>
<hr />
<h1 id="heading-story-1-how-i-landed-my-first-job">Story #1 - How I landed my first job</h1>
<p>I applied to dozens of jobs in my first 2 years of university, but when the right interview finally came along, it seemed like a match made in heaven. The good thing about going to a lot of interviews is that they stopped scaring me. After my ~20th interview I had no more sweaty palms; I knew most of the questions and had answers ready for lame ones like "Where do you see yourself in 5 years?" or "How do you fit an elephant into a fridge?". I got through IQ tests, psychological evaluations, HR interviews, quizzes, LeetCode problems, team interviews, and more... but the final interview was the one where I knew I was in before it even ended.</p>
<p>In fall of 2012, I joined the <a target="_blank" href="https://mvp.microsoft.com/studentambassadors">Microsoft Student Partner</a> program, which I think has since been renamed to Microsoft Learn Student Ambassadors. Through this program, I could freely access the latest Microsoft technologies and conduct hands-on labs to share my knowledge with peers. A highlight of the program was participating in a Windows 8 hackathon in October 2012. I had dabbled with the Windows 8 SDK a bit before that and even got a simple sound recorder app into the store. The hackathon was a blast — coding late into the night, pizza, drinks, and some solid memories. Even though I didn't snag a cash prize, I made some invaluable connections, particularly with a couple of the jury members. Fast forward a week, and I found myself walking into an interview room at Thinslices for a Windows 8 internship. And guess who I saw? Those same jury members because they were Thinslices employees! We chatted about the hackathon, my app, and I learned I was the only candidate with an app already in the Microsoft Store. About 15 minutes in, I had a hunch I might get the spot. Not long after, they confirmed it — and just like that, my tech journey truly began.</p>
<p>A short version of the events leading up to my first job would look like this:</p>
<ul>
<li><p>Failed many interviews</p>
</li>
<li><p>Got involved in the community (Microsoft Student Partners)</p>
</li>
<li><p>Started to learn about Windows 8 and published an app to test my knowledge</p>
</li>
<li><p>Attended a hackathon and made connections</p>
</li>
<li><p>Passed my first interview and got the internship</p>
</li>
</ul>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p><strong><em>Get involved in the community and make connections</em></strong></p>
<p>This isn't the only time I'll say this, as I have numerous stories from my experiences that led me to the same conclusion. However, for now, it's evident why I believe it's crucial to put yourself out there, participate in events, join communities, volunteer, present something, and so on. Some might argue that I was merely fortunate, but I believe that <strong>you create your own luck</strong>.</p>
<p>If I hadn't been a Microsoft Student Partner, I couldn't have attended that hackathon, and I wouldn't have known anything about Windows 8, much less developed something for it (let's face it, it wasn't that popular at release because people still wanted to stay on Windows 7).</p>
<p>Some of you might argue that I could have landed another job simply by attending a different interview. Sure, that might be true, and I can't say whether my career would have been the same or not, but I do know that I genuinely enjoyed my time at Thinslices, and it was the ideal place for my first job. I'll share more about this in the next article!</p>
]]></content:encoded></item><item><title><![CDATA[Programming with ChatGPT: 5 Steps to Future-Proof Your Career]]></title><description><![CDATA[Recently, I've been using ChatGPT in my daily work routine and found it to be a valuable resource for enhancing my programming skills and problem-solving abilities.
This experience sparked a realization: ChatGPT has immense potential for fellow progr...]]></description><link>https://bogdanbujdea.dev/programming-with-chatgpt-5-steps-to-future-proof-your-career</link><guid isPermaLink="true">https://bogdanbujdea.dev/programming-with-chatgpt-5-steps-to-future-proof-your-career</guid><category><![CDATA[chatgptguide]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Mon, 10 Apr 2023 19:20:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1681154250974/aa5c8ad9-266c-4e09-a870-1c7bdc99105d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently, I've been using ChatGPT in my daily work routine and found it to be a valuable resource for enhancing my programming skills and problem-solving abilities.</p>
<p>This experience sparked a realization: ChatGPT has immense potential for fellow programmers, especially those in junior and mid-level positions. Recognizing that more developers could benefit from this AI tool, I felt inspired to share my insights and experiences with others.</p>
<p>At first, I considered writing a series of articles, but then I decided to try something new and write an ebook called <a target="_blank" href="https://bogdanbujdea.gumroad.com/l/programmingwithchatgpt">"Programming with ChatGPT: 5 Steps to Future-Proof Your Career."</a> This eBook aims to provide an eye-opening introduction to the possibilities of working with ChatGPT, offering practical guidance and techniques to help you make the most of it in your programming journey.</p>
<p>If you're curious about ChatGPT and how it can offer a new perspective on your programming career, I invite you to <a target="_blank" href="https://bogdanbujdea.gumroad.com/l/programmingwithchatgpt">check out my book on Gumroad</a>. I really think it's an opportunity to expand your horizons and see what this innovative tool has to offer. Here are the 5 chapters I wrote:</p>
<ol>
<li><p>Chapter 1: Finding a Job with ChatGPT</p>
</li>
<li><p>Chapter 2: Onboarding 10x Faster with ChatGPT</p>
</li>
<li><p>Chapter 3: Enhancing Problem Solving and Debugging Skills with ChatGPT</p>
</li>
<li><p>Chapter 4: Code Optimization and Best Practices with ChatGPT</p>
</li>
<li><p>Chapter 5: Expanding Your Programming Knowledge with ChatGPT</p>
</li>
</ol>
<p>The first 100 people can download it for free using this code: UX58P7D</p>
<p>I really hope you enjoy it!</p>
]]></content:encoded></item><item><title><![CDATA[Building an app with ChatGPT: Day 1]]></title><description><![CDATA[Before starting work on this project, I had an insightful discussion with ChatGPT about what I should build. My idea was simple: a habit tracker that motivates you by making you compete with yourself. For example, if I want to do 10 pushups per day, ...]]></description><link>https://bogdanbujdea.dev/building-an-app-with-chatgpt-day-1</link><guid isPermaLink="true">https://bogdanbujdea.dev/building-an-app-with-chatgpt-day-1</guid><category><![CDATA[chatgpt]]></category><category><![CDATA[General Programming]]></category><category><![CDATA[AI]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[Blazor]]></category><dc:creator><![CDATA[Bogdan Bujdea]]></dc:creator><pubDate>Tue, 28 Mar 2023 15:46:04 GMT</pubDate><content:encoded><![CDATA[<p>Before starting work on this project, I had an insightful discussion with ChatGPT about what I should build. My idea was simple: a habit tracker that motivates you by making you compete with yourself. For example, if I want to do 10 pushups per day, I should have a habit with a simple yes/no question, "Did you do 10 pushups today?", and I could just hit yes or no. Do this for enough days (~66), and you form a habit of doing pushups each day. For me, this is pretty motivational.</p>
<p>I'm using <a target="_blank" href="https://habitsgarden.com/">Habits Garden</a>, and look how nice my "No Coca-Cola" habit looks; I haven't drunk a sip of Coke since December last year, and keeping that streak keeps me motivated.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680011727237/456717b9-9080-4ad0-9fc4-ed159f4e5300.png" alt class="image--center mx-auto" /></p>
<p>Another way to make this more motivational is to use the concept of "personal records." For example, I could try to do 10 pushups every day, but I'd also like to know my highest number of pushups in a day. Using my habit tracker, I could log that as well and try to beat it.</p>
<p>I presented this idea to ChatGPT, and it helped me plan the functionality for an MVP:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680010907670/74a59f31-5941-465c-8b2f-4a9d7b352d09.png" alt class="image--center mx-auto" /></p>
<p>We agreed to work for 4 hours per day. Let's see if the estimation was correct for the first day by going through each task:</p>
<ol>
<li><h2 id="heading-set-up-the-development-environment">Set up the development environment</h2>
</li>
</ol>
<p>This was done already because I'm a .NET developer, so I had everything ready (.NET SDK, Visual Studio, etc.). Time required = 0 minutes.</p>
<ol start="2">
<li><h2 id="heading-create-a-new-blazor-webassembly-project">Create a new Blazor WebAssembly project</h2>
 <p> As I said in the previous post, I don't have that much frontend experience, so I asked ChatGPT what I could use. Its suggestion was to go with Blazor, which I'm already familiar with and had been considering myself.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680013491722/783d92fd-6e77-45fd-8672-03efdd619666.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-plan-the-database-schema">Plan the database schema</h2>
<p> Here's where I think ChatGPT did a fantastic job. I just had to give the requirements and explain that I'm using EF Core with SQL Server, and it generated all the entities. First, it gave me a list of the entities with their fields:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680012244911/af573cb1-5943-4d4b-a986-0388b9aef869.png" alt class="image--center mx-auto" /></p>
<p> Then I gave it some suggestions and asked for the actual code, which it handled pretty well:</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680012281363/5f1f41fe-995f-49b2-8985-2bb9c6a899c0.png" alt /></p>
</li>
</ol>
<p>Honestly, I think this saved a lot of time because it was all done in ~10 minutes.</p>
<ol start="4">
<li><h2 id="heading-create-entity-classes">Create entity classes</h2>
</li>
</ol>
<p>This was already done in the previous step.</p>
<ol start="5">
<li><h2 id="heading-configure-the-connection-string-and-register-dbcontext">Configure the connection string and register DbContext</h2>
</li>
</ol>
<p>This step was pretty simple: I just had to configure EF Core and the DbContext, and it was easier to do it myself than to ask ChatGPT. Once the first migration ran, I could move on to the next step, which was authentication.</p>
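<p>For context, the wiring I'm referring to is the standard EF Core registration in <code>Program.cs</code>. A minimal sketch, assuming the context class is called <code>HabitTrackerContext</code> and the connection string is named <code>DefaultConnection</code> (both names are illustrative, not from the actual project):</p>
<pre><code class="lang-csharp">// Program.cs - requires the Microsoft.EntityFrameworkCore and
// Microsoft.EntityFrameworkCore.SqlServer packages
builder.Services.AddDbContext&lt;HabitTrackerContext&gt;(options =&gt;
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("DefaultConnection")));
</code></pre>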
<ol start="6">
<li><h2 id="heading-implement-basic-user-authentication">Implement basic user authentication</h2>
</li>
</ol>
<p>I don't have a lot of experience with authentication, so I was happy to have someone help me in this area. Initially, I wanted to allow users to log in with their Facebook/Twitter/Google and Microsoft accounts, but for the MVP we agreed on username and password authentication. I asked ChatGPT to generate the login and register pages along with the API controller, and it did ~80% of the job.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680016852414/4b7cff64-5b4b-49ea-9dcc-1be77198bab3.png" alt class="image--center mx-auto" /></p>
<p>It's not always perfect, as you can see it forgot that I'm creating a Blazor app, not ASP.NET MVC, and it did this a lot of times. Here's the next result... but without the API call:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680016968427/d40143ee-c86d-4cfe-b79d-9b28eec04e73.png" alt class="image--center mx-auto" /></p>
<p>I asked for the implementation of HandleLogin and it provided that as well:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680017018886/c13b2728-12b8-4d18-8191-ffc377582075.png" alt class="image--center mx-auto" /></p>
<p>The problem here is that I'm not accessing the UserManager directly from my Blazor app, I want to do that from the server controller, so I had to ask again for a rewrite:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680017089698/2f4bfab0-2047-43b7-a4da-c31e59ee7116.png" alt class="image--center mx-auto" /></p>
<h1 id="heading-conclusion-for-the-1st-day">Conclusion for the 1st day</h1>
<p>In summary, ChatGPT has proven to be a valuable asset in this project, despite not being perfect. It generated most of the code and provided guidance, which saved a considerable amount of time. Although there were some errors and occasional trips to StackOverflow, these instances were less frequent than usual. The first day of work took less than 2 hours, which was quite efficient.</p>
<p>As I move on to developing UI components tomorrow, I'm eager to see the initial bits of functionality come together. Stay updated on my progress by subscribing to this blog or following my <a target="_blank" href="https://twitter.com/BogdanBujdea">Twitter</a> account!</p>
]]></content:encoded></item></channel></rss>