<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Intelligent Quality]]></title><description><![CDATA[My mission is to turn fundamental and complex Automation QA concepts into easy-to-understand and follow tutorials]]></description><link>https://idavidov.eu</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1751115875396/5dc39d38-8f60-411d-9553-176f45809a68.png</url><title>Intelligent Quality</title><link>https://idavidov.eu</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 09:52:32 GMT</lastBuildDate><atom:link href="https://idavidov.eu/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[I Started a YouTube Channel - Here's Why]]></title><description><![CDATA[After writing 30+ articles about Playwright and TypeScript, I always had the feeling that something was missing. 
The articles give you the patterns and the code - but the deeper explanations, the arch]]></description><link>https://idavidov.eu/youtube-channel-playwright-typescript-tutorials</link><guid isPermaLink="true">https://idavidov.eu/youtube-channel-playwright-typescript-tutorials</guid><category><![CDATA[playwright]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[youtube]]></category><category><![CDATA[QA]]></category><category><![CDATA[QA automation]]></category><category><![CDATA[qa testing]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test]]></category><category><![CDATA[video]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Wed, 08 Apr 2026 11:25:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/5631444a-df88-455e-a35c-72c9ec2a1565.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After writing 30+ articles about Playwright and TypeScript, I always had the feeling that something was missing. The articles give you the patterns and the code - but the deeper explanations, the architectural reasoning, the "why behind the why" - that needed a different format.</p>
<p>Written tutorials are great for reference. You bookmark them, copy the code blocks, and come back when you need a refresher. But some things just click faster when someone walks you through the reasoning - why this approach and not that one, what trade-offs were made, and what the architecture actually solves.</p>
<p>That's why I created <a href="https://www.youtube.com/@archqa"><strong>ArchQA</strong></a> - a YouTube channel where I break down the fundamental architecture behind every decision, so you don't just copy the code - you understand why it works.</p>
<hr />
<h2>🎬 Why Video?</h2>
<p>I've spent a lot of time trying to make my articles as practical as possible. Code blocks, correct vs. incorrect examples, real project structures. But there's a gap that text can't fully close.</p>
<p>When you read about setting up a Page Object Model, you get the pattern. But when someone explains <em>why</em> that pattern exists - what problem it solves, what falls apart without it, and how it connects to the bigger architectural picture - that's when it truly clicks.</p>
<p><strong>Understanding the "why" is the difference between following instructions and making your own decisions.</strong> That's the experience I wanted to create. Not surface-level tutorials - detailed explanations of the reasoning behind every architectural choice, so you can apply the same thinking to your own projects.</p>
<hr />
<h2>📺 What's on the Channel</h2>
<p>The channel launches with three playlists, each covering topics my readers already care about - but going deeper into the "why" behind every decision.</p>
<p><strong>TypeScript for Automation QA (Without the Fluff)</strong></p>
<p>Not just syntax. The <a href="https://www.youtube.com/playlist?list=PLYxABk1YARBwGAnMRlwJCt-TZ8rjbjh48">video playlist</a> digs into why TypeScript matters for QA, how the type system protects your tests, and what architectural patterns make your automation code maintainable long-term. You'll understand the fundamentals, not just follow along.</p>
<p><strong>Playwright Framework</strong></p>
<p>Building a professional framework isn't about copying a folder structure. The <a href="https://www.youtube.com/playlist?list=PLYxABk1YARBy-GoFGWdUCUPxV9ZYOIzLx">video series</a> explains the architecture behind every layer - why Page Object Model is structured this way, why fixtures solve dependency injection, why certain patterns scale and others collapse. You'll walk away understanding the reasoning, so you can make your own informed decisions.</p>
<p><strong>Playwright Tips &amp; Tricks</strong></p>
<p>33 self-contained, practical techniques. Each <a href="https://www.youtube.com/playlist?list=PLYxABk1YARBwgGO1DtOFTcPJ39aDUvvsL">video</a> focuses on one tip - short, focused, and immediately applicable. API interception, schema validation with Zod, visual masking, time travel - the kind of things that take your framework from "works" to "works well".</p>
<hr />
<h2>🤝 Blog + Channel = The Full Picture</h2>
<p>The blog isn't going anywhere. If anything, the two formats make each other better.</p>
<p>The articles give you the practical reference - code blocks, patterns, step-by-step guides. The videos go deeper into the architecture and the thinking behind those patterns. Together, you get both the "how" and the "why."</p>
<p>If you want a structured path through all the written content, the <a href="https://idavidov.eu/roadmap">Complete Roadmap to QA Automation &amp; Engineering</a> has everything organized by series and learning order.</p>
<hr />
<h2>🚀 Come Along for the Ride</h2>
<p>The channel is brand new. No backlog of hundreds of videos, no algorithm magic. Just me and topics I genuinely care about explaining well.</p>
<p>If you've found value in the articles, I think you'll find even more in hearing the full reasoning behind the concepts.</p>
<p><a href="https://www.youtube.com/@archqa"><strong>Subscribe to ArchQA on YouTube</strong></a> - and if there's a topic you want to see covered as a video, let me know in the comments.</p>
<hr />
<p>🙏🏻 <strong>Thank you for reading!</strong> This is just the beginning. More playlists, more deep dives, and more practical content are on the way. See you on the channel.</p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[From Prompt to Passing Test: A Complete Agentic QA Session]]></title><description><![CDATA[Sound familiar? In the first article, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You st]]></description><link>https://idavidov.eu/from-prompt-to-passing-test</link><guid isPermaLink="true">https://idavidov.eu/from-prompt-to-passing-test</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[Agentic QA]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[QA]]></category><category><![CDATA[playwright]]></category><category><![CDATA[AI]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Wed, 25 Mar 2026 05:41:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/c23640cb-e625-4e78-83ad-329e054915ea.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sound familiar? In the <a href="https://idavidov.eu/the-scaffold-playwright-ai">first article</a>, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You still have to know what to ask, verify what it wrote, adapt it to your project, and repeat for every file.</p>
<p>In the <a href="https://idavidov.eu/what-is-agentic-qa">second article</a>, we saw what makes an AI agent different from a chatbot. It reads your code, takes actions, and works inside your project. But here's the catch: an agent is only as good as the instructions it follows.</p>
<p>In the <a href="https://idavidov.eu/claude-md-teaching-the-ai-your-rules">third article</a>, we saw how <code>CLAUDE.md</code> gives the agent its rules and workflow. But rules without depth only get you so far. "Use the Page Object Model" is a rule, but how exactly do you structure a page object? What's the difference between a locator getter and an action method? How do you compose components into page objects?</p>
<p>In the <a href="https://idavidov.eu/skills-domain-expertise-on-demand">fourth article</a>, we gave the agent deep expertise through skill files. Now it knows <em>how</em> to build page objects, selectors, and fixtures. But there's still a gap: the agent has never seen your application.</p>
<p>Everything in this series has been building toward this moment. You have a scaffold with clear structure. An AI agent that reads your rules. Skills that give it deep expertise. And in the <a href="https://idavidov.eu/explore-first-the-browser-use-workflow">fifth article</a>, a browser tool that lets it see your application.</p>
<hr />
<p>Now let's watch it work.</p>
<hr />
<h2>🎬 The Scenario</h2>
<p>Your team has just shipped a user registration feature. You need:</p>
<ul>
<li><p>A page object for the registration form</p>
</li>
<li><p>A Faker factory for user data</p>
</li>
<li><p>A smoke test for the happy path</p>
</li>
<li><p>A regression test for validation errors</p>
</li>
</ul>
<p>Before agentic QA, this would be half a day of boilerplate. Inspecting the DOM, writing locators, setting up factories, wiring fixtures, writing tests, debugging failures.</p>
<p>With an agent, your entire input is this:</p>
<blockquote>
<p>"The app has a new user registration page at <code>/register</code>. Create a page object, a user factory, a smoke test, and regression tests for validation errors".</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/18956412-5922-4d1a-a3a4-67936a1be80e.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🔄 What the Agent Does: Step by Step</h2>
<p><strong>Step 1: Read the rules</strong></p>
<p>The agent opens <code>CLAUDE.md</code> first. It notes the mandatory rules, the workflow, and the skills it needs to load. For this task: <code>playwright-cli</code>, <code>selectors</code>, <code>page-objects</code>, <code>test-standards</code>, and <code>data-strategy</code>.</p>
<p><strong>Step 2: Explore the application</strong></p>
<pre><code class="language-plaintext">Agent → playwright-cli:
"Navigate to /register. List all form fields (role, label, placeholder),
buttons (role, name), headings, and any validation messages visible."
</code></pre>
<p>Discovery result:</p>
<pre><code class="language-plaintext">Heading: "Create your account" (role: heading, level: 1)
Input: label "First name" - required
Input: label "Last name" - required
Input: label "Email address" - required, type: email
Input: label "Password" - required, type: password, placeholder: "At least 8 characters"
Input: label "Confirm password" - required, type: password
Button: "Create account" (role: button)
Link: "Already have an account? Sign in" (role: link)
</code></pre>
<p><strong>Step 3: Check existing patterns</strong></p>
<p>The agent reads <code>pages/app/</code> to see how existing page objects are structured. It reads <code>test-data/factories/app/</code> to see how existing factories are formatted. It confirms the import paths and naming conventions.</p>
<p><strong>Step 4: Generate the page object</strong></p>
<pre><code class="language-typescript">// pages/app/registration.page.ts
import { Page } from "@playwright/test";

// Shape of the data the register() action expects
export interface RegistrationData {
  firstName: string;
  lastName: string;
  email: string;
  password: string;
}

export class RegistrationPage {
  constructor(private readonly page: Page) {}

  get firstNameInput() {
    return this.page.getByLabel("First name");
  }

  get lastNameInput() {
    return this.page.getByLabel("Last name");
  }

  get emailInput() {
    return this.page.getByLabel("Email address");
  }

  get passwordInput() {
    // exact match keeps this from also resolving to the "Confirm password" label
    return this.page.getByLabel("Password", { exact: true });
  }

  get confirmPasswordInput() {
    return this.page.getByLabel("Confirm password");
  }

  get createAccountButton() {
    return this.page.getByRole("button", { name: "Create account" });
  }

  get firstNameError() {
    return this.page.getByText("First name is required");
  }

  async register(user: RegistrationData) {
    await this.firstNameInput.fill(user.firstName);
    await this.lastNameInput.fill(user.lastName);
    await this.emailInput.fill(user.email);
    await this.passwordInput.fill(user.password);
    await this.confirmPasswordInput.fill(user.password);
    await this.createAccountButton.click();
  }

  async goto() {
    await this.page.goto("/register");
  }
}
</code></pre>
<p><strong>Step 5: Register the page object</strong></p>
<p>The agent adds <code>registrationPage</code> to <code>fixtures/pom/page-object-fixture.ts</code> so tests can access it via dependency injection.</p>
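<p>That registration is a small, mechanical diff. A sketch of what it typically looks like with Playwright's fixture API (the type name <code>PageObjectFixtures</code> is assumed here for illustration - the existing shape of your fixture file may differ):</p>
<pre><code class="language-typescript">// fixtures/pom/page-object-fixture.ts (sketch)
import { test as base } from "@playwright/test";
import { RegistrationPage } from "pages/app/registration.page";

type PageObjectFixtures = {
  registrationPage: RegistrationPage;
};

export const test = base.extend&lt;PageObjectFixtures&gt;({
  // Each fixture builds its page object once per test and hands it over via use()
  registrationPage: async ({ page }, use) =&gt; {
    await use(new RegistrationPage(page));
  },
});
</code></pre>
<p>With that in place, any test can destructure <code>registrationPage</code> from its arguments and get a ready-made instance - which is exactly what the smoke test in Step 7 relies on.</p>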
<p><strong>Step 6: Generate the factory</strong></p>
<pre><code class="language-typescript">// test-data/factories/app/user.factory.ts
import { faker } from "@faker-js/faker";

export function createUser() {
  const firstName = faker.person.firstName();
  const lastName = faker.person.lastName();
  return {
    firstName,
    lastName,
    email: faker.internet.email({ firstName, lastName }),
    password: faker.internet.password({ length: 12, memorable: false }),
  };
}
</code></pre>
<p><strong>Step 7: Generate the smoke test</strong></p>
<pre><code class="language-typescript">// tests/app/functional/registration.spec.ts
import { test, expect } from "fixtures/pom/test-options";
import { createUser } from "test-data/factories/app/user.factory";

test(
  "new user can register successfully",
  { tag: "@smoke" },
  async ({ registrationPage, dashboardPage }) =&gt; {
    const user = createUser();

    await test.step("Given the user is on the registration page", async () =&gt; {
      await registrationPage.goto();
    });

    await test.step("When they fill in valid details and submit", async () =&gt; {
      await registrationPage.register(user);
    });

    await test.step("Then they are redirected to the dashboard", async () =&gt; {
      await expect(dashboardPage.welcomeHeading).toBeVisible();
    });
  },
);
</code></pre>
<p><strong>Step 8: Generate the regression tests</strong></p>
<pre><code class="language-typescript">// tests/app/functional/registration.spec.ts (continued)
import validationCases from "test-data/static/app/registrationErrors.json";

// Assumes RegistrationPage also exposes submitWithInvalid() and errorFor()
// helpers for driving one invalid field at a time.
for (const { field, value, error } of validationCases) {
  test(
    `shows error when ${field} is invalid`,
    { tag: "@regression" },
    async ({ registrationPage }) =&gt; {
      await registrationPage.goto();
      await registrationPage.submitWithInvalid(field, value);
      await expect(registrationPage.errorFor(field)).toHaveText(error);
    },
  );
}
</code></pre>
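<p>The agent also created the static data file these cases come from. It wasn't shown above, but given the destructured fields, a plausible shape for <code>registrationErrors.json</code> (the values and messages here are illustrative, not taken from a real run) is:</p>
<pre><code class="language-json">[
  { "field": "email", "value": "not-an-email", "error": "Please enter a valid email address" },
  { "field": "password", "value": "short", "error": "Password must be at least 8 characters" },
  { "field": "first name", "value": "", "error": "First name is required" }
]
</code></pre>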
<p><strong>Step 9: Run the tests</strong></p>
<pre><code class="language-bash">npx playwright test tests/app/functional/registration.spec.ts
</code></pre>
<pre><code class="language-plaintext">Running 4 tests using 1 worker
  ✓ new user can register successfully (2.3s)
  ✓ shows error when email is invalid (1.1s)
  ✓ shows error when password is invalid (1.0s)
  ✓ shows error when first name is invalid (0.9s)

  4 passed (6.2s)
</code></pre>
<p>The agent reports back: all tests pass. Task complete.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/b4c19b4b-bb51-4794-ac44-0b64d94e4d72.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>👀 What You Review</h2>
<p>You didn't write any of that. But your job isn't done. You review it.</p>
<p>What you're looking for:</p>
<ul>
<li><p><strong>Do the locators match what you see in the app?</strong> The agent used real discovery, but you know your application.</p>
</li>
<li><p><strong>Is the test actually testing the right thing?</strong> The happy path asserts the dashboard heading. Is that the right success indicator?</p>
</li>
<li><p><strong>Are the edge cases covered?</strong> The regression tests came from a static JSON file. Did the agent create sensible validation cases?</p>
</li>
<li><p><strong>Does it fit the codebase style?</strong> Compare against existing tests. Does this look like it belongs?</p>
</li>
</ul>
<p>This review takes 5-10 minutes. Writing everything from scratch would have taken half a day.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/d80e55ca-13f0-427c-8407-470eba494623.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🌱 Growing the Framework With AI</h2>
<p>This workflow doesn't just apply to new features. The same pattern works for:</p>
<ul>
<li><p><strong>Refactoring.</strong> "The navigation component was moved to a sidebar. Update the relevant page objects".</p>
</li>
<li><p><strong>New API endpoints.</strong> "The <code>/users</code> endpoint now returns a <code>role</code> field. Update the schema and any affected tests".</p>
</li>
<li><p><strong>Cleanup.</strong> "There are three page objects with duplicate navigation methods. Extract them into a shared component".</p>
</li>
</ul>
<p>The agent reads the current state of the codebase, makes targeted changes, runs the affected tests, and confirms nothing broke. You review the diff.</p>
<p>Over time, your role becomes less about writing tests and more about defining what <em>should</em> be tested. The thinking part of QA, not the typing part.</p>
<hr />
<h2>🎯 The Bigger Picture</h2>
<p>The scaffold, CLAUDE.md, the skills, the explore-first workflow - none of it is sorcery. Together they form a well-designed system that makes it easy for an agent to do the right thing.</p>
<p>The insight at the heart of agentic QA is simple: <strong>AI is most useful when it has clear constraints</strong>. A blank slate produces inconsistent results. A scaffold with rules, skills, and a workflow produces output you can trust.</p>
<p>You're not replacing the QA engineer. You're giving the QA engineer a tireless, fast, rule-following colleague who never complains about writing boilerplate.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/1ecf8793-b4f4-44eb-8f74-5ec6101d712e.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🚀 Get Started</h2>
<p>You now have complete instructions to get started with AI-assisted development.</p>
<p>You can find the Public README.md file for the scaffold on GitHub: <a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Playwright Scaffold</a></p>
<p>You can get access to the private GitHub repository here: <a href="https://buymeacoffee.com/idavidov/e/513835">Get Access</a></p>
<hr />
<p>🙏🏻 <strong>Thank you for reading this series!</strong> If you've made it to the end, you now have a complete picture of how agentic QA works, from the scaffold that makes it possible to the moment the tests go green. The tools are here, the patterns are proven, and the only thing left is to start building.</p>
<p>If this series helped you, I'd love to hear about it. See you in the community.</p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Explore First: Why the Agent Looks Before It Writes]]></title><description><![CDATA[Sound familiar? In the first article, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You st]]></description><link>https://idavidov.eu/explore-first-the-browser-use-workflow</link><guid isPermaLink="true">https://idavidov.eu/explore-first-the-browser-use-workflow</guid><category><![CDATA[agentic]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Agentic QA]]></category><category><![CDATA[QA]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[software development]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[claude-code]]></category><category><![CDATA[playwright]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Fri, 20 Mar 2026 08:03:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/65343736-bfb6-4bde-80b2-63874d03cfd3.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sound familiar? In the <a href="https://idavidov.eu/the-scaffold-playwright-ai">first article</a>, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You still have to know what to ask, verify what it wrote, adapt it to your project, and repeat for every file.</p>
<p>In the <a href="https://idavidov.eu/what-is-agentic-qa">second article</a>, we saw what makes an AI agent different from a chatbot. It reads your code, takes actions, and works inside your project. But here's the catch: an agent is only as good as the instructions it follows.</p>
<p>In the <a href="https://idavidov.eu/claude-md-teaching-the-ai-your-rules">third article</a>, we saw how <code>CLAUDE.md</code> gives the agent its rules and workflow. But rules without depth only get you so far. "Use the Page Object Model" is a rule, but how exactly do you structure a page object? What's the difference between a locator getter and an action method? How do you compose components into page objects?</p>
<p>In the <a href="https://idavidov.eu/skills-domain-expertise-on-demand">fourth article</a>, we gave the agent deep expertise through skill files. Now it knows <em>how</em> to build page objects, selectors, and fixtures. But there's still a gap: the agent has never seen your application.</p>
<hr />
<p>Here's a scenario every automation engineer knows. You ask an AI to generate a page object for your login page. It confidently produces:</p>
<pre><code class="language-typescript">get loginButton() {
    return this.page.getByTestId('login-btn');
}
</code></pre>
<p>You run the test. It fails. The button doesn't have that test ID. It never did. The AI made it up because it had no way to know the real structure of your page.</p>
<p>This is the core problem with AI code generation for UI testing: <strong>the AI is writing about a UI it has never seen</strong>. The result is locators that look generally correct but don't work.</p>
<p>The scaffold's answer to this is a principle called <strong>explore first</strong>, and a tool called <code>playwright-cli</code>.</p>
<hr />
<h2>🤔 What Is playwright-cli?</h2>
<p><code>playwright-cli</code> is a browser automation CLI that lets the AI agent control a real browser. Navigate to URLs, read the page's DOM, discover element roles and labels, take screenshots, and extract structured information.</p>
<p>When an agent has <code>playwright-cli</code>, it doesn't have to guess what's on your login page. It can go look.</p>
<pre><code class="language-bash"># The agent runs something like this before generating code
playwright-cli "Navigate to https://myapp.com/login and list all interactive
elements: their role, accessible name, and any associated label text"
</code></pre>
<p>What comes back is a real inventory of the page:</p>
<pre><code class="language-yaml">- role: heading
  name: "Sign in"
  level: 1
- role: textbox
  name: "Email address"
  required: true
- role: textbox
  name: "Password"
  required: true
- role: button
  name: "Sign in"
- role: link
  name: "Forgot your password?"
- role: link
  name: "Create an account"
</code></pre>
<p>Now when the agent writes a page object, it uses real information:</p>
<pre><code class="language-typescript">get emailInput() {
    return this.page.getByLabel('Email address');
}

get passwordInput() {
    return this.page.getByLabel('Password');
}

get signInButton() {
    return this.page.getByRole('button', { name: 'Sign in' });
}
</code></pre>
<p>These locators work on the first run. No guessing, no iteration, no debugging brittle selectors.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/9293bf3c-7766-4ef3-9685-f54380c2ae7d.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🗺️ The Explore-First Workflow</h2>
<p>The scaffold's <code>CLAUDE.md</code> makes exploration a required step before any code generation:</p>
<pre><code class="language-text">For UI pages:
1. Use playwright-cli to navigate to the target URL
2. Discover: element roles, accessible names, label text, form structure
3. Note any dynamic content or state-dependent elements
4. Only then: generate the page object

For API endpoints:
1. Make a real request to the endpoint
2. Capture: field names, data types, optional vs required fields
3. Note the exact error response structure
4. Only then: generate the Zod schema
</code></pre>
<p><strong>Skip exploration only</strong> when the user has already provided the exact structure. For everything else, the agent goes and looks first.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/3291fe0a-86fc-4c96-b01b-8e6f29fdc604.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🧩 What the Agent Discovers Beyond Selectors</h2>
<p>A browser exploration session doesn't just find locators. A thorough agent also discovers:</p>
<ul>
<li><p><strong>Navigation flows.</strong> What happens after you click "Sign in"? Where does the page go? What element should the test assert against to confirm success?</p>
</li>
<li><p><strong>Form validation.</strong> Does the form validate on blur or on submit? What do the error messages actually say? ("Email is required" or "Please enter your email address"?)</p>
</li>
<li><p><strong>Dynamic content.</strong> Is there a loading spinner? A toast notification? An element that only appears after an API call? These affect how the test should wait for state.</p>
</li>
<li><p><strong>Page structure.</strong> Is the "Settings" link in a sidebar, a dropdown, or a navigation bar? This determines whether it belongs in the page object or a shared component.</p>
</li>
</ul>
<p>All of this context shapes better tests - tests that reflect how the application actually works, not how the agent imagined it might work.</p>
<hr />
<h2>🔌 Exploration for APIs</h2>
<p>The same principle applies to API testing. When no API documentation is available, the agent makes a real request to the endpoint before writing a Zod schema:</p>
<pre><code class="language-http"># The agent calls the actual API
POST /auth/login
{ "email": "test@example.com", "password": "secret" }
</code></pre>
<p>Response:</p>
<pre><code class="language-json">{
  "id": 42,
  "email": "test@example.com",
  "firstName": "Test",
  "lastName": "User",
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expiresAt": "2026-03-01T00:00:00.000Z"
}
</code></pre>
<p>Now the schema is built from reality:</p>
<pre><code class="language-typescript">export const LoginResponseSchema = z.strictObject({
  id: z.number(),
  email: z.string().email(),
  firstName: z.string(),
  lastName: z.string(),
  token: z.string(),
  expiresAt: z.string().datetime(),
});
</code></pre>
<p>Every field name is correct. Every type is verified. <code>z.strictObject()</code> means if the API later adds an unexpected field, the test flags it immediately.</p>
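<p>If it helps to see the mechanics, here is the strict-key idea reduced to plain TypeScript - an illustrative sketch only, since the framework itself gets this behaviour from Zod's <code>z.strictObject()</code>:</p>
<pre><code class="language-typescript">// Keys the schema knows about, matching the login response above
const expectedKeys = ["id", "email", "firstName", "lastName", "token", "expiresAt"];

// Returns every key in the payload that the schema does not declare
function findUnexpectedKeys(payload: object): string[] {
  const unexpected: string[] = [];
  for (const key of Object.keys(payload)) {
    if (!expectedKeys.includes(key)) {
      unexpected.push(key);
    }
  }
  return unexpected;
}

const response = {
  id: 42,
  email: "test@example.com",
  firstName: "Test",
  lastName: "User",
  token: "...",
  expiresAt: "2026-03-01T00:00:00.000Z",
  isAdmin: false, // a field the API added without warning
};

console.log(findUnexpectedKeys(response)); // [ 'isAdmin' ]
</code></pre>
<p>A loose schema would silently accept <code>isAdmin</code>; a strict one turns it into an immediate, visible failure.</p>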
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/c5285461-8ebf-4083-8ec9-e8373e188ab2.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>⚠️ When Exploration Reveals Surprises</h2>
<p>Sometimes what the agent finds is not what you expected, and that's valuable information.</p>
<p>The label on the form says "E-mail" (with a hyphen), not "Email". The button says "Log in", not "Login". The error message says "Your password is incorrect." with a trailing period. These small differences matter for locators and assertions.</p>
<p>Without exploration, the agent would have guessed and been wrong. With exploration, it finds the truth, and so do you.</p>
<hr />
<h2>🧑‍💻 Your Role in the Explore-First Loop</h2>
<p>With this workflow, your job shifts. You're not writing locators. You're directing exploration.</p>
<p><strong>Before:</strong> You open DevTools, inspect the DOM, copy selectors, paste them, test them, adjust them.</p>
<p><strong>After:</strong> You tell the agent what page to explore and what to generate. You review the output, confirm the locators look right, run the tests.</p>
<p>The agent does the tedious part. You do the thinking part.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/d4838304-67bc-4b8e-aaec-0ca8dbdb67e1.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<p>🙏🏻 <strong>Thank you for reading!</strong> The explore-first principle is what makes AI-generated tests reliable instead of plausible. In the final article, we bring everything together: a complete end-to-end example of an agentic QA session from prompt to passing test.</p>
<p>You can find the Public README.md file for the scaffold on GitHub: <a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Playwright Scaffold</a></p>
<p>You can get access to the private GitHub repository here: <a href="https://buymeacoffee.com/idavidov/e/513835">Get Access</a></p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Skills: Domain Expertise on Demand]]></title><description><![CDATA[Sound familiar? In the first article, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You st]]></description><link>https://idavidov.eu/skills-domain-expertise-on-demand</link><guid isPermaLink="true">https://idavidov.eu/skills-domain-expertise-on-demand</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[agents]]></category><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[Agentic QA]]></category><category><![CDATA[framework]]></category><category><![CDATA[playwright]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Fri, 13 Mar 2026 06:07:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/f5cfa581-aa11-4b57-8e60-6940312edc8e.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sound familiar? In the <a href="https://idavidov.eu/the-scaffold-playwright-ai">first article</a>, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You still have to know what to ask, verify what it wrote, adapt it to your project, and repeat for every file.</p>
<p>In the <a href="https://idavidov.eu/what-is-agentic-qa">second article</a>, we saw what makes an AI agent different from a chatbot. It reads your code, takes actions, and works inside your project. But here's the catch: an agent is only as good as the instructions it follows.</p>
<p>In the <a href="https://idavidov.eu/claude-md-teaching-the-ai-your-rules">third article</a>, we saw how <code>CLAUDE.md</code> gives the agent its rules and workflow. But rules without depth only get you so far. "Use the Page Object Model" is a rule, but how exactly do you structure a page object? What's the difference between a locator getter and an action method? How do you compose components into page objects?</p>
<hr />
<p>This is where <strong>skill files</strong> come in.</p>
<hr />
<h2>🤔 What Is a Skill File?</h2>
<p>A skill file is a detailed, focused markdown document that covers one area of your testing framework in depth. Where <code>CLAUDE.md</code> tells the agent <em>what</em> to do, skills tell the agent <em>how</em> to do it.</p>
<p>The scaffold comes with 13 skills, all living in <code>.claude/skills/</code>:</p>
<pre><code class="language-markdown">.claude/skills/
├── selectors/SKILL.md        → Locator priority, forbidden patterns, examples
├── page-objects/SKILL.md     → POM structure, getters vs actions, registration
├── fixtures/SKILL.md         → DI pattern, fixture creation, merging
├── test-standards/SKILL.md   → Test structure, imports, tagging, steps
├── api-testing/SKILL.md      → apiRequest fixture, schema validation
├── data-strategy/SKILL.md    → Factories vs static JSON, Faker usage
├── type-safety/SKILL.md      → Zod schemas, TypeScript strict mode
├── enums/SKILL.md            → Enum conventions and naming
├── config/SKILL.md           → Config patterns, environment variables
├── helpers/SKILL.md          → Helper functions, auth helpers
├── browser-use/SKILL.md      → Live app exploration with browser automation
├── common-tasks/SKILL.md     → Prompt templates, verification checklist
└── refactor-values/SKILL.md  → Safe refactoring of enums and static data
</code></pre>
<p>Each one is loaded on demand, not all at once. The agent reads only the skill that's relevant to what it's currently doing.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/d43001b5-0d29-43f1-a93a-318c5c307852.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🧠 Why Not Put Everything in CLAUDE.md?</h2>
<p>A single giant instruction file has a real problem: the further down the page a rule lives, the less likely the AI is to apply it consistently. Context windows have limits, and even when length isn't the bottleneck, a 5,000-word instruction file is harder to reason about than a 300-word one.</p>
<p>Skills solve this with <strong>lazy loading</strong>. The agent reads the orchestrator (<code>CLAUDE.md</code>) first. Short, high-level, fast. When it needs to work on a specific area, it reads the relevant skill. Deep expertise, loaded exactly when needed.</p>
<pre><code class="language-markdown">Agent working on a page object:
  → CLAUDE.md: "Read selectors and page-objects skills for pages/**"
  → Loads selectors/SKILL.md
  → Loads page-objects/SKILL.md
  → Now has full expertise for this specific task
  → Generates code confidently and correctly
</code></pre>
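<p>The routing step above can be sketched in code. This is an illustration of the idea, not the agent's actual mechanism - the routing table below is hypothetical, though the file names mirror the scaffold's layout:</p>
<pre><code class="language-typescript">// Sketch of lazy loading: skills are matched against the path the agent
// is working on, and only the matching ones are read into context.
const skillRouting = [
  { prefix: 'pages/', skills: ['selectors/SKILL.md', 'page-objects/SKILL.md'] },
  { prefix: 'fixtures/', skills: ['fixtures/SKILL.md'] },
  { prefix: 'tests/', skills: ['test-standards/SKILL.md'] },
];

function skillsFor(filePath: string): string[] {
  const matched: string[] = [];
  for (const route of skillRouting) {
    // Load only what matches; everything else stays out of the context window
    if (filePath.startsWith(route.prefix)) {
      matched.push(...route.skills);
    }
  }
  return matched;
}

console.log(skillsFor('pages/checkout/checkout.page.ts'));
// selectors and page-objects skills, nothing else
</code></pre>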
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/e2aa6e0a-f772-499e-9fa9-21c9a43eabce.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>📖 What's Inside a Skill File?</h2>
<p>Let's look at what the <code>selectors</code> skill actually contains:</p>
<p><strong>The priority rule with reasoning:</strong></p>
<pre><code class="language-markdown">Priority order:

1. getByRole() - Tests behaviour, survives CSS/ID changes
2. getByLabel() - Tied to accessible form labels
3. getByPlaceholder() - Tied to input hint text
4. getByText() - Matches visible text content
5. getByTestId() - Last resort, when no semantic option exists

Forbidden: XPath, CSS class selectors (.btn-primary)
</code></pre>
<p><strong>Examples of correct vs incorrect usage:</strong></p>
<pre><code class="language-typescript">// ✅ Correct
page.getByRole("button", { name: "Submit Order" });
page.getByLabel("Email address");

// ❌ Wrong: fragile, will break on CSS changes
page.locator(".submit-btn");
page.locator("#email-input");
page.locator('xpath=//button[@class="btn"]');
</code></pre>
<p><strong>Edge cases:</strong></p>
<pre><code class="language-markdown">- Buttons with icons only: use getByRole('button', { name: /icon label/i })
- Dynamic text: use getByRole with partial name match
- Inside a shadow DOM: note the limitation, escalate to human review
</code></pre>
<p>A skill file is essentially what you'd tell a new team member on their first code review. "We do it this way, here's why, here are the gotchas."</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/b6a40039-5d02-4f43-88f4-593c5c49fcb7.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🔗 The <code>common-tasks</code> Skill</h2>
<p>One skill deserves special attention: <code>common-tasks</code>. It contains <strong>prompt templates</strong>, pre-written instructions for the most frequent tasks an agent performs:</p>
<pre><code class="language-markdown">## Create a Page Object

Before starting:

1. Read selectors/SKILL.md and page-objects/SKILL.md
2. Navigate to the page using browser-use
3. Discover: button names, label text, heading roles

Generate:

- pages/{area}/{name}.page.ts
- Register in fixtures/pom/page-object-fixture.ts

Verification checklist:

- [ ] No XPath
- [ ] No hardcoded strings in test files
- [ ] No `any` type
- [ ] Linting passes
- [ ] Tests run and pass
</code></pre>
<p>This is the agent's playbook. Instead of reasoning from scratch about how to create a page object, it has a structured checklist to follow. The output is consistent because the process is consistent.</p>
<hr />
<h2>📝 Writing Your Own Skill Files</h2>
<p>The skills that ship with the scaffold are a starting point. As your project grows, you'll discover new conventions that need documenting, and new anti-patterns that need blocking.</p>
<p>A good skill file has four parts:</p>
<ol>
<li><p><strong>The rule:</strong> What the agent should do (or not do) in this area</p>
</li>
<li><p><strong>The reasoning:</strong> Why the rule exists (helps the agent generalize)</p>
</li>
<li><p><strong>Code examples:</strong> Correct and incorrect, clearly labeled</p>
</li>
<li><p><strong>Edge cases:</strong> The situations where the rule gets tricky</p>
</li>
</ol>
<p>Keep them short. A concise skill file gets used; a verbose one gets ignored.</p>
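<p>Putting the four parts together, a new skill file can start from a skeleton like this. The skill itself is hypothetical - adapt the headings and rules to your own project:</p>
<pre><code class="language-markdown"># SKILL: date-pickers

## Rule

Interact with date pickers through the input field, never by clicking calendar cells.

## Why

Calendar cell locators depend on the current month and break whenever the layout shifts.

## Examples

✅ page.getByLabel('Start date').fill('2026-03-01')
❌ page.locator('.calendar-day-14').click()

## Edge cases

- Ranges: fill the start date before the end date, the widget validates order
- Read-only inputs: escalate to human review
</code></pre>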
<hr />
<h2>🔄 Skills as a Team Knowledge Base</h2>
<p>Here's something that's easy to miss. Skill files aren't just for AI. They're documentation of your team's decisions.</p>
<p>When a new colleague joins your team, they can read the skill files to understand why things are done the way they are. When you make a decision in a code review ("we always use <code>z.strictObject()</code> because here's what happened that one time"), you add it to the relevant skill file. The AI and the humans stay aligned.</p>
<p>Over time, the <code>.claude/skills/</code> folder becomes a living record of your team's accumulated QA knowledge.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/42dc4968-ada5-4491-b455-db87b86261ee.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<p>🙏🏻 <strong>Thank you for reading!</strong> Skills are what separate a consistent AI-generated codebase from a chaotic one. But rules and skills only get the agent so far. It also needs to see the actual application before writing code for it. That's the subject of the next article.</p>
<p>You can find the Public README.md file for the scaffold on GitHub: <a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Playwright Scaffold</a></p>
<p>You can get access to the private GitHub repository here: <a href="https://buymeacoffee.com/idavidov/e/513835">Get Access</a></p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[CLAUDE.md: Teaching the AI Your Rules]]></title><description><![CDATA[Sound familiar? In the first article, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You st]]></description><link>https://idavidov.eu/claude-md-teaching-the-ai-your-rules</link><guid isPermaLink="true">https://idavidov.eu/claude-md-teaching-the-ai-your-rules</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[Agentic QA]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[#ai-tools]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Fri, 06 Mar 2026 05:58:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/50ae29c6-743a-4718-9f68-ba319cf5e18f.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sound familiar? In the <a href="https://idavidov.eu/the-scaffold-playwright-ai">first article</a>, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You still have to know what to ask, verify what it wrote, adapt it to your project, and repeat for every file.</p>
<p>In the <a href="https://idavidov.eu/what-is-agentic-qa">second article</a>, we saw what makes an AI agent different from a chatbot. It reads your code, takes actions, and works inside your project. But here's the catch: an agent is only as good as the instructions it follows.</p>
<hr />
<p>Every team has conventions. Use this selector pattern. Put files here. Never do that. Import from this file, not that one.</p>
<p>The problem is that conventions live in people's heads. A new team member, human or AI, doesn't know them until they make a mistake. With humans, you do a code review and explain. With AI, you get the same mistake every single time, because it resets to zero with every new conversation.</p>
<p><code>CLAUDE.md</code> solves this. It's a file that the AI reads at the start of every session, every time, before generating a single line of code.</p>
<hr />
<h2>🤔 What Is CLAUDE.md?</h2>
<p>It's a markdown file at the root of the project, but it's not documentation for humans. It's an <strong>instruction set for an AI agent</strong>.</p>
<p>When Claude Code (or any Claude-based agent) opens your project, it reads <code>CLAUDE.md</code> first. Everything in that file shapes how the agent behaves throughout the session. What patterns it follows, what it checks before writing, what it refuses to do.</p>
<p>Think of it as onboarding documentation that actually gets read.</p>
<blockquote>
<p><strong>Not just for Claude.</strong> The scaffold provides the same rules in three formats: <code>CLAUDE.md</code> for Claude Code, <code>.cursor/rules/</code> for Cursor, and <code>.github/copilot-instructions.md</code> for GitHub Copilot. Same architecture, different file paths. This article uses <code>CLAUDE.md</code> as the example, but the pattern applies regardless of which AI tool you use.</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/3327fc18-896b-434d-8176-e90e00907685.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🏛️ The Constitution Pattern</h2>
<p>The scaffold organizes <code>CLAUDE.md</code> around three tiers of rules, a structure borrowed from how legal systems distinguish mandatory law, guidance, and prohibition:</p>
<pre><code class="language-plaintext">MUST    → Mandatory. No exceptions, no debate.
SHOULD  → Recommended. The agent prefers this unless there's a reason not to.
WON'T   → Forbidden. The agent refuses to do this regardless of what's asked.
</code></pre>
<p>This three-tier structure is important. If everything is mandatory, the agent has no room to exercise judgment. If everything is flexible, the agent defaults to its own habits, which may not match yours. The tiers give the agent a clear mental model of what's negotiable and what isn't.</p>
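<p>Sketched as a file, the three tiers might look like this. This is an illustrative skeleton, not the scaffold's actual <code>CLAUDE.md</code>:</p>
<pre><code class="language-markdown"># Project Rules for AI Agents

## MUST

- Import `test` and `expect` from fixtures/pom/test-options.ts
- Prefer getByRole() over every other locator strategy

## SHOULD

- Explore the page or endpoint before generating code

## WON'T

- Never use XPath or page.waitForTimeout()

## Workflow

1. Read this file
2. Read the relevant skill for the files you touch
3. Generate, then run the tests before reporting done
</code></pre>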
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/33c4d2bb-c6e4-46c3-ad7c-741a41670830.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>✅ The MUST Rules</h2>
<p>These are non-negotiable. The scaffold's mandatory rules cover the things that, if violated, break the architecture:</p>
<pre><code class="language-markdown">| Rule                 | Requirement                                                   |
| -------------------- | ------------------------------------------------------------- |
| Dependency Injection | Never use `new PageObject(page)` in tests                     |
| Imports              | Import `test` and `expect` from fixtures/pom/test-options.ts  |
| Selectors            | getByRole() &gt; getByLabel() &gt; getByPlaceholder() &gt; getByText() |
| Type Safety          | Use Zod schemas in fixtures/api/schemas/, no `any` type       |
| Strict Schemas       | Always use z.strictObject(), never z.object()                 |
| Assertions           | Web-first assertions only, never waitForTimeout()             |
</code></pre>
<p>Notice how specific these are. "Use semantic selectors" is too vague. The agent has to interpret what that means. "Use <code>getByRole()</code> before <code>getByLabel()</code> before <code>getByPlaceholder()</code>" is unambiguous. The agent follows a decision tree.</p>
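<p>The first row of the table - dependency injection - is easiest to see in a sketch. The code below is not Playwright's real fixture API, just a stripped-down illustration of the idea behind the rule:</p>
<pre><code class="language-typescript">// NOT Playwright's fixture API - a minimal sketch of why the DI rule
// exists: one place constructs page objects, tests just receive them.
class LoginPage {
  constructor(private readonly path: string) {}
  url(): string {
    return this.path;
  }
}

// The "fixture" layer: the only place allowed to call `new`
function buildFixtures() {
  return {
    loginPage: new LoginPage('/login'),
  };
}

// A test body never writes `new LoginPage(page)` - it destructures instead
const { loginPage } = buildFixtures();
console.log(loginPage.url()); // prints /login
</code></pre>
<p>Because construction happens in one place, renaming a constructor parameter or adding a dependency touches the fixture file, not every test.</p>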
<hr />
<h2>💡 The SHOULD Rules</h2>
<p>These are recommendations for when the agent has a choice:</p>
<pre><code class="language-markdown">| Rule            | Recommendation                                                |
| --------------- | ------------------------------------------------------------- |
| Explore First   | Navigate to the page or endpoint before generating code       |
| Data Generation | Use Faker via factories for all happy-path test data          |
| Test Isolation  | Tests should be independent. Use beforeEach, not shared state |
| Test Steps      | Use Given/When/Then structure with test.step()                |
</code></pre>
<p>"Explore First" is here rather than in MUST because there are legitimate cases where you've already provided the element structure. The agent uses its judgment.</p>
<hr />
<h2>🚫 The WON'T Rules</h2>
<p>These are hard blocks, things the agent should refuse to do even if you ask nicely:</p>
<pre><code class="language-markdown">| Rule                 | Violation                                      |
| -------------------- | ---------------------------------------------- |
| No XPath             | Never use XPath selectors                      |
| No Hard Waits        | Never use page.waitForTimeout()                |
| No `any`             | Never use TypeScript's any type                |
| No Multiple Tags     | Each test has exactly ONE tag                  |
| No Hardcoded Content | Never hardcode test strings. Use Faker instead |
| No Loose Schemas     | Never use z.object(), always z.strictObject()  |
</code></pre>
<p>The WON'T list is where institutional knowledge lives. These are the mistakes your team has made (or seen others make) and decided to never repeat. Writing them down here means the AI inherits that hard-won experience.</p>
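<p>The loose-schema rule is worth a concrete illustration. The sketch below is not Zod - it's a hand-rolled check that shows the behaviour <code>z.strictObject()</code> adds on top of <code>z.object()</code>: unknown keys are rejected instead of silently ignored.</p>
<pre><code class="language-typescript">// Hand-rolled illustration (not Zod): a loose check ignores extra keys,
// a strict check rejects them - which is how a renamed backend field
// gets caught immediately instead of slipping through.
function validate(
  shape: { [key: string]: string },
  data: { [key: string]: unknown },
  strict: boolean
): boolean {
  for (const key of Object.keys(shape)) {
    // Every declared field must exist with the declared primitive type
    if (typeof data[key] !== shape[key]) return false;
  }
  if (strict) {
    for (const key of Object.keys(data)) {
      // Strict mode: any key the schema does not declare is a failure
      if (!(key in shape)) return false;
    }
  }
  return true;
}

const userShape = { id: 'number', email: 'string' };
const response = { id: 1, email: 'a@b.com', legacy_name: 'oops' };

console.log(validate(userShape, response, false)); // true: extra key ignored
console.log(validate(userShape, response, true)); // false: extra key rejected
</code></pre>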
<hr />
<h2>🗺️ The Workflow Section</h2>
<p>Beyond rules, <code>CLAUDE.md</code> also defines a <strong>step-by-step workflow</strong> the agent follows for every code generation task:</p>
<pre><code class="language-plaintext">1. Read this file (always loaded)
2. Explore the application (navigate or make API requests)
3. Find existing code to follow as a pattern
4. Use fixtures, never manual instantiation
5. Generate data with factories
6. Check against the WON'T rules
7. Run the tests. Don't report done until they pass
</code></pre>
<p>Step 7 is particularly important. Without it, the agent might generate code that looks right but fails at runtime. With it, the agent runs the tests and confirms they pass before telling you the task is done.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/81e9566b-ac2a-4535-b413-124ed3bf89f9.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>📁 The Skills Index</h2>
<p><code>CLAUDE.md</code> also acts as a map to the skills files. Rather than cramming every detail into one file (which would be overwhelming), it tells the agent: <em>"When you're working on page objects, read the page-objects skill. When you're working on API schemas, read the api-testing skill."</em></p>
<pre><code class="language-markdown">| Skill          | Read When Working On             |
| -------------- | -------------------------------- |
| selectors      | pages/**                         |
| page-objects   | pages/**                         |
| fixtures       | fixtures/**                      |
| test-standards | tests/**                         |
| api-testing    | fixtures/api/**, tests/**/api/** |
| data-strategy  | test-data/**                     |
</code></pre>
<p>This keeps <code>CLAUDE.md</code> focused and fast to parse, while the detailed expertise lives in dedicated skill files. That brings us to the next article.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/7f20f0a7-fab1-4e09-98cb-59327c586ad9.webp" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>✍️ Writing Your Own CLAUDE.md</h2>
<p>You don't have to start from scratch. The scaffold's <code>CLAUDE.md</code> is a template. To adapt it to your project:</p>
<ol>
<li><p><strong>Change the file paths</strong> to match your actual folder names</p>
</li>
<li><p><strong>Add your team's specific WON'Ts</strong>: the anti-patterns you've personally seen cause problems</p>
</li>
<li><p><strong>Update the workflow</strong> if your project has a setup step or a specific CI check to run</p>
</li>
<li><p><strong>Keep it honest</strong>: don't add rules you don't actually enforce, or the agent will contradict your existing codebase</p>
</li>
</ol>
<p>The goal isn't a perfect document on day one. It's a living file that gets better every time you catch the agent doing something you don't want.</p>
<hr />
<p>🙏🏻 <strong>Thank you for reading!</strong> <code>CLAUDE.md</code> is the file that turns a general AI into a specialized team member. But rules alone aren't enough. The agent also needs deep expertise for specific tasks. That's what skill files provide. See you in the next one.</p>
<p>You can find the Public README.md file for the scaffold on GitHub: <a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Playwright Scaffold</a></p>
<p>You can get access to the private GitHub repository here: <a href="https://buymeacoffee.com/idavidov/e/513835">Get Access</a></p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[What Is Agentic QA and Why It Changes Everything]]></title><description><![CDATA[Sound familiar? In the first article, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You st]]></description><link>https://idavidov.eu/what-is-agentic-qa</link><guid isPermaLink="true">https://idavidov.eu/what-is-agentic-qa</guid><category><![CDATA[agentic AI]]></category><category><![CDATA[Agentic QA]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[playwright]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Wed, 04 Mar 2026 06:12:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/8789bd6f-0120-42c2-96eb-f245fcbd6c5c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sound familiar? In the <a href="https://idavidov.eu/the-scaffold-playwright-ai">first article</a>, we set up a project scaffold designed for AI. But a good structure only gets you so far if the AI is just a code suggester. Useful, but not transformative. You still have to know what to ask, verify what it wrote, adapt it to your project, and repeat for every file.</p>
<hr />
<p>"I asked Claude Code to write a test and it gave me something that kind of works."</p>
<p>Sound familiar? That's AI as a <strong>code suggester</strong>. Useful, but not transformative. You still have to know what to ask, verify what it wrote, adapt it to your project, and repeat for every file.</p>
<p>Agentic QA is different. Instead of an AI that answers questions, you have an AI that <strong>takes actions</strong>. It reads your codebase, opens a browser, navigates your application, discovers elements, and generates code that fits directly into your existing structure.</p>
<p>It's the difference between a search engine and an employee.</p>
<hr />
<h2>🤔 Agent vs Chatbot: What's the Actual Difference?</h2>
<p>A chatbot responds to a single message. It has no memory of your project, no access to your files, and no ability to take actions beyond generating text.</p>
<p>An <strong>agent</strong> operates in a loop:</p>
<ol>
<li><p>It receives a goal (create a test suite for the registration page)</p>
</li>
<li><p>It takes actions to gather information (reads files, navigates the browser)</p>
</li>
<li><p>It uses what it found to produce an output (generates the page object, Faker factory, test suite, etc.)</p>
</li>
<li><p>It verifies the output (runs the tests, checks for linting errors)</p>
</li>
<li><p>It iterates if something is wrong</p>
</li>
</ol>
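<p>The loop above can be sketched in code. This is a conceptual simulation, not a real agent - every name here is illustrative:</p>
<pre><code class="language-typescript">// Conceptual sketch of the agent loop: act, verify, iterate, then stop.
interface Attempt {
  code: string;
  passes: boolean;
}

// Stand-in for "generate code and run the tests": this simulated agent
// only gets it right on the second try
function generateAndVerify(iteration: number): Attempt {
  return { code: 'attempt ' + iteration, passes: iteration !== 1 };
}

function runAgentLoop(maxIterations: number): Attempt | null {
  let iteration = 1;
  while (iteration !== maxIterations + 1) {
    const attempt = generateAndVerify(iteration); // take an action
    if (attempt.passes) {
      return attempt; // verification passed: report done
    }
    iteration += 1; // read the failure and iterate
  }
  return null; // still failing: escalate to the human
}

console.log(runAgentLoop(3)); // succeeds on the second iteration
</code></pre>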
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/0775a2c5-54d3-4f78-9d5f-fc0871238ace.png" alt="" style="display:block;margin:0 auto" />

<p>The agent isn't just writing code. It's operating inside your development environment, making decisions based on real information.</p>
<pre><code class="language-plaintext">You: Add an e2e test for the checkout flow

Agent:
  → Opens browser, navigates to the product page
  → Discovers: all the necessary actions to complete the checkout flow
  → Reads pages/ to understand existing page object conventions
  → Reads tests/ to understand the test structure and imports
  → Reads .claude/skills/test-standards to check test standards
  → Generates checkout.page.ts, checkout.spec.ts, factory functions, Zod schemas, etc.
  → Runs the test: npx playwright test checkout.spec.ts
  → Confirms: 1 passed, 0 failed
  → If something is wrong, it reads the error and iterates again
</code></pre>
<p>That entire loop happens without you writing a line. The agent is doing all the work.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/5c869e5a-773a-4149-a655-ef7a7a13d67b.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🎯 Why Test Automation Is a Perfect Fit</h2>
<p>Not every software task is well-suited for agents. But test automation has properties that make it almost ideal:</p>
<ul>
<li><p><strong>It's highly structured.</strong> Tests follow patterns. Page objects follow patterns. Data factories follow patterns. Patterns are exactly what AI agents are good at recognizing and reproducing.</p>
</li>
<li><p><strong>It's verifiable.</strong> After generating a test, the agent can run it. A passing test is objective confirmation that the output is correct. The agent doesn't have to guess. It can check.</p>
</li>
<li><p><strong>It's repetitive.</strong> Writing a page object for the tenth page in your app involves the same thinking as the first. That repetition is tedious for humans and trivial for agents.</p>
</li>
<li><p><strong>It requires codebase context.</strong> A good test has to fit the existing project. An agent that can read your files produces output that integrates cleanly. A chatbot produces output you have to adapt manually.</p>
</li>
</ul>
<blockquote>
<p><strong>"Human in the loop" is still mandatory.</strong> The agent is not a replacement for a human. It's a tool that helps you do your job faster and better. You still need to review the output and decide what to do next.</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/121b9d42-7780-4204-b962-1b869a2852f3.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🧠 The Mental Shift</h2>
<p>The biggest change is not technical. It is how you think about your role.</p>
<p><strong>Before agentic QA:</strong> You are the writer. You design the test, write the page object, wire up the fixture, create the factory, run the tests.</p>
<p><strong>With agentic QA:</strong> You are the reviewer and director. You built the architecture. You define the goal, review the output, catch anything the agent missed, and decide what to test next.</p>
<p>This doesn't mean less work. It means different work, higher-level work. You spend more time thinking about <em>what</em> to test and less time typing boilerplate.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/0d07501c-16ca-46e4-92b8-9cc720e02557.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🛠️ What Makes an Agent Reliable?</h2>
<p>Here's the thing nobody tells you. A raw AI agent, pointed at your codebase with no guidance, will produce inconsistent results. Sometimes good, sometimes generic, sometimes wrong in subtle ways.</p>
<p>What makes an agent reliable is <strong>context and constraints</strong>:</p>
<ul>
<li><p>Rules about what it must always do</p>
</li>
<li><p>Rules about what it must never do</p>
</li>
<li><p>Detailed expertise for specific tasks</p>
</li>
<li><p>A workflow it follows before generating code</p>
</li>
</ul>
<p>In this scaffold, all of that lives in two places: <code>CLAUDE.md</code> (the orchestrator) and <code>.claude/skills/</code> (the expertise files). Together they turn a general-purpose AI into something that behaves like a senior QA engineer who has been on your project for months.</p>
<p>That's exactly what the next two articles are about.</p>
<hr />
<p>🙏🏻 <strong>Thank you for reading!</strong> The concept of agentic QA sounds futuristic, but the tools exist today and the scaffold is built to use them. Next up: how <code>CLAUDE.md</code> works and why it's the most important file in the project.</p>
<p>You can find the Public README.md file for the scaffold on GitHub: <a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Playwright Scaffold</a></p>
<p>You can get access to the private GitHub repository here: <a href="https://buymeacoffee.com/idavidov/e/513835">Get Access</a></p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[The Scaffold: Playwright Project Structure Built for AI]]></title><description><![CDATA[Have you ever started a new Playwright project and spent the first two days just figuring out where things go? Where do selectors live? How do page objects get into tests? Where does test data belong?]]></description><link>https://idavidov.eu/the-scaffold-playwright-ai</link><guid isPermaLink="true">https://idavidov.eu/the-scaffold-playwright-ai</guid><category><![CDATA[playwright]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[AI]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Agentic QA]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Sun, 01 Mar 2026 06:36:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/ef473ddd-08fc-46e7-b1d9-f4a718c4314b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever started a new Playwright project and spent the first two days just figuring out where things go? Where do selectors live? How do page objects get into tests? Where does test data belong?</p>
<p>Most teams answer these questions ad hoc, and end up with a different answer every time. After a few months the codebase shows it: a mix of conventions, inconsistencies, and copy-pasted patterns that no one quite owns.</p>
<p>A <strong>scaffold</strong> solves this before it starts. But the one we're exploring here was designed with a twist. It wasn't just built for humans. It was built for AI agents to work with.</p>
<hr />
<h2>🤔 What Is a Scaffold?</h2>
<p>A scaffold is a pre-built project structure that answers all the "where does this go?" questions upfront. Before you write a single test, you already have:</p>
<ul>
<li><p>A folder for page objects</p>
</li>
<li><p>A folder for test data</p>
</li>
<li><p>A single import point for all fixtures</p>
</li>
<li><p>Conventions for selectors, naming, and test structure</p>
</li>
</ul>
<p>Think of it like a city grid. Before buildings go up, the streets are laid. You always know how to get from A to B because the plan was made in advance.</p>
<pre><code class="language-plaintext">pages/          → UI page objects and components
tests/          → Test scenarios, organized by area and type
test-data/      → Factories (dynamic) and static JSON (edge cases)
fixtures/       → Dependency injection: wires everything together
enums/          → No hardcoded strings anywhere
config/         → URLs and environment settings
.claude/        → AI instruction files: skills and the orchestrator
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/1e3dd36a-54b0-4f8e-a259-ed6ee1d4b111.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🧱 What the Scaffold Gives You</h2>
<p>Beyond folder structure, the scaffold comes with working patterns for every layer of a test suite:</p>
<ul>
<li><p><strong>Page Object Model:</strong> UI pages represented as TypeScript classes. Locators and actions in one place. When a selector changes, you update one file. <a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-pom-as-fixture-and-auth-user-session">POM</a></p>
</li>
<li><p><strong>Fixtures and Dependency Injection:</strong> Page objects arrive in tests already instantiated. No <code>new LoginPage(page)</code> boilerplate, no manual setup.</p>
</li>
<li><p><strong>Type-Safe API Testing:</strong> Zod - a runtime type validation library - validates API response shapes. If the backend changes a field name, the test fails immediately. <a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-api-fixtures">Zod</a></p>
</li>
<li><p><strong>Smart Test Data:</strong> Faker - a library for generating realistic dummy data - creates unique values every run. Static JSON handles edge cases and invalid inputs.</p>
</li>
<li><p><strong>Strict Linting:</strong> ESLint and Prettier enforce conventions automatically. Pre-commit hooks block anything that doesn't meet the standard. If you want to see this in action, the <a href="https://idavidov.eu/never-commit-broken-code-again-a-guide-to-eslint-and-husky-in-playwright">Never Commit Broken Code Again</a> article walks through the full setup.</p>
</li>
</ul>
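<p>To make the test-data idea concrete, here is a minimal, dependency-free sketch of a factory (the scaffold itself uses Faker for realistic values; the <code>buildUser</code> name and fields are purely illustrative):</p>
<pre><code class="language-typescript">// Hypothetical factory sketch: unique test data on every call,
// so parallel workers never collide on the same record.
// The real scaffold generates realistic values with Faker instead.
let sequence = 0;

export function buildUser(overrides = {}) {
  sequence += 1;
  return {
    email: `user-${sequence}-${Date.now()}@example.com`,
    name: `Test User ${sequence}`,
    password: `Secret-${sequence}!`,
    ...overrides,
  };
}
</code></pre>
<p>Each call returns a fresh user, while static JSON stays reserved for fixed edge cases and invalid inputs.</p>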
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/8823afc4-64c1-4bb9-9eec-da9525ee1f22.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🤖 But Here's the Real Point</h2>
<p>All of that is table stakes for a modern Playwright project. What makes this scaffold different is the <code>.claude/</code> folder.</p>
<p>At the root of the project there is a file called <code>CLAUDE.md</code> and a <code>.claude/</code> folder. Inside it, each skill has its own folder with its own instruction file. Together, they define a complete instruction set for an AI agent. Every rule, every pattern, every forbidden anti-pattern is written down in a format that an AI reads before generating a single line of code.</p>
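<p>What do those instruction files look like? A hypothetical excerpt, echoing the guardrails the scaffold enforces (no XPath, no hard waits, factory-driven data):</p>
<pre><code class="language-markdown"># CLAUDE.md - hypothetical excerpt

## Locators
- Prefer role-based locators (getByRole, getByLabel).
- XPath selectors are forbidden.

## Waits
- Never use hard waits (waitForTimeout).
- Rely on web-first assertions, which retry automatically.

## Test Data
- Generate dynamic data with factories; keep static JSON for edge cases.
</code></pre>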
<p><strong>The scaffold was designed so that an AI agent can build and extend it correctly, without supervision.</strong></p>
<p>The rest of this series is about exactly that: how agentic QA works, what those files contain, and how you can use AI to build a professional test automation framework faster than you thought possible.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/fad36549-3241-4f52-a907-18460dc2e04c.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🔌 Works With Your Tools</h2>
<p>The scaffold isn't locked to a single AI tool. The same rules and skills are provided in three formats:</p>
<ul>
<li><p><strong>Claude Code</strong> reads <code>CLAUDE.md</code> and <code>.claude/skills/</code></p>
</li>
<li><p><strong>Cursor</strong> reads <code>.cursor/rules/</code> and <code>.cursor/skills/</code></p>
</li>
<li><p><strong>GitHub Copilot</strong> reads <code>.github/copilot-instructions.md</code> and <code>.github/instructions/</code></p>
</li>
</ul>
<p>Same architecture, same conventions, same guardrails. You pick the tool you prefer and the scaffold works with it.</p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/5d274d0f-7b49-4390-b7c3-396fc76520b7.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>🐳 One Command to Start</h2>
<p>The scaffold ships with a Dev Container configuration. If you have Docker installed, you open the project and everything is ready: Node, Playwright browsers, Python, <code>browser-use</code>, and AI CLIs. No manual setup, no dependency hunting, no "it works on my machine". One command, fully configured environment.</p>
<hr />
<p>🙏🏻 <strong>Thank you for reading!</strong> This article was the setup. Starting from the next one, we get into the part that changes how you think about test automation. That's the AI side of it. See you there.</p>
<p>You can find the Public README.md file for the scaffold on GitHub: <a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Playwright Scaffold</a></p>
<p>You can get access to the private GitHub repository here: <a href="https://buymeacoffee.com/idavidov/e/513835">Get Access</a></p>
<blockquote>
<p>Every coffee you buy ☕ directly contributes to keeping this resource open and growing for everyone.</p>
</blockquote>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" style="display:block;margin:0 auto" /></a></p>
]]></content:encoded></item><item><title><![CDATA[Start Here: The Complete Roadmap to QA Automation & Engineering]]></title><description><![CDATA[Welcome! If you are looking for a structured path to mastering Test Automation, TypeScript for Automation QA, or Quality Engineering strategies, you have arrived at the right place.
Due to recent plat]]></description><link>https://idavidov.eu/roadmap</link><guid isPermaLink="true">https://idavidov.eu/roadmap</guid><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Sun, 18 Jan 2026 14:26:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768746166031/f65c7bf1-06d1-4274-87ee-0ad591feac29.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome! If you are looking for a structured path to mastering Test Automation, TypeScript for Automation QA, or Quality Engineering strategies, you have arrived at the right place.</p>
<p>Due to recent platform updates, some article series may appear out of order in the sidebar. This page serves as the <strong>definitive, chronological index</strong> for all my major educational series.</p>
<p>Bookmark this page (or pin it) to keep track of your learning progress.</p>
<hr />
<h2>🎬 Now on YouTube</h2>
<p>The articles give you the patterns and the code. But I always felt something was missing - the deeper architectural reasoning behind every decision. That's why I started <a href="https://www.youtube.com/@archqa">ArchQA</a>, my YouTube channel where I break down the fundamentals so you don't just follow along, you understand why.</p>
<p>Three playlists are live:</p>
<p><a href="https://www.youtube.com/playlist?list=PLYxABk1YARBwGAnMRlwJCt-TZ8rjbjh48">TypeScript for Automation QA</a></p>
<p><a href="https://www.youtube.com/playlist?list=PLYxABk1YARBy-GoFGWdUCUPxV9ZYOIzLx">Playwright Framework</a></p>
<p><a href="https://www.youtube.com/playlist?list=PLYxABk1YARBwgGO1DtOFTcPJ39aDUvvsL">Playwright Tips &amp; Tricks</a></p>
<p><a href="https://www.youtube.com/@archqa?sub_confirmation=1">Subscribe</a> to catch every new video.</p>
<hr />
<h2>🤖 Series: Agentic QA</h2>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/be8c6d43-1ed7-4004-8de6-c9b850ca7c8b.png" alt="" style="display:block;margin:0 auto" />

<p><em>A beginner-friendly series about using AI agents to build and maintain a professional Playwright test automation framework - from project scaffold to passing tests.</em></p>
<ol>
<li><p><a href="https://idavidov.eu/the-scaffold-playwright-ai">The Scaffold: Playwright Project Structure Built for AI</a></p>
<ul>
<li>A pre-built Playwright project structure designed for AI agents to build and extend correctly</li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/what-is-agentic-qa">What Is Agentic QA and Why It Changes Everything</a></p>
<ul>
<li>The difference between an AI that answers questions and one that takes actions in your codebase</li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/claude-md-teaching-the-ai-your-rules">CLAUDE.md: Teaching the AI Your Rules</a></p>
<ul>
<li>How a single markdown file turns a general AI into a team member that follows your conventions</li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/skills-domain-expertise-on-demand">Skills: Domain Expertise on Demand</a></p>
<ul>
<li>How focused skill files give your AI agent deep expertise exactly when it needs it</li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/explore-first-the-browser-use-workflow">Explore First: Why the Agent Looks Before It Writes</a></p>
<ul>
<li>How exploration gives your AI agent real page context before it writes a single locator</li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/from-prompt-to-passing-test">Prompt to Passing Test: A Complete Agentic QA Session</a></p>
<ul>
<li>Watch an AI agent go from a single prompt to page objects, factories, and passing Playwright tests</li>
</ul>
</li>
</ol>
<hr />
<h2>🏗️ Series: Building a Playwright Framework Step-by-Step</h2>
<img src="https://idavidov.eu/_next/image?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1760448023077%2F82701b65-9af7-43c6-a486-1466921c208a.png&amp;w=3840&amp;q=75" alt="Building Playwright Framework Step By Step" />

<p><em>A hands-on, code-heavy guide to building a scalable test automation framework from scratch using Playwright and TypeScript.</em></p>
<ol>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-initial-setup"><strong>Initial Setup</strong></a></p>
<ul>
<li><em>Setting the foundation for a professional framework.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-create-user-snippets"><strong>Create User Snippets</strong></a></p>
<ul>
<li><em>Boosting productivity with snippets in IDEs.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-setup-environment-variables"><strong>Setup Environment Variables</strong></a></p>
<ul>
<li><em>Managing sensitive data and configurations securely.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-setup-design-pattern"><strong>Setup Design Pattern</strong></a></p>
<ul>
<li><em>Structuring your project for maintainability.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-pom-as-fixture-and-auth-user-session"><strong>Implementing POM as Fixture and Auth User Session</strong></a></p>
<ul>
<li><em>Advanced Page Object Model usage and handling state.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-ui-tests"><strong>Implementing UI Tests</strong></a></p>
<ul>
<li><em>Writing robust end-to-end UI scenarios.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-api-fixtures"><strong>Implementing API Fixtures</strong></a></p>
<ul>
<li><em>Setting up reusable API components.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-api-tests"><strong>Implementing API Tests</strong></a></p>
<ul>
<li><em>Validating backend logic directly.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/building-playwright-framework-step-by-step-implementing-cicd"><strong>Implementing CI/CD</strong></a></p>
<ul>
<li><em>Automating execution with continuous integration pipelines.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/never-commit-broken-code-again-a-guide-to-eslint-and-husky-in-playwright">Never Commit Broken Code Again: A Guide to ESLint and Husky in Playwright</a></p>
<ul>
<li>Enforcing code quality automatically.</li>
</ul>
</li>
</ol>
<hr />
<h2>📘 Series: TypeScript for Automation QA</h2>
<img src="https://idavidov.eu/_next/image?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1749725571785%2F4717ad30-6c75-4d37-882a-d64c33be4392.jpeg&amp;w=3840&amp;q=75" alt="TypeScript for Automation QA" />

<p><em>Stop guessing and start typing. This series bridges the gap between basic scripting and professional software engineering.</em></p>
<ol>
<li><p><a href="https://idavidov.eu/your-first-steps-in-typescript-a-practical-roadmap-for-automation-qa"><strong>Your First Steps in TypeScript</strong></a></p>
<ul>
<li><em>Why TS matters for QA and how to start.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/how-to-use-arrays-and-objects-in-typescript-for-powerful-qa-automation-scripts"><strong>How to Use Arrays and Objects</strong></a></p>
<ul>
<li><em>Data manipulation essentials.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/how-to-master-fundamental-typescript-logic-for-smarter-automation-qa"><strong>Master Fundamental TypeScript Logic</strong></a></p>
<ul>
<li><em>Writing smarter, logic-driven tests.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/a-practical-guide-to-typescript-custom-types-for-qa-automation"><strong>A Practical Guide to Custom Types</strong></a></p>
<ul>
<li><em>Defining strict contracts for your data.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/stop-writing-brittle-tests-your-blueprint-for-a-scalable-typescript-pom"><strong>Stop Writing Brittle Tests: Scalable POM</strong></a></p>
<ul>
<li><em>Refactoring Page Objects for stability.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/stop-writing-flaky-tests-your-foundational-guide-to-async-in-playwright"><strong>Stop Writing Flaky Tests: Async in Playwright</strong></a></p>
<ul>
<li><em>Handling promises and awaits correctly.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/how-to-build-a-scalable-qa-framework-with-advanced-typescript-patterns"><strong>Build a Scalable QA Framework with Advanced Patterns</strong></a></p>
<ul>
<li><em>Taking your architecture to the next level.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/understanding-object-oriented-programming-in-the-context-of-automation-qa"><strong>Understanding OOP in Automation QA</strong></a></p>
<ul>
<li><em>Applying Object-Oriented principles to tests.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/upgrade-playwright-tests-typescript-mixin-design-pattern-guide"><strong>Upgrade Playwright Tests: Mixin Design Pattern</strong></a></p>
<ul>
<li><em>A guide to flexible class composition.</em></li>
</ul>
</li>
</ol>
<hr />
<h2>🧠 Series: Quality Engineering</h2>
<img src="https://idavidov.eu/_next/image?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1756886865540%2F10777283-f921-4222-9fd5-9305fdf0ba6a.png&amp;w=3840&amp;q=75" alt="Quality Engineering" />

<p><em>Moving beyond "writing scripts" to defining quality culture, strategy, and leadership.</em></p>
<ol>
<li><p><a href="https://idavidov.eu/the-complete-quality-engineering-roadmap"><strong>The Complete Quality Engineering Roadmap</strong></a></p>
<ul>
<li><em>The big picture of modern QA careers.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/essential-steps-for-creating-a-robust-software-quality-foundation-culture-roles-and-mindset"><strong>Creating a Robust Software Quality Foundation</strong></a></p>
<ul>
<li><em>Establishing culture, roles, and mindset.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/proactive-strategies-for-pre-development-success-requirements-stories-and-planning"><strong>Proactive Strategies for Pre-Development Success</strong></a></p>
<ul>
<li><em>Shift-left testing and requirements planning.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/mastering-in-sprint-quality-for-faster-releases-ci-code-reviews-and-collaboration"><strong>Mastering In-Sprint Quality</strong></a></p>
<ul>
<li><em>Code reviews, collaboration, and faster releases.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/developing-a-powerful-test-automation-strategy-frameworks-cicd-and-e2e-tests"><strong>Developing a Powerful Test Automation Strategy</strong></a></p>
<ul>
<li><em>Aligning tools and frameworks with business goals.</em></li>
</ul>
</li>
</ol>
<hr />
<h2>⚡ Series: Productivity</h2>
<img src="https://idavidov.eu/_next/image?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1749132591928%2F33d89025-8aeb-4571-a0e0-368cc2e9788e.avif&amp;w=3840&amp;q=75" alt="Productivity" />

<p><em>Tools to make you a more effective engineer.</em></p>
<ol>
<li><p><a href="https://idavidov.eu/one-file-to-rule-them-all-cursor-windsurf-and-vs-code"><strong>One File to Rule Them All: Cursor, Windsurf, and VS Code</strong></a></p>
<ul>
<li><em>Optimizing your IDE setup.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/the-ai-shift-why-specialized-models-are-the-next-wave-for-tech-teams"><strong>The AI Shift: Why Specialized Models are the Next Wave</strong></a></p>
<ul>
<li><em>How AI is reshaping technical teams.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/how-to-separate-ai-coding-hype-from-reality-for-real-world-qa-and-development"><strong>Hyped vs. Reality: AI in QA and Development</strong></a></p>
<ul>
<li><em>Pragmatic approaches to AI tools.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/how-to-use-ai-for-test-case-generation-a-practical-guide-to-empower-your-qa-team"><strong>AI for Test Case Generation</strong></a></p>
<ul>
<li><em>Practical guides to empowering your team.</em></li>
</ul>
</li>
<li><p><a href="https://idavidov.eu/youre-not-chatting-with-ai-youre-giving-it-a-job-heres-how"><strong>You're Not Chatting with AI, You're Giving it a Job</strong></a></p>
<ul>
<li><em>Prompt engineering techniques for engineers.</em></li>
</ul>
</li>
</ol>
<hr />
<h2>🚀 Personal Project</h2>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768747328126/5bccd92d-07e4-4b53-95ab-2e1e8067cfa5.jpeg" alt="" style="display:block;margin:0 auto" />

<h3><a href="https://idavidov.eu/test-case-generator"><strong>Gemini-Powered Test Case Generator App</strong></a></h3>
<img src="https://idavidov.eu/_next/image?url=https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1752419819713%2F7c68fec5-4f78-471d-85f2-9bb1a4132488.jpeg&amp;w=3840&amp;q=75" alt="From Frustration to Automation: How I Built a Gemini-Powered Test Case Generator App" />

<ul>
<li><em>Free Test Case Generator, utilizing Gemini by providing API Token</em></li>
</ul>
<h3><a href="https://buymeacoffee.com/idavidov/e/499589">The AI-Native Playwright Scaffold: Built for the Orchestrator Pattern with Cursor Rules</a></h3>
<img src="https://cdn.buymeacoffee.com/uploads/rewards/2026-01-16/1/123553_Gemini_Generated_Image_xmv33ixmv33ixmv3.png@1200w_0e.png" alt="Playwright Scaffold" />

<ul>
<li><p><em>Production-ready test automation framework built with TypeScript and Playwright</em></p>
</li>
<li><p><em>Providing a solid foundation for UI, API, and E2E testing.</em></p>
</li>
<li><p><em>Features a Page Object Model architecture, fixture-based dependency injection, Zod schema validation for APIs, and pre-configured authentication handling.</em></p>
</li>
<li><p><em>Includes ESLint, Prettier, and Husky for code quality, along with multi-browser support and comprehensive HTML reporting with traces and screenshots.</em></p>
</li>
</ul>
<h3><a href="http://buymeacoffee.com/idavidov/e/501071">Master Playwright: 33 Tips &amp; Tricks for Robust Automation</a></h3>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768914677618/425baa7a-9489-40c9-994f-1f954703d17e.jpeg" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p>33 self-contained, copy-paste ready TypeScript snippets designed to instantly upgrade your Playwright framework.</p>
</li>
<li><p>Organized into 7 practical categories, it teaches you advanced techniques like API interception, schema validation with Zod, global authentication setup, time travel, and visual masking.</p>
</li>
<li><p>All in a clear format you can immediately apply to your real-world projects.</p>
</li>
<li><p>Start building robust, stable, and intelligent quality tests today.</p>
</li>
</ul>
<h3><a href="https://buymeacoffee.com/idavidov/e/513835">AI-Native Playwright Scaffold: 13 Skills. Zero Config. One Command</a></h3>
<p><a href="https://github.com/idavidov13/Playwright-Scaffold-AI-Assisted-Development-Public">Public Repository</a></p>
<img src="https://cdn.hashnode.com/uploads/covers/684094468ee5eff3cfc91033/5fbfad1e-5cf3-4682-87b8-a276ae254d3b.jpg" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>3 Modular AI Skills:</strong> AI activates specific rules (POM, API, data-strategy) only on matching files. No more flat, generic prompts.</p>
</li>
<li><p><strong>The "Constitution":</strong> Strict guardrails (No XPath, no hard waits, strict Zod) enforced across AI instructions, ESLint, and Husky.</p>
</li>
<li><p><strong>Zero-Config Dev Container:</strong> One-click Docker setup with Node, Playwright, Python, and AI CLIs pre-installed.</p>
</li>
<li><p><strong>Browser-First AI:</strong> Built-in tools let your AI explore the live application's real DOM to generate accurate locators instead of guessing.</p>
</li>
<li><p><strong>Enterprise Architecture:</strong> Zod 4 type-safe API tests, clean Playwright fixture injection, and Faker-driven, parallel-safe data strategies.</p>
</li>
</ul>
<hr />
<h3>📬 Stay Updated</h3>
<p>If you found these roadmaps helpful, subscribe to the newsletter to get notified when new chapters are added to these series.</p>
<p><a href="https://www.buymeacoffee.com/idavidov"><img src="https://img.buymeacoffee.com/button-api/?text=Buy%20me%20a%20coffee&amp;emoji=%F0%9F%91%A8%E2%80%8D%F0%9F%92%BB&amp;slug=idavidov&amp;button_colour=FFDD00&amp;font_colour=000000&amp;font_family=Comic&amp;outline_colour=000000&amp;coffee_colour=ffffff" alt="Buy me a coffee" /></a></p>
]]></content:encoded></item><item><title><![CDATA[The AI Shift: Why Specialized Models are the Next Wave for Tech Teams]]></title><description><![CDATA[A Guide for Developers, QA, and Team Leaders on Moving Beyond General-Purpose AI
In the world of software development, new trends hit like waves. And the current AI wave is a tsunami.
Just like with cloud computing, mobile-first, or Agile, this trend...]]></description><link>https://idavidov.eu/the-ai-shift-why-specialized-models-are-the-next-wave-for-tech-teams</link><guid isPermaLink="true">https://idavidov.eu/the-ai-shift-why-specialized-models-are-the-next-wave-for-tech-teams</guid><category><![CDATA[AI]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[innovation]]></category><category><![CDATA[technology]]></category><category><![CDATA[Digital Transformation]]></category><category><![CDATA[llm]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Thu, 13 Nov 2025 09:26:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763025628735/6ac2ac19-d7ad-450d-8a1e-e23ef66efacc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>A Guide for Developers, QA, and Team Leaders on Moving Beyond General-Purpose AI</strong></p>
<p>In the world of software development, new trends hit like waves. And the current AI wave is a tsunami.</p>
<p>Just like with cloud computing, mobile-first, or Agile, this trend is governed by the classic <strong>Technology Adoption Curve</strong>.</p>
<p>History shows us that a significant portion of professionals (roughly 50%) fall into the <strong>"Late Majority"</strong> or <strong>"Laggards"</strong> categories. These are the groups who resist, wait, or just hope the new, <em>disruptive</em> way of doing things is a temporary fad.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763017443787/1e44eb37-fc08-45f2-b7a5-9a5ebdcf13bf.png" alt class="image--center mx-auto" /></p>
<p>But as tech professionals, we know it's never a good career move to ignore the elephant in the room.</p>
<p>The <strong>"Innovators"</strong> and <strong>"Early Adopters"</strong> understand this. They're already using AI to drive real ROI. They aren't the ones worried about layoffs. Instead, they're the ones creating new value.</p>
<p>But here’s the uncomfortable truth. Just as the Laggards are finally starting to use ChatGPT for basic tasks, the trend is already shifting.</p>
<hr />
<h2 id="heading-the-5-groups-on-the-adoption-curve">🌊 The 5 Groups on the Adoption Curve</h2>
<p>To understand where you and your team stand, it helps to know the classic definitions. Every new technology is adopted in this order:</p>
<ul>
<li><p><strong>Innovators (2.5%):</strong> The visionaries and tinkerers. They are actively building the new tech itself.</p>
</li>
<li><p><strong>Early Adopters (13.5%):</strong> Tech leaders and evangelists who see the potential and are willing to experiment with new tools to gain a competitive edge.</p>
</li>
<li><p><strong>Early Majority (34%):</strong> The practical-minded group. They adopt a new technology once its benefits have been proven by the Early Adopters. This is when the tech "crosses the chasm".</p>
</li>
<li><p><strong>Late Majority (34%):</strong> The skeptics. They only adopt new tech when it's become the new standard, often out of necessity or peer pressure.</p>
</li>
<li><p><strong>Laggards (16%):</strong> The resistors. They are highly resistant to change and are the very last to adopt, often when the old way is no longer supported.</p>
</li>
</ul>
<p>The first wave, driven by massive, general-purpose models like GPT-4, Gemini, and Claude, is now being adopted by the Early Majority. But the <em>next</em> wave is already being built by the Innovators.</p>
<hr />
<h2 id="heading-the-king-is-dead-the-peak-of-giant-ai">👑 The King is Dead: The Peak of Giant AI</h2>
<p>The "first King" was the <strong>massive, all-purpose model</strong>. The leap from GPT-2 to GPT-3 was staggering. The leap to GPT-3.5 and 4.0 gave us powerful, human-like chat interfaces that changed everything.</p>
<p>But now, we're seeing <strong>diminishing returns</strong>.</p>
<p>The difference in practical output between the latest models (like GPT-4 and its successors) is becoming smaller, while the cost to train and run them is exponentially higher. They are fantastic generalists, but they are not specialized masters.</p>
<p>Think of it this way:</p>
<blockquote>
<p>A great chef doesn't use a Swiss Army knife to run a world-class kitchen. A Swiss Army knife is a brilliant <em>general</em> tool, but it can't outperform a specialized sashimi knife, a boning knife, or a paring knife for specific, high-stakes tasks.</p>
</blockquote>
<p>We are entering the <strong>"Chef's Knife"</strong> era of AI.</p>
<hr />
<h2 id="heading-long-live-the-king-the-rise-of-specialized-ai">🚀 Long Live the King: The Rise of Specialized AI</h2>
<p>The new "King" is <strong>specialization</strong>.</p>
<p>The future isn't just one giant model trying to do everything. It's a collection of smaller, specific, and hyper-efficient models tailored for precise contexts and needs.</p>
<p>These specialized tools are built to do one thing perfectly, rather than a million things "pretty well".</p>
<h3 id="heading-why-specialized-models-win">Why Specialized Models Win</h3>
<p>For developer, QA, and leadership workflows, specialized AI offers clear advantages:</p>
<ul>
<li><p><strong>⚡ Peak Performance &amp; Accuracy:</strong> A model trained <em>only</em> on your private 2-million-line codebase will always be better at refactoring that code than a general model trained on the public internet.</p>
</li>
<li><p><strong>💰 Lower Cost:</strong> Running a smaller, focused model is significantly cheaper than paying for API calls to a massive, general-purpose one.</p>
</li>
<li><p><strong>🔒 Enhanced Security &amp; Privacy:</strong> You can often run these models locally or in your own Virtual Private Cloud (VPC), meaning your proprietary code and sensitive data never leave your control.</p>
</li>
<li><p><strong>💨 Speed:</strong> Specialized models are optimized for one task, making them faster and less resource-intensive.</p>
</li>
</ul>
<p><strong>Examples of this trend are everywhere:</strong></p>
<ul>
<li><p><strong>For Developers:</strong> AI tools trained specifically on UI/UX best practices to generate front-end code, or assistants fine-tuned on your specific database schema.</p>
</li>
<li><p><strong>For QA:</strong> Agents designed to generate test cases from your requirements, or models that learn your app's flow to intelligently generate end-to-end test scripts.</p>
</li>
<li><p><strong>For Leaders:</strong> A custom tool that analyzes your team's pull requests and project management data to help predict project bottlenecks before they happen.</p>
</li>
</ul>
<hr />
<h2 id="heading-how-to-catch-the-next-wave">🏄 How to Catch the Next Wave</h2>
<p>I truly believe that AI won't replace all humans.</p>
<p>But I am <strong>100% sure</strong> that professionals who understand how to leverage the <strong>right AI for the right job</strong> will replace those who don't.</p>
<p>The only way to stay ahead is to get your hands dirty. The key to success is <strong>relentless experimenting and testing</strong>. A good surfer doesn't waste time watching the wave they just missed. They get busy positioning for the next one.</p>
<p>Here’s your action plan:</p>
<ol>
<li><p><strong>Start Small: Master Your Prompts.</strong> Treat prompt engineering as a core skill. Don't just ask basic questions. Learn to curate your prompts with deep context, few-shot examples, and specific role-playing. This is the first step from being a <em>consumer</em> of AI to being a <em>power user</em>.</p>
</li>
<li><p><strong>Go Big: Build Your Own Tools.</strong> Start thinking about your team's unique problems. What's a repetitive, high-value task that a general AI struggles with? Start designing and implementing specific tools for your team. This could be as simple as a fine-tuned model using an open-source framework or as complex as a custom-built agent.</p>
</li>
</ol>
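<p>As an illustration, a curated prompt might follow a skeleton like this (hypothetical; the bracketed parts are placeholders you fill in):</p>
<pre><code class="language-plaintext">Role: You are a senior QA engineer specializing in Playwright.
Context: [paste the page object, the failing test, and the error output]
Task: Explain why this test is flaky and propose a concrete fix.
Constraints: no hard waits; prefer web-first assertions; keep the POM intact.
Example: [one known-good test from the suite, as a style reference]
</code></pre>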
<hr />
<h2 id="heading-your-next-move">💡 Your Next Move</h2>
<p>The "Late Majority" will wait for permission. The "Laggards" will hope it all goes away.</p>
<p>The winners, the <strong>Early Adopters and Early Majority</strong>, will be the ones who see this shift happening right now.</p>
<p>Don't rely on hopes. Do the hard work of experimenting today so you and your team can be the ones receiving the benefits tomorrow.</p>
]]></content:encoded></item><item><title><![CDATA[Developing a Powerful Test Automation Strategy - Frameworks, CI/CD & E2E Tests]]></title><description><![CDATA[We've built our feature on a solid foundation (Phase 1), with a clear blueprint (Phase 2), and with quality implemented in during construction (Phase 3). Now, it's time for the final, rigorous inspection before we open the doors to the public.
Welcom...]]></description><link>https://idavidov.eu/developing-a-powerful-test-automation-strategy-frameworks-cicd-and-e2e-tests</link><guid isPermaLink="true">https://idavidov.eu/developing-a-powerful-test-automation-strategy-frameworks-cicd-and-e2e-tests</guid><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[development]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Wed, 15 Oct 2025 06:11:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760356655907/960c95b4-ad81-4260-95cd-57f22ddde9c0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We've built our feature on a solid foundation (<a target="_blank" href="https://idavidov.eu/essential-steps-for-creating-a-robust-software-quality-foundation-culture-roles-and-mindset">Phase 1</a>), with a clear blueprint (<a target="_blank" href="https://idavidov.eu/proactive-strategies-for-pre-development-success-requirements-stories-and-planning">Phase 2</a>), and with quality built in during construction (<a target="_blank" href="https://idavidov.eu/mastering-in-sprint-quality-for-faster-releases-ci-code-reviews-and-collaboration">Phase 3</a>). Now, it's time for the final, rigorous inspection before we open the doors to the public.</p>
<p>Welcome to <strong>Phase 4: Formal Testing &amp; Automation</strong>. 🤖</p>
<p>This phase isn't about <em>finding</em> quality. It's about <em>validating</em> it at scale. The goal is to build a strategic, automated safety net that provides the team with high confidence before every release. It’s about leveraging technology to ensure the product is not only functional but also stable, performant, and secure.</p>
<hr />
<h3 id="heading-implementing-the-end-to-end-e2e-testing-strategy">Implementing the End-to-End (E2E) Testing Strategy</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A deliberate plan for a <strong>small</strong> number of automated tests that simulate complete, critical user journeys from start to finish, just as a real user would.</p>
</li>
<li><p><strong>WHY it matters:</strong> The goal is to get the <strong>maximum value from the minimum resources</strong>. E2E tests are powerful, but they are also the most expensive to run and maintain. A strategy of trying to automate everything with E2E tests will quickly lead to a slow, flaky, and unmanageable test suite that no one trusts.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Start with the money:</strong> Identify the 3-5 user journeys that are most critical to your business. These are often part of the Acceptance Criteria for major features, like the user registration flow, the main checkout process, or creating a core document.</p>
</li>
<li><p><strong>Use a risk-based approach:</strong> Ask, "What's the most damaging thing that could break?" and add tests for those scenarios. The goal is to have a small suite of E2E tests that, if they pass, give you high confidence that the core business is functional.</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760357109060/4365fd2c-cd4b-4b0a-ae60-ea8d899ccb50.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-building-a-scalable-amp-maintainable-automation-framework">Building a Scalable &amp; Maintainable Automation Framework</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The underlying architecture of your test code, designed with the fundamental intention that the application you're testing <strong>will constantly change</strong>.</p>
</li>
<li><p><strong>WHY it matters:</strong> The number one reason test automation projects fail is because they become a maintenance nightmare. If your tests are brittle and break with every minor UI change, your team will spend more time fixing old tests than writing new ones, and the project will eventually be abandoned.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Architect for change:</strong> The most important principle is the <strong>separation of concerns</strong>. The test logic (the "what," e.g., "log in") must be separate from the page interactions (the "how," e.g., <code>click('button')</code>).</p>
</li>
<li><p><strong>Use design patterns:</strong> Patterns like the Page Object Model are popular because they enforce this separation. By doing this, if a button's ID changes, you only have to update it in one place, not in 50 different test scripts. This makes your framework resilient and easy to maintain.</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760357172562/8c2e0dad-fbf2-44b8-8cc5-c239a6be32ce.png" alt class="image--center mx-auto" /></p>
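<p>To make the separation concrete, here is a minimal, framework-agnostic sketch of the Page Object Model. The <code>PageLike</code> interface is a stand-in for a real driver API such as Playwright's <code>Page</code>, and the class name and selectors are purely illustrative:</p>

```typescript
// A structural stand-in for a browser driver API (e.g. Playwright's Page).
interface PageLike {
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// The page object owns the "how": every selector lives here, in one place.
class LoginPage {
  // Illustrative selectors: if the UI changes, only these lines change.
  private readonly emailInput = '#email';
  private readonly passwordInput = '#password';
  private readonly submitButton = 'button[type="submit"]';

  constructor(private readonly page: PageLike) {}

  // Tests call this "what" method and never touch selectors directly.
  async login(email: string, password: string): Promise<void> {
    await this.page.fill(this.emailInput, email);
    await this.page.fill(this.passwordInput, password);
    await this.page.click(this.submitButton);
  }
}
```

<p>A test then reads as intent (<code>await loginPage.login(user, pass)</code>) and survives selector churn untouched.</p>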
<hr />
<h3 id="heading-a-stable-amp-consistent-test-environment-strategy">A Stable &amp; Consistent Test Environment Strategy</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A dedicated, production-like environment for running automated tests that is reliable, predictable, and isolated from the chaos of active development.</p>
</li>
<li><p><strong>WHY it matters:</strong> A flaky test environment makes your test results meaningless. If a test fails, the team must be 99% confident that it's a real bug in the application, not a random glitch in the environment. Without this trust, the entire automation suite loses its value.</p>
</li>
<li><p><strong>HOW to do it:</strong> This is a <strong>whole-team responsibility</strong>.</p>
<ul>
<li><p><strong>Automate the infrastructure:</strong> Use Infrastructure as Code (e.g., Terraform, Ansible) to define and deploy your environments so they are consistent every time.</p>
</li>
<li><p><strong>Manage the data:</strong> Have automated processes to refresh the environment with clean, sanitized data on a regular basis.</p>
</li>
<li><p><strong>Establish clear rules:</strong> Define who can deploy to the environment and when, to prevent unexpected changes from derailing test runs.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-robust-test-data-management">Robust Test Data Management</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The strategy for ensuring every single automated test has the precise, clean, and isolated data it needs to run successfully, every single time.</p>
</li>
<li><p><strong>WHY it matters:</strong> Garbage in, garbage out. Poor data management is the leading cause of flaky tests. If one test changes a piece of data that another test depends on, you'll be stuck debugging frustrating false negatives.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Make tests self-contained:</strong> The industry best practice is to have tests create their own data.</p>
</li>
<li><p><strong>Use APIs for setup and teardown:</strong> Before a test runs, use an API call to create the exact user, product, or state it needs. After the test is finished, use another API call to delete that data. This ensures every test is independent and can be run in any order without side effects.</p>
</li>
</ul>
</li>
</ul>
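<p>As a sketch of the setup/teardown idea (the <code>ApiClient</code> interface, the <code>/users</code> endpoint, and the helper name are illustrative assumptions, not a specific framework API):</p>

```typescript
// Hypothetical API client: replace with your real HTTP client.
interface ApiClient {
  post(path: string, body: unknown): Promise<{ id: string }>;
  delete(path: string): Promise<void>;
}

// Create the exact data a test needs, run the test body, then always clean up.
async function withTestUser<T>(
  api: ApiClient,
  user: { email: string },
  testBody: (userId: string) => Promise<T>,
): Promise<T> {
  const created = await api.post('/users', user); // setup via API, far faster than the UI
  try {
    return await testBody(created.id);
  } finally {
    await api.delete(`/users/${created.id}`); // teardown runs even if the test fails
  }
}
```

<p>Because each test owns its own user, the suite can run in any order, or in parallel, without tests tripping over shared data.</p>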
<hr />
<h3 id="heading-integrating-automation-into-the-cicd-pipeline">Integrating Automation into the CI/CD Pipeline</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A tiered strategy for running the right tests at the right time to get fast, relevant feedback without slowing down development.</p>
</li>
<li><p><strong>WHY it matters:</strong> Running a 45-minute E2E test suite on every commit would bring development to a halt. A tiered approach intelligently balances the need for speed with the need for confidence.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>On Pull Request:</strong> Run the fastest tests: linters, unit tests, and a small "smoke suite" of critical API checks. The goal is feedback in <strong>under 5 minutes</strong>.</p>
</li>
<li><p><strong>On Merge to Main Branch:</strong> Run a larger "regression suite" of integration and UI tests that cover more functionality. The goal is feedback in <strong>under 30 minutes</strong>.</p>
</li>
<li><p><strong>Nightly/Scheduled:</strong> Run everything else: the full E2E suite, performance tests, and security scans. This is the final, deep validation that runs when it won't block developers.</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760357185845/2437a0c4-61b6-4dd0-aa7c-2a7603509c90.png" alt class="image--center mx-auto" /></p>
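<p>With Playwright, one common way to wire these tiers is tag-based projects in the config, each tier invoked by a different CI trigger. The project names and tags below are illustrative, not a fixed convention:</p>

```typescript
// playwright.config.ts: a sketch of tier selection via grep.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  projects: [
    // Pull request:  npx playwright test --project=smoke      (target: under 5 min)
    { name: 'smoke', grep: /@smoke/ },
    // Merge to main: npx playwright test --project=regression (target: under 30 min)
    { name: 'regression', grep: /@smoke|@regression/ },
    // Nightly:       npx playwright test --project=nightly    (the full suite)
    { name: 'nightly' },
  ],
});
```

<p>Tests opt in by tagging their titles, e.g. <code>test('checkout completes @smoke', ...)</code>.</p>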
<hr />
<h3 id="heading-defining-a-performance-testing-baseline">Defining a Performance Testing Baseline</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The process of measuring and recording your application's performance under a simulated load to establish a "normal" benchmark.</p>
</li>
<li><p><strong>WHY it matters:</strong> You can't know if your application is getting slower if you don't know how fast it is today. This baseline is used to detect performance regressions <em>before</em> your customers complain.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Pick a critical flow:</strong> Choose something like the API login or a key search query.</p>
</li>
<li><p><strong>Run a simple load test:</strong> Use an accessible tool to simulate a realistic load (e.g., 50 users for 5 minutes).</p>
</li>
<li><p><strong>Measure and record:</strong> Capture the key metrics: <strong>average response time (latency)</strong> and <strong>requests per second (throughput)</strong>. This is your baseline. Integrate this test into your nightly build to ensure these numbers don't get worse over time.</p>
</li>
</ul>
</li>
</ul>
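<p>Dedicated load tools (k6, JMeter, and similar) report these metrics for you, but the arithmetic is simple enough to sketch. The helper below is hypothetical, purely to show what a baseline records:</p>

```typescript
interface Baseline {
  avgLatencyMs: number;       // average response time
  requestsPerSecond: number;  // throughput
}

// Derive the two baseline metrics from raw response times and the run duration.
function computeBaseline(latenciesMs: number[], durationSeconds: number): Baseline {
  const totalMs = latenciesMs.reduce((sum, ms) => sum + ms, 0);
  return {
    avgLatencyMs: totalMs / latenciesMs.length,
    requestsPerSecond: latenciesMs.length / durationSeconds,
  };
}

// e.g. 3 requests taking 100/200/300 ms over a 2-second run:
// computeBaseline([100, 200, 300], 2) → { avgLatencyMs: 200, requestsPerSecond: 1.5 }
```

<p>In practice you would also track percentiles (p95/p99), which catch the tail latency an average hides.</p>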
<hr />
<h3 id="heading-implementing-a-basic-security-testing-checklist">Implementing a Basic Security Testing Checklist</h3>
<ul>
<li><p><strong>WHAT it is:</strong> Integrating automated tools that scan your code (SAST) and your running application (DAST) for common security vulnerabilities listed in resources like the OWASP Top 10.</p>
</li>
<li><p><strong>WHY it matters:</strong> Security is a critical component of quality. Automatically catching a common vulnerability like <strong>SQL Injection</strong> or <strong>Cross-Site Scripting (XSS)</strong> in your pipeline is infinitely cheaper than dealing with a data breach.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><strong>Integrate free tools:</strong> Add a tool like OWASP ZAP to your CI/CD pipeline (the nightly build is a great place for it). This provides a valuable first layer of defense and can catch low-hanging fruit without needing a dedicated security expert.</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-cross-browser-amp-cross-device-testing-strategy">Cross-Browser &amp; Cross-Device Testing Strategy</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A deliberate plan, based on real user data, for ensuring your application works correctly on the browsers and devices your customers actually use.</p>
</li>
<li><p><strong>WHY it matters:</strong> Developers often work exclusively in one browser. This can easily lead to CSS or JavaScript bugs that break the experience for a significant portion of your user base on other browsers or mobile devices.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Let data drive your decisions:</strong> This should be decided upfront based on <strong>customer needs</strong>. Use your analytics tools (like Google Analytics) to identify the top browsers, operating systems, and screen sizes that represent 90%+ of your traffic.</p>
</li>
<li><p><strong>Focus your efforts:</strong> Prioritize your testing on that specific set of configurations.</p>
</li>
<li><p><strong>Use cloud services for scale:</strong> Leverage a service like BrowserStack or Sauce Labs to run your automated tests across all your target configurations in parallel, giving you broad coverage without a massive time investment.</p>
</li>
</ul>
</li>
</ul>
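<p>In Playwright this plan becomes a short config, where each project is one analytics-driven target and the whole matrix runs in parallel (the exact device list below is illustrative):</p>

```typescript
// playwright.config.ts: the same test suite across multiple real-world targets.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'chromium',      use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox',       use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit',        use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-safari', use: { ...devices['iPhone 14'] } },
  ],
});
```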
<hr />
<h3 id="heading-conclusion-building-unshakeable-confidence">Conclusion: Building Unshakeable Confidence</h3>
<p>Phase 4 is about building unshakeable confidence in your product through smart, strategic validation. This automated safety net doesn't just catch bugs. It allows your team to develop and release with greater speed and less fear.</p>
<p>With our product now thoroughly inspected and secured, it's time for the moment of truth. In our final article, we'll explore <strong>Phase 5: Release &amp; Post-Release</strong>, where our software meets the real world and our quality journey continues.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering In-Sprint Quality for Faster Releases - CI, Code Reviews & Collaboration]]></title><description><![CDATA[We have our cultural foundation Essential Steps for Creating a Robust Software Quality Foundation - Culture, Roles & Mindset and our architectural blueprints Proactive Strategies for Pre-Development Success - Requirements, Stories & Planning. Now, it...]]></description><link>https://idavidov.eu/mastering-in-sprint-quality-for-faster-releases-ci-code-reviews-and-collaboration</link><guid isPermaLink="true">https://idavidov.eu/mastering-in-sprint-quality-for-faster-releases-ci-code-reviews-and-collaboration</guid><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[development]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Fri, 10 Oct 2025 12:04:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758695944708/8df8f5d8-0bd9-44da-afc6-ba944d43d0e9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We have our cultural foundation <a target="_blank" href="https://idavidov.eu/essential-steps-for-creating-a-robust-software-quality-foundation-culture-roles-and-mindset">Essential Steps for Creating a Robust Software Quality Foundation - Culture, Roles &amp; Mindset</a> and our architectural blueprints <a target="_blank" href="https://idavidov.eu/proactive-strategies-for-pre-development-success-requirements-stories-and-planning">Proactive Strategies for Pre-Development Success - Requirements, Stories &amp; Planning</a>. 
Now, it's time to pick up the tools and start building. Welcome to the construction site.</p>
<p><strong>Phase 3: In-Development Quality</strong> is all about the practices that happen <em>during</em> a sprint. This isn't a phase that happens at the end. Instead, it's the engine room of quality, running continuously. The core principles here are <strong>fast feedback loops</strong> and <strong>deep collaboration</strong>. These are the activities that ensure quality is built directly into the product as it's being assembled.</p>
<hr />
<h3 id="heading-a-fast-and-reliable-continuous-integration-ci-pipeline">A Fast and Reliable Continuous Integration (CI) Pipeline</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A CI pipeline is an automated process, triggered on every code change, that builds the software and runs a suite of automated tests to ensure the change didn't break anything. It's the team's first line of defense.</p>
</li>
<li><p><strong>WHY it matters:</strong> The value of a CI pipeline is directly tied to its speed and reliability.</p>
<ul>
<li><p><strong>Slow Feedback Kills Momentum:</strong> If a developer has to wait 30 minutes for feedback, they've already moved on to another task. When the failure notification finally arrives, they've lost all mental context, making the fix five times harder. A fast pipeline (under 10 minutes) provides immediate feedback while the code is still fresh in their mind.</p>
</li>
<li><p><strong>Flaky Tests Destroy Trust:</strong> A flaky pipeline that fails randomly is worse than no pipeline at all. When the team can't trust the results, they start ignoring failures. This completely defeats the purpose of having an automated safety net.</p>
</li>
</ul>
</li>
<li><p><strong>HOW to achieve it:</strong></p>
<ul>
<li><p><strong>Parallelize your tests:</strong> Run different suites of tests simultaneously to drastically cut down execution time.</p>
</li>
<li><p><strong>Optimize your build:</strong> Use caching for dependencies so you aren't downloading the internet on every run.</p>
</li>
<li><p><strong>Enforce a zero-tolerance flaky test policy:</strong> If a test is flaky, it's a bug. Immediately quarantine it or fix it.</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758695970842/0b2e3632-a2ad-49f0-9512-25ee341cad75.png" alt class="image--center mx-auto" /></p>
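<p>In a Playwright setup, the speed levers above map to a few config lines. The values are illustrative and should be tuned to your runners:</p>

```typescript
// playwright.config.ts: parallelism plus the zero-tolerance flaky policy.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                      // run test files concurrently
  workers: process.env.CI ? 4 : undefined,  // cap workers on shared CI machines
  retries: 0,                               // no silent retries: a flaky test is a bug to fix
});
```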
<hr />
<h3 id="heading-enforced-code-quality-standards-amp-static-analysis">Enforced Code Quality Standards &amp; Static Analysis</h3>
<ul>
<li><p><strong>WHAT it is:</strong> Using automated tools, known as linters (like ESLint for JavaScript), to automatically scan code for stylistic errors, potential bugs, and adherence to team-defined standards.</p>
</li>
<li><p><strong>WHY it matters:</strong> This is about cognitive load. Humans are slow, inconsistent, and error-prone when performing repetitive, detail-oriented tasks like checking for correct indentation or unused variables. Offloading this work to a machine frees up precious human brainpower during code reviews to focus on what truly matters: the logic, the architecture, and the user experience.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Integrate into the editor:</strong> Configure the linter to run directly in every developer's code editor, providing instant, private feedback as they type.</p>
</li>
<li><p><strong>Make it a required check:</strong> Enforce the standards by making the linting step a mandatory pass/fail check before every commit. This ensures no non-compliant code ever makes it into the repository.</p>
</li>
</ul>
</li>
</ul>
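<p>For a TypeScript codebase, a minimal flat-config ESLint setup looks roughly like this. The plugin and rule choices are illustrative; pick rules your team actually agrees on:</p>

```typescript
// eslint.config.ts: a sketch using the typescript-eslint flat-config helper.
import tseslint from 'typescript-eslint';

export default tseslint.config(
  ...tseslint.configs.recommended,
  {
    rules: {
      '@typescript-eslint/no-unused-vars': 'error', // dead code a reviewer shouldn't have to spot
      eqeqeq: 'error',                              // forbid loose equality, a classic bug source
    },
  },
);
```

<p>Wire <code>eslint .</code> into a pre-commit hook (e.g. husky + lint-staged) and make it a required check in CI.</p>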
<hr />
<h3 id="heading-mandatory-effective-peer-reviews-pull-requests">Mandatory, Effective Peer Reviews (Pull Requests)</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The practice where at least one other team member must thoughtfully review and approve a developer's code changes before they can be merged into the main codebase.</p>
</li>
<li><p><strong>WHY it matters:</strong> While finding bugs is a benefit, the primary goal of a code review is <strong>knowledge sharing</strong>. It's the single most effective way to spread context throughout the team, prevent knowledge silos, improve code consistency, and collectively elevate the team's skills.</p>
</li>
<li><p><strong>HOW to make them effective:</strong> The tone and focus of the comments are everything. A good review is a dialogue, not a judgment.</p>
<ul>
<li><p><strong>❌ Bad Comment:</strong> "This is inefficient. Use a hash map." (A command)</p>
</li>
<li><p><strong>✅ Good Comment:</strong> "This is an interesting approach. I'm wondering if a hash map might be more performant here for the lookup. What are your thoughts on that?" (A collaborative question)</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758696018954/b9321323-6956-4991-9fd2-de29da9ff554.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-in-sprint-dev-qa-collaboration-amp-pairing">In-Sprint Dev-QA Collaboration &amp; Pairing</h3>
<ul>
<li><p><strong>WHAT it is:</strong> Direct, real-time collaboration between developers and QAs throughout the lifecycle of a feature, most powerfully realized through <strong>pair-testing</strong> sessions.</p>
</li>
<li><p><strong>WHY it matters:</strong> This practice demolishes the "us vs. them" wall. It creates a tight, immediate feedback loop that resolves ambiguities in minutes instead of days. It fosters a powerful sense of shared ownership over the quality of the feature.</p>
</li>
<li><p><strong>HOW to do it:</strong> The pair-testing session is the key ritual.</p>
<ol>
<li><p><strong>Schedule a short meeting:</strong> Book a 15-minute session once the developer has a working version of the feature.</p>
</li>
<li><p><strong>Dev demonstrates:</strong> The developer shares their screen and explains <em>what</em> they built and <em>how</em> it's implemented. This is a mini knowledge-transfer session.</p>
</li>
<li><p><strong>Test together:</strong> Both the dev and QA perform a few key tests together, discussing the results in real-time.</p>
</li>
<li><p><strong>Handoff:</strong> This session brings instant clarity and serves as the perfect handoff. The QA can then take over to perform deeper, more comprehensive exploratory testing, already armed with full context.</p>
</li>
</ol>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758696025589/3682fd52-da0d-4389-a136-1fde3ea1561a.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-early-and-continuous-exploratory-testing">Early and Continuous Exploratory Testing</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A creative, unscripted, and human-driven approach to testing. It's a "tour" of the feature where the tester uses their knowledge, curiosity, and intuition to discover how the software actually behaves. It’s about investigation and learning, not just script execution.</p>
</li>
<li><p><strong>WHY it matters:</strong> Automation is excellent at <em>verifying</em> that the software does what you expect. Exploratory testing is essential for <em>discovering</em> what the software does when you do something unexpected. It allows you to <em>feel</em> the application, uncovering usability flaws, confusing workflows, and complex state-related bugs that automation will always miss.</p>
</li>
<li><p><strong>HOW to do it:</strong></p>
<ul>
<li><p><strong>Start early:</strong> As soon as a piece of functionality is available, start exploring it.</p>
</li>
<li><p><strong>Use charters:</strong> Give your testing a mission. Instead of just randomly clicking, create a goal like, "Investigate how the application handles a user losing their network connection while updating their profile."</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-definition-of-ready-for-qareview">Definition of "Ready for QA/Review"</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A simple checklist, formally part of the team's Definition of Done (DoD), that confirms a piece of work is truly ready for deeper testing and review.</p>
</li>
<li><p><strong>WHY it matters:</strong> This checklist prevents the frustrating "ping-pong" where a ticket is passed back and forth because of missing information, a broken build, or unmet requirements. It creates a smooth, efficient, and respectful handoff process.</p>
</li>
<li><p><strong>HOW to implement it:</strong> This checklist is the final gate before the pair-testing session. It should live as a template in your tickets or pull requests and include items like:</p>
<ul>
<li><p>✅ All Acceptance Criteria have been implemented.</p>
</li>
<li><p>✅ The CI pipeline is green.</p>
</li>
<li><p>✅ The feature is deployed to the shared testing environment.</p>
</li>
<li><p>✅ A 15-minute pair-testing session has been scheduled.</p>
</li>
</ul>
</li>
</ul>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758696099940/930b5a63-9915-44c4-9a1a-b269cd05f6bd.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-conclusion-the-engine-of-quality">Conclusion: The Engine of Quality</h3>
<p>Phase 3 is the engine room where the blueprints from Phase 2 become a tangible, working product. Powered by the fast feedback of CI and the deep collaboration of pairing and code reviews, these in-sprint practices are what transform a plan into high-quality, functional software.</p>
<p>Now that we've built the feature with quality baked in, it's time to validate our confidence at scale. In the next article, we'll dive into <strong>Phase 4: Formal Testing &amp; Automation</strong>, where we'll build the strategic, automated safety net that protects our product and our users.</p>
]]></content:encoded></item><item><title><![CDATA[Proactive Strategies for Pre-Development Success - Requirements, Stories & Planning]]></title><description><![CDATA[In our previous article Essential Steps for Creating a Robust Software Quality Foundation - Culture, Roles & Mindset, we laid the cultural foundation for quality. We established that quality is a shared mindset, not just a department. But a strong fo...]]></description><link>https://idavidov.eu/proactive-strategies-for-pre-development-success-requirements-stories-and-planning</link><guid isPermaLink="true">https://idavidov.eu/proactive-strategies-for-pre-development-success-requirements-stories-and-planning</guid><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[development]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Thu, 02 Oct 2025 05:16:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758691805574/0c45d973-bd8a-481e-af7a-6f411413b6e1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In our previous article <a target="_blank" href="https://idavidov.eu/essential-steps-for-creating-a-robust-software-quality-foundation-culture-roles-and-mindset">Essential Steps for Creating a Robust Software Quality Foundation - Culture, Roles &amp; Mindset</a>, we laid the cultural foundation for quality. We established that quality is a shared mindset, not just a department. But a strong foundation is useless without a solid plan. Nothing great is built on hopes and good intentions; it's built from a detailed architectural blueprint.</p>
<p>Welcome to <strong>Phase 2: Pre-Development</strong>. This is the architect's table.</p>
<p>This phase is about designing quality into your product from the very start. Every activity here is designed to prevent entire classes of bugs before a single line of code is written. This is where an hour of careful planning saves ten hours of painful debugging later. Let's dive into it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758691830387/86e5010f-227e-44e8-9bfd-5631ab9922fe.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-crafting-high-quality-testable-user-stories">Crafting High-Quality, Testable User Stories</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A high-quality user story clearly and concisely describes a feature from an end-user's perspective. It's not a technical task list, but a statement of user value.</p>
</li>
<li><p><strong>WHY it matters:</strong> Vague stories are the root of most misunderstandings. They force developers and QAs to make assumptions, which leads to building the wrong thing, extensive rework, and features that completely miss the user's needs.</p>
</li>
<li><p><strong>HOW to do it:</strong> Use the standard user-story format: <code>As a [persona], I want [action], so that I can [achieve an outcome]</code>. This structure forces clarity. Now, compare a bad story to a good one:</p>
<ul>
<li><p><strong>❌ Bad:</strong> "User login"</p>
</li>
<li><p><strong>✅ Good:</strong> "As a <strong>registered user</strong>, I want to <strong>log in with my email and password</strong> so that I can <strong>access my account dashboard</strong>."</p>
</li>
</ul>
</li>
</ul>
<p>    The second version is instantly testable and leaves no room for guessing who the user is or what the goal is. A good approach is to use the <strong>INVEST</strong> criteria (Independent, Negotiable, Valuable, Estimable, Small, Testable) as a checklist for every story.</p>
<hr />
<h3 id="heading-defining-concrete-acceptance-criteria-acs">Defining Concrete Acceptance Criteria (ACs)</h3>
<ul>
<li><p><strong>WHAT they are:</strong> Acceptance Criteria are a set of specific, binary (pass/fail) conditions that a user story must meet to be considered "done". They are the formal contract for the feature.</p>
</li>
<li><p><strong>WHY they matter:</strong> The greatest strength of binary ACs is that they <strong>eliminate assumptions</strong>. When an AC is subjective ("The page should load fast"), it leads to arguments. When it's binary ("The page must load in under 2 seconds on a 4G connection"), it's a simple, verifiable fact. This robustness is key to shipping with confidence.</p>
</li>
<li><p><strong>HOW to write them:</strong> Focus on clear, testable outcomes. While a format like Gherkin (<code>Given/When/Then</code>) can be useful, the principle is more important than the syntax.</p>
<ul>
<li><p><strong>❌ Bad:</strong> "The login process should be user-friendly." (Subjective)</p>
</li>
<li><p><strong>✅ Good:</strong> "When the user enters an incorrect password, the error message 'Invalid credentials' is displayed." (Pass/Fail)</p>
</li>
<li><p><strong>✅ Good:</strong> "Upon successful login, the user is redirected to the <code>/dashboard</code> page." (Pass/Fail)</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-implementing-three-amigos-sessions">Implementing "Three Amigos" Sessions</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A focused working session bringing together the three key perspectives: <strong>Product</strong> (what is the problem to solve?), <strong>Development</strong> (how can we build a solution?), and <strong>Quality</strong> (how could this solution break?).</p>
</li>
<li><p><strong>WHY it matters:</strong> This session is the ultimate defense against ambiguity. It demolishes silos and ensures a shared understanding of the feature, its requirements, and its risks <em>before</em> the first line of code is written. It’s a high-leverage activity that prevents countless hours of wasted work.</p>
</li>
<li><p><strong>HOW to run it:</strong></p>
<ul>
<li><p><strong>Keep it small:</strong> It's the "Three Amigos", not the "Thirty Amigos". Only the three core perspectives should be present.</p>
</li>
<li><p><strong>Have a strict agenda:</strong> Start from the initial user story and its draft ACs, and refine them together; don't let the session drift into open-ended brainstorming.</p>
</li>
<li><p><strong>Define responsibilities:</strong> The Product representative clarifies requirements, the Developer discusses implementation strategy, and the QA probes for edge cases and testability. The goal is to leave the session with crystal-clear, agreed-upon ACs.</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758691896878/66882832-b72f-4057-a93e-6f38656f46a5.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-identifying-edge-cases-amp-negative-paths-upfront">Identifying Edge Cases &amp; Negative Paths Upfront</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The practice of systematically thinking through what happens when things go wrong or when users behave unexpectedly (e.g., entering bad data, losing network, clicking things out of order). This is a critical activity for <strong>story grooming sessions</strong>.</p>
</li>
<li><p><strong>WHY it matters:</strong> Thinking about negative paths is a core "shift-left" activity. Uncovering a major edge case in a 15-minute grooming session is infinitely cheaper and faster than discovering it through a production bug that impacts thousands of users. This is where a QA's mindset adds immense value.</p>
</li>
<li><p><strong>HOW to do it:</strong> The QA should lead this by asking probing "What if...?" questions.</p>
<ul>
<li><p>"What if the user tries to register with an email address that already exists?"</p>
</li>
<li><p>"What if the API call fails while they are submitting the form?"</p>
</li>
<li><p>"What if the user's session times out while they have items in their cart?"</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-defining-and-integrating-non-functional-requirements-nfrs">Defining and Integrating Non-Functional Requirements (NFRs)</h3>
<ul>
<li><p><strong>WHAT they are:</strong> NFRs define <em>how</em> the system should operate, rather than <em>what</em> it should do. The most common examples are <strong>performance, security, and accessibility</strong>.</p>
</li>
<li><p><strong>WHY it matters:</strong> Leaving NFRs until the end is a recipe for disaster. You can't "add" security or performance to a product after it's built. It has to be designed in. Thinking about NFRs while planning the functional story is crucial because you have the <strong>full context</strong> to make smart architectural decisions.</p>
</li>
<li><p><strong>HOW to do it:</strong> NFRs can be tracked as their own stories, but they must be discussed alongside the related functional story. For example, while grooming a "File Upload" story, ask:</p>
<ul>
<li><p><strong>Performance:</strong> "What is the maximum file size we need to support? What's the target upload time?"</p>
</li>
<li><p><strong>Security:</strong> "What file types are allowed? How will we scan for viruses?"</p>
</li>
<li><p><strong>Accessibility:</strong> "How will a screen reader announce the upload progress?"</p>
</li>
</ul>
</li>
</ul>
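<p>Once answered, these questions can be pinned down as executable checks rather than prose. A minimal sketch (the 10 MB cap and the allowed types below are illustrative assumptions, not real requirements):</p>

```typescript
// Illustrative NFR guard - the limits are example values a team
// might agree on while grooming a "File Upload" story.
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap
const ALLOWED_TYPES = new Set<string>(['image/png', 'image/jpeg', 'application/pdf']);

function validateUpload(sizeBytes: number, mimeType: string): string[] {
    const errors: string[] = [];
    if (sizeBytes > MAX_UPLOAD_BYTES) {
        errors.push('File exceeds maximum size');
    }
    if (!ALLOWED_TYPES.has(mimeType)) {
        errors.push('File type not allowed');
    }
    return errors;
}

console.log(validateUpload(20 * 1024 * 1024, 'application/zip'));
// → [ 'File exceeds maximum size', 'File type not allowed' ]
```

<p>Writing the limits down this explicitly during grooming also exposes missing answers: if nobody can fill in the maximum size, the NFR conversation isn't finished.</p>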
<hr />
<h3 id="heading-risk-based-prioritization-of-quality-efforts">Risk-Based Prioritization of Quality Efforts</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A strategy to focus your most intense testing efforts on the areas of the application that pose the greatest risk to the business and your users.</p>
</li>
<li><p><strong>WHY it matters:</strong> You can't test everything in full depth; it's neither practical nor economical. This approach ensures your limited time is spent where it matters most. It’s about being effective, not just busy.</p>
</li>
<li><p><strong>HOW to do it:</strong> Use a simple <strong>Impact vs. Likelihood</strong> matrix to categorize features and guide your testing strategy.</p>
<ul>
<li><p><strong>High Impact / High Likelihood</strong> (e.g., the main payment flow): This requires deep, rigorous testing, including end-to-end automation and extensive exploratory testing.</p>
</li>
<li><p><strong>High Impact / Low Likelihood</strong> (e.g., data restoration from a backup): This needs a clear test plan for key scenarios but doesn't require exhaustive daily testing.</p>
</li>
<li><p><strong>Low Impact / High Likelihood</strong> (e.g., a form validation error): This is a perfect candidate for lightweight, automated checks.</p>
</li>
<li><p><strong>Low Impact / Low Likelihood</strong> (e.g., updating a profile's bio): This can be covered sufficiently by quick exploratory testing.</p>
</li>
</ul>
</li>
</ul>
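<p>The matrix is essentially a decision table, and encoding it keeps the team applying it consistently. A rough sketch (the depth labels are just shorthand for the strategies above):</p>

```typescript
// Sketch of the Impact vs. Likelihood matrix as a decision table.
type Level = 'low' | 'high';

function testingDepth(impact: Level, likelihood: Level): string {
    if (impact === 'high' && likelihood === 'high') {
        return 'deep E2E automation + extensive exploratory testing';
    }
    if (impact === 'high') {
        return 'documented test plan for key scenarios';
    }
    if (likelihood === 'high') {
        return 'lightweight automated checks';
    }
    return 'quick exploratory testing';
}

console.log(testingDepth('high', 'high')); // e.g. the main payment flow
```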
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758692044296/00d61019-6512-423b-a94d-d60a5a796a5f.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-making-quality-a-factor-in-story-estimation">Making Quality a Factor in Story Estimation</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The practice of including all quality-related activities (writing unit tests, developing automation test scripts, pair testing, exploratory testing, etc.) within the story point estimation for every user story.</p>
</li>
<li><p><strong>WHY it matters:</strong> <strong>Testing is not a separate phase; it's part of the development work.</strong> Creating separate "testing tickets" breaks the "Whole Team" mentality and hides the true cost of delivering a feature. Including it in the estimate makes it clear that a story isn't "done" until it's "done and high-quality".</p>
</li>
<li><p><strong>HOW to do it:</strong> During estimation, the team must explicitly discuss the effort required for quality. A developer might say a story is "3 points", and the QA can then ask, "Does that estimate include the time to write the new automated tests and for us to do a 30-minute exploratory session together?" This conversation ensures the full scope of work is accounted for.</p>
</li>
</ul>
<hr />
<h3 id="heading-early-test-data-amp-environment-planning">Early Test Data &amp; Environment Planning</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The strategic process of defining the specific data and infrastructure needed to properly test a feature, and doing so <em>before</em> development begins.</p>
</li>
<li><p><strong>WHY it matters:</strong> "Blocked by test data" or "The test environment is broken" are two of the most common and frustrating bottlenecks in the development cycle. They are almost always preventable with upfront planning.</p>
</li>
<li><p><strong>HOW to do it:</strong> Before a story is considered "ready for development", the team must be able to answer these questions:</p>
<ul>
<li><p><strong>Data:</strong> What specific data states do we need? (e.g., A brand new user? A user with 1000+ orders? A user with an expired subscription? A user in a different country?)</p>
</li>
<li><p><strong>Environment:</strong> Does this feature depend on a new third-party service or a change in our infrastructure? If so, how will we make that available for testing?</p>
</li>
</ul>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758691852504/1d3c9662-c9b8-40af-bc32-869d2ad0320e.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-conclusion-from-foundation-to-framework">Conclusion: From Foundation to Framework</h3>
<p>Phase 2 transforms the cultural values from Phase 1 into a concrete plan of action. By focusing on clarity, collaboration, and risk mitigation <em>before</em> development, you build a sturdy framework that prevents defects and ensures you're building the right product, the right way.</p>
<p>With the blueprints now finalized, it's time to pick up the tools and start the construction. In our next article, we'll dive into <strong>Phase 3: In-Development Quality</strong>, where we'll explore the in-sprint practices that ensure quality is built in, not bolted on.</p>
]]></content:encoded></item><item><title><![CDATA[Upgrade Playwright Tests: TypeScript Mixin Design Pattern Guide]]></title><description><![CDATA[In the world of test automation, especially with a tool as powerful as Playwright, we often face a classic architectural challenge - the BasePage object. It starts small, but soon it grows into a monolithic "God object" containing methods for every s...]]></description><link>https://idavidov.eu/upgrade-playwright-tests-typescript-mixin-design-pattern-guide</link><guid isPermaLink="true">https://idavidov.eu/upgrade-playwright-tests-typescript-mixin-design-pattern-guide</guid><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA]]></category><category><![CDATA[Testing]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[automation]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[playwright]]></category><category><![CDATA[Design]]></category><category><![CDATA[design patterns]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Mon, 29 Sep 2025 07:45:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759128473241/3a77f7b8-d26f-455d-8aa9-3f51c95eccc1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the world of test automation, especially with a tool as powerful as Playwright, we often face a classic architectural challenge - the <code>BasePage</code> object. It starts small, but soon it grows into a monolithic "God object" containing methods for every shared component across the application (navigation, footers, modals).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759128609231/08d129f4-185c-4901-9215-0a690cabba90.png" alt class="image--center mx-auto" /></p>
<p>This leads to deep and rigid inheritance chains. An <code>ArticlePage</code> might inherit from <code>NavPage</code>, which inherits from <code>BasePage</code>. This is not only complex but also violates the single responsibility principle.</p>
<p>What if we could build our Page Objects with components just like we build modern web apps? This is where the <strong>Mixin Design Pattern</strong> comes in, favoring flexible <strong>composition over rigid inheritance</strong>.</p>
<hr />
<h3 id="heading-whats-the-problem-with-inheritance">What's the Problem with Inheritance?</h3>
<p>Traditional inheritance (<code>class A extends B</code>) is powerful, but it has limits. A class can only extend <strong>one</strong> other class.</p>
<p>So, what happens when your <code>ArticlePage</code> needs navigation <em>and</em> table-handling logic, while your <code>HomePage</code> needs navigation but not tables? You can't write <code>extends NavPage, TablePage</code>. This forces you to either cram everything into a single <code>BasePage</code> or create convoluted inheritance chains.</p>
<hr />
<h3 id="heading-the-mixin-solution-composition">The Mixin Solution: Composition</h3>
<p>The <strong>Mixin Design Pattern</strong> lets you create reusable "feature packs". They are small classes focused on a single piece of functionality (like navigation or table handling). You can then "mix" these feature packs into any Page Object that needs them.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759128683543/d93f33ed-e81f-44e2-99af-2ba5b93386d3.png" alt class="image--center mx-auto" /></p>
<p>This approach allows you to:</p>
<ul>
<li><p><strong>Share functionality</strong> across unrelated classes.</p>
</li>
<li><p><strong>Avoid deep inheritance chains</strong>, keeping your code flat and manageable.</p>
</li>
<li><p><strong>Keep Page Objects clean</strong> and focused on their primary purpose.</p>
</li>
</ul>
<p>In modern front-end development, which is heavily component-based, our testing frameworks should evolve too. By using the Mixin Design Pattern, we can build a more modular, reusable, and maintainable test suite.</p>
<hr />
<h3 id="heading-implementing-mixins-in-playwright-with-typescript">🚀 Implementing Mixins in Playwright with TypeScript</h3>
<p>Let's walk through a practical example of adding shared navigation functionality to a <code>HomePage</code> object without using inheritance. All examples are built against the app I always use - <a target="_blank" href="https://conduit.bondaracademy.com/">Conduit</a> (a huge shoutout to Mr. Artem Bondar) - and there is a complete <a target="_blank" href="https://github.com/idavidov13/Mixin-Design-Pattern">repository</a> with the full code.</p>
<h4 id="heading-step-1-the-applymixins-helper-function">Step 1: The <code>applyMixins</code> Helper Function</h4>
<p>TypeScript doesn't have a native <code>mixin</code> keyword. So, first, we need a small utility function that does the heavy lifting. This function copies the properties and methods from our "feature packs" to our target class.</p>
<p>Think of this as the engine of our pattern.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// helpers/mixin.ts</span>

<span class="hljs-comment">/**
 * Mixin function to combine multiple base classes into one.
 * This function copies all properties and methods from source classes to the target class.
 *
 * @param derivedCtor - The target class that will receive the mixed-in functionality.
 * @param constructors - Array of source classes to mix in.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">applyMixins</span>(<span class="hljs-params">derivedCtor: <span class="hljs-built_in">any</span>, constructors: <span class="hljs-built_in">any</span>[]</span>): <span class="hljs-title">void</span> </span>{
    constructors.forEach(<span class="hljs-function">(<span class="hljs-params">baseCtor</span>) =&gt;</span> {
        <span class="hljs-comment">// Get all property names from the source class prototype</span>
        <span class="hljs-built_in">Object</span>.getOwnPropertyNames(baseCtor.prototype).forEach(<span class="hljs-function">(<span class="hljs-params">name</span>) =&gt;</span> {
            <span class="hljs-comment">// Skip constructor to avoid conflicts</span>
            <span class="hljs-keyword">if</span> (name !== <span class="hljs-string">'constructor'</span>) {
                <span class="hljs-comment">// Copy the property descriptor (including getters, setters, methods)</span>
                <span class="hljs-built_in">Object</span>.defineProperty(
                    derivedCtor.prototype,
                    name,
                    <span class="hljs-built_in">Object</span>.getOwnPropertyDescriptor(baseCtor.prototype, name) ||
                        <span class="hljs-built_in">Object</span>.create(<span class="hljs-literal">null</span>)
                );
            }
        });
    });
}
</code></pre>
<h4 id="heading-step-2-create-the-feature-pack-navpage">Step 2: Create the "Feature Pack" - <code>NavPage</code></h4>
<p>Next, we create our mixin. In this case, it's a <code>NavPage</code> class that encapsulates all locators and methods for interacting with the website's main navigation bar, a component present on almost every page.</p>
<p>This class is our reusable piece of code.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// pages/clientSite/baseClasses/navPage.ts</span>

<span class="hljs-keyword">import</span> { Page, Locator, expect } <span class="hljs-keyword">from</span> <span class="hljs-string">'@playwright/test'</span>;

<span class="hljs-comment">/**
 * This is the page object for the Navigation functionality.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> NavPage {
    <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">protected</span> page: Page</span>) {}

    <span class="hljs-comment">// ... Locators for navBar, homePageLink, settingsButton, etc.</span>

    get signInNavigationLink(): Locator {
        <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.page.getByRole(<span class="hljs-string">'link'</span>, { name: <span class="hljs-string">'Sign in'</span> });
    }
    get emailInput(): Locator {
        <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.page.getByRole(<span class="hljs-string">'textbox'</span>, { name: <span class="hljs-string">'Email'</span> });
    }
    get passwordInput(): Locator {
        <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.page.getByRole(<span class="hljs-string">'textbox'</span>, { name: <span class="hljs-string">'Password'</span> });
    }
    get signInButton(): Locator {
        <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.page.getByRole(<span class="hljs-string">'button'</span>, { name: <span class="hljs-string">'Sign in'</span> });
    }

    <span class="hljs-comment">/**
     * Logs in the user.
     */</span>
    <span class="hljs-keyword">async</span> logIn(email: <span class="hljs-built_in">string</span>, password: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.navigateToSignInPage();
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.emailInput.fill(email);
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.passwordInput.fill(password);
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.signInButton.click();
        <span class="hljs-keyword">await</span> expect(
            <span class="hljs-built_in">this</span>.page.getByRole(<span class="hljs-string">'navigation'</span>).getByText(process.env.USER_NAME!)
        ).toBeVisible();
    }

    <span class="hljs-comment">// ... other methods like navigateToSignInPage(), logOut(), etc.</span>
}
</code></pre>
<h4 id="heading-step-3-compose-the-homepage-with-the-navpage-mixin">Step 3: Compose the <code>HomePage</code> with the <code>NavPage</code> Mixin</h4>
<p>Now for the magic. We create our <code>HomePage</code> object, which is responsible only for elements unique to the home page. Then, we use our <code>applyMixins</code> helper to blend in the <code>NavPage</code> functionality.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// pages/clientSite/homePage.ts</span>

<span class="hljs-keyword">import</span> { Page, Locator, expect } <span class="hljs-keyword">from</span> <span class="hljs-string">'@playwright/test'</span>;
<span class="hljs-keyword">import</span> { applyMixins } <span class="hljs-keyword">from</span> <span class="hljs-string">'../../helpers/mixin'</span>;
<span class="hljs-keyword">import</span> { NavPage } <span class="hljs-keyword">from</span> <span class="hljs-string">'./baseClasses/navPage'</span>;

<span class="hljs-comment">/**
 * This is the page object for the Home Page.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> HomePage {
    <span class="hljs-keyword">constructor</span>(<span class="hljs-params"><span class="hljs-keyword">protected</span> page: Page</span>) {}

    get homeBanner(): Locator {
        <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.page.getByRole(<span class="hljs-string">'heading'</span>, { name: <span class="hljs-string">'conduit'</span> });
    }
    get yourFeedBtn(): Locator {
        <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.page.getByText(<span class="hljs-string">'Your Feed'</span>);
    }

    <span class="hljs-comment">/**
     * Navigates to the home page as Guest.
     */</span>
    <span class="hljs-keyword">async</span> navigateToHomePageGuest(): <span class="hljs-built_in">Promise</span>&lt;<span class="hljs-built_in">void</span>&gt; {
        <span class="hljs-keyword">await</span> <span class="hljs-built_in">this</span>.page.goto(process.env.URL <span class="hljs-keyword">as</span> <span class="hljs-built_in">string</span>);
        <span class="hljs-keyword">await</span> expect(<span class="hljs-built_in">this</span>.homeBanner).toBeVisible();
    }
}

<span class="hljs-comment">/**
 * Interface declaration that tells TypeScript that HomePage
 * also has all the methods and properties from NavPage.
 * This enables IntelliSense and type checking.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> HomePage <span class="hljs-keyword">extends</span> NavPage {}

<span class="hljs-comment">/**
 * Apply the mixin at runtime.
 * This actually copies all NavPage methods to HomePage's prototype.
 */</span>
applyMixins(HomePage, [NavPage]);
</code></pre>
<p>Notice two key parts here:</p>
<ol>
<li><p><code>export interface HomePage extends NavPage {}</code>: This is crucial for <strong>TypeScript's type-checking</strong>. It tells the compiler, "Trust me, an instance of <code>HomePage</code> will also have all the methods of <code>NavPage</code>", giving us that sweet, sweet IntelliSense autocompletion.</p>
</li>
<li><p><code>applyMixins(HomePage, [NavPage])</code>: This is the runtime execution. It takes all the methods from <code>NavPage</code> and attaches them to <code>HomePage</code>. If you want to attach methods from more than one <code>baseClass</code>, you can list them in the array.</p>
</li>
</ol>
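<p>To see the multi-mixin case in isolation, here is a framework-free sketch: <code>NavMixin</code> and <code>TableMixin</code> are hypothetical stand-ins for Playwright page classes like <code>NavPage</code>, and <code>applyMixins</code> is the same helper from Step 1.</p>

```typescript
// Same helper as in Step 1: copies prototype members onto the target class.
function applyMixins(derivedCtor: any, constructors: any[]): void {
    constructors.forEach((baseCtor) => {
        Object.getOwnPropertyNames(baseCtor.prototype).forEach((name) => {
            if (name !== 'constructor') {
                Object.defineProperty(
                    derivedCtor.prototype,
                    name,
                    Object.getOwnPropertyDescriptor(baseCtor.prototype, name) ??
                        Object.create(null)
                );
            }
        });
    });
}

// Two hypothetical "feature packs".
class NavMixin {
    navigate(to: string): string {
        return `navigated to ${to}`;
    }
}

class TableMixin {
    rowCount(rows: string[][]): number {
        return rows.length;
    }
}

// The target class stays empty; the interface declaration merges in the
// types, and applyMixins merges in the runtime behavior.
class ArticlePage {}
interface ArticlePage extends NavMixin, TableMixin {}
applyMixins(ArticlePage, [NavMixin, TableMixin]);

const page = new ArticlePage();
console.log(page.navigate('/articles')); // → navigated to /articles
console.log(page.rowCount([['a'], ['b']])); // → 2
```

<p>One class, two unrelated feature packs - no inheritance chain required.</p>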
<h4 id="heading-step-4-use-the-composed-page-object-in-a-test">Step 4: Use the Composed Page Object in a Test</h4>
<p>Finally, let's see it in action. In our test file, we can instantiate <code>HomePage</code> and call methods from <code>NavPage</code> directly on it, like <code>logIn()</code>.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// tests/auth.setup.ts</span>

<span class="hljs-keyword">import</span> { test <span class="hljs-keyword">as</span> setup } <span class="hljs-keyword">from</span> <span class="hljs-string">'../fixtures/pom/test-options'</span>;

setup(<span class="hljs-string">'auth user'</span>, <span class="hljs-keyword">async</span> ({ homePage, page }) =&gt; {
    <span class="hljs-keyword">await</span> setup.step(<span class="hljs-string">'create logged in user session'</span>, <span class="hljs-keyword">async</span> () =&gt; {
        <span class="hljs-keyword">await</span> homePage.navigateToHomePageGuest();

        <span class="hljs-comment">// Calling a method from NavPage on a HomePage instance!</span>
        <span class="hljs-keyword">await</span> homePage.logIn(process.env.EMAIL!, process.env.PASSWORD!);

        <span class="hljs-keyword">await</span> page.context().storageState({ path: <span class="hljs-string">'.auth/userSession.json'</span> });
    });
});
</code></pre>
<p>It just works! Our <code>HomePage</code> is clean and focused, yet it has all the power of the navigation functionality when needed.</p>
<hr />
<h3 id="heading-a-note-on-potential-downsides">A Note on Potential Downsides</h3>
<p>While powerful, this pattern has trade-offs. The main one is the potential for <strong>method name collisions</strong>. If you mix in two classes that both define a method called <code>clickSubmit()</code>, the last one applied will overwrite the previous ones. Additionally, it can sometimes be less obvious where a method originates compared to a clear <code>extends</code> chain.</p>
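<p>If collisions worry you, the helper from Step 1 can be hardened to fail fast instead of overwriting silently. A possible variant (a sketch of mine, not part of the linked repository):</p>

```typescript
// Defensive variant of applyMixins: throws on method name collisions
// instead of letting the last mixin win silently.
function applyMixinsSafe(derivedCtor: any, constructors: any[]): void {
    constructors.forEach((baseCtor) => {
        Object.getOwnPropertyNames(baseCtor.prototype).forEach((name) => {
            if (name === 'constructor') return;
            // Refuse to overwrite a member that was already mixed in.
            if (Object.getOwnPropertyNames(derivedCtor.prototype).includes(name)) {
                throw new Error(
                    `Mixin collision: "${name}" is already defined on ${derivedCtor.name}`
                );
            }
            Object.defineProperty(
                derivedCtor.prototype,
                name,
                Object.getOwnPropertyDescriptor(baseCtor.prototype, name)!
            );
        });
    });
}

// Two feature packs that accidentally share a method name.
class A { clickSubmit(): string { return 'A'; } }
class B { clickSubmit(): string { return 'B'; } }
class Target {}

try {
    applyMixinsSafe(Target, [A, B]);
} catch (e) {
    console.log((e as Error).message);
    // → Mixin collision: "clickSubmit" is already defined on Target
}
```

<p>Catching the collision at startup turns a subtle, hard-to-debug override into an immediate, descriptive failure.</p>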
<hr />
<h3 id="heading-summary">Summary</h3>
<p>By embracing a <strong>composition over inheritance</strong> mindset with the Mixin pattern, you can build a far more flexible, modular, and maintainable Playwright test automation framework.</p>
<p>Your Page Objects become cleaner, your code becomes more reusable, and you avoid the architectural headache of the monolithic <code>BasePage</code>. This approach better mirrors modern, component-based web development and leads to a more robust and scalable test suite.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759128652507/59622f00-209c-4425-b1c7-eae34f9eef84.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Essential Steps for Creating a Robust Software Quality Foundation - Culture, Roles & Mindset]]></title><description><![CDATA[In our previous article (The Complete Quality Engineering Roadmap), we laid out a five-phase roadmap for building quality into software. We argued that you can't test your way to a great product, simply because quality has to be built from the ground...]]></description><link>https://idavidov.eu/essential-steps-for-creating-a-robust-software-quality-foundation-culture-roles-and-mindset</link><guid isPermaLink="true">https://idavidov.eu/essential-steps-for-creating-a-robust-software-quality-foundation-culture-roles-and-mindset</guid><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA]]></category><category><![CDATA[Testing]]></category><category><![CDATA[automation]]></category><category><![CDATA[engineering]]></category><category><![CDATA[Quality Engineering]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Testing]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Tue, 23 Sep 2025 17:37:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758648553104/64f4fc8d-25fd-4b0c-adb4-24bf7a2a21b3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In our previous article (<a target="_blank" href="https://idavidov.eu/the-complete-quality-engineering-roadmap">The Complete Quality Engineering Roadmap</a>), we laid out a five-phase roadmap for building quality into software. We argued that you can't test your way to a great product, simply because quality has to be built from the ground up. Now, we're going to start that foundation.</p>
<p>This isn't just a philosophical discussion. This is a practical guide to <strong>Phase 1: Culture, Roles, and Mindset</strong>. We'll break down each foundational principle and give you a clear framework for what it is, why it's a non-negotiable part of success, and how you can actually implement it with your team, starting today.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758648903108/91e18b50-4484-415a-a86e-45a60fba9ca0.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-a-shared-definition-of-quality">A Shared Definition of "Quality"</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A shared definition of quality is a clear, explicit agreement across the entire team (product, design, development, and QA) on the specific attributes that make your product "good". It’s a consensus on the target you are all aiming for.</p>
</li>
<li><p><strong>WHY it matters:</strong> Without a shared definition, everyone operates on assumptions. A developer might define quality as "bug-free, performant code". A product manager might see it as "shipping features that meet business goals". When these definitions clash, you get chaos and a product that excels at nothing. A shared definition aligns everyone's efforts toward the same outcome.</p>
</li>
<li><p><strong>HOW to create it:</strong> Run a "Quality Definition" workshop. Get everyone in a room and ask probing questions:</p>
<ol>
<li><p>For our customers, what does quality <em>feel</em> like? Is it speed? Reliability? Ease of use? Security?</p>
</li>
<li><p>Who are our main competitors, and what is their quality bar? Where must we be better?</p>
</li>
<li><p>Write down the results. List the top 3-5 attributes that are a must for your product and post them visibly for the whole team.</p>
</li>
</ol>
</li>
</ul>
<hr />
<h3 id="heading-quality-is-everyones-responsibility">Quality is Everyone's Responsibility</h3>
<ul>
<li><p><strong>WHAT it is:</strong> This is the principle that quality is not the job of a single department ("QA team"). Instead, it is a core competency and a daily responsibility for every single person involved in building the product.</p>
</li>
<li><p><strong>WHY it matters:</strong> When quality is siloed, it becomes a gate at the end of the process. This creates an "us vs. them" mentality, where developers "throw code over the wall" for QAs to "catch the bugs". This is incredibly inefficient, as issues are found far too late. Shared responsibility makes quality an ongoing activity, not a final inspection.</p>
</li>
<li><p><strong>HOW to implement it:</strong></p>
<ul>
<li><p>Integrate quality into rituals. During sprint planning, ask "How will we ensure this story is high-quality?" not just "How long will it take?"</p>
</li>
<li><p>Eliminate the "Ready for QA" column. Treat quality as a continuous conversation where developers and QAs collaborate throughout the development of a feature.</p>
</li>
<li><p>Share quality metrics with everyone. Make bug counts, performance data, and uptime visible to the entire team. What gets measured and seen gets improved.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-psychological-safety">Psychological Safety</h3>
<ul>
<li><p><strong>WHAT it is:</strong> Psychological safety is a shared belief that team members can take interpersonal risks without fear of being punished or humiliated. It's the confidence to speak up, ask questions, admit mistakes, or challenge the status quo.</p>
</li>
<li><p><strong>WHY it matters:</strong> This is the bedrock of a healthy culture. Without it, you are just gambling. Team members will hide mistakes and stay silent about risks. This silence is where bugs, security vulnerabilities, and bad architectural decisions are born. Psychological safety turns silence into valuable information.</p>
</li>
<li><p><strong>HOW to build it:</strong> This is primarily the leader's responsibility.</p>
<ul>
<li><p>Model vulnerability. Acknowledge your own mistakes. "I was wrong about that assumption" is one of the most powerful things a leader can say.</p>
</li>
<li><p>Treat questions as a gift. When someone asks a "dumb" question, thank them. It's likely others had the same question but were afraid to ask.</p>
</li>
<li><p>Focus on the process, not the person. When a mistake happens, ask "How can we prevent this in the future?" not "Why did you do that?"</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-visible-leadership-buy-in">Visible Leadership Buy-In</h3>
<ul>
<li><p><strong>WHAT it is:</strong> This is tangible, consistent proof from leadership that quality is a top-tier business priority, not just a talking point. It’s <strong>action</strong>, not just words.</p>
</li>
<li><p><strong>WHY it matters:</strong> Teams take their cues from leadership. If a leader says quality is important but then consistently pushes the team to cut corners to meet a deadline, the message is clear - speed matters more. Visible buy-in empowers teams to do the right thing.</p>
</li>
<li><p><strong>HOW to spot it (and do it):</strong></p>
<ul>
<li><p>Budgeting: Leaders allocate time and money to address technical debt. They approve the purchase of better tools and training.</p>
</li>
<li><p>Prioritization: When a critical bug is found, they prioritize fixing it over starting the next new feature.</p>
</li>
<li><p>Reinforcement: They publicly praise teams for a smooth, high-quality release, not just for shipping a feature fast. They back the QA's decision when a serious risk is flagged.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-the-blameless-retrospective">The Blameless Retrospective</h3>
<ul>
<li><p><strong>WHAT it is:</strong> A structured meeting following an incident (like a production outage) where the entire focus is on understanding the systemic causes and improving the process. Blame is never the objective.</p>
</li>
<li><p><strong>WHY it matters:</strong> A culture of blame encourages hiding the truth. If people are afraid of punishment, they will never be fully transparent about what happened. Blamelessness creates an environment where the team can dissect a failure honestly to find the root cause and ensure it never happens again.</p>
</li>
<li><p><strong>HOW to run it:</strong></p>
<ul>
<li><p>Have a neutral facilitator. This person's job is to keep the conversation focused on the process.</p>
</li>
<li><p>Ask "how," not "who." The guiding question is always, "How did our process allow this to happen?"</p>
</li>
<li><p>Focus on actionable system improvements. The output should be things like "Add a new CI check" or "Improve our deployment runbook," not "Dave needs to be more careful".</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-redefining-the-qa-role-from-gatekeeper-to-coach">Redefining the QA Role: From Gatekeeper to Coach</h3>
<ul>
<li><p><strong>WHAT it is:</strong> The evolution of a Quality Assurance professional from a "bug finder" at the end of the line to a "quality coach" and advocate embedded within the team from the very beginning.</p>
</li>
<li><p><strong>WHY it matters:</strong> Finding bugs late is expensive. The gatekeeper model creates bottlenecks and an adversarial relationship. A coaching model prevents entire classes of bugs from ever being written by clarifying requirements and identifying risks upfront. It is proactive, not reactive.</p>
</li>
<li><p><strong>HOW to make the shift:</strong></p>
<ul>
<li><p>Get QAs in the room, early. QAs must be part of initial planning and "Three Amigos" sessions to ask probing questions before a line of code is written.</p>
</li>
<li><p>Focus on enablement. The QA's job is to enable developers to test better by championing best practices, maintaining test infrastructure, and pairing with them.</p>
</li>
<li><p>Change the metrics. Success isn't "bugs found." It's "preventing bugs" and "increasing team confidence in the release".</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-defining-the-developers-role-true-ownership">Defining the Developer's Role: True Ownership</h3>
<ul>
<li><p><strong>WHAT it is:</strong> This is the principle that developers are the primary owners of the quality of their own code. The QA role is a partnership, not a safety net for sloppy work.</p>
</li>
<li><p><strong>WHY it matters:</strong> The person writing the code has the most context to build quality in from the start. Relying on an external check creates a diffusion of responsibility. When developers own quality, they write more resilient, maintainable, and well-tested code.</p>
</li>
<li><p><strong>HOW to foster it:</strong></p>
<ul>
<li><p>Go beyond unit tests. Ownership includes writing good logs, considering edge cases, and ensuring code is clean and maintainable.</p>
</li>
<li><p>Implement Pair Testing. Have a developer and QA test a feature together. This builds empathy and creates a powerful, immediate feedback loop.</p>
</li>
<li><p>Make quality part of code reviews. Reviews should check for more than just style. They should ask "Is this functionality robust? Is it well-tested?"</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-understanding-the-domain-in-depth">Understanding the Domain In-Depth</h3>
<ul>
<li><p><strong>WHAT it is:</strong> This means the QA is not just an expert in testing techniques, but also a deep expert in the business domain, the product's features, and the customer's problems. They are the "glue" between business and technology.</p>
</li>
<li><p><strong>WHY it matters:</strong> You can't test what you don't understand. Deep domain knowledge allows a QA to identify risks that developers or product owners might miss. They can design tests that simulate real-world usage, not just technical correctness.</p>
</li>
<li><p><strong>HOW to develop it:</strong></p>
<ul>
<li><p>Be relentlessly curious. Research the biggest user pain points for similar products.</p>
</li>
<li><p>Ask "why" five times. Understand the root business need behind every feature.</p>
</li>
<li><p>Become the user. Use your product regularly, trying to accomplish the same goals your customers do.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3 id="heading-establish-equal-authority">Establish Equal Authority</h3>
<ul>
<li><p><strong>WHAT it is:</strong> This means that the QA's voice and professional judgment on matters of quality risk carry the same weight as a developer's voice on technical implementation. It is a partnership of equals.</p>
</li>
<li><p><strong>WHY it matters:</strong> If there's a power imbalance, quality concerns will always be overruled by delivery pressure. The QA becomes the "little brother" whose warnings are ignored. This leads to a culture where known risks are accepted and shipped. Equal authority ensures a balanced decision-making process.</p>
</li>
<li><p><strong>HOW to achieve it:</strong></p>
<ul>
<li><p>Structurally: QA should report through the same management chain as development, not as a subordinate function.</p>
</li>
<li><p>In practice: In meetings, leaders must explicitly give equal speaking time and weight to QA perspectives. A developer shouldn't be able to merge code if the QA has raised a valid, blocking concern that hasn't been addressed.</p>
</li>
</ul>
</li>
</ul>
<hr />
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758648600580/b9a3eafd-b635-42e2-87ae-b1719edba40a.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-conclusion-the-groundwork-is-laid">Conclusion: The Groundwork is Laid</h3>
<p>Quality Engineering is a mindset, not a role. By establishing this cultural foundation, you stop treating quality as an accident and start building it by design.</p>
<p>With this solid cultural foundation in place, you are no longer guessing about quality. You are ready to design it. The next step is to implement “Proactive Strategies for Pre-Development Success - Requirements, Stories &amp; Planning” - practices that prevent bugs before a single line of code is ever written.</p>
]]></content:encoded></item><item><title><![CDATA[The Complete Quality Engineering Roadmap]]></title><description><![CDATA[Let’s be honest. The biggest problem in our industry isn’t a lack of tools or talent. It’s a fundamental misunderstanding of what quality actually is, how you achieve it, and what it really costs when you ignore it.
Teams talk about "shifting left" a...]]></description><link>https://idavidov.eu/the-complete-quality-engineering-roadmap</link><guid isPermaLink="true">https://idavidov.eu/the-complete-quality-engineering-roadmap</guid><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[Testing]]></category><category><![CDATA[engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[Quality Engineering]]></category><category><![CDATA[Quality Assurance Engineering]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Wed, 03 Sep 2025 08:12:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756879533374/3f09136c-0d25-4a23-9cec-d0ec140993ef.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s be honest. The biggest problem in our industry isn’t a lack of tools or talent. It’s a fundamental misunderstanding of what <strong>quality</strong> actually is, how you achieve it, and what it really costs when you ignore it.</p>
<p>Teams talk about "shifting left" and "quality culture," but these are just empty buzzwords without a plan. You wouldn't build a house by putting up the roof first, right? You start with a solid foundation, then the walls, then the roof, and after all that you continue with the systems inside. Building quality is no different.</p>
<p>This isn't just a roadmap for QAs. It's for every single person involved in building software who is tired of the cycle of shipping fast and breaking things.</p>
<hr />
<h3 id="heading-phase-1-foundation-culture-roles-amp-mindset">Phase 1: Foundation (Culture, Roles &amp; Mindset)</h3>
<p>Everything starts here. You can hire the most expensive professionals and you can buy the most expensive tools, but if your culture is broken, you're just accelerating your ability to produce garbage. This phase is non-negotiable.</p>
<ul>
<li><p><strong>A Shared Definition of "Quality":</strong> Before you write a line of code, the team must agree on what "quality" means for your product. Is it speed? Reliability? A bug-free UI? If everyone has a different definition, you're all aiming at different targets.</p>
</li>
<li><p><strong>Quality is Everyone's Responsibility:</strong> The "Whole Team" approach is the only way. Quality is not a department you send things to. It's a collective standard that everyone upholds.</p>
</li>
<li><p><strong>Psychological Safety:</strong> This is non-negotiable. Team members must feel safe to say, "I'm concerned about this," without fear of blame. Without this, you're blind to the biggest risks.</p>
</li>
<li><p><strong>Visible Leadership Buy-In:</strong> Leaders must actively champion quality. When they prioritize fixing tech debt over rushing a new feature, they show the team what truly matters.</p>
</li>
<li><p><strong>The Blameless Retrospective:</strong> When something goes wrong, the question is never "Who did this?" It's "How did our <em>process</em> allow this to happen?" Every mistake is a full team responsibility, and the goal is to remediate the system, not the person.</p>
</li>
<li><p><strong>Redefining the QA Role:</strong> The shift from gatekeeper to quality coach is critical. <strong>Stop</strong> finding bugs at the end. <strong>Start</strong> asking questions at the beginning. You are a quality advocate embedded in the team from day one.</p>
</li>
<li><p><strong>Defining the Developer's Role:</strong> Developers own the quality of their code. Period. This means writing clean, maintainable code and owning foundational testing like unit and integration tests. The QA role is not a safety net for sloppy work. By inspection you cannot improve quality, you can only characterize it.</p>
</li>
<li><p><strong>Understanding the Domain In-Depth:</strong> QAs are <em>the glue</em> between business and technology. They should actively break down silos on the team.</p>
</li>
<li><p><strong>Establish Equal Authority between Devs and QAs:</strong> They need to be on equal footing, otherwise it is the little brother telling the big brother what he can and cannot do.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756884979972/502abb8b-c9a4-4fd4-9690-5c74856ecee8.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-phase-2-pre-development-requirements-estimation-amp-planning">Phase 2: Pre-Development (Requirements, Estimation &amp; Planning)</h3>
<p>With a solid foundation, you can start drawing the blueprints. This is where you prevent entire classes of bugs before they are ever written.</p>
<ul>
<li><p><strong>Crafting High-Quality, Testable User Stories:</strong> A story that is vague or untestable is a recipe for disaster. Teams must establish clear standards for what makes a story "ready for development."</p>
</li>
<li><p><strong>Defining Concrete Acceptance Criteria (ACs):</strong> Good ACs are explicit and binary—they either pass or fail. This removes ambiguity and ensures everyone is on the same page about what "done" means.</p>
</li>
<li><p><strong>Implementing "Three Amigos" Sessions:</strong> These can't be passive review meetings. Make them active <em>working sessions</em>. The goal is to leave with a shared understanding and clearly defined ACs, not just to pretend we accomplished something.</p>
</li>
<li><p><strong>Identifying Edge Cases &amp; Negative Paths Upfront:</strong> This is a primary task for a QE. Your job is to think like a user who will do everything wrong. What happens with invalid input? What if the network fails? Answering these questions now saves days of rework later.</p>
</li>
<li><p><strong>Defining and Integrating Non-Functional Requirements (NFRs):</strong> Don't let performance, security, or accessibility be an afterthought. These NFRs must be discussed and integrated into user stories from the beginning.</p>
</li>
<li><p><strong>Risk-Based Prioritization of Quality Efforts:</strong> You can't test everything with the same depth. Use risk analysis to focus your efforts on the most critical and fragile parts of the application.</p>
</li>
<li><p><strong>Making Quality a Factor in Story Estimation:</strong> Quality isn't free. The effort for writing tests, pairing, and exploratory testing must be included in story points. It's a bad practice to miss this, as it just hides the true cost of work.</p>
</li>
<li><p><strong>Early Test Data &amp; Environment Planning:</strong> Before a single line of code is written, you should be asking, "How are we going to test this?" This means planning for the necessary test data and environment access upfront.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756884992717/3a13d6c7-ceee-44cc-b5b3-4fd21c29c85b.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-phase-3-in-development-in-sprint-quality">Phase 3: In-Development (In-Sprint Quality)</h3>
<p>This is the construction phase. Quality is built-in, not bolted on. These items are the engine of in-sprint quality.</p>
<ul>
<li><p><strong>A Fast and Reliable Continuous Integration (CI) Pipeline:</strong> Every commit should trigger an automated build and a fast set of tests. This provides immediate feedback and prevents integration hell.</p>
</li>
<li><p><strong>Enforced Code Quality Standards &amp; Static Analysis:</strong> Use automated tools like linters and static analysis to catch bugs and style issues before they ever reach a human reviewer. This is your first line of automated defense.</p>
</li>
<li><p><strong>Mandatory, Effective Peer Reviews (Pull Requests):</strong> Code reviews are not just for finding bugs. They are for sharing knowledge, ensuring maintainability, and upholding team standards. Make them mandatory and constructive.</p>
</li>
<li><p><strong>In-Sprint Dev-QA Collaboration &amp; Pairing:</strong> This is a game-changer. <strong>Pair-testing</strong>—a dev and QA working together on a feature—creates an immediate feedback loop and demolishes the "us vs. them" wall. It is extremely helpful.</p>
</li>
<li><p><strong>Early and Continuous Exploratory Testing:</strong> As soon as a piece of a feature is usable, a QA should be exploring it. This is creative, human-driven testing that finds the bugs automation will always miss.</p>
</li>
<li><p><strong>Definition of "Ready for QA/Review":</strong> This should be part of your Definition of Done. It's a simple checklist that prevents friction and ensures smooth handoffs.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756885005253/10d3695c-f0fc-45c4-8dae-b92ce1c45173.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-phase-4-formal-testing-amp-automation">Phase 4: Formal Testing &amp; Automation</h3>
<p>Notice how late this phase is? Building automation on a broken process is a waste of money. Only with the other phases in place can you build a strategy that provides real value.</p>
<ul>
<li><p><strong>Implementing the End-to-End (E2E) Testing Strategy:</strong> Don't try to automate everything. Focus E2E tests on the most critical user journeys that provide the highest value.</p>
</li>
<li><p><strong>Building a Scalable &amp; Maintainable Automation Framework:</strong> The two silent killers of any automation effort are <strong>bad architecture</strong> and poor <strong>test data management</strong>. I use <strong>Playwright with TS</strong> and the Page Object Model, but the tool is less important than the principles of clean, maintainable design.</p>
</li>
<li><p><strong>A Stable &amp; Consistent Test Environment Strategy:</strong> A flaky test environment makes your test results meaningless. The environment must be reliable and as production-like as possible.</p>
</li>
<li><p><strong>Robust Test Data Management:</strong> If you spend more time managing test data than writing tests, your strategy has failed. You need a clear, repeatable process for getting the data you need.</p>
</li>
<li><p><strong>Integrating Automation into the CI/CD Pipeline:</strong> Strategically run your tests. Run quick smoke tests on every PR, a larger regression suite on every merge to main, and full E2E runs before a release.</p>
</li>
<li><p><strong>Defining a Performance Testing Baseline:</strong> You don't need a massive performance team to start. Run baseline load tests on critical user flows to ensure you don't introduce major regressions.</p>
</li>
<li><p><strong>Implementing a Basic Security Testing Checklist:</strong> Integrate automated security scans (SAST/DAST) into your pipeline to catch common vulnerabilities early.</p>
</li>
<li><p><strong>Cross-Browser &amp; Cross-Device Testing Strategy:</strong> Define what browsers and devices you officially support and have a clear strategy for ensuring a consistent experience across them.</p>
</li>
</ul>
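<p>To make the "bad architecture" point tangible, here is a minimal sketch of the Page Object Model idea. The <code>LoginPage</code>, its selectors and the <code>PageLike</code> interface are all hypothetical; a real framework would import <code>Page</code> from <code>@playwright/test</code> instead of the structural stand-in used here to keep the sketch self-contained:</p>

```typescript
// Structural stand-in for Playwright's Page so the sketch stays self-contained;
// a real framework would use: import { Page } from '@playwright/test';
interface PageLike {
    goto(url: string): Promise<void>;
    fill(selector: string, value: string): Promise<void>;
    click(selector: string): Promise<void>;
}

// Hypothetical page object: selectors live in one place and tests
// call intent-revealing methods instead of raw locators.
class LoginPage {
    private readonly emailInput = '#email';
    private readonly passwordInput = '#password';
    private readonly submitButton = 'button[type="submit"]';

    constructor(private readonly page: PageLike) {}

    async login(email: string, password: string): Promise<void> {
        await this.page.goto('/login');
        await this.page.fill(this.emailInput, email);
        await this.page.fill(this.passwordInput, password);
        await this.page.click(this.submitButton);
    }
}
```

<p>The payoff: when the login screen changes, you update one class instead of every test that logs in.</p>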
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756885019689/32002a4a-98eb-4a99-bd5b-fda50ccc7406.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-phase-5-release-amp-post-release">Phase 5: Release &amp; Post-Release</h3>
<p>Your job isn't done when the feature ships. The ultimate measure of quality is how your product behaves in the hands of real users.</p>
<ul>
<li><p><strong>A Staged Rollout Strategy:</strong> Minimize risk with canary releases or blue-green deployments. Roll out new features to a small percentage of users first to ensure stability before a full release.</p>
</li>
<li><p><strong>Comprehensive Production Monitoring &amp; Alerting:</strong> This is your product's insurance. It should tell you there's a problem long before a customer does.</p>
</li>
<li><p><strong>Effective Log Management &amp; Analysis:</strong> When an issue occurs, structured, searchable logs are your best tool for rapid root cause analysis.</p>
</li>
<li><p><strong>User-Facing Feedback Channels:</strong> Make it easy for users to tell you when something is wrong. An in-app feedback form or a dedicated channel is a direct line to the user experience.</p>
</li>
<li><p><strong>A Culture of Continuous Improvement:</strong> Use all the data from production—monitoring alerts, user feedback, performance metrics—to feed back into the development process.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756885032202/1af5854f-f50d-4449-bc1c-97a3b18cee08.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-conclusion">Conclusion</h3>
<p>Quality Engineering is a mindset, not a role. For a proper implementation in the software development lifecycle, we, as QEs, must follow best practices and improve the process step by step as we spread its values and benefits.</p>
<p>And if you're at a startup thinking, "This is too slow and heavy for us," let me ask you a simple question.</p>
<p>You created this startup to win in the market, right? If you put garbage in the market, you will receive the same from it.</p>
]]></content:encoded></item><item><title><![CDATA[How to Separate AI Coding Hype from Reality for Real-World QA and Development]]></title><description><![CDATA[A while ago I posted on LinkedIn that I think we’ve officially passed from the EUPHORIA phase to the ANXIETY phase of AI CODING - link.
Preface
One side hustle of mine is Investing. Initially, I was fascinated by it’s THEORY and PSYCHOLOGY. One of th...]]></description><link>https://idavidov.eu/how-to-separate-ai-coding-hype-from-reality-for-real-world-qa-and-development</link><guid isPermaLink="true">https://idavidov.eu/how-to-separate-ai-coding-hype-from-reality-for-real-world-qa-and-development</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[QA]]></category><category><![CDATA[qa testing]]></category><category><![CDATA[Testing]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[llm]]></category><category><![CDATA[LLM's ]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[chatbot]]></category><category><![CDATA[development]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Fri, 29 Aug 2025 13:58:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1756475834389/b2ba9cdd-320b-4666-802c-a18224c56f6a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A while ago I posted on LinkedIn that I think we’ve officially passed from the <code>EUPHORIA</code> phase to the <code>ANXIETY</code> phase of <code>AI CODING</code> - <a target="_blank" href="https://www.linkedin.com/posts/ivdavidov_ai-coding-side-activity-7357719391209193472-i9eE?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAABQfLKEB_rCWodo7aw90L62hFYcBkga7-Dg">link</a>.</p>
<h3 id="heading-preface">Preface</h3>
<p>One side hustle of mine is Investing. Initially, I was fascinated by its <code>THEORY</code> and <code>PSYCHOLOGY</code>. One of the most famous quotes in the field is: <strong>"The trend is your friend."</strong></p>
<p>I absolutely believe that <code>DELAYED GRATIFICATION</code> is the path everyone should embrace. But… we are living in a world where a latency of 3s can drive us crazy. We want everything here and now. We don’t want to sweat, we don’t want to work hard. Instead, we just want vacations, dopamine and a cocktail - just for style. Familiar?</p>
<p>In every field, there are very few people who actually succeed. No hard statistics, but probably 1%. Why? Because they do what the other 99% refuse to do.</p>
<h3 id="heading-ai-coding-ltgt-investing">AI Coding &lt;=&gt; Investing</h3>
<p>Let’s talk a bit about the elephant in the room. To be honest, we are all here for that, right?</p>
<p>I just want to add something about the <code>TREND</code> from the <code>INVESTING</code> standpoint, because I am trying to draw a <em>“bad”</em> analogy to <code>AI CODING</code>. Here's what a <code>TREND</code> is - a direction in which something is developing or changing.</p>
<p><em>Trend Stages as Emotion</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756448322478/9d9af948-f98a-4972-8be7-4486b2442308.png" alt class="image--center mx-auto" /></p>
<p>For quite some time, everyone has been milking the cow named <code>AI CODING</code>. There is nothing wrong with it, but as always, it is a trend, and it moves in waves.</p>
<p>On the surface, it is fantastic, but let’s not forget about the <strong>technical debt</strong> we bring to the table by using it.</p>
<p>A few weeks ago, I stated that we had officially passed the <code>EUPHORIA</code> stage and entered the <code>ANXIETY</code> one. I have been using LLMs to learn, brainstorm and develop automation frameworks for more than 2 years, and as a QA I am always skeptical. I try to verify everything that is presented to me. Sometimes the truth is well covered, and sometimes it is outright denied. That’s why I do believe that <code>AI CODING</code> has a dark side attached to it. I already mentioned the technical debt, but the bigger issues are that it erodes problem-solving skills, critical thinking and real collaboration between engineers. The worst part is that we start thinking we DO NOT need juniors. Do we really believe that the current Mid and Senior Developers are enough to lead the Earth to greatness? Seriously?</p>
<h3 id="heading-llms-in-qa-field">LLMs in QA Field</h3>
<p>I have run many experiments with Playwright, its MCP server, different LLMs and different IDEs, and I can state that all these tools are helpful. They limit the manual work I have to do, and they boost my productivity (coding, test case preparation and writing documentation) 3-4 times. But… and there is a BIG BUT, this doesn’t actually speed up releasing. It doesn’t speed up releasing because, just as before these tools, I spend most of my time discussing requirements, technical implementation, bugs, features, etc.</p>
<p>Currently, I am using Cursor with claude-sonnet-4/chatGPT 5/gemini 2.5 pro as the LLMs, but I limit their use to the following:</p>
<ol>
<li><p>Tab autocompletion - this workflow gives me the opportunity to keep everything under control while still improving my speed drastically.</p>
</li>
<li><p>Developing Factory and Helper functions - LLMs are great at short, specific, single-function tasks. If you can describe your problem well enough, someone in the world has already solved your exact problem. The LLM’s job is just to find the right piece of code and provide it to you. Just like a few years back, but without manually grinding <strong>Google</strong> and <strong>Stack Overflow.</strong></p>
</li>
<li><p>Brainstorming Ideas - no doubt that LLMs are great partners in that.</p>
</li>
<li><p>Creating documentation about testing and coding - again no doubt that LLMs are great partners in that.</p>
</li>
</ol>
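<p>To make point 2 concrete, here is the kind of short, single-purpose helper I mean - a hypothetical test-data factory (the <code>TestUser</code> shape and its defaults are purely illustrative):</p>

```typescript
// Hypothetical test-data factory: a short, specific, single-purpose helper -
// exactly the kind of task an LLM handles well.
interface TestUser {
    email: string;
    password: string;
    role: 'admin' | 'member';
}

// Returns a valid default user; pass overrides only for the fields
// a particular test cares about.
function buildTestUser(overrides: Partial<TestUser> = {}): TestUser {
    const unique = Date.now().toString(36); // keeps emails unique between runs
    return {
        email: `user-${unique}@example.com`,
        password: 'Str0ng!Passw0rd',
        role: 'member',
        ...overrides,
    };
}
```

<p>A test that needs an admin just calls <code>buildTestUser({ role: 'admin' })</code> without repeating the fields it doesn't care about.</p>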
<p>But what about all the hype about MCP servers giving context to the LLMs, Agentic tools for creating tests alone, and tools, that create test cases from natural language? All these things work only on POC/Demo sites. <code>MARKETING SUGAR COATING</code>.</p>
<p>None of these works in an actual Real-World Complex App in development, because the worst thing that can happen to an LLM is a change of context (aka changing the requirements of an already developed/tested feature). Just do it a few times, and watch the “Michelin Italian Chef” named ${your chosen LLM name} prepare a brilliant piece of spaghetti for you.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>My current feeling tells me that we have already passed the <code>ANXIETY</code> phase and are officially in the <code>DENIAL</code> phase of the wave. This absolutely doesn’t mean that LLMs for coding are not helpful. <strong>We are just starting to realize their real value… and their real COST.</strong></p>
<p>Before falling into the trap, ask yourself one simple question. Is this <code>ORGANIC GROWTH</code>, or is it a <code>BUBBLE</code>?</p>
<p><em>Visual representation of a bubble forming and bursting</em></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756450269232/31546b24-8154-4bc8-963b-e8026ffe18c4.png" alt class="image--center mx-auto" /></p>
<p>If you are actually changing the World (whether the whole World or just your World) with the help of an LLM, don’t stop! Just be aware of its PROS and CONS.</p>
]]></content:encoded></item><item><title><![CDATA[My First Live Session - Developing Playwright Framework for REST API Testing]]></title><description><![CDATA[Session Notes
Below, you can find my notes and presentation, I used to prepare for my first Live Session. Hope you can find them helpful, since the lecture was held in Bulgarian.
Greetings
Hey friends! It's awesome to be here at SoftUni. Thanks for j...]]></description><link>https://idavidov.eu/my-first-live-session-developing-playwright-framework-for-rest-api-testing</link><guid isPermaLink="true">https://idavidov.eu/my-first-live-session-developing-playwright-framework-for-rest-api-testing</guid><category><![CDATA[Testing]]></category><category><![CDATA[automation]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[playwright]]></category><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[quality]]></category><category><![CDATA[framework]]></category><category><![CDATA[APIs]]></category><category><![CDATA[api]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Tue, 26 Aug 2025 13:24:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753248555220/9f097694-15cc-4ebe-b377-5b06a02db0f7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-session-notes">Session Notes</h2>
<p>Below you can find the notes and presentation I used to prepare for my first Live Session. I hope you find them helpful, since the lecture itself was held in Bulgarian.</p>
<h2 id="heading-greetings">Greetings</h2>
<p>Hey friends! It's awesome to be here at SoftUni. Thanks for joining. We're going to talk about something I'm really passionate about today - building testing frameworks that people actually <em>want</em> to use.</p>
<p>The audience spans all levels, but my goal here isn't a boring lecture. I want to start a conversation and show you just how simple and elegant testing can be when you have the right tools and the right mindset.</p>
<h2 id="heading-agenda">Agenda</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753248738517/e0a5e601-747e-49c7-8b4b-bcb2ce486ae7.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-two-sides-of-an-application">The Two Sides of an Application</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753248805276/9f4eebc8-6cf1-4262-8a61-846046d44965.jpeg" alt class="image--center mx-auto" /></p>
<p>Alright, let's start with the absolute basics. Every modern application you use, from a social media app to your online banking, has two main parts.</p>
<p>First, you've got the <strong>Front-End</strong>. That's the part you see and interact with. The buttons, the forms, the pretty charts. It’s the user interface.</p>
<p>Then, you've got the <strong>Back-End</strong>. This is the engine. The brain. It's where the real work happens—all the business logic, the calculations, talking to the database.</p>
<h2 id="heading-communication-between-the-two-sides">Communication Between the Two Sides</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753248899546/a640a2f7-dbd9-4cc0-8fa0-63445d1a4c08.jpeg" alt class="image--center mx-auto" /></p>
<p>So how do they talk to each other? They use an <strong>API</strong>—an Application Programming Interface.</p>
<p>Based on user actions, the front-end makes a request and sends it to the back-end via the API; the back-end then does the work and sends a response back to the front-end, again via the API. Lastly, the front-end processes the response and visualizes it for the user.</p>
<h2 id="heading-where-and-what-do-the-bugs-hide">Where and What Do The Bugs Hide?</h2>
<h3 id="heading-front-end-vs-back-end-bug-distribution">Front-End vs Back-End Bug Distribution</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753249045505/39c25adf-b6f3-4291-a239-7a77a204b855.jpeg" alt class="image--center mx-auto" /></p>
<p>The data can vary significantly due to factors such as complexity, team maturity, domain, etc., but we can safely assume that the distribution is somewhere in the range of 70% FE vs 30% BE. And it is logical, because factors such as the User Environment, the Front-End Tech Stack and the Interactive Nature of the FE simply don't exist in the BE.</p>
<h3 id="heading-the-nature-of-bugs-a-volume-vs-severity-profile">The Nature of Bugs: A Volume vs. Severity Profile</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753249613432/71186cdd-e793-4694-864e-694788d36431.jpeg" alt class="image--center mx-auto" /></p>
<p>Despite the fact that most bugs are in the FE, the most severe ones can be found in the BE. And it makes sense! That’s where the complexity is. That's where data gets validated, where security is handled, where the core business rules are executed. A bug in the UI is an inconvenience. A bug in the API can be a catastrophe—data corruption, security vulnerabilities, you name it.</p>
<p>The above absolutely does NOT mean that UI bugs are unimportant - we can all agree that if users are not satisfied with their experience, they most probably won’t return to our app.</p>
<h3 id="heading-the-escalating-cost-of-bug-remediation-by-development-stage">The Escalating Cost of Bug Remediation by Development Stage</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753250443650/6967931c-5fbe-464f-a3a5-c67dce932684.jpeg" alt class="image--center mx-auto" /></p>
<p>It's no secret that the later a bug is found, the more costly the remediation actions will be.</p>
<h2 id="heading-api-testing">API Testing</h2>
<h3 id="heading-api-theory">API Theory</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753251011395/43a2cdb2-aad1-4f9a-a8dc-4ef5c10f4849.jpeg" alt class="image--center mx-auto" /></p>
<p>REST API communication is stateless client-server communication, which means every single request must contain its whole context.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753251296733/8ab73f3f-acef-42c3-a766-dc15ed4b6338.jpeg" alt class="image--center mx-auto" /></p>
<p>These are the main components of a REST API Request and Response.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753252271362/6aaf0710-010b-4be2-9fbb-ae067d8ebc4f.jpeg" alt class="image--center mx-auto" /></p>
<p>We will begin with the components of the Request. The Endpoint is a combination of the Base URL and the Path of the resource you are trying to interact with.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753252355203/a25c995f-1995-4c91-bd19-561ec34cc6c7.jpeg" alt class="image--center mx-auto" /></p>
<p>These are the most common methods used in REST APIs, responsible for the CRUD (Create, Read, Update, Delete) operations.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753252448127/6d65908d-f0c7-4053-8eb8-c30c2ee4b20e.jpeg" alt class="image--center mx-auto" /></p>
<p>The request headers and body are used to specify what action the user wants to perform - authorization and data.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753252579181/b81c8488-a2c0-45b3-911e-f6b83fa2961d.jpeg" alt class="image--center mx-auto" /></p>
<p>The Status Code of the Response is always present, and it can be used as a clear sign of the outcome. You can see in the image how the statuses are categorized.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753252710918/e540fc1b-bf9a-4e63-ab89-5366ade912be.jpeg" alt class="image--center mx-auto" /></p>
<p>In addition to the Response Status Code, you can find the Response Headers and, optionally, a Response Body. This is the message the Server delivers to the Client.</p>
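<p>The components above can be put together in code. Here is a small TypeScript sketch modelling the request anatomy as plain data (the endpoint, token and payload are made up for illustration):</p>

```typescript
// The anatomy of a REST request, modelled as plain data
// (endpoint, token and payload are made up for illustration).
const baseUrl = 'https://api.example.com';   // Base URL
const path = '/users/42';                    // Path to the resource
const endpoint = `${baseUrl}${path}`;        // Endpoint = Base URL + Path

const updateUserRequest = {
    method: 'PUT' as const,                  // CRUD method: Update
    endpoint,
    headers: {
        Authorization: 'Bearer <token>',     // who is allowed to ask
        'Content-Type': 'application/json',  // how the body is encoded
    },
    body: JSON.stringify({ name: 'Ivan' }),  // the data payload
};

// The Response mirrors this structure: a Status Code (e.g. 200, where
// 2xx means success), Response Headers, and an optional Response Body.
```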
<h3 id="heading-the-importance">The Importance</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753252822475/9d4d7787-1367-40f7-a6b4-ee66fd183797.jpeg" alt class="image--center mx-auto" /></p>
<p>By testing the APIs, we can verify that they work functionally, that they are fully integrated into the system, that they meet the security standards that have been set, and that their performance is acceptable.</p>
<h2 id="heading-the-power-of-a-unified-tool">The Power of a Unified Tool</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753253152415/bb333659-efcb-4c27-a767-ac0fdfd30cfd.jpeg" alt class="image--center mx-auto" /></p>
<p>Okay, so when you hear 'Playwright', you probably think of UI testing. Automating a browser, clicking buttons, filling forms... and you're right, it is absolutely world-class at that.</p>
<p>But its real power, the thing that changes the game, is using it as a single, unified tool for your entire testing suite. UI and API, together.</p>
<p>Why is that such a big deal? Four reasons.</p>
<h3 id="heading-reliability-and-speed">Reliability and Speed</h3>
<p>First, Reliability and Speed. Playwright enhances test reliability by incorporating auto-waiting mechanisms that ensure actions are only performed when elements are stable, which drastically reduces flaky failures common in other frameworks. By leveraging a modern architecture that allows for test parallelization across multiple browser contexts, Playwright achieves remarkable execution speed, significantly cutting down the time for running extensive regression suites.</p>
<h3 id="heading-cicd-simplification">CI/CD Simplification</h3>
<p>Second, CI/CD Simplification. If you use different tools for front-end and back-end testing, your deployment pipeline gets complicated. Two sets of dependencies, two test runners, two reporting formats. With Playwright for everything, it's one process. One command to run all your tests. One beautiful, consolidated report. Your pipeline becomes simpler, faster, and way easier to maintain.</p>
<h3 id="heading-seamless-end-to-end-testing">Seamless End-to-End Testing</h3>
<p>Third, Seamless End-to-End Testing. This is where it gets really cool. Imagine a single test that does this:</p>
<ul>
<li><p>Step 1: Create a new user by sending a POST request directly to your API. It's fast and reliable.</p>
</li>
<li><p>Step 2: In the very next line of code, use the browser to log in as that new user through the UI.</p>
</li>
<li><p>Step 3: Verify their dashboard looks correct.</p>
</li>
</ul>
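<p>As a sketch, those three steps could look like this in a single Playwright test. Keep in mind that the endpoint, payload shape, and locators below are illustrative - adjust them to your AUT:</p>
<pre><code class="lang-typescript">import { test, expect } from '@playwright/test';

test('new user sees a correct dashboard', async ({ page, request }) => {
    // Step 1: create the user straight through the API - fast and reliable.
    const unique = Date.now();
    const email = 'user' + unique + '@example.com';
    const password = 'SuperSecret123!';
    const response = await request.post('/api/users', {
        data: { user: { username: 'user' + unique, email, password } },
    });
    expect(response.ok()).toBeTruthy();

    // Step 2: in the very next lines, log in as that user through the UI.
    await page.goto('/login');
    await page.getByRole('textbox', { name: 'Email' }).fill(email);
    await page.getByRole('textbox', { name: 'Password' }).fill(password);
    await page.getByRole('button', { name: 'Sign in' }).click();

    // Step 3: verify the dashboard looks correct.
    await expect(page.getByText('Your Feed')).toBeVisible();
});
</code></pre>
<p>One file, one tool, one report - the API call does the slow setup work in milliseconds, and the browser only does what the test is actually about.</p>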
<h3 id="heading-developer-experience">Developer Experience</h3>
<p>Last, but not least, this is for those of you who've built testing frameworks before... we're going to solve the biggest challenge of all: Developer Experience, or DX.</p>
<p>It's not enough for a framework to just work. It has to be a joy to use. Today, we're going to build a custom API fixture that makes writing API tests almost effortless. It will be clean, readable, and type-safe. This is the secret to getting your whole team to write more, better tests.</p>
<h2 id="heading-developing-playwright-framework-for-rest-api-testing">Developing Playwright Framework for REST API Testing</h2>
<p>Finally, we can start with the “interesting” part of the session. As prerequisites, you will need:</p>
<ol>
<li><p>The Materials - <a target="_blank" href="https://github.com/idavidov13/SoftUni-PW-API-Framework-Materials">GitHub Link</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/idavidov13/SoftUni-PW-API-Framework-Materials">IDE - Cursor (Windsurf, VS Code)</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/idavidov13/SoftUni-PW-API-Framework-Materials">Application Under Test (AUT)</a> - <a target="_blank" href="https://conduit.bondaracademy.com/">Demo App</a></p>
</li>
<li><p>The Final Repo - <a target="_blank" href="https://github.com/idavidov13/SoftUni-PW-API-Framework-25.08.2025">GitHub Link</a></p>
</li>
</ol>
<p>So, using Playwright to make a simple API call is easy. You can find that in the docs. But that doesn't scale to a real project with dozens of developers and thousands of tests.</p>
<p>To do this right, you need a scalable and maintainable framework. And the whole point of a good framework is to make the right way to write tests the easy way.</p>
<h3 id="heading-developer-experience-dx">Developer Experience (DX)</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753257269794/ffc86a0b-f366-4062-8df3-90ab806cc960.jpeg" alt class="image--center mx-auto" /></p>
<p>Let’s start by improving DX, because we can do it even before we initialize our Playwright framework. You can see on the slide why it is important and how we can improve it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753257391322/7b6a1f3d-abbf-42d5-ab37-c089cd25155f.jpeg" alt class="image--center mx-auto" /></p>
<p>The next task is to build an Abstraction Layer for our API calls.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753257433263/24ceb977-03df-4820-8190-cbf1114717fa.png" alt class="image--center mx-auto" /></p>
<p>On the left-hand side you can see our <code>plaun-function.ts</code> - a function that receives input parameters and returns only what we need, in a unified way.</p>
<p>On the right-hand side you can see our <code>types.ts</code> - a file with all the types needed to make TypeScript happy.</p>
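<p>A minimal sketch of such a plain function is shown below. The transport is injected (and typed loosely) so the sketch stays dependency-free and easy to unit-test; in the real framework it would wrap Playwright's request context, and the names and shapes here are illustrative:</p>
<pre><code class="lang-typescript">// Unified input shape for every API call in the framework.
interface ApiRequestOptions {
    method: 'GET' | 'POST' | 'PUT' | 'DELETE';
    url: string;
    token?: string;
    body?: unknown;
}

// The plain function: takes what it needs, returns only what we need,
// always in the same { status, data } shape.
async function callApi(
    transport: { send: Function }, // stand-in for Playwright's request context
    options: ApiRequestOptions
) {
    const headers: { [name: string]: string } = {
        'Content-Type': 'application/json',
    };
    if (options.token) {
        headers['Authorization'] = 'Token ' + options.token;
    }

    const response = await transport.send({
        method: options.method,
        url: options.url,
        headers: headers,
        body: options.body,
    });

    return { status: response.status, data: response.data };
}

// A fake transport makes the function trivially testable.
const fakeTransport = {
    send: async () => ({ status: 200, data: { username: 'idavidov' } }),
};

callApi(fakeTransport, { method: 'GET', url: '/api/user', token: 'abc' }).then(
    (res) => console.log(res.status, res.data) // logs: 200 { username: 'idavidov' }
);
</code></pre>
<p>Because every call funnels through one function, changing how authorization or error handling works means touching one file, not every test.</p>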
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753257672562/035e95f7-44a5-40f6-bb6f-da6e5a06d3ca.jpeg" alt class="image--center mx-auto" /></p>
<p>The logical next step is to talk about Fixtures in Playwright and how powerful they are.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753257974649/6f7ed687-91a8-425b-960f-0bd0811fe393.jpeg" alt class="image--center mx-auto" /></p>
<p>Implementing Fixtures is quite easy - on the left-hand side you can see how we extend the test base with our custom fixture.</p>
<p>On the right-hand side is an optional step, which I am showing because it is used to merge all the separate Custom Fixtures into one.</p>
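<p>A minimal sketch of such a custom fixture is shown below - the <code>ApiClient</code> class is a hypothetical stand-in for the abstraction layer built earlier, so adapt the names to your framework:</p>
<pre><code class="lang-typescript">import { test as base } from '@playwright/test';

// Stand-in for the abstraction layer - the unified plain functions
// for GET/POST/PUT/DELETE would live in this class.
class ApiClient {
    constructor(private readonly request: unknown) {}
}

// Left-hand side of the slide: extend the base test with a custom fixture.
export const apiTest = base.extend<{ api: ApiClient }>({
    api: async ({ request }, use) => {
        // Runs per test: set up, hand over to the test body, tear down.
        await use(new ApiClient(request));
    },
});

// Right-hand side (optional): merge several custom fixtures into one.
// import { mergeTests } from '@playwright/test';
// export const test = mergeTests(apiTest, uiTest);
</code></pre>
<p>In a spec file you would then write <code>apiTest('creates an article', async ({ api }) =&gt; { ... })</code>, and the fixture handles setup and teardown for you.</p>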
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1753258101229/a9252ed4-9cd0-4c6f-b4ca-0a2a3826447c.jpeg" alt class="image--center mx-auto" /></p>
<p>Last, but not least, we should cover Zod - a TypeScript-first validation library.</p>
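<p>In short, Zod lets you define the expected response shape once and validate real payloads against it at runtime, while TypeScript infers the static type from the same schema. A small sketch (the schema fields below are illustrative - adjust them to your AUT):</p>
<pre><code class="lang-typescript">import { z } from 'zod';

// Hypothetical shape of the AUT's "current user" response.
const userSchema = z.object({
    user: z.object({
        email: z.string().email(),
        username: z.string().min(1),
        token: z.string(),
    }),
});

// safeParse reports mismatches instead of throwing, which makes
// for readable assertions inside a test.
const result = userSchema.safeParse({
    user: { email: 'ivan@example.com', username: 'idavidov', token: 'abc123' },
});

console.log(result.success); // true
</code></pre>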
<h2 id="heading-live-demostration-how-the-framework-is-developed-with-examples">Live Demonstration of How the Framework is Developed, with Examples</h2>
<p>After cloning the initial repo, there is a step-by-step checklist in <code>README.md</code> that will guide you through the process. If you want, you can jump straight into the final code.</p>
<h2 id="heading-closing-notes">Closing Notes</h2>
<p>I really hope these notes are helpful to you. As always, if there are any questions, please let me know on LinkedIn. I will gladly try to answer them.</p>
<p>You can find the recording of the session on <a target="_blank" href="https://www.youtube.com/watch?v=ZcumDI0eQIo&amp;ab_channel=SoftwareUniversity%28SoftUni%29">YouTube</a>. Just keep in mind that it is in Bulgarian.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Object-Oriented Programming in the Context of Automation QA]]></title><description><![CDATA[📱 Are you starting your journey in test automation? You might have written a few scripts and noticed something: they can get messy. Fast. As an application grows, test scripts that once worked perfectly become tangled like spaghetti, hard to read, and...]]></description><link>https://idavidov.eu/understanding-object-oriented-programming-in-the-context-of-automation-qa</link><guid isPermaLink="true">https://idavidov.eu/understanding-object-oriented-programming-in-the-context-of-automation-qa</guid><category><![CDATA[QA]]></category><category><![CDATA[automation]]></category><category><![CDATA[Testing]]></category><category><![CDATA[automation testing ]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[learning]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[Tutorial]]></category><category><![CDATA[oop]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Wed, 06 Aug 2025 05:36:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1754456710316/c0fd7dec-0469-466e-82ae-b7813a74e14a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>📱 Are you starting your journey in test automation? You might have written a few scripts and noticed something: they can get messy. Fast. As an application grows, test scripts that once worked perfectly become tangled like spaghetti, hard to read, and a nightmare to update. Believe me, we have all been there at some point. There is a rework in the application and suddenly all the tests are failing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754457785529/69a563dc-4648-4fa7-9415-5566f9f8cdb8.jpeg" alt class="image--center mx-auto" /></p>
<p>What if there was a way to build your test suites like engineers build fantastic, scalable applications?</p>
<p>There are a few, and one of them is called <strong>Object-Oriented Programming (OOP)</strong>. This article is your guide to understanding OOP principles and applying them in TypeScript to write clean, powerful, and maintainable automated tests.</p>
<hr />
<h3 id="heading-what-is-object-oriented-programming">🤔 What is Object-Oriented Programming?</h3>
<p>At its core, OOP is a way of thinking about and organizing code. Instead of writing long, procedural scripts, you structure your code around <strong>objects</strong>.</p>
<p>Think of it like using LEGOs. You have different types of bricks (blueprints, or <a target="_blank" href="https://idavidov.eu/stop-writing-brittle-tests-your-blueprint-for-a-scalable-typescript-pom"><strong>Classes</strong></a> in OOP) that you can use to build specific things (structures, or <a target="_blank" href="https://idavidov.eu/how-to-use-arrays-and-objects-in-typescript-for-powerful-qa-automation-scripts"><strong>Objects</strong></a>). Each object has its own properties (like color) and things it can do (like connect to other bricks).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754457801910/06f68ce8-d25d-4eaa-8f78-11ffbb0c0e70.jpeg" alt class="image--center mx-auto" /></p>
<p>For QA automation, this means we stop thinking in terms of individual lines of code and start thinking in terms of application components, like a <code>LoginPage</code> object or a <code>User</code> object. This shift makes our test automation <strong>scalable</strong>, <strong>reusable</strong>, and much easier to <strong>maintain</strong>.</p>
<hr />
<h3 id="heading-the-four-pillars-of-oop-in-test-automation">🏛️ The Four Pillars of OOP in Test Automation</h3>
<p><strong>OOP</strong> stands on four main pillars. Let's break down each one with a practical example from the world of QA.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754457744179/5c55d3a1-7b1b-4d4c-850d-c28e53dcda23.jpeg" alt class="image--center mx-auto" /></p>
<h4 id="heading-1-encapsulation-bundling-data-and-actions">1. Encapsulation (📦): Bundling Data and Actions</h4>
<p><strong>Encapsulation</strong> means bundling the data (variables) and the methods (functions) that operate on that data into a single unit, or "object." It also means hiding the object's internal state from the outside.</p>
<p><strong>In QA Terms:</strong> Imagine a login page. It has UI elements (username field, password field, login button) and actions you can perform (entering text, clicking the button). Encapsulation means we create a <code>LoginPage</code> class that bundles the locators for those elements and the methods to interact with them.</p>
<p>If a locator changes, you only have to update it in <strong>one place</strong>: the <code>LoginPage</code> class. Your actual test script doesn't need to change at all!</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> LoginPage {
  <span class="hljs-comment">// Data (locators) are kept private</span>
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> usernameInput = <span class="hljs-string">'#username'</span>;
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> passwordInput = <span class="hljs-string">'#password'</span>;
  <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> loginButton = <span class="hljs-string">'#login-btn'</span>;

  <span class="hljs-comment">// Methods (actions) are public</span>
  <span class="hljs-keyword">public</span> enterUsername(username: <span class="hljs-built_in">string</span>) {
    <span class="hljs-comment">// Code to find element and type text</span>
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Typing '<span class="hljs-subst">${username}</span>' into <span class="hljs-subst">${<span class="hljs-built_in">this</span>.usernameInput}</span>`</span>);
  }

  <span class="hljs-keyword">public</span> enterPassword(password: <span class="hljs-built_in">string</span>) {
    <span class="hljs-comment">// Code to find element and type text</span>
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Typing password into <span class="hljs-subst">${<span class="hljs-built_in">this</span>.passwordInput}</span>`</span>);
  }

  <span class="hljs-keyword">public</span> clickLogin() {
    <span class="hljs-comment">// Code to find element and click</span>
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Clicking <span class="hljs-subst">${<span class="hljs-built_in">this</span>.loginButton}</span>`</span>);
  }
}
</code></pre>
<h4 id="heading-2-inheritance-reusing-common-code">2. Inheritance (🧬): Reusing Common Code</h4>
<p><strong>Inheritance</strong> allows you to create a new class (a "child") that inherits properties and methods from an existing class (a "parent"). This promotes code reuse.</p>
<p><strong>In QA Terms:</strong> Most pages in an application share common elements, like a header, a footer, or a navigation menu. Instead of re-writing the code for these elements on every single page object, we can create a <code>BasePage</code> and have other pages inherit from it.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Parent Class</span>
<span class="hljs-keyword">class</span> BasePage {
  <span class="hljs-keyword">public</span> getFooterText(): <span class="hljs-built_in">string</span> {
    <span class="hljs-comment">// Code to find footer element and get its text</span>
    <span class="hljs-keyword">return</span> <span class="hljs-string">"Copyright 2025 Intelligent Quality"</span>;
  }
}

<span class="hljs-comment">// Child Classes</span>
<span class="hljs-keyword">class</span> HomePage <span class="hljs-keyword">extends</span> BasePage {
  <span class="hljs-keyword">public</span> getWelcomeMessage(): <span class="hljs-built_in">string</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-string">"Welcome to our blog!"</span>;
  }
}

<span class="hljs-keyword">class</span> ProfilePage <span class="hljs-keyword">extends</span> BasePage {
  <span class="hljs-keyword">public</span> getUsername(): <span class="hljs-built_in">string</span> {
    <span class="hljs-keyword">return</span> <span class="hljs-string">"Ivan Davidov"</span>;
  }
}

<span class="hljs-comment">// Now, both HomePage and ProfilePage have access to getFooterText()!</span>
<span class="hljs-keyword">const</span> homePage = <span class="hljs-keyword">new</span> HomePage();
<span class="hljs-built_in">console</span>.log(homePage.getFooterText()); <span class="hljs-comment">// Outputs: "Copyright 2025 Intelligent Quality"</span>
<span class="hljs-keyword">const</span> profilePage = <span class="hljs-keyword">new</span> ProfilePage();
<span class="hljs-built_in">console</span>.log(profilePage.getFooterText()); <span class="hljs-comment">// Outputs: "Copyright 2025 Intelligent Quality"</span>
</code></pre>
<h4 id="heading-3-abstraction-hiding-complexity">3. Abstraction (☁️): Hiding Complexity</h4>
<p><strong>Abstraction</strong> means hiding the complex implementation details and showing only the essential features to the user. It simplifies a complex system by modeling classes appropriate to the problem.</p>
<p><strong>In QA Terms:</strong> Your test script should be simple and readable. It should describe <em>what</em> the test is doing, not <em>how</em> it's doing it.</p>
<p>We can create a high-level method like <code>login()</code> inside our <code>LoginPage</code> class. This one method will handle all the smaller steps: finding the username field, typing, finding the password field, typing, and clicking the login button. The test itself just needs to make a single, clear call.</p>
<pre><code class="lang-typescript"><span class="hljs-comment">// Let's add a high-level method to our LoginPage class</span>
<span class="hljs-keyword">class</span> LoginPage {
  <span class="hljs-comment">// ... previous private locators</span>

  <span class="hljs-keyword">public</span> enterUsername(username: <span class="hljs-built_in">string</span>) { <span class="hljs-comment">/* ... */</span> }
  <span class="hljs-keyword">public</span> enterPassword(password: <span class="hljs-built_in">string</span>) { <span class="hljs-comment">/* ... */</span> }
  <span class="hljs-keyword">public</span> clickLogin() { <span class="hljs-comment">/* ... */</span> }

  <span class="hljs-comment">// Abstraction in action!</span>
  <span class="hljs-keyword">public</span> login(username: <span class="hljs-built_in">string</span>, password: <span class="hljs-built_in">string</span>) {
    <span class="hljs-built_in">this</span>.enterUsername(username);
    <span class="hljs-built_in">this</span>.enterPassword(password);
    <span class="hljs-built_in">this</span>.clickLogin();
  }
}

<span class="hljs-comment">// Now, look how clean the test file is:</span>
<span class="hljs-comment">// In login.test.ts</span>
<span class="hljs-keyword">const</span> loginPage = <span class="hljs-keyword">new</span> LoginPage();
loginPage.login(<span class="hljs-string">"admin"</span>, <span class="hljs-string">"password"</span>); <span class="hljs-comment">// One line, super readable!</span>
</code></pre>
<h4 id="heading-4-polymorphism-one-action-many-forms">4. Polymorphism (🎭): One Action, Many Forms</h4>
<p><strong>Polymorphism</strong> allows a single action or method to be performed in different ways, depending on the object it is being performed on. The name literally means "many forms."</p>
<p><strong>In QA Terms:</strong> This is a more advanced concept, but imagine you have different types of users in your system (e.g., <code>GuestUser</code>, <code>PaidUser</code>) and you want to run a test that searches for an item. The search results page might look different for each user type.</p>
<p>With polymorphism, you could have a single <code>search()</code> function that correctly handles the results verification for whatever user type is currently logged in. To achieve this, we’ll define a common "contract" with an <a target="_blank" href="https://idavidov.eu/a-practical-guide-to-typescript-custom-types-for-qa-automation">interface</a>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">interface</span> User {
  verifySearchResults(): <span class="hljs-built_in">void</span>;
}

<span class="hljs-keyword">class</span> GuestUser <span class="hljs-keyword">implements</span> User {
  <span class="hljs-keyword">public</span> verifySearchResults() {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Verifying search results with 'Sign Up' banner..."</span>);
  }
}

<span class="hljs-keyword">class</span> PaidUser <span class="hljs-keyword">implements</span> User {
  <span class="hljs-keyword">public</span> verifySearchResults() {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Verifying search results with NO banner..."</span>);
  }
}

<span class="hljs-comment">// Your test can work with any user that fits the User interface</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">runSearchTest</span>(<span class="hljs-params">user: User</span>) </span>{
  <span class="hljs-comment">// ... code to perform search ...</span>
  user.verifySearchResults(); <span class="hljs-comment">// This will call the correct method</span>
}
</code></pre>
<hr />
<h3 id="heading-putting-it-all-together-the-page-object-model-pom">🧩 Putting It All Together: The Page Object Model (POM)</h3>
<p>If you've heard of a design pattern called the <a target="_blank" href="https://idavidov.eu/stop-writing-brittle-tests-your-blueprint-for-a-scalable-typescript-pom"><strong>Page Object Model (POM)</strong></a>, you've seen a perfect application of OOP principles.</p>
<ul>
<li><p><strong>Encapsulation:</strong> POM uses classes to represent pages, bundling locators and user interactions.</p>
</li>
<li><p><strong>Abstraction:</strong> Tests become clean, calling high-level methods like <code>loginPage.login()</code> instead of low-level browser commands.</p>
</li>
<li><p><strong>Inheritance:</strong> A <code>BasePage</code> is often used to handle common headers, footers, and other shared functionalities.</p>
</li>
<li><p><strong>Polymorphism:</strong> A single test function, like <code>verifySearchResults()</code>, can be written to work with different user types (e.g., <code>GuestUser</code>, <code>PaidUser</code>), where each type handles the verification in its own specific way.</p>
</li>
</ul>
<p>POM is the industry-standard result of applying OOP to test automation, leading to a robust and maintainable framework.</p>
<hr />
<h3 id="heading-pros-and-cons-of-oop-in-automation">✅❌ Pros and Cons of OOP in Automation</h3>
<p>OOP is powerful, but it's important to have a balanced view.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1754457823071/c6b87651-c913-46e1-87dc-9d7bdb2487b1.jpeg" alt class="image--center mx-auto" /></p>
<h4 id="heading-pros"><strong>Pros</strong></h4>
<ul>
<li><p><strong>✨ Maintainability:</strong> Code is far easier to update. Change a button's ID? You only edit one line in its page class.</p>
</li>
<li><p><strong>♻️ Reusability:</strong> Share code across tests with inheritance and utility classes, saving time and reducing duplication.</p>
</li>
<li><p><strong>📈 Scalability:</strong> Your test suite can grow to hundreds or thousands of tests without becoming a tangled mess.</p>
</li>
<li><p><strong>👓 Readability:</strong> Test cases read like a list of user actions, making them easy for anyone (even non-technical stakeholders) to understand.</p>
</li>
</ul>
<h4 id="heading-cons"><strong>Cons</strong></h4>
<ul>
<li><p><strong>⏳ Initial Overhead:</strong> Setting up a proper OOP framework takes more time upfront than writing a simple script.</p>
</li>
<li><p><strong>🧠 Learning Curve:</strong> It requires a solid understanding of these concepts, which can be a hurdle for those new to programming.</p>
</li>
<li><p><strong>🚨 Risk of Over-Engineering:</strong> For a tiny project, a full-blown OOP framework can be overkill. It's also possible to create anti-patterns like <strong>"God Objects"</strong>—a single, massive page object that tries to do everything, defeating the purpose of clean separation.</p>
</li>
</ul>
<hr />
<h3 id="heading-your-path-forward">🚀 Your Path Forward</h3>
<p>For manual testers and junior QAs, learning Object-Oriented Programming is not just an academic exercise. Instead, it is a fundamental step toward becoming a highly effective automation engineer. It's the difference between writing disposable scripts and building a professional-grade, long-lasting test automation framework.</p>
<blockquote>
<p><strong>🙏🏻 Thank you for reading!</strong> Building robust, scalable automation frameworks is a journey best taken together. If you found this article helpful, consider joining a growing community of QA professionals 🚀 who are passionate about mastering modern testing.</p>
<p><a target="_blank" href="https://idavidov.eu/newsletter">Join the community and get the latest articles and tips by signing up for the newsletter.</a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[🤗Collaboration & Empathy]]></title><description><![CDATA[📱We've explored a workflow of essential skills, from asking questions to adapting to change. But there's a final, crucial ingredient that binds them all together: the ability to work with and understand other people.
💡Significant achievements are r...]]></description><link>https://idavidov.eu/collaboration-and-empathy</link><guid isPermaLink="true">https://idavidov.eu/collaboration-and-empathy</guid><category><![CDATA[teamwork]]></category><category><![CDATA[learning]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[skills]]></category><category><![CDATA[softskills]]></category><category><![CDATA[Collaboration]]></category><category><![CDATA[empathy]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Sun, 03 Aug 2025 05:56:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749816714887/d490ec99-d301-4be7-97ae-774f4a937e82.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>📱We've explored a workflow of essential skills, from asking questions to adapting to change. But there's a final, crucial ingredient that binds them all together: the ability to work with and understand other people.</p>
<p>💡Significant achievements are rarely solo efforts. <strong>The ultimate skill for bringing great work to fruition is Collaboration, fueled by its essential prerequisite: Empathy.</strong></p>
<hr />
<h2 id="heading-why-are-collaboration-amp-empathy-so-important">Why Are Collaboration &amp; Empathy So Important❓</h2>
<h3 id="heading-grounding-ai-in-human-needs">🤖 Grounding AI in Human Needs:</h3>
<p>AI can optimize a system, but it can't understand a user's frustration or a stakeholder's dream. Empathy—the ability to understand and share the feelings of others—is what ensures the solutions we build with technology are human-centric and truly valuable.</p>
<h3 id="heading-building-high-performing-teams">🧠 Building High-Performing Teams:</h3>
<p>Collaboration is more than just working in a group—it's about creating synergy where the collective output is greater than the sum of its parts. This is only possible in an environment of trust, respect, and mutual understanding, all of which are built on empathy.</p>
<h3 id="heading-creating-better-products-and-solutions">🛠️ Creating Better Products and Solutions:</h3>
<p>When you can empathize with your end-users, you build products they love. When you can empathize with your colleagues (developers, designers, marketers), you create a smoother workflow and solve problems more effectively.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749818065018/3c221d78-07f3-4c9f-a8cd-9599cf93f2e1.jpeg" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-why-collaboration-amp-empathy-are-core-skills-for-qa-engineers">🧪 Why Collaboration &amp; Empathy are Core Skills for QA Engineers</h2>
<p>The stereotype of the lone, adversarial tester is obsolete. Modern QA is a deeply collaborative and empathetic role.</p>
<ul>
<li><p><strong>🕵️‍♂️ Building a Bridge with Developers:</strong> An empathetic QA engineer doesn't just "throw a bug report over the wall." They collaborate with developers, providing clear information and working together to find the root cause, framing it as a shared goal for quality.</p>
</li>
<li><p><strong>🧩 Championing the User's Perspective:</strong> Empathy is a QA engineer's most powerful tool. They must constantly put themselves in the user's shoes, asking, <strong>"How would a user feel if this happened? Is this experience confusing, frustrating, or delightful?"</strong></p>
</li>
<li><p><strong>🧭 Negotiating with Product Owners:</strong> When a bug is found, a QA engineer collaborates with the product owner to assess its impact. They use empathy to understand business pressures while clearly advocating for the user's experience.</p>
</li>
<li><p><strong>🤖 Fostering a Culture of Quality:</strong> Great QA professionals understand that quality is everyone's responsibility. They collaborate across the entire team to build quality into the process from the start, rather than just inspecting for it at the end.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749817578654/dc980fac-43e1-400f-8586-8659a6c05d80.jpeg" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-how-is-this-skill-applied-in-other-it-jobs">🧑‍💻 How is this skill applied in other IT jobs?</h2>
<ul>
<li><p><strong>Backend Developers:</strong> They must collaborate effectively with frontend developers to design and agree upon robust API contracts. Empathy for the frontend team's needs ensures they build APIs that are easy to consume and efficient.</p>
</li>
<li><p><strong>UX Designers:</strong> Their entire role is founded on empathy for the user. They conduct research to deeply understand user needs and pain points. They then collaborate closely with developers to ensure the final product is not only beautiful but also technically feasible and intuitive.</p>
</li>
<li><p><strong>Product Managers:</strong> They are collaboration hubs. They must empathize with the perspectives of customers, engineers, designers, and business stakeholders to prioritize features and guide the product in a way that balances all competing needs.</p>
</li>
<li><p><strong>DevOps Engineers:</strong> They build bridges between development and operations. They must empathize with developers' need for speed and autonomy, while also collaborating with them to implement the security and stability controls that operations requires.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749817900516/9cf5fb3c-d692-45de-92d3-7766dd4501ab.jpeg" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-how-can-schools-teach-this-skill">🏫 How Can Schools Teach This Skill❓</h2>
<ul>
<li><p><strong>💡 Emphasize True Group Projects:</strong> Design projects where success depends on genuine collaboration, with shared goals and individual accountability. This teaches compromise, communication, and shared responsibility.</p>
</li>
<li><p><strong>🗣️ Teach Perspective-Taking:</strong> In history and literature, ask students to write from the perspective of different characters or historical figures. In civics, hold debates where students must argue for a viewpoint they disagree with.</p>
</li>
<li><p><strong>🛡️ Encourage Community Service:</strong> Engaging with the wider community exposes students to different life experiences and fosters a deeper sense of empathy for others.</p>
</li>
<li><p><strong>🤝 Practice Peer Feedback:</strong> Create a structured and supportive process for students to give and receive feedback. This teaches them to critique an idea constructively while respecting the person behind it.</p>
</li>
</ul>
<hr />
<h2 id="heading-technology-is-about-people">Technology is About People 🚀</h2>
<p>In the final analysis, we build technology to serve people. In an era increasingly dominated by algorithms and automation, our most enduring and uniquely human skills will be our ability to understand, connect with, and work alongside each other.</p>
<blockquote>
<p><strong>"If you want to go fast, go alone. If you want to go far, go together."</strong> — African Proverb</p>
</blockquote>
<hr />
<blockquote>
<p><strong>🙏🏻 Thank you for reading!</strong> Building robust, scalable automation frameworks is a journey best taken together. If you found this article helpful, consider joining a growing community of QA professionals 🚀 who are passionate about mastering modern testing.</p>
<p><a target="_blank" href="https://idavidov.eu/newsletter">Join the community and get the latest articles and tips by signing up for the newsletter.</a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[The Engineer's Pivot: Trading a €100M Portfolio for a Passion in Tech]]></title><description><![CDATA[Ever feel like you’re on a set path, but your heart is pulling you in a different direction? My career has been a wild journey - from designing massive engineering projects to architecting software testing from scratch.
My story isn't just about chan...]]></description><link>https://idavidov.eu/the-engineers-pivot-trading-a-100m-portfolio-for-a-passion-in-tech</link><guid isPermaLink="true">https://idavidov.eu/the-engineers-pivot-trading-a-100m-portfolio-for-a-passion-in-tech</guid><category><![CDATA[technology]]></category><category><![CDATA[learning]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[tech ]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Career]]></category><category><![CDATA[career advice]]></category><dc:creator><![CDATA[Ivan Davidov]]></dc:creator><pubDate>Mon, 28 Jul 2025 05:50:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1753681496813/a4c75f92-0a2b-4094-84b1-ef85895c3fa1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever feel like you’re on a set path, but your heart is pulling you in a different direction? My career has been a wild journey - from designing massive engineering projects to architecting software testing from scratch.</p>
<p>My story isn't just about changing jobs. It's about the relentless pursuit of finding work that I can be proud of, and having the freedom to make a real impact.</p>
<p>The core message I want to share is simple: <strong>follow your dreams and build a career that truly fulfills you.</strong></p>
<h3 id="heading-the-foundation-a-civil-engineering-start">🎓 The Foundation: A Civil Engineering Start</h3>
<p>When I left high school, the choice was between Civil Engineering and Computer Science. I chose the tangible world of construction, enrolling at the University of Architecture, Civil Engineering and Geodesy. I dreamed of being part of real-world building projects.</p>
<p>The first few years were a slog of dry theory, and my focus wavered. By the third year, when we got into the practical, real-world stuff, everything clicked. My passion for engineering ignited, leading to a pivotal internship designing a Waste Water Treatment Plant (WWTP).</p>
<p>Things moved fast. Before the internship even ended, I had a full-time offer as a Junior WWTP Engineer. At 23, I was working a demanding job while simultaneously writing my Master's thesis—a complete design for a mock WWTP. Thanks to incredible mentors, I not only graduated at 24 but also won the award for the best thesis project in the WWTP field that year.</p>
<h3 id="heading-climbing-the-ladder-new-roles-new-responsibilities">📈 Climbing the Ladder: New Roles, New Responsibilities</h3>
<p>My early career was a rapid ascent in the engineering world.</p>
<ul>
<li><p>At 25, WWTP Shift Team Lead: A sudden change threw me into leadership. I went from following orders to leading a team, a shocking but formative experience.</p>
</li>
<li><p>At 27, Co-Lead Technology Engineer: My passion for design pulled me back. I joined a team to design and build a new WWTP, quickly getting promoted to Co-Lead.</p>
</li>
<li><p>At 28, Lead Technology Engineer: After the success of that project, I was appointed to lead the design of a new WWTP.</p>
</li>
<li><p>At 29, Project Portfolio Manager: I took on my biggest role yet, managing a portfolio of ecology projects worth over €100M as part of a major European program.</p>
</li>
</ul>
<p>I had achieved a level of success many would envy. But questions started to bubble up: Was I truly making a difference in the way I wanted to, and was this the life I wanted to live?</p>
<h3 id="heading-the-hardest-and-smartest-choice-the-pivot-to-tech">🔄 The Hardest and Smartest Choice: The Pivot to Tech</h3>
<p>At 32, I realized I wasn't happy with my inability to influence the quality and processes around me. The engineering field felt rigid. I saw my friends in the IT industry building, innovating, and improving things with a freedom I craved.</p>
<p>I am the type of person who sits down one evening, reverse-engineers the goal, thinks of possible solutions, makes a choice, and sticks to it. So, I made the leap. The biggest challenge was a deep-seated fear that I couldn't achieve the same level of success I had in engineering. Starting over is daunting.</p>
<p>Since I had enough experience with learning new things and achieving success even when I wasn't 100% ready, I rolled up my sleeves and started from scratch, landing my first tech job as a Manual QA in the highly regulated pharmaceutical software space. It was the perfect place to learn the fundamentals.</p>
<h3 id="heading-how-engineering-made-me-a-better-tech-professional">⚙️ How Engineering Made Me a Better Tech Professional</h3>
<p>My background wasn't a detour; it was a launchpad. The skills I'd honed for a decade were directly transferable and became my secret weapon in tech:</p>
<ul>
<li><p><strong>Systems Thinking:</strong> Designing a WWTP is about understanding a massive, interconnected system. Software architecture requires the exact same mindset.</p>
</li>
<li><p><strong>Complex Problem-Solving:</strong> Untangling complex engineering challenges prepared me for debugging intricate software issues.</p>
</li>
<li><p><strong>Collaboration:</strong> Working simultaneously with several project teams taught me how to help the people around me, how to communicate my ideas, and how to ask the hard questions.</p>
</li>
<li><p><strong>Project Management:</strong> Managing a €100M portfolio taught me how to handle timelines, budgets, and expectations—skills essential for leading any software project.</p>
</li>
<li><p><strong>Dealing with Regulations:</strong> My experience in compliant engineering fields gave me a huge advantage in the regulated world of pharmaceutical software.</p>
</li>
</ul>
<h3 id="heading-finding-the-fire-the-freedom-to-build-and-improve">🔥 Finding the Fire: The Freedom to Build and Improve</h3>
<p>My pivot wasn't just about changing fields—it was about finding work that lit a fire inside me. At 33, an opportunity arose to become a <strong>Software Systems QA Owner.</strong> It was a demanding job, but my passion for automation kept me up at night. I knew what I had to learn so that I would be ready when the time came.</p>
<p>Soon after, at 34, I got the chance to lead the development of a testing framework from the ground up. This was it. While I loved the challenge of both civil and software engineering, tech gave me something crucial: the freedom to influence and elevate quality and process directly. I could finally architect solutions and see the immediate impact of my work.</p>
<h3 id="heading-my-advice-for-your-pivot">💡 My Advice For Your Pivot</h3>
<p>If you're in a non-tech field and dreaming of a change, <strong>ask yourself one tough question</strong>: <em>Are you ready to pay the price before you get the rewards?</em></p>
<p>A career change means starting over. It means late nights studying, facing rejection, and battling the fear that you've made a mistake. But if you are willing to put in the work, the reward—a career you are genuinely proud of—is worth every bit of the struggle.</p>
<h3 id="heading-heres-what-i-understand-and-embrace">🤯 Here's What I Understand and Embrace</h3>
<p>This journey has taught me some core truths that I now consider my guiding principles. They are in no specific order because every single one is as important as the others.</p>
<h4 id="heading-nothing-meaningful-is-achieved-alone">🤝 Nothing Meaningful is Achieved Alone</h4>
<p>I know that I would not be where I am <strong>WITHOUT my Teachers, Mentors, and Teammates</strong>. Success is a team sport, and every achievement is built on a foundation of shared knowledge and support.</p>
<h4 id="heading-there-are-no-shortcuts">💪 There Are No Shortcuts</h4>
<p>If you want a truly desirable outcome, you have to be willing to <strong>work hard</strong> for it. Lasting success doesn't come from cutting corners—it's earned through dedication and persistent effort.</p>
<h4 id="heading-understand-the-price-of-your-decisions">⚖️ Understand the Price of Your Decisions</h4>
<p>Every choice has a cost—a trade-off in time, energy, or opportunity. I believe we must <strong>understand the price we are going to pay</strong> for every decision we make to move forward wisely.</p>
<h4 id="heading-have-the-guts-to-make-hard-decisions">🦁 Have the Guts to Make Hard Decisions</h4>
<p>Real growth requires courage. This means being able to <strong>make the hard decisions</strong> when necessary and, just as importantly, taking full responsibility for them.</p>
<h4 id="heading-it-is-never-too-late">⏳ It Is Never Too Late</h4>
<p>Don't let your age or current path stop you. It is <strong>never too late to do what inspires you</strong>. I pivoted my entire career at 32, and it was the best professional choice I've ever made.</p>
<h4 id="heading-master-the-fundamentals">🏛️ Master the Fundamentals</h4>
<p>The <strong>fundamentals</strong> in every field are not just fancy theory you can skip. They are the absolute bedrock upon which all advanced skills and true understanding are built.</p>
<h4 id="heading-growth-requires-discomfort">🚀 Growth Requires Discomfort</h4>
<p>To grow as a professional, you <strong>must take opportunities you are not 100% ready for</strong>. It’s in that space of challenge and slight uncertainty that the most profound learning happens.</p>
<h4 id="heading-leverage-your-potential-by-teaching">🧑‍🏫 Leverage Your Potential by Teaching</h4>
<p>The best way to solidify your own expertise is to <strong>help and teach others</strong>. Lifting up your colleagues reinforces your knowledge and makes the entire team stronger.</p>
<h4 id="heading-strive-for-a-career-worth-sharing">🌟 Strive for a Career Worth Sharing</h4>
<p>Aim to build more than just a resume. Build a professional story of passion, impact, and growth—a journey that you would be <strong>proud to share</strong> with others as a source of inspiration.</p>
<h4 id="heading-stay-humble-and-curious">🧠 Stay Humble and Curious</h4>
<p>No matter how much you learn or achieve, the journey is never over. I strive to stay <strong>Humble</strong> enough to know there's always more to learn, and <strong>Curious</strong> enough to actively seek that knowledge.</p>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>Today, I am still learning, still experimenting, and still driven by that same passion to build better solutions. My journey has been anything but linear, but every step, from concrete plants to code, has been essential in building a career I can truly call my own.</p>
<blockquote>
<p><strong>🙏🏻 Thank you for reading!</strong> Building robust, scalable automation frameworks is a journey best taken together. If you found this article helpful, consider joining a growing community of QA professionals 🚀 who are passionate about mastering modern testing.</p>
<p><a target="_blank" href="https://idavidov.eu/newsletter">Join the community and get the latest articles and tips by signing up for the newsletter.</a></p>
</blockquote>
]]></content:encoded></item></channel></rss>