<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[InternetKatta | AWS | Programming | Learning | PHP | Angular]]></title><description><![CDATA[Write & Share What We learn | Learning can't measure because it is learning]]></description><link>https://www.internetkatta.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1633631714397/E-G1beNNV.png</url><title>InternetKatta | AWS | Programming | Learning | PHP | Angular</title><link>https://www.internetkatta.com</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 17:51:54 GMT</lastBuildDate><atom:link href="https://www.internetkatta.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[AI Doesn't Need a Replacement. It Needs a Parent.]]></title><description><![CDATA[I've spent 15 years building systems. Shipping products. Debugging things at 2 AM when production is on fire and nobody knows why.
For the last year, I've been deep in AI tools — coding agents, cloud ]]></description><link>https://www.internetkatta.com/ai-doesnt-need-a-replacement-its-needs-a-aprent</link><guid isPermaLink="true">https://www.internetkatta.com/ai-doesnt-need-a-replacement-its-needs-a-aprent</guid><category><![CDATA[AI]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[AI Coding Agent]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Developer Tools]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Sat, 07 Mar 2026 16:46:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/5a97ab2a0430f87244d3d7ba/5a6e9a8b-0a34-4743-b2d6-9c68a5a4bfb6.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I've spent 15 years building systems. Shipping products. Debugging things at 2 AM when production is on fire and nobody knows why.</p>
<p>For the last year, I've been deep in AI tools — coding agents, cloud agents, UX design agents, code review agents, you name it. Not evaluating them from a distance. Using them. Daily. On real projects.</p>
<p>I'm not an AI expert. I'm not building foundation models or writing research papers. But I am someone who has used these tools long enough to see their patterns — where they shine, where they break, and what they actually need to work well.</p>
<p>And here's one take that I don't see enough people talking about:</p>
<p><strong>The smarter AI gets, the more it needs humans. Not less.</strong></p>
<p>Let me explain with an analogy that hit me as an engineer and as a father.</p>
<h2><strong>My Son Is Smarter Than I Was at His Age. He Still Needs Me.</strong></h2>
<p>This generation of kids is incredible. They have access to everything. They learn faster. They figure things out that took us years to understand.</p>
<p>But here's the thing — my son still gets stuck. Not because he's not smart. Because he doesn't have context. He doesn't know what he doesn't know. He walks into ambiguity and freezes. He makes confident decisions that are completely wrong because he's missing one piece of experience he hasn't lived yet.</p>
<p>Sound familiar?</p>
<p>That's exactly how AI agents behave.</p>
<p>They are fast. They are capable. They can write code, analyze data, generate plans, and execute tasks that would take me hours.</p>
<p>I've seen a coding agent refactor an entire module in minutes. I've watched a code review agent catch bugs I missed. I've used cloud agents to spin up infrastructure that would have taken me a full day to configure manually.</p>
<p>But they still need a parent.</p>
<h2><strong>What Does "Parenting AI" Actually Look Like?</strong></h2>
<p>When I say AI needs a parent, I don't mean babysitting. I mean the same things a good parent does:</p>
<p><strong>Observation.</strong> You don't hover over your kid every second. But you watch. You notice patterns. You catch the moment something is going off track before it becomes a disaster. In production systems, we call this monitoring and observability. With AI agents, it's the same instinct — I've had coding agents confidently generate solutions that looked perfect on the surface but would have caused silent data loss in production. I caught it not because I read every line, but because something felt off. You set up the guardrails, you watch the outputs, you notice when something smells wrong.</p>
<p><strong>Intervention at ambiguity.</strong> A smart kid will try to push through uncertainty on their own. Sometimes that works. Sometimes they go deep into a wrong direction and waste hours. A good parent steps in at the right moment — not too early, not too late — and says "have you considered this?" That's the human role with AI agents. The agent will execute confidently. It's your job to know when that confidence is misplaced.</p>
<p><strong>Approval as a feature, not a bottleneck.</strong> In engineering, we have code reviews, deployment gates, approval workflows. Nobody calls those "bottlenecks" — they're checkpoints that prevent catastrophe. When an AI agent pauses and asks for human approval, that's not a failure of autonomy. That's good architecture.</p>
<p><strong>Gut-level judgment.</strong> This is the one nobody wants to talk about. After 15 years of building and breaking systems, I've developed a sense for when something is about to go wrong. I can't always explain it. It's pattern recognition built from thousands of production incidents, late-night debugging sessions, and projects that failed in ways nobody predicted. AI doesn't have that. It has data. It has probabilities. But it doesn't have the scar tissue that tells you "this feels off, let's pause."</p>
<h2><strong>The Real Risk Isn't AI Going Rogue. It's Humans Checking Out.</strong></h2>
<p>Here's what actually worries me.</p>
<p>It's not that AI agents will become too powerful. It's that humans will get lazy. We'll see the agent handling things well for weeks, and we'll stop reviewing. We'll skip the approval step. We'll trust the output without reading it.</p>
<p>It's exactly like the parent who stops checking homework because the kid "always gets it right." And then one day, the kid turns in something completely wrong, and nobody caught it.</p>
<p>The most dangerous failure mode isn't an AI that makes mistakes. It's a human who assumes it won't.</p>
<h2><strong>Experience Is the Moat</strong></h2>
<p>Everyone is talking about AI replacing developers, replacing engineers, replacing knowledge workers.</p>
<p>But here's what I've learned from 15 years in this industry: the hardest part of building software was never writing the code. It was knowing what to build. Knowing when to ship. Knowing when to stop. Knowing when the "technically correct" solution is practically wrong.</p>
<p>That's experience. That's judgment. That's what a parent brings that a child — no matter how brilliant — doesn't have yet.</p>
<p>AI agents are going to get smarter every year. They'll write better code than me. They'll analyze data faster than me. They'll generate solutions I wouldn't have thought of.</p>
<p>And they'll still need someone who has been through enough production fires to know when to say: "Wait. Let's think about this before we proceed."</p>
<p>That someone is you. Don't automate yourself out of that role.</p>
<h2><strong>The Bottom Line</strong></h2>
<p>The future of AI isn't human vs. machine. It's human <em>with</em> machine — where the human is the experienced parent, and the AI is the brilliant kid who still needs guidance.</p>
<p>If you're a builder, an engineer, a product person — your experience isn't becoming obsolete. It's becoming the most critical layer in the stack.</p>
<p>The agents will do the work. You'll make sure it's the right work.</p>
<p>That's not a limitation of AI. That's how good systems have always worked.</p>
<p><em>I'm a full-stack developer and product engineer with 15 years of building, maintaining, observing, and debugging systems. For the past year, I've been using AI agents daily not as an expert, but as a parent.</em></p>
]]></content:encoded></item><item><title><![CDATA[How I Replaced Prerender.io with My Own Serverless Renderer on AWS — For $0/Month
]]></title><description><![CDATA[The Problem That Started It All
A few months ago I published a post about using Prerender.io with Angular (https://www.internetkatta.com/how-i-fixed-seo-for-our-angular-spa-using-aws-amplify-prerender]]></description><link>https://www.internetkatta.com/how-i-replaced-prerenderio-with-own-serverless-renderer-on-aws</link><guid isPermaLink="true">https://www.internetkatta.com/how-i-replaced-prerenderio-with-own-serverless-renderer-on-aws</guid><category><![CDATA[serverless]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Amplify]]></category><category><![CDATA[Angular]]></category><category><![CDATA[SEO]]></category><category><![CDATA[AWS Community Builder]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Fri, 27 Feb 2026 05:49:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/5a97ab2a0430f87244d3d7ba/802aefb5-02b8-4981-9aaf-1addfef5d5ff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>The Problem That Started It All</strong></h2>
<p>A few months ago I published a post about using <a href="http://Prerender.io">Prerender.io</a> with Angular (<a href="https://www.internetkatta.com/how-i-fixed-seo-for-our-angular-spa-using-aws-amplify-prerenderio"><strong>https://www.internetkatta.com/how-i-fixed-seo-for-our-angular-spa-using-aws-amplify-prerenderio</strong></a>). The approach worked, but when I checked my bill I was paying ₹5,000/month (~$49) to <a href="http://Prerender.io">Prerender.io</a> for essentially zero usage.</p>
<p>My app is an Angular SPA hosted on AWS Amplify. Angular renders everything client-side using JavaScript. Social bots like WhatsApp, LinkedIn, Googlebot, and Telegram don't execute JavaScript. They crawl your URL, get a blank HTML shell, and your link preview shows nothing. No title. No image. No description.</p>
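<p>To make this concrete, here's a tiny sketch of the check a social bot effectively performs when building a preview card. The HTML strings are hypothetical stand-ins, not my real markup:</p>
<pre><code class="language-javascript">'use strict';

// Hypothetical example: the raw shell a non-JS bot downloads from an SPA,
// versus a prerendered response that carries real meta tags.
const spaShell = '&lt;html&gt;&lt;head&gt;&lt;title&gt;My App&lt;/title&gt;&lt;/head&gt;' +
    '&lt;body&gt;&lt;app-root&gt;&lt;/app-root&gt;&lt;/body&gt;&lt;/html&gt;';

const prerendered = '&lt;html&gt;&lt;head&gt;&lt;title&gt;My Article | My App&lt;/title&gt;' +
    '&lt;meta property="og:title" content="My Article"&gt;' +
    '&lt;meta property="og:image" content="https://myapp.com/cover.png"&gt;' +
    '&lt;/head&gt;&lt;body&gt;rendered article text&lt;/body&gt;&lt;/html&gt;';

// A preview card needs og: tags in the HTML the bot actually receives.
function hasPreviewTags(html) {
    return html.includes('property="og:title"');
}

console.log(hasPreviewTags(spaShell));    // false: blank preview card
console.log(hasPreviewTags(prerendered)); // true: title + image card
</code></pre>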
<p>Prerender.io solves this by running a headless browser on their servers, rendering your page, and returning the fully-rendered HTML to bots. It works well. But at ₹5,000/month, I was paying for a service that was essentially idling — my platform was still early stage, getting very little traffic while I worked on getting traction.</p>
<p>That's ₹5,000/month for almost zero usage. No scaling. No pay-per-use. Just a flat fee.</p>
<p>I started asking: can I build this myself on AWS and pay only for what I actually use?</p>
<h2><strong>Understanding the Existing Setup</strong></h2>
<p>Before building anything, I needed to understand exactly what prerender.io was doing for me. The flow was: Lambda@Edge detected bots at the CloudFront layer and routed them to prerender.io as a custom origin, while regular users bypassed all of this entirely and hit Amplify directly. The key insight: <strong>prerender.io was just a CloudFront origin</strong>. The Lambda@Edge was doing the bot detection and routing; prerender.io itself was a black box sitting at the end of that route. If I could replace that black box with my own renderer, I wouldn't need to touch the bot detection logic at all.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5a97ab2a0430f87244d3d7ba/3bd81b32-d49d-42ca-89f8-70204d0819c0.png" alt="" style="display:block;margin:0 auto" />

<h2><strong>Designing the Replacement</strong></h2>
<p>So, the requirements were clear:</p>
<ul>
<li><p>Serverless — pay only when a bot actually hits a page</p>
</li>
<li><p>No fixed monthly cost</p>
</li>
<li><p>Same output as prerender.io — fully rendered HTML</p>
</li>
<li><p>No changes to the Angular app</p>
</li>
<li><p>Minimal changes to the bot detection logic in Lambda@Edge</p>
</li>
</ul>
<p>The core idea was to replace the origin at the end of that route: instead of the Lambda@Edge origin-request pointing at <a href="http://service.prerender.io">service.prerender.io</a>, it would point at my own renderer built on <strong>Puppeteer</strong>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/5a97ab2a0430f87244d3d7ba/058ca4d7-14f0-48ec-bbd3-e3bc3b86db0a.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Why Puppeteer +</strong> <code>networkidle0</code><strong>?</strong></h3>
<p>Prerender.io works without any changes to the Angular app. It uses a headless Chrome browser that waits until the page has no network activity for 500ms (<code>networkidle0</code>). This gives Angular enough time to finish fetching data and rendering the DOM. The same approach works in our own Lambda — no Angular code changes needed.</p>
<h3><strong>Why S3 for Caching?</strong></h3>
<p>A rendered page doesn't change every second. An article published today will have the same meta tags tomorrow. Caching the rendered HTML in S3 means:</p>
<ul>
<li><p>First bot request for a URL: Puppeteer renders it (5–10 seconds, acceptable for bots)</p>
</li>
<li><p>Every subsequent bot request: S3 returns it in ~300ms</p>
</li>
<li><p>Cache TTL: 24 hours (configurable)</p>
</li>
</ul>
<h2><strong>This is what the architecture looks like.</strong></h2>
<img src="https://cdn.hashnode.com/uploads/covers/5a97ab2a0430f87244d3d7ba/afba30a4-42b9-49d5-9fff-bb8fdac0833c.png" alt="" style="display:block;margin:0 auto" />

<h2><strong>The Code</strong></h2>
<h3><strong>Lambda@Edge —</strong> <code>socialbots</code> <strong>function</strong></h3>
<p>This runs at CloudFront edge. The key change from the original prerender.io version is the last 5 lines of the origin-request block. Bot detection logic is untouched.</p>
<pre><code class="language-javascript">'use strict';

const INTERNAL_TOKEN = 'your-secret-token'; // same value as renderer Lambda INTERNAL_TOKEN env var

exports.handler = (event, context, callback) =&gt; {
    const request = event.Records[0].cf.request;

    if (request.headers['x-prerender-token'] &amp;&amp; request.headers['x-prerender-host']) {
        // ── ORIGIN-REQUEST: bot detected in viewer-request, now route to renderer ──

        if (request.headers['x-query-string']) {
            request.querystring = request.headers['x-query-string'][0].value;
        }

        // CRITICAL: When Lambda@Edge changes request.origin, CloudFront does NOT
        // automatically update the Host header. API Gateway rejects requests where
        // Host doesn't match its configured custom domain → ForbiddenException.
        // Must set Host explicitly before setting request.origin.
        request.headers['host'] = [{ key: 'Host', value: 'precache.myapp.com' }];

        request.origin = {
            custom: {
                domainName: 'precache.myapp.com', // ← was: service.prerender.io
                port: 443,
                protocol: 'https',
                readTimeout: 30,                           // ← was: 20 (Puppeteer needs up to 25s)
                keepaliveTimeout: 5,
                customHeaders: {
                    'x-prerender-token': [{                // auth token sent to renderer
                        key: 'X-Prerender-Token',
                        value: INTERNAL_TOKEN
                    }]
                },
                sslProtocols: ['TLSv1.2'],                 // ← was: TLSv1, TLSv1.1 (deprecated)
                path: ''                                   // ← was: '/https%3A%2F%2F' + host
            }
        };

    } else {
        // ── VIEWER-REQUEST: detect bots, set headers ── (completely unchanged)
        const headers = request.headers;
        const user_agent = headers['user-agent'];
        const host = headers['host'];

        if (user_agent &amp;&amp; host) {
            var prerender = /googlebot|adsbot\-google|Feedfetcher\-Google|bingbot|yandex|baiduspider|Facebot|facebookexternalhit|twitterbot|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|redditbot|applebot|whatsapp|flipboard|tumblr|bitlybot|skypeuripreview|nuzzel|discordbot|google page speed|qwantify|pinterestbot|bitrix link preview|xing\-contenttabreceiver|chrome\-lighthouse|telegrambot|Perplexity|OAI-SearchBot|ChatGPT|GPTBot|ClaudeBot|Amazonbot|integration-test/i.test(user_agent[0].value);

            prerender = prerender || /_escaped_fragment_/.test(request.querystring);
            prerender = prerender &amp;&amp; !/\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff|svg|eot)$/i.test(request.uri);

            if (prerender) {
                console.log('Bot detected:', user_agent[0].value);
                headers['x-prerender-token'] = [{ key: 'X-Prerender-Token', value: INTERNAL_TOKEN }];
                headers['x-prerender-host'] = [{ key: 'X-Prerender-Host', value: host[0].value }];
                headers['x-prerender-cachebuster'] = [{ key: 'X-Prerender-Cachebuster', value: Date.now().toString() }];
                headers['x-query-string'] = [{ key: 'X-Query-String', value: request.querystring }];
            }
        }
    }

    callback(null, request);
};
</code></pre>
<blockquote>
<p><strong>Deploy note</strong>: Lambda@Edge must be in <code>us-east-1</code>. After publishing a new version, update both the viewer-request and origin-request ARNs in your CloudFront behaviour to point to the new version number (e.g. <code>:12</code> → <code>:13</code>). CloudFront takes 5–10 minutes to propagate.</p>
</blockquote>
<h3><strong>Renderer Lambda —</strong> <code>index.js</code></h3>
<p>This runs in <code>ap-south-1</code> (I chose Mumbai because my app runs in that region) as a container image. It receives bot requests from API Gateway, checks the S3 cache, and renders with Puppeteer if needed.</p>
<pre><code class="language-javascript">'use strict';

const chromium = require('@sparticuz/chromium');
const puppeteer = require('puppeteer-core');
const { S3Client, GetObjectCommand, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});

const BUCKET         = process.env.CACHE_BUCKET;
const CACHE_TTL_MS   = parseInt(process.env.CACHE_TTL_HOURS || '24') * 3600 * 1000;
const INTERNAL_TOKEN = process.env.INTERNAL_TOKEN;
const SITE_URL       = process.env.SITE_URL || 'https://myapp.com';

// Reuse browser across warm Lambda invocations — saves 3-5s Chromium startup time
let browser = null;

async function getBrowser() {
    if (browser &amp;&amp; browser.connected) return browser;
    browser = await puppeteer.launch({
        args: chromium.args,
        defaultViewport: { width: 1280, height: 800 },
        executablePath: await chromium.executablePath(),
        headless: true,
    });
    return browser;
}

function pathToS3Key(urlPath) {
    const clean = urlPath.replace(/^\/+|\/+$/g, '') || 'index';
    return `cache/${clean}.html`;
    // Examples:
    //   /article/my-slug  →  cache/article/my-slug.html
    //   /                 →  cache/index.html
}

exports.handler = async (event) =&gt; {
    const headers = event.headers || {};

    // Reject requests without the internal token
    // (prevents anyone who discovers the URL from triggering renders)
    if (INTERNAL_TOKEN) {
        const token = headers['x-prerender-token'] || headers['x-internal-token'];
        if (token !== INTERNAL_TOKEN) {
            console.log('Rejected: wrong or missing token');
            return { statusCode: 403, body: 'Forbidden' };
        }
    }

    const urlPath   = event.rawPath || '/';
    const host      = headers['x-prerender-host'] || new URL(SITE_URL).hostname;
    const targetUrl = `https://${host}${urlPath}`;
    const s3Key     = pathToS3Key(urlPath);

    // ── 1. Check S3 cache ─────────────────────────────────────────────────────
    try {
        const cached     = await s3.send(new GetObjectCommand({ Bucket: BUCKET, Key: s3Key }));
        const renderedAt = parseInt(cached.Metadata?.['rendered-at'] || '0');

        if ((Date.now() - renderedAt) &lt; CACHE_TTL_MS) {
            const html = await cached.Body.transformToString('utf-8');
            console.log(`CACHE HIT [${urlPath}]`);
            return {
                statusCode: 200,
                headers: {
                    'content-type': 'text/html; charset=utf-8',
                    'x-prerender-cache': 'HIT',
                },
                body: html,
            };
        }
        console.log(`CACHE STALE [${urlPath}] — re-rendering`);
    } catch (err) {
        if (err.name !== 'NoSuchKey') console.error('S3 read error:', err.message);
        console.log(`CACHE MISS [${urlPath}]`);
    }

    // ── 2. Render with Puppeteer ──────────────────────────────────────────────
    console.log(`Rendering: ${targetUrl}`);

    let html;
    try {
        const b    = await getBrowser();
        const page = await b.newPage();

        // Block images, fonts, media — bots only need HTML + meta tags.
        // Blocking these cuts render time by 30-60%.
        await page.setRequestInterception(true);
        page.on('request', req =&gt;
            ['image', 'font', 'media'].includes(req.resourceType())
                ? req.abort()
                : req.continue()
        );

        // networkidle0: wait until no network activity for 500ms.
        // This is how prerender.io works — Angular finishes data fetching and rendering.
        await page.goto(targetUrl, { waitUntil: 'networkidle0', timeout: 25000 });

        html = await page.content();
        await page.close();

    } catch (err) {
        console.error(`Render failed [${targetUrl}]:`, err.message);
        if (browser) {
            try { await browser.close(); } catch (_) {}
            browser = null; // force fresh browser on next invocation
        }
        return { statusCode: 500, body: 'Render error' };
    }

    // ── 3. Store in S3 cache ──────────────────────────────────────────────────
    try {
        await s3.send(new PutObjectCommand({
            Bucket: BUCKET,
            Key: s3Key,
            Body: html,
            ContentType: 'text/html; charset=utf-8',
            Metadata: {
                'rendered-at': Date.now().toString(),
                'source-url': targetUrl,
            },
        }));
        console.log(`Cached → ${s3Key}`);
    } catch (err) {
        console.error('S3 write error (non-fatal):', err.message);
    }

    return {
        statusCode: 200,
        headers: {
            'content-type': 'text/html; charset=utf-8',
            'x-prerender-cache': 'MISS',
        },
        body: html,
    };
};
</code></pre>
<h3><strong>Dockerfile</strong></h3>
<pre><code class="language-dockerfile"># AWS Lambda Node.js 20 base image (Amazon Linux 2023)
# @sparticuz/chromium v133+ is compatible with AL2023
FROM public.ecr.aws/lambda/nodejs:20

COPY package.json ./
RUN npm install --omit=dev

COPY index.js ./

CMD ["index.handler"]
</code></pre>
<h3><code>package.json</code></h3>
<pre><code class="language-json">{
  "name": "prerender-renderer",
  "version": "1.0.0",
  "dependencies": {
    "@aws-sdk/client-s3": "^3.741.0",
    "@sparticuz/chromium": "^133.0.0",
    "puppeteer-core": "^24.0.0"
  }
}
</code></pre>
<blockquote>
<p><strong>Build note</strong>: Always build with <code>--platform linux/amd64 --provenance=false</code> on Mac. The <code>--provenance=false</code> flag prevents Docker Desktop from creating an OCI manifest list, which Lambda doesn't support.</p>
<pre><code class="language-shell">docker build --platform linux/amd64 --provenance=false -t renderer .
</code></pre>
</blockquote>
<h2><strong>Problems We Hit Along the Way</strong></h2>
<h3><strong>Problem 1: Lambda Block Public Access (Account-Level)</strong></h3>
<p>The renderer Lambda needed an HTTP endpoint CloudFront could call as a custom origin. The natural choice was <strong>Lambda Function URL</strong> — no extra services, free, simple.</p>
<p>It returned 403 immediately.</p>
<p>AWS silently enabled <strong>Lambda Block Public Access</strong> at the account level in late 2024 (similar to S3's public access block). This blocks all Lambda Function URLs from public internet access, even with <code>AuthType=NONE</code>. The feature exists for good reasons but wasn't clearly communicated as a default.</p>
<p><strong>Fix</strong>: Use <strong>API Gateway HTTP API</strong> instead. Same effective cost (&lt; $1/month at any realistic scale for this use case), no public access restrictions.</p>
<h3><strong>Problem 2: The Host Header</strong></h3>
<p>This was the hardest bug to diagnose. Symptoms:</p>
<ul>
<li><p>Direct request to <code>precache.myapp.com</code> with token → 200 ✓</p>
</li>
<li><p>Bot request through CloudFront → 403 from API Gateway</p>
</li>
<li><p>Response had <code>x-amzn-errortype: ForbiddenException</code> and <code>content-length: 0</code></p>
</li>
</ul>
<p>The <code>content-length: 0</code> was the clue. Our Lambda's 403 returns body <code>"Forbidden"</code> (8 bytes). Zero content means the request <strong>never reached our Lambda</strong> — API Gateway itself was rejecting it.</p>
<p>Root cause: when Lambda@Edge dynamically changes <code>request.origin</code>, <strong>CloudFront does not update the</strong> <code>Host</code> <strong>header</strong> to match the new origin domain. The request arrives at API Gateway with <code>Host: myapp.com</code> instead of <code>Host: precache.myapp.com</code>. API Gateway rejects it because that host isn't mapped to any API.</p>
<p>Confirmed with:</p>
<pre><code class="language-bash"># Simulates what CloudFront sends without the fix
curl -H "Host: myapp.com" https://YOUR_API_ID.execute-api.ap-south-1.amazonaws.com/
# → 403 ForbiddenException

# Correct Host
curl -H "Host: precache.myapp.com" https://YOUR_API_ID.execute-api.ap-south-1.amazonaws.com/
# → 200 OK
</code></pre>
<p><strong>Fix</strong>: One line in Lambda@Edge, before setting <code>request.origin</code>:</p>
<pre><code class="language-javascript">request.headers['host'] = [{ key: 'Host', value: 'precache.myapp.com' }];
</code></pre>
<p>This is not documented prominently in AWS guides but is a known gotcha with Lambda@Edge + API Gateway custom domains.</p>
<h2><strong>Scale Analysis and Cost Comparison</strong></h2>
<h3><strong>Renders vs Requests — The Critical Distinction</strong></h3>
<p><strong>prerender.io</strong> charges per <strong>render</strong> — a render happens only when their headless Chrome actually runs (cache miss on their end). Repeated requests for the same URL within the cache window don't cost extra renders.</p>
<p><strong>Our system</strong> works the same way:</p>
<ul>
<li><p><strong>Cache miss</strong> = Puppeteer runs → slow (~8s), costs compute</p>
</li>
<li><p><strong>Cache hit</strong> = S3 returns cached HTML → fast (~300ms), costs almost nothing</p>
</li>
</ul>
<h3><strong>Our cache is bounded by content, not traffic</strong></h3>
<p>With 241 content pages and a 24-hour TTL:</p>
<pre><code class="language-plaintext">Maximum renders per month = 241 pages × 30 days = 7,230

No matter how many millions of requests arrive,
Puppeteer runs at most 7,230 times per month.
</code></pre>
<p>At 1,000,000 bot requests per month with a 99.3% cache hit rate, we still only render 7,230 times. This is the structural advantage of URL-level S3 caching.</p>
<h3><strong>AWS Pricing Used (ap-south-1)</strong></h3>
<table>
<thead>
<tr>
<th><strong>Service</strong></th>
<th><strong>Pricing</strong></th>
</tr>
</thead>
<tbody><tr>
<td>Lambda invocations</td>
<td>First 1M/month free, then $0.20/1M</td>
</tr>
<tr>
<td>Lambda compute</td>
<td>First 400,000 GB-s/month free, then $0.0000167/GB-s</td>
</tr>
<tr>
<td>Lambda memory</td>
<td>2048MB = 2GB</td>
</tr>
<tr>
<td>Cache miss compute</td>
<td>2GB × 8s = 16 GB-s per render</td>
</tr>
<tr>
<td>Cache hit compute</td>
<td>2GB × 0.5s = 1 GB-s per request</td>
</tr>
<tr>
<td>API Gateway HTTP API</td>
<td>$1.00 per million requests</td>
</tr>
<tr>
<td>S3 GET</td>
<td>$0.00043 per 1,000 requests</td>
</tr>
<tr>
<td>S3 PUT</td>
<td>$0.0054 per 1,000 requests</td>
</tr>
<tr>
<td>Data transfer out</td>
<td>First 100GB/month free</td>
</tr>
</tbody></table>
<h3><strong>Cost Comparison at Scale</strong></h3>
<table>
<thead>
<tr>
<th><strong>Bot Requests/mo</strong></th>
<th><strong>Renders (cache miss)</strong></th>
<th><strong>Cache Hits</strong></th>
<th><strong>Lambda Compute</strong></th>
<th><strong>API Gateway</strong></th>
<th><strong>Our Cost</strong></th>
<th><strong>prerender.io $49</strong></th>
</tr>
</thead>
<tbody><tr>
<td>~100 (today)</td>
<td>~20</td>
<td>~80</td>
<td>400 GB-s ✓ free</td>
<td>$0.00</td>
<td><strong>$0</strong></td>
<td>$49</td>
</tr>
<tr>
<td>1,000</td>
<td>~200</td>
<td>~800</td>
<td>4,000 GB-s ✓ free</td>
<td>$0.001</td>
<td><strong>~$0</strong></td>
<td>$49</td>
</tr>
<tr>
<td>10,000</td>
<td>~1,000</td>
<td>~9,000</td>
<td>25,000 GB-s ✓ free</td>
<td>$0.01</td>
<td><strong>~$0.01</strong></td>
<td>$49</td>
</tr>
<tr>
<td>100,000</td>
<td>~3,000</td>
<td>~97,000</td>
<td>145,000 GB-s ✓ free</td>
<td>$0.10</td>
<td><strong>~$0.10</strong></td>
<td>$49</td>
</tr>
<tr>
<td>1,000,000</td>
<td>~7,230</td>
<td>~992,770</td>
<td>1.1M GB-s → $11.81</td>
<td>$1.00</td>
<td><strong>~$13</strong></td>
<td>$199+</td>
</tr>
<tr>
<td>5,000,000</td>
<td>~7,230</td>
<td>~4,992,770</td>
<td>5.1M GB-s → $78</td>
<td>$5.00</td>
<td><strong>~$99</strong></td>
<td>Enterprise</td>
</tr>
</tbody></table>
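<p>As a sanity check on the 1,000,000-requests row, here's a back-of-the-envelope sketch using the ap-south-1 pricing table above. It ignores S3 request costs (under a dollar at this volume), so treat the output as approximate rather than a billing tool:</p>
<pre><code class="language-javascript">'use strict';

// Cost model for 1M bot requests/month with a 24h TTL over 241 pages.
const renders = 7230;                 // cache misses: 241 pages × 30 days
const hits    = 1000000 - renders;    // everything else is served from S3

const missGbs  = renders * 16;        // 2 GB × 8 s per Puppeteer render
const hitGbs   = hits * 1;            // 2 GB × 0.5 s per cache hit
const totalGbs = missGbs + hitGbs;

const billableGbs = Math.max(0, totalGbs - 400000);    // Lambda free tier
const computeUsd  = billableGbs * 0.0000167;
const apiGwUsd    = (renders + hits) / 1000000 * 1.00; // $1 per million
const totalUsd    = computeUsd + apiGwUsd;

console.log(totalGbs);               // 1108450 GB-s
console.log(computeUsd.toFixed(2));  // "11.83"
console.log(totalUsd.toFixed(2));    // "12.83" — the ~$13 row above
</code></pre>
<p>The compute figure lands within a couple of cents of the table's $11.81 (rounding aside). The shape of the result is the point: compute is dominated by cheap cache hits, while renders stay capped by content count.</p>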
<blockquote>
<p>prerender.io's $49 plan includes 25,000 renders/month. Extra renders cost $2 per 1,000. Our system never exceeds 7,230 renders/month (bounded by content count), so we'd never hit their overage pricing either.</p>
</blockquote>
<h3><strong>In INR (₹83 = $1 approx)</strong></h3>
<pre><code class="language-plaintext">Today        → ₹0       vs ₹5,000/month   → saves ₹5,000/month
1K req/mo    → ₹0       vs ₹5,000/month   → saves ₹5,000/month
10K req/mo   → ₹1       vs ₹5,000/month   → saves ₹4,999/month
100K req/mo  → ₹10      vs ₹5,000/month   → saves ₹4,990/month
1M req/mo    → ₹1,100   vs ₹16,000+/month → saves ₹14,900+/month
5M req/mo    → ₹8,300   vs Enterprise      → saves significantly
</code></pre>
<h3><strong>When Does prerender.io Win?</strong></h3>
<p>At extreme scale (10M+ requests/month) and where <strong>geographic rendering</strong> matters — prerender.io has global PoPs, so renders happen near the requesting bot. Our renderer is in <code>ap-south-1</code>. For an Indian platform with Indian bots, this is fine. For a global platform, you'd want renderers in multiple regions.</p>
<h3><strong>Future Optimization: CloudFront-Level Caching</strong></h3>
<p>Currently every bot request invokes our Lambda (even cache hits, just for 0.5s). At 1M+ requests/month this adds up. The fix: enable <strong>CloudFront caching</strong> on the <code>/prerender/*</code> behavior with a 24-hour TTL.</p>
<pre><code class="language-plaintext">First bot request for /article/xyz in 24h
    → Lambda invoked → Puppeteer renders → CloudFront caches response

Next 999 bot requests for same URL in same 24h window
    → CloudFront edge serves directly → Lambda never invoked
    → Cost: $0
</code></pre>
<p>This collapses 1M Lambda invocations to ~7,230 per month. At that point the 1M/month scenario costs under $1 instead of $13. Worth implementing when you approach that scale.</p>
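<p>Under the same pricing assumptions, the collapsed scenario pencils out like this (a sketch; CloudFront's own request and transfer charges, a few cents at this volume, are omitted):</p>
<pre><code class="language-javascript">'use strict';

// With CloudFront caching /prerender/* responses for 24h,
// only cache misses ever reach API Gateway and Lambda.
const renders    = 7230;             // still bounded by content count
const computeGbs = renders * 16;     // 115,680 GB-s

const computeUsd = Math.max(0, computeGbs - 400000) * 0.0000167;
const apiGwUsd   = renders / 1000000 * 1.00;

console.log(computeUsd);             // 0 — fully inside the 400K GB-s free tier
console.log(apiGwUsd.toFixed(3));    // "0.007"
</code></pre>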
<h2><strong>Trade-offs and Honest Assessment</strong></h2>
<h3><strong>Advantages</strong></h3>
<p><strong>Pay-per-use with a hard ceiling</strong>: Renders are bounded by content count. 241 pages × 30 days = 7,230 renders max regardless of traffic. Costs can't spiral.</p>
<p><strong>Full control</strong>: Cache TTL, bot detection rules, Puppeteer behaviour — all tunable. With prerender.io, you accept their defaults.</p>
<p><strong>No vendor lock-in</strong>: One day prerender.io could shut down, change pricing, or have an outage. This infrastructure is yours and runs indefinitely.</p>
<p><strong>Transparency</strong>: CloudWatch logs show exactly which bots crawl which pages, render durations, cache hit ratios.</p>
<p><strong>Warm cache hits are fast</strong>: ~300ms, comparable to prerender.io's cached responses.</p>
<h3><strong>Honest Limitations</strong></h3>
<p><strong>Cold start on cache miss</strong>: First bot request for a new URL takes 5–10 seconds. Lambda cold start + Chromium launch + Angular data fetching. Bots are patient, but it's not instant.</p>
<p><strong>You own the Chromium version</strong>: If <code>@sparticuz/chromium</code> has a bug or Chrome updates break something, it's your problem. prerender.io handles this silently. Plan to update the package ~quarterly.</p>
<p><strong>Lambda@Edge timeout risk</strong>: Lambda@Edge has a hard 30-second origin timeout. Complex pages that take longer than ~25 seconds to render will return a 504. Hasn't happened in practice, but it's a ceiling.</p>
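<p>One defensive pattern (a sketch, not our production code) is to race the render against a budget a few seconds under that ceiling, so the function can return a controlled error page instead of letting CloudFront cut the connection with a 504:</p>
<pre><code class="language-javascript">// Sketch: fail fast before the Lambda@Edge 30s origin ceiling.
// `render` is any promise-returning function (e.g. a Puppeteer page
// render) — hypothetical here.
function withRenderBudget(render, budgetMs = 25000) {
  let timer;
  const timeout = new Promise((unused, reject) => {
    timer = setTimeout(
      () => reject(new Error(`render exceeded ${budgetMs}ms budget`)),
      budgetMs
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([render(), timeout]).finally(() => clearTimeout(timer));
}
</code></pre>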
<p><strong>No geographic rendering</strong>: Our renderer Lambda is in <code>ap-south-1</code>. For a global platform, bots crawling from the US or Europe add ~150–200ms latency to the render. For an Indian platform with Indian bots, this doesn't matter.</p>
<p><strong>One-time engineering cost</strong>: Setting up this system took a day of work and debugging. prerender.io takes 30 minutes. Factor this in if your time is expensive.</p>
<h2><strong>Final Results</strong></h2>
<pre><code class="language-bash"># WhatsApp bot
HTTP 200 — 2.09s  (cache miss on first request — Puppeteer rendered)

# Googlebot (second request, cache hit)
HTTP 200 — 0.30s

# LinkedIn
HTTP 200 — 0.37s

# Regular user → Amplify, not renderer — no change
HTTP 200 — 0.18s
</code></pre>
<p>Rendered HTML for the article includes the correct title and meta tags:</p>
<pre><code class="language-html">&lt;title&gt;My Article Title | My App&lt;/title&gt;
&lt;meta property="og:title" content="..." /&gt;
&lt;meta property="og:description" content="..." /&gt;
&lt;meta property="og:image" content="..." /&gt;
</code></pre>
<p>Link previews on WhatsApp, LinkedIn, and Twitter work correctly. Googlebot indexes full content. The ₹5,000/month prerender.io subscription is cancelled.</p>
<p><strong>Cost: ₹0/month today. ₹1,100/month at 1 million bot requests. ₹8,300/month at 5 million.</strong></p>
]]></content:encoded></item><item><title><![CDATA[My First AWS re:Invent Experience]]></title><description><![CDATA[Ten years.
That's how long I'd been waiting to attend re:Invent. Ten years of watching from afar, reading live tweets, consuming session recordings days later, imagining what it would feel like to be there in person.
This year, AWS launched a grant p...]]></description><link>https://www.internetkatta.com/my-first-aws-reinvent-experience</link><guid isPermaLink="true">https://www.internetkatta.com/my-first-aws-reinvent-experience</guid><category><![CDATA[#re:invent2025]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community Builder]]></category><category><![CDATA[AWS Community]]></category><category><![CDATA[Traveling]]></category><category><![CDATA[Experience ]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Wed, 24 Dec 2025 09:47:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766569352144/913ca0b3-78dc-47c1-afc0-62b95e131c3e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ten years.</p>
<p>That's how long I'd been waiting to attend re:Invent. Ten years of watching from afar, reading live tweets, consuming session recordings days later, imagining what it would feel like to be there in person.</p>
<p>This year, AWS launched a grant program for User Group Leaders. After a decade of being part of the AWS community, of attending, volunteering, contributing, organising meet-ups, answering questions at midnight, and showing up week after week, I finally got the grant.</p>
<p>I was on cloud nine. The long-awaited dream was finally happening.</p>
<p>But here's what they don't tell you about dreams coming true: they rarely arrive smoothly. There are always ups and downs, tests you didn't prepare for, moments that ask you to prove how badly you really want it.</p>
<p>Mine came twenty-four hours before takeoff.</p>
<p>Most re:Invent stories start with excitement—the kind you share in Slack channels and LinkedIn posts. Mine started with a phone screen lighting up in a London airport terminal.</p>
<h2 id="heading-the-call-that-changed-everything">The Call That Changed Everything</h2>
<p>Twenty-four hours earlier, my wife wasn't feeling well. Still, she looked at me with that determined expression I've come to recognise over the years and said, "You finally got this chance. Go. Don't miss it. I'll handle everything here."</p>
<p>The weight of that sentence was the trust, the sacrifice, the quiet strength and it doesn't leave you. It becomes part of the journey itself.</p>
<p>I boarded my first flight trying to convince myself everything would be fine. The nervous energy of a first-time re:Invent attendee mixed with the worry of leaving home when things weren't perfect. But we'd made the decision. I was going.</p>
<p>Then came the message.</p>
<p>Waiting for my connection at Heathrow, surrounded by the usual airport chaos, my phone buzzed. The hospital. My son had fallen. Two fractures. One dislocation. His arm.</p>
<p>I called home immediately. My wife picked up, her voice steady as she walked me through what happened, what the doctors said, what came next. In the background, I could hear the sounds of the emergency room. And as we spoke, trying to process it all, I saw the airline staff closing the aircraft door.</p>
<p>Ten hours in the air with no network. I was waiting for an update but couldn't do anything.<br />Just the hum of engines and one thought playing on repeat: <em>How is she managing all this alone, when she herself isn't well?</em></p>
<p>That flight became a masterclass in helplessness. In recognising that behind every conference badge, every community contribution, every public achievement, there are people at home carrying half your world, sometimes more and so you can chase the other half.</p>
<h2 id="heading-when-your-mind-finally-lands">When Your Mind Finally Lands</h2>
<p>The moment my plane touched down in Las Vegas, I didn't care about the Strip or the spectacle. I needed that first call home to work. It did. Things were stable. My son was being treated. My wife was managing with the kind of strength that makes you realise you married someone far braver than yourself.</p>
<p>That's when I finally arrived. My body had been in Vegas for hours, but my mind, my presence, my ability to actually <em>be</em> at re:Invent, arrived only then.</p>
<p>And from that point forward, everything shifted.</p>
<h2 id="heading-meeting-the-people-who-build-the-things-i-build-on">Meeting the People Who Build the Things I Build On</h2>
<p>I talk about ECS and Serverless constantly from a product builder's point of view. ECS is the service I return to, the one I recommend, the one I've built my mental models around.</p>
<p>This week, I got to meet the people who actually build that home.</p>
<p>I sat in Eric's session on ECS Managed Instances—the kind of talk where you're not just learning features, you're learning <em>intent</em>. Why this approach? What problem were they really solving? What trade-offs did they consider?</p>
<p>I heard the ECS Express Mode introduction straight from the product person and engineer who crafted it. Not through blog posts or documentation, but from the humans who debated, prototyped, and shipped it.</p>
<p>And here's what hit me: you can read docs. I've read AWS docs all year—they've been my reference point for everything. But talking to the people who think about these problems day and night? Who live inside the trade-offs and the edge cases? That changes <em>how</em> you understand a service, not just <em>what</em> you know about it.</p>
<p>We exchanged ideas. We nerded out about container orchestration. We talked about real problems and real solutions.</p>
<p>For me, that alone justified the entire trip.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766559608775/6bd66a3a-817f-41d8-92c3-ecc9894625ac.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-sessions-that-stayed-with-me">The Sessions That Stayed With Me</h2>
<p>If I'm being honest, I probably covered 20% of the expo area, and that's generous.</p>
<p>Instead, I planted myself in technical sessions. ECS. Fargate. How teams think about scaling at impossible sizes. Every session fed directly into my "AWS for Product Builders" mindset—the lens I use to evaluate whether something works for startups and growing companies, not just enterprises.</p>
<p>I'll share the specific technical learnings in another post. But the meta-learning is this:</p>
<p><strong>Hearing the intent behind the service is as valuable as learning the service itself.</strong></p>
<p>When you understand <em>why</em> a team made certain decisions, you make better decisions yourself.</p>
<h2 id="heading-from-slack-avatars-to-real-conversations">From Slack Avatars to Real Conversations</h2>
<p>The Community Hub became my anchor during the chaos.</p>
<p>This was my first re:Invent, and I walked in carrying a kind of shyness I don't usually admit to. The hesitation to start conversations with new people. The imposter syndrome that whispers <em>everyone here knows more than you</em>. The Hub was full of heroes. Community Builders whose blogs I'd read for years. Leaders whose work I admired from afar. People I'd wanted to meet but never had the chance.</p>
<p>And I froze.</p>
<p>I'd see them across the room and think, "I should go say hello." But my feet wouldn't move. One step—that's all it would take. But that one step felt impossible in those moments.</p>
<p>I missed talking to people I'd dreamed of meeting. I let opportunities slip by.</p>
<p>But I also pushed myself. Tiny steps. One introduction. One conversation. Then another. And something magical happened.</p>
<p>I met AWS User Group Leaders I'd known online for five years—people who felt like old friends even though we'd never shared the same room before.</p>
<p>I encountered new faces who somehow felt instantly familiar—the kind of connection that reminds you why community work matters in the first place.</p>
<p>One highlight was the User Group meeting Maria organized. UG leaders shared the real problems they faced, the ones that don't make it into polished LinkedIn posts. How they kept their communities engaged when attendance dropped. How they found speakers. How they dealt with burnout while trying to inspire others.</p>
<p>At the APJC Community Awards, I met a leader from the Philippines who completely shifted my perspective. For them, community isn't just networking or professional development—it's a lifeline. They shared how incredibly difficult it is to get things done there. The lack of resources. The infrastructure challenges. The uphill battle to create opportunities where few exist.</p>
<p>Yet they keep showing up. They keep building. They keep creating spaces where people can learn, connect, and grow—because for many in their community, these meetups represent access they simply wouldn't have otherwise.</p>
<p>Listening to their story made me realize how privileged my own challenges are. It reminded me that community work looks different across the world, and the impact it creates can be measured in opportunities that never would have existed.</p>
<p>But there was another moment—one I didn't expect to witness, and one I'll never forget.</p>
<p>Jeff Barr. Twenty years of unwavering commitment to the AWS community. Two decades of blog posts, of showing up, of giving back. The room gathered to honor this milestone, and what happened next was pure, unfiltered emotion.</p>
<p>His son, Stephen, stood up to share another side of Jeff—the father behind the community legend. Stories from childhood. How Jeff balanced being a dad with being the voice of AWS. The late nights writing. The early mornings answering questions. The way he somehow made space for both family and this massive community he'd built.</p>
<p>We all watched Jeff cry. Not the polished, composed tears you see at rehearsed events. Real tears. The kind that come when you realise the full weight of what you've built and who stood beside you while you built it.</p>
<p>The room was silent except for a few sniffles. Goosebumps. That rare moment when everyone present knows they're witnessing something genuine.</p>
<p>If someone asked me to name one moment from re:Invent that captured what community really means—the sacrifice, the longevity, the human cost, the profound impact—it would be this one.</p>
<p>Those stories stayed with me long after the session ended.</p>
<p>Community-building isn't just planning events and posting updates. It's resilience. It's creativity. It's learning to keep showing up even when you're tired, even when you wonder if it matters, even when the metrics don't move as fast as you'd like.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766568418381/1b3ed39e-3ef0-40a8-91b0-f9c5d42cb7e8.png" alt /></p>
<h2 id="heading-the-startup-conversation-i-needed">The Startup Conversation I Needed</h2>
<p>At the Startup Amped event, I found myself in the kind of conversations that don't happen at traditional networking sessions.</p>
<p>Founders talking about the messy parts. The pivots that felt like failures until they weren't. The risks that kept them up at night. The moment they landed their first customer. The second-guessing. The breakthroughs.</p>
<p>I shared what we're building at NuShift Connect and our mission to reshape health conversations, awareness, and community in India. How we're trying to fill gaps that the traditional healthcare system leaves open.</p>
<p>These weren't pitches.<br />They were "we've been there too" conversations.<br />They were "here's what I learned the hard way" exchanges.</p>
<p>And then the conversations went deeper.</p>
<p>People opened up about their health struggles. Family health crises that happened while they were trying to build their startups. The nights they sat in hospital waiting rooms while their pitch decks sat untouched on their laptops. The impossible choice between being present for a sick parent or showing up for an investor meeting.</p>
<p>When I shared what had happened with my son just hours before my flight, I saw heads nodding around the room. Not with pity—with recognition. These were founders who understood that life doesn't pause for your business plan. That sometimes your greatest test isn't in the market—it's in the hospital corridor.</p>
<p>That room felt less like a networking event and more like a circle of people who understand what it really costs to build something from nothing while life happens all around you.</p>
<h2 id="heading-the-hard-truth-about-reinvent-you-cant-be-everywhere">The Hard Truth About re:Invent: You Can't Be Everywhere</h2>
<p>Here's something nobody tells you before your first re:Invent: the event will force you to make impossible choices.</p>
<p>The Community Builder mixer was happening at the same time as the Startup Amped event. I chose Startup Amped. Which meant I missed connecting with fellow builders in a space specifically designed for us. Did I regret it? In the moment, yes. Absolutely.</p>
<p>But here's what I learned: re:Invent isn't about attending everything. It's not about having a perfect schedule or checking every box. It's about choosing what matters most to you right now, in this season of your journey, and showing up fully for that.</p>
<p>I missed events. I missed conversations. I missed people.</p>
<p>But what I didn't miss was being present for the choices I did make.</p>
<p>And sometimes, that's enough.</p>
<h2 id="heading-two-experiences-i-never-planned-for">Two Experiences I Never Planned For</h2>
<p><strong>The Pre-re:Invent Hike</strong></p>
<p>Before re:Invent officially began, there was the hike. Around 40–50 of us gathered, dividing into two teams: one for the medium route, one for the long and tough route. I chose medium. Seemed reasonable after jet lag and hours of travel. Turns out, "medium" was a generous label. The trail was challenging, longer and steeper than any of us expected. Our team actually reached the end later than the "tough route" group, which became a running joke for the rest of the day.</p>
<p>But here's what made it memorable: we didn't just hike. We stopped. We breathed. We talked. Someone pulled out food they'd brought from home, snacks from India, treats from different countries. We shared them on the trail like we'd known each other for years. We tried different paths to test which route worked better. A mental exercise wrapped in physical movement. Problem-solving while hiking. Very builder-like, when you think about it.</p>
<p>And the strangest part? After jet lag and a transatlantic flight, I didn't feel tired. The opposite, actually. The mountain air, the movement, the conversations: it all felt energising. Within minutes of starting, strangers opened up about their journeys. Career pivots. Burnout stories. The "I almost quit but..." moments that never make it to LinkedIn.</p>
<p>That hike wasn't about reaching the summit. It was about connection: genuine, unexpected, and rare. The kind you can only find when you remove the conference badge and just walk alongside other humans trying to figure things out.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766568592809/9f5f232f-c494-4128-bc94-f2e6d18ffeee.png" alt class="image--center mx-auto" /></p>
<p><strong>My First-Ever 5K Run</strong></p>
<p>I'd signed up for the 5K run weeks earlier. But somewhere in the chaos of re:Invent, I got confused about the day. Was it Thursday or Wednesday? Then a message popped up in our India community group. The run was happening. Right now.</p>
<p>My heart sank. I was going to miss it. Another opportunity slipping away. But something in me said: not this time. I rushed out, found the shuttle bus, my mind racing faster than I'd be running. When I finally reached the starting point, the run had already begun. People were already on the course, their figures disappearing into the early morning light, the biting cold slapping my face and ears.</p>
<p>I could have turned back. Found an excuse. Told myself I tried. Instead, I joined them mid-run. My first 5K run. Started late. Arrived breathless. Nothing spectacular about my time. But I showed up. Even when it would have been easier not to. A reminder that even in a heavy week, health doesn't wait for perfect timing, and neither should we.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766568649141/72f7770d-1b20-483c-b91a-a19044343d7c.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-a-nomination-that-meant-more-than-winning">A Nomination That Meant More Than Winning</h2>
<p>Somewhere in the middle of all this, I found out I'd been nominated for the second year in a row for the AWS Community Builder of the Year award.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766568531674/553d65ed-8624-4046-aa16-854dc1388ace.png" alt class="image--center mx-auto" /></p>
<p>I didn't win.</p>
<p>And honestly? That didn't matter.</p>
<p>Seeing my name there again was enough. Because I know what it took to reach this point. The late nights answering questions in forums. The blog posts written when I was exhausted. The community events organised on weekends.</p>
<p>More importantly, I know <em>who</em> stood behind me so I could do any of it.</p>
<p>The nomination wasn't just about me. It was about everyone who made space for me to contribute.</p>
<h2 id="heading-what-this-trip-really-taught-me">What This Trip Really Taught Me</h2>
<p>This wasn't a smooth trip.<br />It wasn't a relaxed conference experience.<br />It wasn't the postcard version of re:Invent you see in highlight reels.</p>
<p>It was real.<br />It was emotional.<br />It stretched me in ways I'm still processing.</p>
<p>And above everything, it crystallised three truths I already knew but needed to feel again:</p>
<p><strong>Family makes the journey possible.</strong><br />Without my wife's and son's strength, I wouldn't have made it past the first airport. Every community contribution I make is built on their foundation of support.</p>
<p><strong>Community makes the journey meaningful.</strong><br />The technical knowledge matters. But the connections, the shared struggles, the moment you realise someone else has fought the same battles—that's what transforms information into wisdom.</p>
<p><strong>Curiosity makes the journey worth continuing.</strong><br />Even exhausted, even worried, even uncertain—asking questions, seeking understanding, wanting to know <em>why</em> and <em>how</em>—that's what keeps us moving forward.</p>
<h2 id="heading-what-comes-next">What Comes Next</h2>
<p>I'll be sharing more detailed technical learnings soon. The ECS insights. The Fargate patterns. The "AWS for Product Builders" framework I'm developing. The kind of content I'm excited to give back to the community that's given me so much.</p>
<p>But for now, I'm sitting with gratitude.</p>
<p>For my wife, who made an impossible choice to let me go.<br />For my son, who's recovering with the resilience only kids seem to have.<br />For the people I met in Vegas who reminded me why this work matters.<br />For the moments that tested me and, in testing me, changed me.</p>
<p>My first re:Invent wasn't perfect.</p>
<p>But it was mine.</p>
<p>And sometimes, that's exactly what you need.</p>
]]></content:encoded></item><item><title><![CDATA[How I Fixed SEO for Our Angular SPA Using AWS Amplify + Prerender.io]]></title><description><![CDATA[I still remember the excitement of October 22nd, 2025. After months of development and anticipation, Nushift Connect was finally going live. Built with Angular and hosted on AWS Amplify, everything we'd worked so hard on was about to be in the hands ...]]></description><link>https://www.internetkatta.com/how-i-fixed-seo-for-our-angular-spa-using-aws-amplify-prerenderio</link><guid isPermaLink="true">https://www.internetkatta.com/how-i-fixed-seo-for-our-angular-spa-using-aws-amplify-prerenderio</guid><category><![CDATA[Prerender]]></category><category><![CDATA[Angular]]></category><category><![CDATA[AWS]]></category><category><![CDATA[amplify]]></category><category><![CDATA[caching]]></category><category><![CDATA[social media]]></category><category><![CDATA[Angular]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Fri, 28 Nov 2025 22:06:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763752082311/21ca0af3-ab91-4945-b491-083437502b96.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I still remember the excitement of October 22nd, 2025. After months of development and anticipation, <a target="_blank" href="https://nushiftconnect.com/">Nushift Connect</a> was finally going live. Built with Angular and hosted on AWS Amplify, everything we'd worked so hard on was about to be in the hands of real users. The deployment was smooth. The app was working beautifully. Then I decided to share one of our articles on LinkedIn to celebrate the launch.</p>
<p>Instead of our beautiful featured image and carefully crafted description, LinkedIn showed... nothing. Just a bland URL. No image. No description. Generic metadata.</p>
<p>"Did we forget to add the meta tags?"</p>
<p>We hadn't. They were there—dynamically generated by Angular. The problem? Social media bots don't execute JavaScript.</p>
<h2 id="heading-understanding-the-problem">Understanding the Problem</h2>
<p>Here's what was happening:</p>
<p><strong>Regular Users:</strong> Browser loads our Angular app → JavaScript executes → Dynamic meta tags render → Perfect experience</p>
<p><strong>Social Media Bots:</strong> Bot requests page → Gets bare HTML (no JavaScript execution) → Sees only static <code>&lt;title&gt;</code> tag → No rich preview</p>
<p>Facebook's crawler, LinkedIn's bot, Twitter's card validator—none of them waited for our Angular app to bootstrap and set those meta tags. They needed HTML, and they needed it immediately. I was aware of this limitation of Angular, but while working on features and other parts of the app, I had completely forgotten about SEO friendliness.</p>
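<p>Detecting these crawlers is the easy half of the problem: they all identify themselves in the <code>User-Agent</code> header. A minimal sketch (the list of bots here is illustrative, not exhaustive):</p>
<pre><code class="language-javascript">// Sketch: classify a request by User-Agent. Real deployments typically
// match many more crawlers than this illustrative list.
const BOT_PATTERN =
  /facebookexternalhit|linkedinbot|twitterbot|whatsapp|googlebot|slackbot/i;

function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}
</code></pre>
<p>The hard half, as the rest of this post shows, is what to do once you know it's a bot.</p>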
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763749370994/fb01e39d-9c77-44ae-9f13-0ff3a0657d94.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-research-rabbit-hole">The Research Rabbit Hole</h2>
<p>I spent the next few days exploring every possible solution:</p>
<h3 id="heading-option-1-move-to-a-different-platform">Option 1: Move to a Different Platform</h3>
<p>"Maybe Netlify handles this better?" I thought. They do have prerendering built in. ECS with server-side rendering was another option, where we could have run Angular Universal.</p>
<p>But here's the thing: <strong>AWS Amplify was perfect for everything else</strong>. The CI/CD pipeline, the preview branches, the authentication integration, the hosting performance—all excellent. Abandoning it felt like throwing the baby out with the bathwater.</p>
<h3 id="heading-option-2-angular-universal-ssr">Option 2: Angular Universal (SSR)</h3>
<p>The "proper" solution, right? Server-side rendering would solve this elegantly. But it meant:</p>
<ul>
<li><p>Completely restructuring our application architecture</p>
</li>
<li><p>Dealing with window/document undefined errors</p>
</li>
<li><p>Managing a Node.js server</p>
</li>
<li><p>Significantly more complexity for deployments</p>
</li>
</ul>
<p>For a relatively simple SPA, this felt like overkill. We needed something lighter.</p>
<h3 id="heading-option-3-prerendering-services">Option 3: Prerendering Services</h3>
<p>This seemed promising. Services like <a target="_blank" href="http://Prerender.io">Prerender.io</a> could crawl our application and serve rendered HTML to bots. The architecture would be:</p>
<ul>
<li><p>Regular users → Direct to Amplify (fast!)</p>
</li>
<li><p>Social media bots → Through prerender service → Get fully rendered HTML</p>
</li>
</ul>
<p>The challenge? Amplify doesn't have built-in prerendering middleware. We'd need to set it up ourselves.</p>
<h2 id="heading-the-decision-framework-why-prerenderiohttpprerenderio-made-sense">The Decision Framework: Why <a target="_blank" href="http://Prerender.io">Prerender.io</a> Made Sense</h2>
<p>Before committing to any solution, I analysed the actual usage patterns and costs:</p>
<h3 id="heading-understanding-our-traffic-pattern">Understanding Our Traffic Pattern</h3>
<p>Let's be realistic about when prerendering actually happens:</p>
<ul>
<li><p><strong>Regular users</strong> browsing the site: 99%+ of traffic</p>
</li>
<li><p><strong>Social media bots</strong> crawling shared links: &lt;1% of traffic</p>
</li>
</ul>
<p>The key insight: <strong>Prerendering only happens when someone shares a link on social media.</strong> Not on every page load. Not for every user. Only when LinkedIn, Facebook, or Twitter bots crawl a URL.</p>
<h3 id="heading-cost-analysis">Cost Analysis</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763750655673/1878ce8e-0047-4e2c-acc3-2e34c81d37eb.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-the-math-that-convinced-me">The Math That Convinced Me</h3>
<p><strong>Scenario: 10,000 page views/month</strong> (9,900 users + 100 bot crawls)</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763750626544/7f32c81d-c346-471a-a8e7-59b866ff79ab.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-lambdaedge-cost-breakdown">Lambda@Edge Cost Breakdown</h3>
<p><strong>Pricing:</strong></p>
<ul>
<li><p>Request charges: $0.60 per 1M requests</p>
</li>
<li><p>Duration charges: $0.00005001 per GB-second</p>
</li>
<li><p><strong>Free tier</strong>: 1M requests/month (covers most small-medium sites)</p>
</li>
</ul>
<p><strong>Our usage:</strong></p>
<ul>
<li><p>Viewer-request: 10,000/month (bot detection on all traffic)</p>
</li>
<li><p>Origin-request: 100/month (redirect only bots)</p>
</li>
<li><p>Memory: 128 MB | Execution: ~10ms</p>
</li>
<li><p><strong>Monthly cost: ~$0.007</strong> (essentially free with free tier)</p>
</li>
</ul>
<p><strong>At scale:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1763750549788/60289047-f15b-4a95-8266-f4cac3cf24a6.png" alt class="image--center mx-auto" /></p>
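<p>The ~$0.007 figure can be reproduced with a small calculator using the prices quoted above (the free tier is ignored here, so this is the worst case; the function name is ours):</p>
<pre><code class="language-javascript">// Sketch: estimate monthly Lambda@Edge cost from the public prices
// quoted above ($0.60 per 1M requests, $0.00005001 per GB-second).
const REQUEST_PRICE_PER_MILLION = 0.6;
const PRICE_PER_GB_SECOND = 0.00005001;

function lambdaEdgeMonthlyCost({ requests, memoryMb = 128, durationMs = 10 }) {
  const requestCost = (requests / 1000000) * REQUEST_PRICE_PER_MILLION;
  // GB-seconds = requests × duration (s) × memory (GB)
  const gbSeconds = requests * (durationMs / 1000) * (memoryMb / 1024);
  return requestCost + gbSeconds * PRICE_PER_GB_SECOND;
}
</code></pre>
<p>At our 10,000 requests/month this comes out around two-thirds of a cent, and the free tier wipes out even that.</p>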
<p><strong>Winner:</strong> <a target="_blank" href="http://Prerender.io"><strong>Prerender.io</strong></a> <strong>+ Lambda@Edge</strong> - 90% cost savings, 10x faster implementation, zero infrastructure overhead.</p>
<h3 id="heading-the-reality-check">The Reality Check</h3>
<p>I asked myself: "What am I actually trying to solve?"</p>
<ul>
<li><p>✅ Social media previews when links are shared</p>
</li>
<li><p>❌ NOT trying to rank #1 on Google for competitive keywords</p>
</li>
<li><p>❌ NOT building a content-heavy blog that needs perfect SEO</p>
</li>
<li><p>❌ NOT dealing with thousands of bot crawls per day</p>
</li>
</ul>
<p>For a SPA where social sharing matters but isn't the primary traffic driver, prerendering is the pragmatic choice.</p>
<h3 id="heading-when-not-to-choose-prerenderiohttpprerenderio">When NOT to Choose <a target="_blank" href="http://Prerender.io">Prerender.io</a></h3>
<p>To be fair, <a target="_blank" href="http://Prerender.io">Prerender.io</a> isn't always the answer:</p>
<ul>
<li><p><strong>Heavy SEO focus</strong>: If organic search is your primary channel, SSR is better</p>
</li>
<li><p><strong>Content-heavy sites</strong>: News sites, blogs with thousands of articles need full SSR</p>
</li>
<li><p><strong>High bot traffic</strong>: If bots are &gt;10% of traffic, costs add up</p>
</li>
<li><p><strong>Real-time content</strong>: Stock prices, live scores need instant SSR</p>
</li>
</ul>
<p>But for our use case—a business application where social sharing enhances discoverability—prerendering was perfect.</p>
<h2 id="heading-the-cloudfront-discovery">The CloudFront Discovery</h2>
<p>Then it clicked: Amplify uses CloudFront under the hood. And CloudFront has Lambda@Edge—functions that can intercept and modify requests at the edge.</p>
<p><strong>This was our solution.</strong> We could:</p>
<ol>
<li><p>Detect social media bots at the CloudFront level</p>
</li>
<li><p>Route bot traffic through <a target="_blank" href="http://Prerender.io">Prerender.io</a></p>
</li>
<li><p>Keep regular user traffic going directly to Amplify</p>
</li>
</ol>
<p>Best of both worlds: stay on Amplify, solve the bot problem.</p>
<h2 id="heading-attempt-1-cloudfront-functions-days-of-frustration">Attempt 1: CloudFront Functions (Days of Frustration)</h2>
<p>My first thought: "CloudFront Functions are faster and cheaper than Lambda@Edge. Let's use those!"</p>
<p>CloudFront Functions seemed perfect:</p>
<ul>
<li><p>Execute in microseconds</p>
</li>
<li><p>Cost a fraction of Lambda@Edge</p>
</li>
<li><p>Native CloudFront integration</p>
</li>
<li><p>Perfect for request/response manipulation</p>
</li>
</ul>
<p>I spent days trying to make them work. Here's what I built:</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">handler</span>(<span class="hljs-params">event</span>) </span>{
    <span class="hljs-keyword">var</span> request = event.request;
    <span class="hljs-keyword">var</span> userAgent = request.headers[<span class="hljs-string">'user-agent'</span>];

    <span class="hljs-comment">// Bot detection works fine</span>
    <span class="hljs-keyword">if</span> (userAgent &amp;&amp; <span class="hljs-regexp">/facebookexternalhit|linkedinbot|twitterbot/</span>.test(userAgent.value)) {
        <span class="hljs-comment">// But now what? How do I redirect to prerender.io?</span>
        <span class="hljs-comment">// Can I change the origin? No.</span>
        <span class="hljs-comment">// Can I make an external call? No.</span>
        <span class="hljs-comment">// Can I modify the request to go elsewhere? No.</span>
    }

    <span class="hljs-keyword">return</span> request;
}
</code></pre>
<p>I tried multiple approaches:</p>
<ul>
<li><p><strong>Modifying the request URI</strong> - CloudFront Functions can change URIs, but not the actual origin server</p>
</li>
<li><p><strong>Adding custom headers</strong> - Headers were added successfully, but no way to act on them at the origin level</p>
</li>
<li><p><strong>Request transformation tricks</strong> - Every creative workaround hit the same wall</p>
</li>
</ul>
<p><strong>The Hard Truth:</strong> CloudFront Functions are incredibly limited by design. They can:</p>
<ul>
<li><p>✅ Modify headers</p>
</li>
<li><p>✅ Change URIs and query strings</p>
</li>
<li><p>✅ Validate and sanitize requests</p>
</li>
<li><p>❌ <strong>Cannot change origins</strong> (the actual server handling the request)</p>
</li>
<li><p>❌ <strong>Cannot make external API calls</strong></p>
</li>
<li><p>❌ <strong>Cannot perform complex routing logic</strong></p>
</li>
</ul>
<p>They're designed for lightweight tasks like adding security headers or URL rewrites, not for dynamically routing traffic to different services based on conditions.</p>
<p>After days of testing, researching, and hitting dead ends, I realised: <strong>CloudFront Functions simply cannot solve this problem.</strong> I could detect bots perfectly, but I had no way to route them to <a target="_blank" href="http://Prerender.io">Prerender.io</a>.</p>
<p><strong>Lesson learned:</strong> CloudFront Functions are blazing fast and cheap, but their limitations are real. For origin switching based on conditions, Lambda@Edge is the only option.</p>
<p>Time to learn Lambda@Edge.</p>
<h2 id="heading-attempt-2-the-502-bad-gateway-mystery">Attempt 2: The 502 Bad Gateway Mystery</h2>
<p>This time, the function ran but returned <strong>502 errors</strong>. CloudWatch logs showed the function was executing, but CloudFront rejected the response.</p>
<p>The culprit? I was modifying the request structure incorrectly. Lambda@Edge has strict validation for the request/response objects you return. My custom origin configuration had:</p>
<ul>
<li><p>Missing required fields</p>
</li>
<li><p>Incorrect URL encoding</p>
</li>
<li><p>Wrong domain references (I was using the internal Amplify domain instead of the CloudFront domain)</p>
</li>
</ul>
<p>Each iteration meant another 15-minute deployment wait. Testing edge functions is <em>slow</em>.</p>
<h2 id="heading-the-breakthrough-rtfm-read-the-fine-manual">The Breakthrough: RTFM (Read The Fine Manual)</h2>
<p>Frustrated, I finally dove into <a target="_blank" href="http://Prerender.io">Prerender.io</a>'s official documentation. They had a CloudFormation template specifically for CloudFront integration: <code>prerender-cloudfront.yaml</code>. Thanks also to the Amazon Q Developer CLI, which helped me debug and fix the remaining issues.</p>
<p>The key insight I'd been missing: <strong>Use the same Lambda function for TWO different CloudFront events:</strong></p>
<ol>
<li><p><strong>viewer-request</strong>: Detect bots and add special headers</p>
</li>
<li><p><strong>origin-request</strong>: Check for those headers and redirect to <a target="_blank" href="http://Prerender.io">Prerender.io</a></p>
</li>
</ol>
<p>Here's the beautiful simplicity of the final solution:</p>
<pre><code class="lang-javascript"><span class="hljs-meta">'use strict'</span>;

<span class="hljs-built_in">exports</span>.handler = <span class="hljs-function">(<span class="hljs-params">event, context, callback</span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> request = event.Records[<span class="hljs-number">0</span>].cf.request;

    <span class="hljs-keyword">if</span> (request.headers[<span class="hljs-string">'x-prerender-token'</span>] &amp;&amp; request.headers[<span class="hljs-string">'x-prerender-host'</span>]) {
        <span class="hljs-comment">// This is the origin-request function - redirect to prerender.io</span>
        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Redirecting to prerender.io'</span>);

        <span class="hljs-keyword">if</span> (request.headers[<span class="hljs-string">'x-query-string'</span>]) {
            request.querystring = request.headers[<span class="hljs-string">'x-query-string'</span>][<span class="hljs-number">0</span>].value;
        }

        request.origin = {
            <span class="hljs-attr">custom</span>: {
                <span class="hljs-attr">domainName</span>: <span class="hljs-string">'service.prerender.io'</span>,
                <span class="hljs-attr">port</span>: <span class="hljs-number">443</span>,
                <span class="hljs-attr">protocol</span>: <span class="hljs-string">'https'</span>,
                <span class="hljs-attr">readTimeout</span>: <span class="hljs-number">20</span>,
                <span class="hljs-attr">keepaliveTimeout</span>: <span class="hljs-number">5</span>,
                <span class="hljs-attr">customHeaders</span>: {},
                <span class="hljs-attr">sslProtocols</span>: [<span class="hljs-string">'TLSv1'</span>, <span class="hljs-string">'TLSv1.1'</span>],
                <span class="hljs-attr">path</span>: <span class="hljs-string">'/https%3A%2F%2F'</span> + request.headers[<span class="hljs-string">'x-prerender-host'</span>][<span class="hljs-number">0</span>].value
            }
        };
    } <span class="hljs-keyword">else</span> {
        <span class="hljs-comment">// This is the viewer-request function - detect bots and set headers</span>
        <span class="hljs-keyword">const</span> headers = request.headers;
        <span class="hljs-keyword">const</span> user_agent = headers[<span class="hljs-string">'user-agent'</span>];
        <span class="hljs-keyword">const</span> host = headers[<span class="hljs-string">'host'</span>];

        <span class="hljs-keyword">if</span> (user_agent &amp;&amp; host) {
            <span class="hljs-keyword">var</span> prerender = <span class="hljs-regexp">/googlebot|adsbot\-google|Feedfetcher\-Google|bingbot|yandex|baiduspider|Facebot|facebookexternalhit|twitterbot|rogerbot|linkedinbot|embedly|quora link preview|showyoubot|outbrain|pinterest|slackbot|vkShare|W3C_Validator|redditbot|applebot|whatsapp|flipboard|tumblr|bitlybot|skypeuripreview|nuzzel|discordbot|google page speed|qwantify|pinterestbot|bitrix link preview|xing\-contenttabreceiver|chrome\-lighthouse|telegrambot|Perplexity|OAI-SearchBot|ChatGPT|GPTBot|ClaudeBot|Amazonbot|integration-test/i</span>.test(user_agent[<span class="hljs-number">0</span>].value);

            prerender = prerender || <span class="hljs-regexp">/_escaped_fragment_/</span>.test(request.querystring);
            prerender = prerender &amp;&amp; ! <span class="hljs-regexp">/\.(js|css|xml|less|png|jpg|jpeg|gif|pdf|doc|txt|ico|rss|zip|mp3|rar|exe|wmv|doc|avi|ppt|mpg|mpeg|tif|wav|mov|psd|ai|xls|mp4|m4a|swf|dat|dmg|iso|flv|m4v|torrent|ttf|woff|svg|eot)$/i</span>.test(request.uri);

            <span class="hljs-keyword">if</span> (prerender) {
                <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Bot detected:'</span>, user_agent[<span class="hljs-number">0</span>].value);
                headers[<span class="hljs-string">'x-prerender-token'</span>] = [{ <span class="hljs-attr">key</span>: <span class="hljs-string">'X-Prerender-Token'</span>, <span class="hljs-attr">value</span>: <span class="hljs-string">'YOUR_PRERENDER_TOKEN'</span>}];
                headers[<span class="hljs-string">'x-prerender-host'</span>] = [{ <span class="hljs-attr">key</span>: <span class="hljs-string">'X-Prerender-Host'</span>, <span class="hljs-attr">value</span>: host[<span class="hljs-number">0</span>].value}];
                headers[<span class="hljs-string">'x-prerender-cachebuster'</span>] = [{ <span class="hljs-attr">key</span>: <span class="hljs-string">'X-Prerender-Cachebuster'</span>, <span class="hljs-attr">value</span>: <span class="hljs-built_in">Date</span>.now().toString()}];
                headers[<span class="hljs-string">'x-query-string'</span>] = [{ <span class="hljs-attr">key</span>: <span class="hljs-string">'X-Query-String'</span>, <span class="hljs-attr">value</span>: request.querystring}];
            } <span class="hljs-keyword">else</span> {
                <span class="hljs-built_in">console</span>.log(<span class="hljs-string">'Regular user'</span>);
            }
        }
    }

    callback(<span class="hljs-literal">null</span>, request);
};
</code></pre>
<h2 id="heading-why-this-works-the-two-stage-magic">Why This Works (The Two-Stage Magic)</h2>
<p>The genius of this approach is the two-stage processing:</p>
<p><strong>Stage 1 - Viewer Request (Client → Edge):</strong></p>
<ul>
<li><p>Lambda checks the user agent</p>
</li>
<li><p>If it's a bot, adds special headers (<code>x-prerender-token</code>, <code>x-prerender-host</code>)</p>
</li>
<li><p>Passes request along</p>
</li>
</ul>
<p><strong>Stage 2 - Origin Request (Edge → Origin):</strong></p>
<ul>
<li><p>Same Lambda function checks for those special headers</p>
</li>
<li><p>If present, routes the request to <a target="_blank" href="http://Prerender.io">Prerender.io</a> instead of Amplify by swapping in a custom origin</p>
</li>
<li><p><a target="_blank" href="http://Prerender.io">Prerender.io</a> renders the Angular app and returns HTML</p>
</li>
<li><p>If not present, request goes directly to Amplify (regular users)</p>
</li>
</ul>
<p>This means:</p>
<ul>
<li><p>✅ Regular users never touch the prerender service (fast!)</p>
</li>
<li><p>✅ Bots get fully rendered HTML with proper meta tags</p>
</li>
<li><p>✅ Zero changes to our Amplify hosting</p>
</li>
<li><p>✅ Cost-efficient—only pay for actual bot traffic (~1%)</p>
</li>
</ul>
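<p>For completeness: Lambda@Edge also offers a Python runtime, and the same two-stage trick translates directly. This is a sketch mirroring the JavaScript above (trimmed bot list, placeholder token, querystring forwarding omitted for brevity), not what we deployed:</p>

```python
import re
import time

# Trimmed bot list for illustration; 'YOUR_PRERENDER_TOKEN' is a placeholder
BOTS = re.compile(r'facebookexternalhit|linkedinbot|twitterbot|slackbot', re.I)
PRERENDER_TOKEN = 'YOUR_PRERENDER_TOKEN'

def handler(event, context=None):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']

    if 'x-prerender-token' in headers and 'x-prerender-host' in headers:
        # origin-request stage: the viewer-request stage already tagged this as a bot,
        # so swap the origin from Amplify to the prerender service
        host = headers['x-prerender-host'][0]['value']
        request['origin'] = {
            'custom': {
                'domainName': 'service.prerender.io',
                'port': 443,
                'protocol': 'https',
                'readTimeout': 20,
                'keepaliveTimeout': 5,
                'customHeaders': {},
                'sslProtocols': ['TLSv1', 'TLSv1.1'],
                'path': '/https%3A%2F%2F' + host,
            }
        }
    else:
        # viewer-request stage: detect bots and tag the request with headers
        ua = headers.get('user-agent', [{}])[0].get('value', '')
        host = headers.get('host', [{}])[0].get('value', '')
        if ua and host and BOTS.search(ua):
            headers['x-prerender-token'] = [{'key': 'X-Prerender-Token', 'value': PRERENDER_TOKEN}]
            headers['x-prerender-host'] = [{'key': 'X-Prerender-Host', 'value': host}]
            headers['x-prerender-cachebuster'] = [{'key': 'X-Prerender-Cachebuster', 'value': str(int(time.time()))}]
    return request
```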
<h2 id="heading-configuring-cloudfront">Configuring CloudFront</h2>
<p>In the CloudFront distribution settings, I associated the Lambda function with <strong>both</strong> events:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">Associations:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">EventType:</span> <span class="hljs-string">viewer-request</span>
    <span class="hljs-attr">LambdaFunctionARN:</span> <span class="hljs-string">arn:aws:lambda:us-east-1:xxx:function:socialbots:1</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">EventType:</span> <span class="hljs-string">origin-request</span>
    <span class="hljs-attr">LambdaFunctionARN:</span> <span class="hljs-string">arn:aws:lambda:us-east-1:xxx:function:socialbots:1</span>
</code></pre>
<p>Same function, two different trigger points.</p>
<h2 id="heading-the-angular-side-signaling-readiness">The Angular Side: Signaling Readiness</h2>
<p>One more piece: Angular needed to tell <a target="_blank" href="http://Prerender.io">Prerender.io</a> when the page was fully rendered with all meta tags set.</p>
<p>In our article component:</p>
<pre><code class="lang-typescript">ngOnInit() {
    <span class="hljs-keyword">const</span> articleId = <span class="hljs-built_in">this</span>.route.snapshot.paramMap.get(<span class="hljs-string">'id'</span>);

    <span class="hljs-built_in">this</span>.articleService.getArticle(articleId).subscribe(<span class="hljs-function"><span class="hljs-params">article</span> =&gt;</span> {
        <span class="hljs-comment">// Update meta tags</span>
        <span class="hljs-built_in">this</span>.meta.updateTag({ property: <span class="hljs-string">'og:title'</span>, content: article.title });
        <span class="hljs-built_in">this</span>.meta.updateTag({ property: <span class="hljs-string">'og:description'</span>, content: article.description });
        <span class="hljs-built_in">this</span>.meta.updateTag({ property: <span class="hljs-string">'og:image'</span>, content: article.imageUrl });
        <span class="hljs-built_in">this</span>.meta.updateTag({ name: <span class="hljs-string">'twitter:card'</span>, content: <span class="hljs-string">'summary_large_image'</span> });

        <span class="hljs-comment">// Signal to Prerender.io that the page is ready</span>
        (<span class="hljs-built_in">window</span> <span class="hljs-keyword">as</span> <span class="hljs-built_in">any</span>).prerenderReady = <span class="hljs-literal">true</span>;
    });
}
</code></pre>
<p>Without <code>prerenderReady = true</code>, <a target="_blank" href="http://Prerender.io">Prerender.io</a> might snapshot the page before our API call completes and meta tags are set.</p>
<h2 id="heading-testing-and-debugging">Testing and Debugging</h2>
<p>Testing edge functions is painful because of deployment times. Here's what helped:</p>
<p><strong>1. CloudWatch Logs</strong> Lambda@Edge logs go to CloudWatch in the region where the function executes (us-east-1 for me):</p>
<pre><code class="lang-basic">/aws/lambda/us-east-1.socialbots
</code></pre>
<p><strong>2. Direct curl Testing</strong></p>
<pre><code class="lang-bash"><span class="hljs-comment"># Test bot detection</span>
curl -A <span class="hljs-string">"facebookexternalhit/1.1"</span> https://your-domain.com/article/123

<span class="hljs-comment"># Test regular user</span>
curl -A <span class="hljs-string">"Mozilla/5.0"</span> https://your-domain.com/article/123
</code></pre>
<p><strong>3. Cache Invalidation</strong> CloudFront caches everything. After changes, invalidate:</p>
<pre><code class="lang-bash">aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths <span class="hljs-string">"/*"</span>
</code></pre>
<p><strong>4.</strong> <a target="_blank" href="http://Prerender.io"><strong>Prerender.io</strong></a> <strong>Direct Testing</strong> Check what <a target="_blank" href="http://Prerender.io">Prerender.io</a> sees:</p>
<pre><code class="lang-basic">https://service.prerender.io/https://your-domain.com/article/123
</code></pre>
<h2 id="heading-the-waiting-game">The Waiting Game</h2>
<p>The hardest part? <strong>Patience.</strong> Every CloudFront distribution update takes 10-15 minutes to propagate. Every Lambda@Edge deployment requires replicating to all edge locations.</p>
<p>I learned to:</p>
<ul>
<li><p>Make changes in small batches</p>
</li>
<li><p>Test thoroughly in CloudWatch before deploying</p>
</li>
<li><p>Use <a target="_blank" href="http://Prerender.io">Prerender.io</a>'s direct API for quick validation</p>
</li>
<li><p>Keep a testing checklist to avoid forgetting edge cases</p>
</li>
</ul>
<h2 id="heading-key-lessons-learned">Key Lessons Learned</h2>
<ol>
<li><p><strong>Don't abandon a great platform for one missing feature.</strong> Amplify is excellent—we just needed to extend it.</p>
</li>
<li><p><strong>Lambda@Edge is powerful but picky.</strong> CommonJS only, strict validation, slow deployments. Plan accordingly.</p>
</li>
<li><p><strong>Two-stage processing is elegant.</strong> Using the same function for both viewer-request and origin-request is cleaner than complex routing logic.</p>
</li>
<li><p><strong>Follow official patterns.</strong> <a target="_blank" href="http://Prerender.io">Prerender.io</a>'s CloudFormation template saved me hours of trial and error. When stuck, check the docs.</p>
</li>
<li><p><strong>Cache management is critical.</strong> Both CloudFront and <a target="_blank" href="http://Prerender.io">Prerender.io</a> cache aggressively. Know how to clear both.</p>
</li>
<li><p><strong>Testing takes time.</strong> Budget for 15-minute deployment cycles when working with edge functions.</p>
</li>
</ol>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>What started as "our social links don't work" turned into a deep dive into CloudFront, Lambda@Edge, and edge computing. The journey had plenty of 502 errors, syntax mistakes, and waiting for deployments.</p>
<p>But the end result? Our Angular SPA on Amplify now provides beautiful social media previews while maintaining the performance and deployment simplicity we loved in the first place.</p>
<p>Sometimes the right solution isn't changing your infrastructure—it's extending what you already have.</p>
<p><em>Have you dealt with similar challenges in your SPA deployments? What solutions worked for you? Let me know in the comments!</em></p>
<h2 id="heading-references">References :</h2>
<ul>
<li><p><a target="_blank" href="https://github.com/AvinashDalvi89/cloudfront-lambda-edge-prerender-io-routing">https://github.com/AvinashDalvi89/cloudfront-lambda-edge-prerender-io-routing</a></p>
</li>
<li><p><a target="_blank" href="https://docs.prerender.io/docs/cloudfront">https://docs.prerender.io/docs/cloudfront</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The 9 AM Discovery That Saved Our Production: An ECS Fargate Circuit Breaker Story]]></title><description><![CDATA[Hello Devs,
In the world of containerised deployments, small mistakes can have catastrophic consequences. What started as a routine morning API test in our development environment turned into a revelation about production resilience that fundamentall...]]></description><link>https://www.internetkatta.com/the-9-am-discovery-that-saved-our-production-an-ecs-fargate-circuit-breaker-story</link><guid isPermaLink="true">https://www.internetkatta.com/the-9-am-discovery-that-saved-our-production-an-ecs-fargate-circuit-breaker-story</guid><category><![CDATA[ECS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[aws-fargate]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Thu, 18 Sep 2025 04:56:37 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757433282935/0c4e0d89-9a5f-4d73-b489-b79e92e8073a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>In the world of containerised deployments, small mistakes can have catastrophic consequences. What started as a routine morning API test in our development environment turned into a revelation about production resilience that fundamentally changed how we approach <strong>ECS Fargate</strong> deployments.</p>
<p>This is the story of how a simple port configuration error taught us the critical importance of <strong>ECS Deployment Circuit Breakers</strong> – and why every team running workloads on <strong>AWS Fargate</strong> should consider them essential infrastructure, not optional extras.</p>
<blockquote>
<p>The best production incidents, as it turns out, are the ones that never happen.</p>
</blockquote>
<h2 id="heading-the-setup">The Setup</h2>
<p>Our Flask API ran smoothly on ECS Fargate with a cost-optimized dev setup — tasks auto-started at 8 AM and stopped after hours using CloudWatch alarms.</p>
<p>We used an Application Load Balancer (ALB) targeting port <code>5001</code>, with health checks and task definitions perfectly aligned:</p>
<pre><code class="lang-python"><span class="hljs-comment"># app.py - The way it had always been</span>
<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    app.run(host=<span class="hljs-string">'0.0.0.0'</span>, port=<span class="hljs-number">5001</span>, debug=<span class="hljs-literal">False</span>)
</code></pre>
<p><strong>ECS config:</strong></p>
<ul>
<li><p><strong>Container port:</strong> 5001</p>
</li>
<li><p><strong>Target group:</strong> 5001</p>
</li>
<li><p><strong>ALB health checks:</strong> 5001</p>
</li>
</ul>
<p>Everything in harmony.</p>
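<p>With hindsight, this alignment is trivially checkable before deploy. A sketch of such a pre-deploy script (field names follow the ECS task definition and ELB target group API shapes; this is illustrative, not part of our pipeline at the time):</p>

```python
def validate_port_alignment(app_port, task_definition, target_group):
    """Return a list of misalignment errors (empty means the config is in harmony)."""
    errors = []
    container_port = task_definition['containerDefinitions'][0]['portMappings'][0]['containerPort']
    tg_port = target_group['Port']
    if app_port != container_port:
        errors.append(f'app listens on {app_port} but container maps {container_port}')
    if container_port != tg_port:
        errors.append(f'container maps {container_port} but target group expects {tg_port}')
    return errors

task_def = {'containerDefinitions': [{'portMappings': [{'containerPort': 5001}]}]}
tg = {'Port': 5001}
print(validate_port_alignment(5001, task_def, tg))  # [] - everything aligned
print(validate_port_alignment(5201, task_def, tg))  # flags the exact drift that bit us
```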
<h2 id="heading-the-innocent-change">The Innocent Change</h2>
<p>One of our backend developers was working late on a new feature. They were running multiple services locally and kept hitting port conflicts: port 5001 was already occupied by another service.</p>
<p>"Quick fix," the developer thought, and made what seemed like the most logical change:</p>
<pre><code class="lang-python"><span class="hljs-comment"># app.py - The "harmless" local change</span>
<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">'__main__'</span>:
    app.run(host=<span class="hljs-string">'0.0.0.0'</span>, port=<span class="hljs-number">5201</span>, debug=<span class="hljs-literal">False</span>)  <span class="hljs-comment"># Changed to avoid local conflict</span>
</code></pre>
<p>The feature worked perfectly in their local environment. Tests passed. The code review looked good. The Docker build succeeded. Everything seemed normal.</p>
<p>But here's where the story takes a turn.</p>
<h2 id="heading-the-9-am-discovery">The 9 AM Discovery</h2>
<p>The next morning, I arrived around 9 AM, opened my machine, and decided to run some API tests before diving into feature work. Our automated CloudWatch alarm had dutifully started the ECS Fargate tasks at 8 AM, just as configured. But something was wrong.</p>
<p>Every API call returned the dreaded <strong>502 Bad Gateway</strong> error.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757422966125/84d830de-418d-4257-b4e3-bc2e73b3fe84.png" alt class="image--center mx-auto" /></p>
<p>I immediately checked the <strong>ECS console</strong>, and what I saw made me pause: <strong>Fargate tasks</strong> were in a continuous cycle of PENDING → RUNNING → STOPPED. They would start up, run for a few minutes, then get drained and terminated, only for <strong>ECS</strong> to immediately spin up new ones.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757420248853/80d05d3f-d8af-482c-b5d3-9182dd9f5b8c.png" alt class="image--center mx-auto" /></p>
<p>The root cause hit me like a lightning bolt: Our Flask application was now listening on port 5201, but everything else in our infrastructure was still configured for port 5001.</p>
<h2 id="heading-the-downward-spiral">The Downward Spiral</h2>
<p>What followed was a textbook example of how a small misconfiguration can cascade into a major incident:</p>
<ol>
<li><p><strong>Task Launch</strong>: <strong>ECS Fargate</strong> would start a new task</p>
</li>
<li><p><strong>Health Check Failure</strong>: ALB couldn't reach the app on port 5001</p>
</li>
<li><p><strong>Task Termination</strong>: <strong>ECS</strong> marked the task as unhealthy and terminated it</p>
</li>
<li><p><strong>Replacement Attempt</strong>: <strong>ECS</strong> immediately launched a new <strong>Fargate task</strong> to maintain desired count</p>
</li>
<li><p><strong>Infinite Loop</strong>: Steps 1-4 repeated endlessly</p>
</li>
</ol>
<p>Our <strong>ECS Fargate cluster</strong> was stuck in what we later dubbed "the task death spiral." New <strong>Fargate tasks</strong> were being created and destroyed every few minutes, consuming compute resources while serving zero traffic.</p>
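<p>The spiral is easy to reproduce on paper. A toy simulation (purely illustrative) of the replace-on-unhealthy loop shows why the task count never stabilises while the port mismatch persists:</p>

```python
def simulate_deployment(health_check_port, app_port, max_attempts=5):
    """Simulate ECS replacing unhealthy tasks; returns (healthy, tasks_churned)."""
    churned = 0
    for _ in range(max_attempts):
        healthy = (health_check_port == app_port)  # ALB can only reach the app's real port
        if healthy:
            return True, churned
        churned += 1  # task marked unhealthy, terminated, and replaced
    return False, churned

print(simulate_deployment(5001, 5001))  # (True, 0)  - aligned config, stable service
print(simulate_deployment(5001, 5201))  # (False, 5) - endless churn, zero traffic served
```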
<h2 id="heading-the-circuit-breaker-revelation">The Circuit Breaker Revelation</h2>
<p>During our post-incident analysis, I realised something that would change our deployment strategy forever: this entire incident could have been prevented automatically.</p>
<p>Enter the <a target="_blank" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-circuit-breaker.html"><strong>ECS Fargate Deployment Circuit Breaker</strong></a>.</p>
<p>This AWS feature acts like an intelligent safety net for <strong>ECS Fargate</strong> deployments. When enabled, it monitors your <strong>Fargate</strong> deployment and can automatically detect when something is going wrong, stopping the deployment and rolling back to the previous stable version.</p>
<h2 id="heading-how-ecs-behaves-in-different-rollback-scenarios">How ECS Behaves in Different Rollback Scenarios</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Scenario</td><td>Desired Count</td><td>Task Def Changed</td><td>Rollback Triggered</td><td>Rollback Time</td></tr>
</thead>
<tbody>
<tr>
<td>Broken image pushed with <code>latest</code> only</td><td>1</td><td>❌ No</td><td>❌ No</td><td>❌ Never</td></tr>
<tr>
<td>Broken task def v3 (flask-app:v2)</td><td>1</td><td>✅ Yes</td><td>✅ Yes</td><td>⏱ ~10–20 min</td></tr>
<tr>
<td>Same failure with <code>desiredCount=5</code></td><td>5</td><td>✅ Yes</td><td>✅ Yes</td><td>⏱ ~3–5 min</td></tr>
</tbody>
</table>
</div><ul>
<li><p>Circuit breaker <strong>only works</strong> if a new <strong>task definition is registered</strong>.</p>
</li>
<li><p><strong>Desired count = 1</strong> leads to <strong>slow failure detection</strong>, delaying rollback.</p>
</li>
<li><p>ECS uses an internal failure threshold (usually 3 failed tasks).</p>
</li>
</ul>
<h2 id="heading-how-circuit-breaker-would-have-saved-us">How Circuit Breaker Would Have Saved Us</h2>
<p>Let's replay our incident with <strong>ECS Fargate deployment circuit breaker</strong> enabled:</p>
<p>To better understand how ECS identifies and reacts to a bad deployment, here’s a simplified flow diagram based on our real incident:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757425452103/f2adce56-6da8-4d09-bff8-36a5d0a4fa6c.png" alt class="image--center mx-auto" /></p>
<ol>
<li><p><strong>Deployment Start:</strong> Mismatched port in new task definition</p>
</li>
<li><p><strong>Monitoring Begins:</strong> ECS tracks task health and startup patterns</p>
</li>
<li><p><strong>Failure Detected:</strong> Multiple ECS task failures trigger threshold</p>
</li>
<li><p><strong>Automatic Rollback:</strong> ECS reverts to previous task definition</p>
</li>
<li><p><strong>Service Restored:</strong> Traffic resumes via healthy version</p>
</li>
</ol>
<p>Instead of 75 minutes of downtime, we would have had perhaps 5-10 minutes of degraded performance while the circuit breaker detected and resolved the issue.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757423154034/03f2b8c7-0997-4ddf-bde9-b1484ed07603.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757423625401/2ab3d6ac-17a2-4456-ad57-54715a949a74.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-how-ecs-actually-triggers-rollbacks-behind-the-scenes">How ECS Actually Triggers Rollbacks: Behind the Scenes</h2>
<p>During our experiments, we noticed some undocumented behaviours:</p>
<ul>
<li><p>ECS doesn't roll back unless there's a new <strong>task definition</strong> — pushing a new image to <code>latest</code> doesn't count.</p>
</li>
<li><p><strong>Desired count = 1</strong> (common in off-hours cost optimisation) leads to much slower rollbacks due to staggered failures.</p>
</li>
<li><p>ECS seems to use a dynamic <strong>failure threshold of 3</strong> (confirmed visually in the console), meaning it waits for 3 failed task launches before triggering rollback. You cannot change either of the threshold values. It is mentioned in the <a target="_blank" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-circuit-breaker.html#failure-threshold">ECS deployment circuit breaker</a></p>
<blockquote>
<p>ECS uses the <a target="_blank" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-circuit-breaker.html">following logic</a> to determine rollback:<br />
Minimum threshold &lt;= 0.5 * <code>desired task count</code> =&gt; maximum threshold</p>
</blockquote>
</li>
</ul>
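<p>Read literally, the documented formula gives a concrete threshold per desired count. A small sketch (the floor of 3 and ceiling of 200 come from the ECS docs; the exact rounding is my assumption):</p>

```python
import math

def failure_threshold(desired_count):
    """Approximate the ECS circuit breaker failure threshold: half the desired
    task count, clamped to the documented minimum (3) and maximum (200)."""
    return max(3, min(200, math.ceil(0.5 * desired_count)))

print(failure_threshold(1))    # 3   - why desiredCount=1 rollbacks feel so slow
print(failure_threshold(10))   # 5
print(failure_threshold(1000)) # 200
```

<p>With <code>desiredCount=1</code>, ECS still waits for three consecutive task failures, which matches the slow rollbacks we observed.</p>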
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757422632526/3736e4ef-0e1f-4f16-8a60-06dc99bb5a1a.png" alt class="image--center mx-auto" /></p>
<p>What this means in practice:<br />Even if circuit breaker is “enabled,” rollback <strong>won’t happen</strong> unless you structure your deployments correctly.</p>
<h2 id="heading-the-circuit-breaker-implementation">The Circuit Breaker Implementation</h2>
<p>That afternoon, we made a decision that would prove to be one of our best infrastructure investments: enabling ECS Deployment Circuit Breaker across all our services, starting with our most critical production workloads.</p>
<p>The configuration was surprisingly straightforward:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"deploymentCircuitBreaker"</span>: {
    <span class="hljs-attr">"enable"</span>: <span class="hljs-literal">true</span>,
    <span class="hljs-attr">"rollback"</span>: <span class="hljs-literal">true</span>
  }
}
</code></pre>
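<p>We applied this with a one-off service update. A hedged sketch of the same change via boto3 (cluster and service names are placeholders):</p>

```python
def circuit_breaker_update(cluster, service):
    """Build kwargs for ecs.update_service() that enable the deployment circuit breaker.

    Usage (requires AWS credentials):
        import boto3
        boto3.client('ecs').update_service(**circuit_breaker_update('dev-cluster', 'flask-api'))
    """
    return {
        'cluster': cluster,
        'service': service,
        'deploymentConfiguration': {
            'deploymentCircuitBreaker': {'enable': True, 'rollback': True},
        },
    }

print(circuit_breaker_update('dev-cluster', 'flask-api'))
```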
<h2 id="heading-what-happens-when-desiredcount-0-a-real-risk-pattern"><strong>What Happens When DesiredCount = 0? A Real Risk Pattern</strong></h2>
<p>Our incident happened in a development environment using off-hours scaling — every night, <code>desiredCount = 0</code>, and each morning ECS spins the tasks back up at 8 AM. This helps save cost during non-business hours.</p>
<p>But here’s the hidden danger we uncovered through real experiments:</p>
<blockquote>
<p>In our real case, we pushed a new (and broken) image to <code>flask-app:latest</code> overnight.<br />However, we didn’t register a new task definition — the task definition was unchanged.<br />So when ECS scaled up in the morning, it pulled the broken image and launched new tasks.<br />Because ECS had no healthy task running and no “new deployment” to monitor, <strong>no rollback happened.</strong></p>
</blockquote>
<p>This subtle but critical issue means that:</p>
<ul>
<li><p>ECS had <strong>no baseline healthy task</strong> to compare against</p>
</li>
<li><p>There was <strong>no new task definition</strong>, so ECS didn’t consider this a deployment</p>
</li>
<li><p>Circuit breaker logic was <strong>never triggered</strong></p>
</li>
<li><p>ECS just kept retrying the same broken image silently</p>
</li>
</ul>
<p>Even with circuit breaker enabled, <strong>rollback only works if ECS sees a new deployment</strong> (i.e., new task definition revision). In our case, since we reused <code>flask-app:latest</code> with the same task definition, ECS had nothing to roll back to.</p>
<h2 id="heading-recommendations-based-on-real-world-failures">Recommendations (Based on Real-World Failures)</h2>
<p>These are not just best practices from the AWS documentation. These are hard-earned lessons from our own experiments and real incident recoveries.</p>
<h3 id="heading-1-avoid-using-latest-tag-in-ecs-task-definitions">1. Avoid using <code>latest</code> tag in ECS task definitions</h3>
<p>ECS won't detect image changes if you're using <code>flask-app:latest</code> and don’t update the task definition. This can silently deploy broken images <strong>without triggering a rollback</strong>.</p>
<p><strong>Do this instead:</strong></p>
<ul>
<li><p>Use <strong>immutable image tags</strong> like <code>v1.2.3</code>, <code>build-20250909</code>, or a full <strong>SHA digest</strong></p>
</li>
<li><p>Always reference a new task definition revision tied to each deployment</p>
</li>
</ul>
<h3 id="heading-2-register-a-new-task-definition-with-every-deployment">2. Register a new task definition with every deployment</h3>
<p>The deployment circuit breaker <strong>only activates when ECS detects a new deployment</strong>. If the task definition remains unchanged (even with a new image), ECS won’t treat it as a deployment, and rollback won’t occur.</p>
<p><strong>Do this instead:</strong></p>
<ul>
<li><p>Automate task definition registration in your CI/CD pipeline</p>
</li>
<li><p>Even if using the same image tag, register a revision to trigger deployment detection</p>
</li>
</ul>
<h3 id="heading-3-use-cloudwatch-alarms-to-detect-deployment-failures-early">3. Use CloudWatch alarms to detect deployment failures early</h3>
<p>ECS retries silently when tasks fail during deployment. In non-prod or low-desiredCount environments, this can go unnoticed.</p>
<p><strong>Do this instead:</strong></p>
<ul>
<li><p>Monitor <code>UnHealthyHostCount</code> (ALB) and <strong>ECS service deployment events</strong></p>
</li>
<li><p>Alert on unusual <strong>task exit reasons</strong>, <strong>STOPPED</strong> states, or drops in <code>RunningTaskCount</code></p>
</li>
</ul>
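<p>A hedged example of the alarm we'd set on the ALB metric (note the metric's actual name is <code>UnHealthyHostCount</code>, with a capital H; the dimension values are the ARN suffixes of your target group and load balancer):</p>
<pre><code class="lang-python">def unhealthy_host_alarm(service: str, target_group: str, load_balancer: str) -> dict:
    """Kwargs for cloudwatch.put_metric_alarm(): fire when the ALB reports
    any unhealthy target for two consecutive minutes."""
    return {
        "AlarmName": f"{service}-unhealthy-hosts",
        "Namespace": "AWS/ApplicationELB",
        "MetricName": "UnHealthyHostCount",
        "Dimensions": [
            {"Name": "TargetGroup", "Value": target_group},
            {"Name": "LoadBalancer", "Value": load_balancer},
        ],
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 2,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "notBreaching",
    }
</code></pre>
<p>Pair the alarm with an SNS action so a silent retry loop pages someone instead of burning quietly for 20 minutes.</p>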
<h3 id="heading-4-enforce-task-definition-updates-in-cicd">4. Enforce Task Definition Updates in CI/CD</h3>
<p>One common issue we saw: devs pushed new images to <code>latest</code>, but forgot to update task definitions. Result? No rollback, no detection, broken app silently running.</p>
<p><strong>Do this instead:</strong></p>
<ul>
<li><p>Add a CI/CD check: <strong>fail the pipeline</strong> if task definition revision isn't updated</p>
</li>
<li><p>Maintain an audit log: map every deployment to a task definition revision</p>
</li>
</ul>
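<p>The pipeline check itself can be tiny. A sketch (the revision numbers would come from <code>describe_task_definition</code> before and after the deploy step):</p>
<pre><code class="lang-python">def assert_new_revision(before: int, after: int) -> None:
    """Fail the pipeline if the deploy step did not register a new revision."""
    if after > before:
        return
    raise RuntimeError(
        f"task definition revision did not advance ({before} to {after}); "
        "ECS will not see a deployment, so the circuit breaker cannot roll back"
    )
</code></pre>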
<h3 id="heading-5-use-higher-desiredcount-during-deployments-for-faster-rollback">5. Use higher <code>desiredCount</code> during deployments for faster rollback</h3>
<p>In our tests, when <code>desiredCount</code> was set to 1, rollback took over 20 minutes to trigger. With <code>desiredCount</code> set to 5, the circuit breaker detected the failure pattern faster and triggered rollback within 3–5 minutes.</p>
<p><strong>What to do instead:</strong></p>
<ul>
<li><p>Temporarily increase <code>desiredCount</code> during deployments (e.g., from 1 to 5).</p>
</li>
<li><p>Alternatively, tune <code>deploymentConfiguration</code> to use <code>maximumPercent = 200</code> and <code>minimumHealthyPercent = 50</code> to allow parallel task launches during updates.</p>
</li>
</ul>
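<p>Here is what those service settings look like together, as they'd appear in an <code>update-service</code> call (the values are the ones from our tests, not universal defaults):</p>
<pre><code class="lang-json">{
  "desiredCount": 5,
  "deploymentConfiguration": {
    "maximumPercent": 200,
    "minimumHealthyPercent": 50,
    "deploymentCircuitBreaker": {
      "enable": true,
      "rollback": true
    }
  }
}
</code></pre>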
<h3 id="heading-summary-table">Summary Table</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Problem</td><td>Recommendation</td></tr>
</thead>
<tbody>
<tr>
<td>ECS didn’t rollback on broken image</td><td>Always register a new task definition</td></tr>
<tr>
<td>ECS used broken <code>latest</code> tag silently</td><td>Avoid <code>latest</code>, use immutable image tags</td></tr>
<tr>
<td>Slow rollback when desired count = 1</td><td>Use higher <code>desiredCount</code> during deploys</td></tr>
<tr>
<td>No alert when tasks failed</td><td>Add CloudWatch alarms for task health and service events</td></tr>
<tr>
<td>Deployment skipped task def update</td><td>Enforce task def registration in pipeline</td></tr>
</tbody>
</table>
</div><h2 id="heading-conclusion-the-safety-net-we-proactively-built">Conclusion: The Safety Net We Proactively Built</h2>
<p>Even small mistakes — like a port mismatch — can bring down containerized systems. That’s why we treat circuit breakers not as optional features, but as must-have infrastructure. They're not just for rollback — they build resilience into your deployment lifecycle.</p>
<p>That seemingly minor port change could’ve caused hours of downtime — but it didn’t. Because we caught it early, we had the chance to rethink our deployment safety.</p>
<p>We turned that morning’s incident into a proactive defense strategy by enabling ECS Deployment Circuit Breaker across all services. It now gives us confidence that even if a broken deployment slips through, ECS will detect the issue and roll back automatically — without us scrambling at 9 AM.</p>
<p>Our team now deploys with confidence, not caution. And the best part? The incident never reached users.</p>
<p>Sometimes the best production incidents are the ones that never happen.</p>
<p>👉 Have you faced something similar? Let’s talk in the comments.</p>
<h3 id="heading-references">References:</h3>
<ul>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/containers/announcing-amazon-ecs-deployment-circuit-breaker/">https://aws.amazon.com/blogs/containers/announcing-amazon-ecs-deployment-circuit-breaker/</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-circuit-breaker.html">https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-circuit-breaker.html</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[AWS Kiro IDE: Not Just Another AI Toy for Developers]]></title><description><![CDATA[Hello Devs,
Every morning, after my usual routine, I scroll through Reddit. It’s part habit, part curiosity—a way to catch up on what’s happening in the world of AWS and product building.

But let me be clear: I’m not the kind of developer who jumps ...]]></description><link>https://www.internetkatta.com/aws-kiro-ide-not-just-another-ai-toy-for-developers</link><guid isPermaLink="true">https://www.internetkatta.com/aws-kiro-ide-not-just-another-ai-toy-for-developers</guid><category><![CDATA[Kiro]]></category><category><![CDATA[AWS]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[product]]></category><category><![CDATA[IDEs]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Tue, 15 Jul 2025 04:36:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752553877606/094b2fdc-c00f-4deb-babd-6407ff8263c4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>Every morning, after my usual routine, I scroll through Reddit. It’s part habit, part curiosity—a way to catch up on what’s happening in the world of AWS and product building.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752552017027/20792d0a-9b1a-45b3-b71c-20cbf41be0ea.png" alt class="image--center mx-auto" /></p>
<p>But let me be clear: I’m not the kind of developer who jumps on every trendy new tool the moment it launches. Unless I see a clear reason it could help me in my real work, I usually wait and watch.</p>
<p>That’s because right now, I’m working under a strict timeline for GTM (Go-To-Market). I’m constantly evaluating anything that might help me ship faster without sacrificing quality.</p>
<p>I have Figma designs ready for most of our screens. Just last week, I built out an entire chat module—including both the UI and backend APIs—using ChatGPT to help speed up the process. Tools like that are no longer just experiments for me; they’re part of how I stay on track to deliver.</p>
<p>So when I saw AWS’s Reddit post about Kiro IDE, I wasn’t sure I’d even try it. Another AI tool? There’s been a flood of those lately.</p>
<p>But there was something different about how AWS positioned Kiro. They weren’t just talking about generating code. They were talking about helping developers go all the way from ideas and specs to production systems. That’s exactly where most AI tools fall short.</p>
<p>Given the tight timelines I’m working under, I figured: If this can help me move faster for real product work, it’s worth testing. So I decided to give it a shot—using a real feature I’m building for NuShift Connect.</p>
<p>At NuShift Connect, we’re building a health-focused social platform. Recently, I’ve been working on an additional feature: Groups. It’s a way for users to create dedicated communities around health topics—like cancer awareness, fitness journeys, or wellness discussions.</p>
<p>This Groups feature isn’t my entire product. It’s one of many pieces I’m layering onto an existing, fairly complex codebase. And that’s why I usually don’t rush into brand-new tools. I can’t afford to break my momentum or disrupt working systems.</p>
<p>But Kiro IDE made me curious because it seemed like it might help me structure my feature properly from the start, keep my thinking organised as I integrate with existing code, and speed up writing UI components in Angular.</p>
<p>I was at the GenAI Loft last week, where I attended a session on the AI Development Lifecycle (AIDLC) by <a target="_blank" href="https://www.linkedin.com/in/siddhesh-jog/">Siddhesh Jog</a>. He talked about how successful product development often requires moving through clear phases—from ideation and requirement gathering to building, testing, and deploying. As I explored Kiro, I could see that it’s built with that same philosophy in mind. It feels like a tool designed to guide product builders step by step through that lifecycle, rather than just spitting out random code.</p>
<p>One thing that stood out right away was Kiro’s idea of Specs. Instead of just throwing random prompts at an AI, Kiro pushes you to define what you’re building—almost like writing requirements documentation but right inside your IDE.</p>
<p>For the Groups feature, I wrote out a spec something like this:</p>
<ul>
<li><p>Users can create new groups with a name, description, and cover image</p>
</li>
<li><p>Groups can be public or private</p>
</li>
<li><p>Each group has posts, comments, and member lists</p>
</li>
<li><p>We already have user profiles and authentication in place</p>
</li>
<li><p>My initial focus is designing the UI and Angular components—not backend APIs yet</p>
</li>
</ul>
<p>The moment I saved this spec, Kiro started suggesting Angular component structures (Group List, Group Detail, Create Group form), folder organization ideas for modularity, and approaches for handling reactive forms and UI state.</p>
<p>Instead of dumping generic code snippets, Kiro was shaping its suggestions around my actual app structure and requirements.</p>
<p>This was the part that felt genuinely useful. I’ve used plenty of AI tools before—like when I built our chat module last week using ChatGPT to generate both frontend components and backend API scaffolding. That experience showed me how much time AI can save if used well.</p>
<p>Kiro felt similar, but with even more context awareness for Angular. It suggested splitting my UI into smart, modular components. It helped me outline reactive forms for group creation, including validation logic. It offered ways to connect state between components. It even reminded me to think about loading states and empty UI screens.</p>
<p>I haven’t wired up the backend yet—that’s the next step. But just for the Angular side, Kiro saved me hours of manual scaffolding and planning.</p>
<p>Another thing that impressed me was how Kiro handles Hooks and Steering.</p>
<p><strong>Hooks</strong> are like background automations. For example, when I updated my spec, Kiro prompted me to adjust my component interfaces and possibly update tests. Small nudges, but useful.</p>
<p><strong>Steering</strong> lets you guide how the AI writes code. I could tell Kiro to stick to Angular best practices, use TypeScript consistently, and avoid certain libraries we don’t use at NuShift. Instead of me manually rewriting AI code every time, Kiro started adapting to my way of working.</p>
<p>While exploring Kiro’s UI, I saw something called <strong>MCP Servers</strong>. From what I gather, this lets Kiro connect to real tools and data sources, like fetching live DynamoDB schemas, reading API specs, or integrating with CI/CD systems. I haven’t tested this yet, but the idea that Kiro could be aware of my real environment—not just write generic code—is intriguing for the future.</p>
<h2 id="heading-its-still-early-but-it-feels-different">It’s Still Early, But It Feels Different</h2>
<p>Like any new tool, Kiro isn’t perfect yet.</p>
<ul>
<li><p>Sometimes its suggestions were too generic, especially for UI styling details. Even when I gave it Figma screenshots, it didn’t always pick up the exact styles or color codes. I found myself needing to do more prompting to match our design system.</p>
</li>
<li><p>To be fair, this is something I could solve better by using Kiro’s Specs feature upfront. You can define your product’s brand guidelines, color palettes, typography, and other design rules directly in specs. I simply hadn’t included those details in my initial prompts—but next time, I plan to try it.</p>
</li>
<li><p>It occasionally misunderstood the relationships between my existing components. For example, our codebase has some duplicate components and modules left over from older versions, with similar names scattered in different folders. Instead of deleting unused modules, previous developers just added new ones alongside them. That created confusion for Kiro, which sometimes pulled from the wrong places.</p>
</li>
<li><p>It’s a reminder that tools like Kiro work best if your codebase is clean—or at least if you set ground rules or do some upfront cleanup so the AI knows what’s current and what’s legacy.</p>
</li>
<li><p>Even when there was an error running <code>ng serve</code>, Kiro didn’t automatically detect it or suggest fixes. I had to explicitly prompt it to run the command, read the error output, and help debug the issue.</p>
</li>
<li><p>I was thinking it would be amazing if Kiro could automatically stop <code>ng serve</code> after making changes, rerun it, and check for errors. But I also realize that might get tricky. If the error doesn’t get fixed properly, it could send Kiro into an infinite loop of stopping, starting, and trying again—something I’ve definitely seen happen with ChatGPT during debugging.</p>
</li>
</ul>
<p>I haven’t tested how well it handles full-stack API integrations yet.</p>
<p>That said, it’s worth remembering Kiro is brand new, and everyone—from developers to tech media—is still exploring what it can really do.</p>
<p>But even with those limitations, I came away feeling like this is not just another AI toy.</p>
<p>For the first time, I felt like an AI tool was working with me to build a real feature—not just generating isolated snippets of code.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752553835335/2523ea62-a789-4455-a5c6-ab6e41333932.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-why-product-builders-should-pay-attention">Why Product Builders Should Pay Attention</h2>
<p>I’m usually cautious about new tools, especially trendy ones. But Kiro felt different because it fits how product builders actually work.</p>
<p>We don’t just write code—we design systems and think about integration. We have existing codebases, not blank slates. We need speed—but also clarity and maintainability.</p>
<p>Kiro helped me think through my Groups feature in a structured way. It saved time on my Angular scaffolding. And it kept me focused on building something that fits into NuShift Connect—not just playing with isolated code.</p>
<p>I have a strict GTM timeline and a long list of features to deliver. Recently, AI tools like ChatGPT have helped me build faster—like with our new chat module. But Kiro feels like a step forward because it’s built for product builders, not just for writing isolated pieces of code.</p>
<p>If AWS keeps evolving it, Kiro could be one of the most useful developer tools they’ve launched since Lambda.</p>
<h2 id="heading-my-takeaway">My Takeaway</h2>
<p>My advice? If you’re curious, don’t just read blog posts (even this one). Pick a real feature you’re working on and try building it in Kiro IDE. That’s when you’ll see whether it’s just another AI experiment—or a glimpse of how we’ll build products in the future.</p>
<p>Next, I’m planning to see how Kiro handles backend integrations for Groups—especially how it manages API design and DynamoDB schemas. I’ll share those results once I’ve tested it further. Right now, Kiro IDE is free during its preview period. I’m not sure what pricing AWS will introduce later—but for now, it’s worth trying if you’re curious.</p>
<blockquote>
<p>TL;DR: Kiro IDE isn’t just an AI code generator. It feels like a real partner helping me think through product features and ship faster under tight deadlines.</p>
</blockquote>
<h2 id="heading-references-worth-exploring">References Worth Exploring</h2>
<p>If you’re curious to dig deeper into Kiro IDE, here are some good places to start:</p>
<p><a target="_blank" href="https://aws.amazon.com/blogs/aws/introducing-kiro-the-agentic-ai-ide/">Introducing Kiro: The Agentic AI IDE (AWS Blog)</a></p>
<p><a target="_blank" href="https://kiro.dev/">Kiro IDE Official Site &amp; Documentation</a></p>
<p><a target="_blank" href="https://dev.to/aws-builders/kiro-the-new-agentic-ai-ide-from-aws-5311">Kiro: The new Agentic AI IDE from AWS (DEV.to)</a></p>
<p><a target="_blank" href="https://dev.to/aws-builders/kiro-agentic-ai-ide-beyond-a-coding-assistant-full-stack-software-development-with-spec-driven-220l">Kiro Agentic AI IDE — Beyond a coding assistant (DEV.to)</a></p>
]]></content:encoded></item><item><title><![CDATA[Migrating from EC2 to Containers: What Teams Miss]]></title><description><![CDATA[Hello Devs,
In this blog, we are going to learn about the real challenges, insights and mistakes behind migrating from EC2 to containers, based on my experience. So let’s start.
The Reality Check That Started It All

Here's the honest truth: Before N...]]></description><link>https://www.internetkatta.com/migrating-from-ec2-to-containers-what-teams-miss</link><guid isPermaLink="true">https://www.internetkatta.com/migrating-from-ec2-to-containers-what-teams-miss</guid><category><![CDATA[ECS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[containers]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community Builder]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Thu, 10 Jul 2025 14:47:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752158619369/2f2857da-5adf-48f1-822f-e4e1ac6e2075.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>In this blog, we are going to learn about the real challenges, insights and mistakes behind migrating from EC2 to containers, based on my experience. So let’s start.</p>
<h2 id="heading-the-reality-check-that-started-it-all">The Reality Check That Started It All</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752023755961/705c4c48-9313-48d3-897f-656e8b648d50.gif" alt class="image--center mx-auto" /></p>
<p>Here's the honest truth: Before NuShift, I had never migrated workloads from EC2 to containers. At previous jobs, we were either stuck in the EC2 era with no real container strategy, or everything was already set up for containers and my role was to focus on scaling and improving as a developer while the DevOps team handled the infrastructure. I had used containers for a long time and understood their benefits, but I had never had the chance to lead that transformation.</p>
<p>When I joined NuShift, my first few months were spent doing the unglamorous work—cleaning up unused resources, right-sizing EC2 instances, and optimising our AWS bill. We managed to improve utilisation, but I knew this was just putting a band-aid on a deeper problem.</p>
<p>That's when I set a personal and team goal: move us toward containers. This wasn't about cost savings—it was about solving real operational headaches. We needed better resource utilisation, more structured deployments, and most importantly, the promise that "build once, run anywhere" that containers offered.</p>
<p>Like most developers, our instinct was simple: write code, host it quickly, pick the easiest option. That usually meant EC2. Even at NuShift, we initially launched everything on EC2 instances. We even started considering Graviton-based instances for better performance, but realised that would mean dealing with architecture changes and potential compatibility issues.</p>
<p>The real pain point? Environment consistency. What worked in development didn't always work in staging. Missing Python packages, different system libraries, manual patching cycles—these weren't just inconveniences, they were blocking us from moving fast as we prepared for our public launch.</p>
<p>That's when containers became part of my plan from day one. Not because of some dramatic failure, but because I could see the operational complexity we were heading toward if we didn't change course.</p>
<h2 id="heading-the-hidden-operational-pain-of-simple-ec2-deployments">The Hidden Operational Pain of "Simple" EC2 Deployments</h2>
<p>Launching a product is messy. You want the simplest path to get things running. That’s why EC2 becomes the go-to for most startup teams. At NuShift, our early backend stack—Flask APIs, WebSocket server, and MySQL—was each running on separate EC2 instances. No orchestration. Minimal automation. It worked... until it didn’t. Here's what nobody tells you about the EC2-first approach: it feels simple until you need consistency across environments.</p>
<h3 id="heading-the-development-vs-qa-nightmare">The Development vs QA Nightmare</h3>
<p>Our development setup worked perfectly. Local Flask app, local database, everything smooth. But when we moved to QA (we weren't in production yet), problems surfaced:</p>
<ul>
<li><p>Missing Python packages that weren't in our requirements.txt</p>
</li>
<li><p>Different system library versions causing unexpected behaviors</p>
</li>
<li><p>Manual patching cycles that meant potential downtime</p>
</li>
<li><p>"It works on my machine" became our team's unofficial motto</p>
</li>
</ul>
<p>The worst part? Setting up a new environment meant hours of manual configuration, hoping we didn't miss any dependencies that existed on our other servers.</p>
<p>These challenges highlighted the need for a more consistent and reliable deployment process.</p>
<h3 id="heading-the-deployment-gamble-game-aka-jugaad">The Deployment Gamble Game (a.k.a. Jugaad)</h3>
<p>Our deployment process was essentially gambling:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Our "sophisticated" deployment process</span>
ssh into qa-server
git pull
pip install -r requirements.txt  <span class="hljs-comment"># hope nothing breaks</span>
pip install library-xss <span class="hljs-comment"># manually installed OS-level package</span>
sudo systemctl restart app
<span class="hljs-comment"># Check logs and pray</span>
</code></pre>
<p>Every deployment felt like rolling the dice. Not because our code was bad—but because we could never guarantee the environment matched what we’d tested locally. A missing Python package here, a different system library version there—it was chaos waiting to happen.</p>
<h3 id="heading-why-this-matters-beyond-just-code">Why This Matters Beyond Just Code</h3>
<p>At first, we thought the problem was purely technical: missing packages, inconsistent environments, surprise crashes in staging. But underneath all that was a bigger issue:</p>
<blockquote>
<p><strong>We were trying to build modern applications on infrastructure we were too small to manage properly.</strong></p>
</blockquote>
<p>Deploying on EC2 meant we were responsible for:</p>
<ul>
<li><p>OS patching</p>
</li>
<li><p>Library dependencies</p>
</li>
<li><p>Security hardening</p>
</li>
<li><p>Deployment orchestration</p>
</li>
<li><p>Scaling and failover</p>
</li>
</ul>
<p>For large teams with dedicated DevOps or platform engineers, this might be manageable. For us—a small team of developers—it wasn’t sustainable.</p>
<p>That’s when we realized:</p>
<blockquote>
<p><strong>Infrastructure decisions aren’t just about technology—they’re about your team’s strengths and capacity.</strong></p>
</blockquote>
<h2 id="heading-the-team-reality-check-know-your-strengths">The Team Reality Check: Know Your Strengths</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752024349296/368f714f-83ee-4d0b-af41-f3f36308b3ab.gif" alt class="image--center mx-auto" /></p>
<p>Here's the brutal truth most migration guides skip: your team composition matters more than technical specs.</p>
<p>Before choosing any path, ask yourself:</p>
<ul>
<li><p>Are you a developer who also handles infrastructure?</p>
</li>
<li><p>Do you have dedicated DevOps/Platform engineers?</p>
</li>
<li><p>How much time can you realistically spend on server maintenance?</p>
</li>
</ul>
<p>At NuShift, we had 3 full-stack developers. Zero dedicated infrastructure people. This meant every hour spent patching EC2 instances, managing security updates, and troubleshooting server issues was an hour stolen from building features our users needed.</p>
<p>And here’s what that hidden cost actually looked like:</p>
<p>Reality Check:</p>
<ul>
<li><p>EC2 patching: 4 hours/month per instance</p>
</li>
<li><p>Security updates: 2 hours/month</p>
</li>
<li><p>Monitoring setup: 8 hours initially + ongoing maintenance</p>
</li>
<li><p>Capacity planning: 3 hours/month</p>
</li>
<li><p>Incident response: 6 hours/month average</p>
</li>
</ul>
<p>Total: ~25 hours/month on infrastructure babysitting</p>
<p><strong>If you're a startup with only developers, staying deep in the EC2 world means accepting that you'll spend significant time on environment management and system administration.</strong> That’s not why most of us became developers.</p>
<p>As we prepared for our public launch, this operational overhead became a real concern. We needed to move fast, deploy reliably, and focus on building features—not troubleshooting environment inconsistencies.</p>
<blockquote>
<p>All these operational headaches forced us to step back and ask ourselves a bigger question.</p>
</blockquote>
<h2 id="heading-the-moment-everything-clicked">The Moment Everything Clicked</h2>
<p>The breakthrough came when I realised we were solving the wrong problem. We weren't trying to manage servers—we were trying to run applications.</p>
<p>That mental shift changed everything.</p>
<p>Instead of asking "which server will this run on?", we started asking "what does this service actually need?"</p>
<h2 id="heading-why-ecs-fargate-became-our-secret-weapon">Why ECS Fargate Became Our Secret Weapon</h2>
<p>We considered three paths:</p>
<ol>
<li><p>ECS with EC2: More control, Spot instances, Graviton savings (but more operational overhead)</p>
</li>
<li><p>EKS: Full Kubernetes power and flexibility, but personally, as a developer, I find this path complex and better suited to teams with dedicated platform engineers. It felt like overkill for our small team.</p>
</li>
<li><p>ECS Fargate: Serverless containers (perfect for developer-heavy teams)</p>
</li>
</ol>
<p>For a team of 3 engineers with no dedicated infrastructure expertise, Fargate won because it eliminated the exact operational problems that were slowing us down:</p>
<h3 id="heading-before-environment-inconsistency">Before: Environment Inconsistency</h3>
<p><em>"This works fine in development, but staging has different Python versions."</em></p>
<p><em>"We need to manually install these system packages on every server."</em></p>
<p><em>"Production deployment failed because of a missing dependency."</em></p>
<h3 id="heading-after-container-consistency">After: Container Consistency</h3>
<p><em>"Build the container once, run it everywhere."</em></p>
<p><em>"All environments use the exact same container image."</em></p>
<p><em>"Deployment is just pulling and running a container—no surprises."</em></p>
<p>It wasn’t just about technology. It was about giving our small team the freedom to build features without worrying about the plumbing underneath. For us, Fargate meant moving faster, deploying more reliably, and finally escaping the chaos of managing EC2 servers.</p>
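<p>That consistency comes from pinning everything in the image itself. A minimal sketch of the kind of Dockerfile involved (the base image, port, and <code>gunicorn</code> entry point are illustrative, not our exact setup):</p>
<pre><code class="lang-dockerfile"># Pin the base image so every environment gets identical system libraries
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
</code></pre>
<p>The same image that passes QA is the image that ships: no <code>pip install</code> on a live server, no missing system package.</p>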
<h2 id="heading-the-migration-what-we-wish-wed-known">The Migration: What We Wish We'd Known</h2>
<p>Even as someone deeply familiar with containers, these areas still tripped me up during migration. They’re easy to overlook—and that’s why I’m sharing them.</p>
<h3 id="heading-victory-1-the-iam-maze-and-how-to-navigate-it">Victory #1: The IAM Maze (And How to Navigate It)</h3>
<p>This tripped us up for days. ECS needs TWO different roles:</p>
<p>Task Role: What your application can do</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Action"</span>: [<span class="hljs-string">"s3:GetObject"</span>, <span class="hljs-string">"s3:PutObject"</span>],
  <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"arn:aws:s3:::my-bucket/*"</span>
}
</code></pre>
<p>Execution Role: What ECS can do to run your application</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
  <span class="hljs-attr">"Action"</span>: [
    <span class="hljs-string">"ecr:GetDownloadUrlForLayer"</span>,
    <span class="hljs-string">"logs:CreateLogStream"</span>
  ]
}
</code></pre>
<p>Pro tip: Create the execution role first, then focus on task permissions. Don't mix them up like we did. I've explained this in one of my YouTube Shorts videos: <a target="_blank" href="https://youtube.com/shorts/6-MxMB3E43U?si=FyfEr0_CLH8bJT0g">https://youtube.com/shorts/6-MxMB3E43U?si=FyfEr0_CLH8bJT0g</a></p>
<h3 id="heading-victory-2-goodbye-ssh-hello-observability">Victory #2: Goodbye SSH, Hello Observability</h3>
<p>The hardest mental shift? No more SSH debugging. But this forced us to build better logging:</p>
<pre><code class="lang-python"><span class="hljs-comment"># Before: Debug by SSH and printf</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">process_network_request</span>(user_id, networks):</span>
    <span class="hljs-comment"># hope nothing breaks</span>
    <span class="hljs-keyword">return</span> result

<span class="hljs-comment"># After: Structured logging for containers</span>
<span class="hljs-keyword">import</span> structlog
logger = structlog.get_logger()

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">process_network_request</span>(user_id, networks):</span>
    logger.info(<span class="hljs-string">"processing_network_request"</span>, user_id=user_id, network_count=len(networks))
    <span class="hljs-comment"># proper error handling and metrics</span>
    <span class="hljs-keyword">return</span> result
</code></pre>
<h3 id="heading-victory-3-the-security-groups-revelation">Victory #3: The Security Groups Revelation</h3>
<p>EC2 security groups felt simple—one instance, one set of rules. But Fargate tasks need precise networking:</p>
<p>Our mistake: Copying EC2 security group rules directly to Fargate tasks</p>
<p>The fix: Each ECS service gets its own security group with minimal required access:</p>
<ul>
<li><p>Flask API: Only needs outbound to RDS and inbound from ALB</p>
</li>
<li><p>WebSocket: Needs different port ranges and longer timeouts</p>
</li>
<li><p>Background jobs: Only outbound to external APIs</p>
</li>
</ul>
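<p>In boto3 terms, the fix looks something like this (a sketch; the port and security-group IDs are placeholders):</p>
<pre><code class="lang-python">def service_ingress(port: int, source_sg_id: str) -> list:
    """IpPermissions for authorize_security_group_ingress: only the ALB's
    security group may reach the service port. No 0.0.0.0/0, no extra ports."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }]

# e.g. ec2.authorize_security_group_ingress(
#     GroupId="sg-flask-api",  # the service's own security group
#     IpPermissions=service_ingress(5000, "sg-alb"),
# )
</code></pre>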
<h2 id="heading-the-numbers-that-matter">The Numbers That Matter</h2>
<p>Six months post-migration:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Metric</td><td>Before (EC2)</td><td>After (Fargate)</td><td>Change</td></tr>
</thead>
<tbody>
<tr>
<td>Environment setup time</td><td>4-6 hours</td><td>10 minutes</td><td>-95%</td></tr>
<tr>
<td>Deployment time</td><td>15 minutes</td><td>3 minutes</td><td>-80%</td></tr>
<tr>
<td>Deployment failures</td><td>1 in 5</td><td>1 in 50</td><td>-90%</td></tr>
<tr>
<td>"Works on my machine" incidents</td><td>3-4/month</td><td>0</td><td>-100%</td></tr>
</tbody>
</table>
</div><p>But the real win? We stopped being environment troubleshooters and became product builders again. With our public launch approaching, this consistency became invaluable.</p>
<h2 id="heading-the-microservices-question-when-to-split-when-to-keep-together">The Microservices Question: When to Split, When to Keep Together</h2>
<p>Here's where most teams overthink it—and where your team structure should guide your decision.</p>
<p>Our Flask app was already modular:</p>
<ul>
<li><p><code>/networks/*</code> routes → <code>networks.py</code></p>
</li>
<li><p><code>/profile/*</code> routes → <code>profile.py</code></p>
</li>
<li><p>WebSocket handling → separate module</p>
</li>
</ul>
<p>We could have split these into separate ECS services immediately. But we didn't.</p>
<p>Why we kept them together initially:</p>
<ul>
<li><p>Shared authentication logic</p>
</li>
<li><p>Small team (3 engineers, no dedicated DevOps)</p>
</li>
<li><p>Approaching public launch (needed stability over optimization)</p>
</li>
<li><p>Limited operational bandwidth for managing multiple services</p>
</li>
</ul>
<p>The key insight:</p>
<blockquote>
<p>Containers gave us the flexibility to split later without the environment consistency problems we'd face with EC2.</p>
</blockquote>
<p>And there’s another reason containers were the right choice for us—even if we started monolithic:</p>
<ul>
<li><p>With containers, you can right-size your compute per service as you grow.</p>
</li>
<li><p>If one part of your app is lightweight, you can run it in a small container to save costs.</p>
</li>
<li><p>If another part becomes resource-intensive, you can allocate more CPU, memory, or even switch to a larger container class—all without changing the rest of your system.</p>
</li>
<li><p>This means you can optimise your AWS spend at a much more granular level than with monolithic EC2 instances.</p>
</li>
</ul>
<p>That flexibility was a huge part of why we chose containers from day one. We knew we might not need microservices immediately—but when we did, we’d be ready to split workloads and optimize costs without rewriting everything from scratch.</p>
<h3 id="heading-the-team-size-reality-check">The Team-Size Reality Check</h3>
<p>If you have 1-3 developers doing everything:</p>
<ul>
<li><p>Start with containers, but keep services together</p>
</li>
<li><p>Split only when you have clear performance bottlenecks</p>
</li>
<li><p>Prioritize simplicity over "best practices"</p>
</li>
</ul>
<p>If you have 5+ engineers or dedicated platform team:</p>
<ul>
<li><p>Consider splitting services earlier</p>
</li>
<li><p>You have bandwidth to manage multiple deployment pipelines</p>
</li>
<li><p>Microservices architecture becomes more viable</p>
</li>
</ul>
<p>If you have dedicated DevOps/Platform engineers:</p>
<ul>
<li><p>ECS on EC2 might be worth considering for cost optimization</p>
</li>
<li><p>You can handle the operational complexity of managing instances</p>
</li>
<li><p>Kubernetes (EKS) becomes a viable option</p>
</li>
</ul>
<h3 id="heading-our-splitting-strategy-for-small-teams">Our Splitting Strategy (For Small Teams)</h3>
<ol>
<li><p>Start monolithic in containers (easier migration)</p>
</li>
<li><p>Monitor and measure actual bottlenecks</p>
</li>
<li><p>Split services when you have clear scaling needs AND team bandwidth</p>
</li>
<li><p>Grow your operational maturity alongside your architecture complexity</p>
</li>
</ol>
<p>Signs it's time to split:</p>
<ul>
<li><p>One component consistently uses 80%+ CPU while others idle</p>
</li>
<li><p>Different scaling patterns (API traffic vs background jobs)</p>
</li>
<li><p>Team growth (multiple people working on same codebase)</p>
</li>
</ul>
<h2 id="heading-the-unexpected-wins">The Unexpected Wins</h2>
<h3 id="heading-1-environment-parity-that-actually-works">1. Environment Parity That Actually Works</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># Same container, different environments</span>
<span class="hljs-attr">development:</span>
  <span class="hljs-attr">cpu:</span> <span class="hljs-number">256</span>
  <span class="hljs-attr">memory:</span> <span class="hljs-number">512</span>
<span class="hljs-attr">staging:</span>
  <span class="hljs-attr">cpu:</span> <span class="hljs-number">512</span>  
  <span class="hljs-attr">memory:</span> <span class="hljs-number">1024</span>
<span class="hljs-attr">production:</span>
  <span class="hljs-attr">cpu:</span> <span class="hljs-number">1024</span>
  <span class="hljs-attr">memory:</span> <span class="hljs-number">2048</span>
</code></pre>
<h3 id="heading-2-feature-flags-for-infrastructure">2. Feature Flags for Infrastructure</h3>
<p>Need to test a new background job? Deploy it as a separate ECS service with minimal resources. No risk to existing services.</p>
<h3 id="heading-3-cost-optimisation-by-service">3. Cost Optimisation by Service</h3>
<p>We discovered our podcast audio processing was using 60% of our compute budget. Easy to optimise when you can see it clearly.</p>
<h2 id="heading-what-wed-do-differently">What We'd Do Differently</h2>
<h3 id="heading-start-with-monitoring-day-one">Start with Monitoring Day One</h3>
<p>Don't wait until after migration. Set up CloudWatch Container Insights and structured logging immediately.</p>
<h3 id="heading-embrace-infrastructure-as-code-earlier-its-easier-than-you-think">Embrace Infrastructure as Code Earlier (It's Easier Than You Think)</h3>
<p>We initially managed ECS through the console. Big mistake. CloudFormation templates made everything repeatable and reviewable.</p>
<p>But here's what changed the game: AI-powered infrastructure tools.</p>
<p>I wrote our entire CloudFormation stack using Amazon Q Developer CLI and ChatGPT. What used to require deep AWS expertise now takes basic prompting skills:</p>
<pre><code class="lang-bash">&gt; q chat
<span class="hljs-comment"># Amazon Q Developer CLI in action</span>
&gt; <span class="hljs-string">"Generate CloudFormation template for ECS Fargate with ALB, RDS, and auto-scaling"</span>

<span class="hljs-comment"># Review, modify, and iterate</span>
&gt;  <span class="hljs-string">"Add CloudWatch Container Insights and log retention policies"</span>

<span class="hljs-comment"># ChatGPT for fine-tuning</span>
&gt; <span class="hljs-string">"Fix this IAM policy - getting access denied on ECR pull"</span>...
</code></pre>
<p>The result: 300+ lines of CloudFormation that would have taken me weeks to write manually, delivered in 2 hours.</p>
<p>The new reality: If you can describe your infrastructure in plain English, AI can write the CloudFormation. The barrier to Infrastructure as Code has practically disappeared.</p>
<h3 id="heading-plan-for-secrets-management">Plan for Secrets Management</h3>
<p>We moved secrets from EC2 environment variables to AWS Systems Manager Parameter Store. We should have done this from the start. None of our secrets required rotation, so we chose the simpler option: SSM Parameter Store.</p>
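<p>Reading a SecureString value from Parameter Store is a one-call affair in boto3 (<code>get_parameter</code> with <code>WithDecryption=True</code> is the standard call; the parameter path in the usage comment is hypothetical):</p>

```python
def get_secret(name: str) -> str:
    """Fetch a SecureString parameter from SSM Parameter Store.

    boto3 is imported lazily so this module can be loaded without AWS
    credentials; in a real service you'd create the client once at startup.
    """
    import boto3

    ssm = boto3.client("ssm")
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]

# Usage (hypothetical parameter name):
# DB_PASSWORD = get_secret("/myapp/prod/db-password")
```

For ECS specifically, you can also skip application-level fetching entirely and reference the parameter in the task definition's <code>secrets</code> section, which injects it as an environment variable at container start.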
<h3 id="heading-leverage-ai-tools-for-infrastructure-as-code">Leverage AI Tools for Infrastructure as Code</h3>
<p>Here's a game-changer: GenAI has eliminated the "I don't know CloudFormation" excuse.</p>
<p>I wrote our entire ECS infrastructure using Amazon Q Developer CLI. The process looked like this:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Ask Amazon Q to generate CloudFormation template</span>
&gt; <span class="hljs-string">"Create CloudFormation template for ECS Fargate service with ALB and RDS"</span>
&gt; <span class="hljs-string">"Add CloudWatch log groups and auto-scaling policies to this template"</span>
</code></pre>
<p>What used to take days now takes hours. The AI handles the boilerplate, you focus on the business logic.</p>
<p>Pro tip: Use AI tools as your pair programmer, not your replacement. They're incredible at generating infrastructure templates, but you still need to understand what you're deploying.</p>
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>EC2 isn't wrong—it's just not optimised for modern application patterns or small development teams.</p>
<h3 id="heading-choose-your-path-based-on-your-team-reality">Choose Your Path Based on Your Team Reality:</h3>
<p>Pure Developer Team (1-5 people, no DevOps):</p>
<ul>
<li><p>Fargate</p>
</li>
<li><p>Managed databases (RDS, not self-hosted)</p>
</li>
<li><p>Serverless where possible</p>
</li>
<li><p>AI-generated Infrastructure as Code (Amazon Q, ChatGPT for CloudFormation)</p>
</li>
<li><p>Avoid EC2 (unless you want to manage environments manually)</p>
</li>
</ul>
<p>Mixed Team (Developers + DevOps/Platform Engineers):</p>
<ul>
<li><p>ECS on EC2 or EKS for more control</p>
</li>
<li><p>Spot instances and Graviton for optimization</p>
</li>
<li><p>More complex architectures become viable</p>
</li>
<li><p>AI-assisted infrastructure optimization and troubleshooting</p>
</li>
<li><p>Fargate still valid for rapid iteration</p>
</li>
</ul>
<p>Large Team (10+ engineers, dedicated platform team):</p>
<ul>
<li><p>Full Kubernetes (EKS/GKE)</p>
</li>
<li><p>Multi-cloud strategies</p>
</li>
<li><p>Complex microservices architectures</p>
</li>
<li><p>Advanced cost optimization techniques</p>
</li>
</ul>
<p>If you're running a simple, monolithic app with predictable patterns and dedicated infrastructure expertise, EC2 might be perfect. But if you're building a product that needs to move fast, deploy reliably, and maintain consistency across environments while your developers focus on product development, containers will eventually become inevitable.</p>
<p>The question isn't whether to migrate—it's when, and what path matches your team's strengths and timeline.</p>
<h3 id="heading-ready-to-start-your-migration-the-ai-powered-way">Ready to Start Your Migration? (The AI-Powered Way)</h3>
<p>The GenAI advantage: What used to require deep AWS expertise now needs basic prompting skills.</p>
<ol>
<li><p>Audit your current EC2 usage (AWS Cost Explorer is your friend)</p>
</li>
<li><p>Identify your biggest pain points (deployment time? scaling? costs?)</p>
</li>
<li><p>Use AI to generate your infrastructure templates:</p>
<pre><code class="lang-bash"> <span class="hljs-comment"># Amazon Q Developer CLI examples</span>
 &gt;  <span class="hljs-string">"Create ECS Fargate service template for my Flask app"</span>
 &gt; <span class="hljs-string">"Add Application Load Balancer with HTTPS certificate"</span>
 &gt;  <span class="hljs-string">"Configure auto-scaling based on CPU utilization"</span>

 <span class="hljs-comment"># ChatGPT for troubleshooting</span>
 &gt; <span class="hljs-string">"My ECS task is failing with this error: [paste logs]"</span>
 &gt; <span class="hljs-string">"Optimize this CloudFormation template for cost efficiency"</span>
</code></pre>
</li>
<li><p>Start with one service (pick the most isolated one)</p>
</li>
<li><p>Measure everything (before and after metrics)</p>
</li>
<li><p>Iterate and expand (let AI handle the infrastructure complexity)</p>
</li>
</ol>
<p>The old excuse: "I don't know CloudFormation well enough." The new reality: "I can describe what I want in plain English."</p>
<p>The best migration is the one you don't have to do twice—and AI helps you get it right the first time.</p>
<h3 id="heading-my-container-migration-cheat-sheet">My Container Migration Cheat Sheet</h3>
<ul>
<li><p><strong>Start with EC2 if you must—but design your code so you can migrate later.</strong></p>
</li>
<li><p><strong>Know your team’s capacity.</strong> Small teams should prioritise simplicity and managed services.</p>
</li>
<li><p><strong>Fargate is perfect for dev-heavy teams without dedicated DevOps.</strong> It trades some control for massive operational relief.</p>
</li>
<li><p><strong>ECS on EC2 offers cost savings—but requires infra expertise.</strong></p>
</li>
<li><p><strong>Containerisation solves environment drift.</strong> “It works on my machine” becomes a thing of the past.</p>
</li>
<li><p><strong>Don’t rush into Microservices.</strong> Keep services together initially if your team is small.</p>
</li>
<li><p><strong>Containers let you right-size resources per service.</strong> Small services can save money, while heavy services can scale independently.</p>
</li>
<li><p><strong>IAM roles in ECS are easy to confuse.</strong> Separate your task role from your execution role.</p>
</li>
<li><p><strong>Ditch SSH for observability.</strong> Logging and monitoring are non-negotiable in containerized workloads.</p>
</li>
<li><p><strong>Invest in Infrastructure as Code early.</strong> Use AI tools like Amazon Q or ChatGPT to help generate your templates.</p>
</li>
<li><p><strong>Measure everything before and after migration.</strong> You can’t improve what you can’t measure.</p>
</li>
</ul>
<p>Want to discuss your EC2 to containers migration? I'd love to hear about your experience and challenges. Connect with me on LinkedIn/Twitter or drop a comment below.</p>
]]></content:encoded></item><item><title><![CDATA[The Angular Error That Kept Scratching My Head: NG02100]]></title><description><![CDATA[Hello Devs,
You know those moments where everything compiles, no warnings pop up, the UI mostly works… but then something just quietly breaks?
This was one of those moments.
I wasn’t refactoring some ancient service or tweaking a low-level renderer. ...]]></description><link>https://www.internetkatta.com/the-angular-error-that-kept-scratching-my-head-ng02100</link><guid isPermaLink="true">https://www.internetkatta.com/the-angular-error-that-kept-scratching-my-head-ng02100</guid><category><![CDATA[Developer]]></category><category><![CDATA[Angular]]></category><category><![CDATA[error handling]]></category><category><![CDATA[learning]]></category><category><![CDATA[Experience ]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Fri, 27 Jun 2025 00:59:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750985812699/bd2fad0f-4c69-4bb8-a70e-fe0ef463f02a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>You know those moments where everything compiles, no warnings pop up, the UI mostly works… but then something just quietly breaks?</p>
<p>This was one of those moments.</p>
<p>I wasn’t refactoring some ancient service or tweaking a low-level renderer. I was building a simple notification drawer — and somehow, Angular's <code>NG02100</code> error made it feel like the app was haunted.</p>
<h2 id="heading-when-just-a-template-change-isnt">When “Just a Template Change” Isn’t</h2>
<p>I was working on our notification drawer — a simple list showing who did what, when. The API was returning the usual data: message, title, timestamp. Nothing out of the ordinary.</p>
<p>In the template, I rendered the notification time like this:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">label</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"notification-time"</span>&gt;</span>
  {{ notification?.cd | date: 'MMM d, y h:mm a' }}
<span class="hljs-tag">&lt;/<span class="hljs-name">label</span>&gt;</span>
</code></pre>
<p>It worked. I merged it. Life moved on.</p>
<p>Until it didn’t.</p>
<h2 id="heading-the-crash-that-made-no-sense">The Crash That Made No Sense</h2>
<p>One morning, I refreshed the UI. The first few notifications loaded. Then — out of nowhere — Angular crashed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750985593729/6d4a31b1-5f64-4849-b56d-1585a7491c2e.png" alt class="image--center mx-auto" /></p>
<p>Sometimes it broke after the third item, sometimes after the fifth. It wasn’t consistent. But the crash was real, and the console wasn’t much help:</p>
<pre><code class="lang-javascript"> ERROR $e: NG02100
    at qn (http:<span class="hljs-comment">//localhost:4200/main.js:1:1782481)</span>
    at Me.transform (http:<span class="hljs-comment">//localhost:4200/main.js:1:1784244)</span>
    at H2 (http:<span class="hljs-comment">//localhost:4200/main.js:1:1923346)</span>
    at <span class="hljs-built_in">Object</span>.Y2 (http:<span class="hljs-comment">//localhost:4200/main.js:1:1924135)</span>
    at pp (http:<span class="hljs-comment">//localhost:4200/main.js:1:556184)</span>
    at L0 (http:<span class="hljs-comment">//localhost:4200/main.js:1:1865117)</span>
    at ZC (http:<span class="hljs-comment">//localhost:4200/main.js:1:1875545)</span>
    at X0 (http:<span class="hljs-comment">//localhost:4200/main.js:1:1876956)</span>
    at Hb (http:<span class="hljs-comment">//localhost:4200/main.js:1:1876779)</span>
    at Jm (http:<span class="hljs-comment">//localhost:4200/main.js:1:1876711)</span>
</code></pre>
<p>No mention of the template line. No stack trace pointing to my component. Just <code>NG02100</code>.</p>
<p>I hadn’t seen this one before. And I write a lot of Angular.</p>
<h2 id="heading-debugging-the-wrong-places">Debugging the Wrong Places</h2>
<p>Like most devs, I assumed something logical was broken. So I started removing things.</p>
<p>I tried:</p>
<ul>
<li><p>Wrapping fields in <code>*ngIf</code></p>
</li>
<li><p>Adding <code>?.</code> everywhere</p>
</li>
<li><p>Logging the API response</p>
</li>
<li><p>Filtering out <code>null</code> and incomplete records</p>
</li>
<li><p>Removing pipes, anchor tags, images, even entire rows</p>
</li>
</ul>
<p>At one point I replaced the whole template with just:</p>
<pre><code class="lang-xml">{{ notification | json }}
</code></pre>
<p>Still crashing.</p>
<p>The error pointed to the line in my component where I assigned the response:</p>
<pre><code class="lang-javascript"><span class="hljs-built_in">this</span>.notifications = res.data;
</code></pre>
<p>And yet the first few notifications rendered fine. That made it worse — it felt like the problem was hiding somewhere deep in the data, waiting to ambush me at random.</p>
<h2 id="heading-the-breakthrough">The Breakthrough</h2>
<p>After hours of slicing code, I dumped a single notification to the console and noticed this:</p>
<pre><code class="lang-json"><span class="hljs-string">"cd"</span>: <span class="hljs-string">"9 mins ago"</span>
</code></pre>
<p>That used to be:</p>
<pre><code class="lang-json"><span class="hljs-string">"cd"</span>: <span class="hljs-string">"2024-06-06T12:30:00Z"</span>
</code></pre>
<p>So what changed?</p>
<p>The backend team had switched from sending ISO timestamps to human-readable time strings.</p>
<p>Totally reasonable — but my <code>date</code> pipe didn’t agree.</p>
<p>Angular was trying to parse <code>"9 mins ago"</code> with the <code>date</code> pipe. Naturally, it failed. But instead of throwing a descriptive error, it threw <code>NG02100</code>, which just means:</p>
<blockquote>
<p>Something in the template broke during binding. Good luck finding it.</p>
</blockquote>
<h2 id="heading-the-fix-was-simple-but-finding-it-wasnt">The Fix Was Simple (But Finding It Wasn't)</h2>
<p>I removed the <code>date</code> pipe entirely:</p>
<pre><code class="lang-xml"><span class="hljs-tag">&lt;<span class="hljs-name">label</span> <span class="hljs-attr">class</span>=<span class="hljs-string">"notification-time"</span>&gt;</span>
  {{ notification?.cd }}
<span class="hljs-tag">&lt;/<span class="hljs-name">label</span>&gt;</span>
</code></pre>
<p>Just like that — everything worked again.</p>
<p>It took two hours to track down a single pipe that was assuming a valid date. A change that didn't cause TypeScript to fail. A runtime-only issue that crashed without context.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750985752081/16d98a32-908e-4c3c-841a-9747e04b1829.gif" alt class="image--center mx-auto" /></p>
<h2 id="heading-lessons-from-the-trap">Lessons from the Trap</h2>
<p>Looking back, this one line taught me more than I expected:</p>
<ol>
<li><p><strong>NG02100 is a template binding failure</strong>, not a logic error.</p>
</li>
<li><p>Pipes like <code>date</code>, <code>currency</code>, or <code>async</code> are easy to break if the data isn’t shaped exactly as expected.</p>
</li>
<li><p>When errors seem random in a loop (<code>*ngFor</code>), it’s usually <strong>one bad record</strong> in the list.</p>
</li>
<li><p>Defensive programming in templates is just as important as in your components.</p>
</li>
</ol>
<h2 id="heading-what-id-do-differently-now">What I’d Do Differently Now</h2>
<ul>
<li><p>Validate data types before using them in the template.</p>
</li>
<li><p>Guard pipes behind helper functions if the format can vary.</p>
</li>
<li><p>Use conditional formatting: apply the <code>date</code> pipe only when the value is actually a parseable date.</p>
</li>
<li><p>And finally, when I see <code>NG02100</code> again, I’ll know where to look first — the template, not the logic</p>
</li>
</ul>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>This wasn’t a tricky bug in hindsight. But it was just invisible enough, just misleading enough, to waste hours of time.</p>
<p>Sometimes the problem isn’t what changed — it’s what <strong>you assumed would never change</strong>.</p>
<p>If you ever see <code>NG02100</code>, don’t start with your TypeScript.</p>
<p>Start with your templates.</p>
<p>And then check your pipes.</p>
]]></content:encoded></item><item><title><![CDATA[Why your ECS tasks aren’t scaling]]></title><description><![CDATA[We had auto scaling set. Alarms configured. Metrics wired. And yet—502s.
That was the story every month-end in our GIS image processing app. A spike in usage from ops teams. Annotation tools slowing down. And the infamous error that no one wants to d...]]></description><link>https://www.internetkatta.com/why-your-ecs-tasks-arent-scaling</link><guid isPermaLink="true">https://www.internetkatta.com/why-your-ecs-tasks-arent-scaling</guid><category><![CDATA[ecs task]]></category><category><![CDATA[ECS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Sun, 15 Jun 2025 13:45:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749995295849/c4d6bb6f-115a-456f-8c44-3f4f8073060a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We had auto scaling set. Alarms configured. Metrics wired. And yet—502s.</p>
<p>That was the story every month-end in our GIS image processing app. A spike in usage from ops teams. Annotation tools slowing down. And the infamous error that no one wants to debug under pressure.</p>
<p>The ECS setup wasn’t new—built by the previous team—but now it was on us: developers and DevOps engineers trying to make sense of why scaling wasn’t saving us.</p>
<p>We did what most teams would do. We scaled the ECS service. Added more tasks. And for a while, it worked. Until it didn’t.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746968265838/8b29f16f-311a-4101-b2d6-ba6ee2826de0.jpeg" alt class="image--center mx-auto" /></p>
<p>This blog isn’t just about <strong>what ECS Cluster Auto Scaling (CAS) is</strong>—there are plenty of docs for that. This is about <strong>why you might miss it</strong>, how we almost did, and what real-world capacity alignment actually looks like.</p>
<h2 id="heading-most-builders-miss-this-task-scaling-isnt-enough">Most Builders Miss This: Task Scaling Isn’t Enough</h2>
<p>When you configure ECS Service Auto Scaling (like scaling from 2 to 10 tasks based on CPU &gt; 50%), ECS will try to place new tasks.</p>
<p>But here’s the catch:</p>
<blockquote>
<p>If you’re using <strong>EC2 launch type</strong>, ECS needs <strong>available capacity</strong> on the cluster to actually place those tasks.</p>
</blockquote>
<p>No CPU or memory available? The tasks stay stuck in <code>PENDING</code>. And it’s silent unless you're watching.</p>
<p>Here’s where <strong>ECS Cluster Auto Scaling (CAS)</strong> enters the story.</p>
<h2 id="heading-a-past-pain-month-end-gis-workloads-that-failed-to-scale">A Past Pain: Month-End GIS Workloads That Failed to Scale</h2>
<p>In a previous role, we managed an internal image processing tool that rendered GIS data and allowed operations teams to annotate high-resolution maps. It wasn’t a real-time app — but it was heavy. And during critical windows like month-end or year-end closures, load would spike massively.</p>
<p>The app:</p>
<ul>
<li><p>Generated map tiles on the fly</p>
</li>
<li><p>Handled concurrent uploads and annotations</p>
</li>
<li><p>Involved CPU-heavy image processing</p>
</li>
</ul>
<p>We assumed ECS auto scaling would “just work.” But then came 502s.</p>
<p>Naturally, we began by debugging the app:</p>
<ul>
<li><p>Checked RDS performance</p>
</li>
<li><p>Tuned Apache settings</p>
</li>
<li><p>Reproduced failures with same payloads</p>
</li>
</ul>
<p>Nothing helped. The mystery deepened.</p>
<p>Until we noticed this: tasks were stuck in <code>PENDING</code>, but <strong>CPU and memory metrics looked fine</strong>.</p>
<p>That’s when we connected the dots. We had scaling at the <strong>task level</strong>, but the <strong>infrastructure wasn’t scaling with it</strong>.</p>
<p><strong>It was like hiring more workers without giving them desks.</strong> We were adding more containers, but the underlying compute had no room to host them.</p>
<h2 id="heading-how-to-estimate-capacity-like-a-developer">How to Estimate Capacity Like a Developer</h2>
<p>Let’s say you're running a Flask or FastAPI app on ECS. The app handles:</p>
<ul>
<li><p>10–12 API calls per user action</p>
</li>
<li><p>Each API call does a DB lookup + image transform</p>
</li>
<li><p>Spikes happen during end-of-day or batch usage</p>
</li>
</ul>
<h3 id="heading-how-do-you-estimate-how-many-ecs-tasks-you-need">How do you estimate how many ECS tasks you need?</h3>
<p>Here’s a <strong>developer-first method</strong>:</p>
<p>Step 1: Understand the API behaviour</p>
<ul>
<li><p>What is the <strong>average latency</strong> of a single API call? (e.g. 500ms)</p>
</li>
<li><p>Are the calls <strong>CPU or memory bound</strong>? (CloudWatch / APM tools)</p>
</li>
<li><p>What’s the <strong>max concurrency</strong>? (e.g. 100 users x 10 calls = 1,000)</p>
</li>
</ul>
<p>If each ECS task can handle ~10 concurrent API calls → you need <strong>~100 tasks</strong></p>
<p>Step 2: Know Your Task Size</p>
<p>If task = 0.25 vCPU, 512 MB and EC2 = 2 vCPU, 8 GB → host ~8 tasks per EC2</p>
<p>➡️ 100 tasks → ~13 EC2s</p>
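<p>The two steps above can be wrapped in a few lines of Python (the numbers are the ones from this example; swap in your own measured latency and concurrency):</p>

```python
import math

def tasks_needed(peak_concurrent_calls: int, calls_per_task: int) -> int:
    """How many ECS tasks are needed to absorb the expected peak."""
    return math.ceil(peak_concurrent_calls / calls_per_task)

def instances_needed(task_count: int, task_cpu: float, task_mem_mb: int,
                     instance_cpu: float, instance_mem_mb: int) -> int:
    """How many EC2 hosts those tasks fit on, binpacking by the tighter resource."""
    per_host = min(int(instance_cpu // task_cpu),
                   int(instance_mem_mb // task_mem_mb))
    return math.ceil(task_count / per_host)

# 100 users x 10 calls each, ~10 concurrent calls per task -> 100 tasks
tasks = tasks_needed(1000, 10)
# 0.25 vCPU / 512 MB tasks on 2 vCPU / 8 GB hosts -> 8 tasks per host -> 13 hosts
hosts = instances_needed(tasks, 0.25, 512, 2.0, 8192)
print(tasks, hosts)
```

Note that the host count is bounded by CPU here (8 tasks by CPU vs 16 by memory), which is typical for CPU-bound image work.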
<p>Step 3: Monitor Key Metrics</p>
<ul>
<li><p><code>CPUReservation</code> and <code>MemoryReservation</code></p>
</li>
<li><p><code>PendingTaskCount</code> (cluster)</p>
</li>
<li><p>ECS <code>ManagedScaling</code> logs</p>
</li>
<li><p>App logs for 502s, slow endpoints, queuing behaviour</p>
</li>
</ul>
<p>Step 4: Set Scaling Policies</p>
<ul>
<li><p>Task scaling: CPU &gt; 50%</p>
</li>
<li><p>CAS scaling: set <code>targetCapacity = 80%</code> for buffer</p>
</li>
</ul>
<h2 id="heading-how-ecs-cluster-auto-scaling-actually-works">How ECS Cluster Auto Scaling Actually Works</h2>
<h3 id="heading-its-not-magic-its-math">It’s Not Magic — It’s Math</h3>
<p>When ECS needs to launch new tasks but can't due to resource shortage, it uses a <strong>Capacity Provider</strong> with a formula like this:</p>
<pre><code class="lang-plaintext">desired = ceil((needed capacity) / (instance capacity) / (target capacity / 100))
</code></pre>
<p>Let’s say you have:</p>
<ul>
<li><p><strong>Pending Tasks</strong>: 4 tasks</p>
</li>
<li><p><strong>Each Task Needs</strong>: 0.5 vCPU and 1 GB RAM</p>
</li>
<li><p><strong>EC2 Type</strong>: <code>t4g.medium</code> (2 vCPU, 4 GB RAM)</p>
</li>
<li><p><strong>Target Capacity</strong>: 100% (binpack strategy)</p>
</li>
</ul>
<h4 id="heading-step-by-step">Step-by-step:</h4>
<ol>
<li><p><strong>Total Needed Capacity</strong>:</p>
<ul>
<li><p>2 vCPU (0.5 x 4)</p>
</li>
<li><p>4 GB RAM (1 x 4)</p>
</li>
</ul>
</li>
<li><p><strong>Per Instance Capacity</strong>:</p>
<ul>
<li>2 vCPU and 4 GB RAM per <code>t4g.medium</code></li>
</ul>
</li>
<li><p><strong>Divide &amp; Ceil</strong>:</p>
<ul>
<li><p>CPU: 2 / 2 = 1</p>
</li>
<li><p>Memory: 4 / 4 = 1</p>
</li>
<li><p>Take the <strong>max of the two</strong> = 1</p>
</li>
</ul>
</li>
<li><p><strong>Apply Target Capacity %</strong>:</p>
<ul>
<li>At 100% target, no buffer → <code>desired = 1</code> EC2 instance</li>
</ul>
</li>
</ol>
<p>So CAS would scale <strong>one t4g.medium</strong> to place those four tasks.</p>
<p><strong>Target capacity</strong> lets you control buffer: set to 100% for binpack-style efficiency, or 80% for headroom.</p>
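<p>As a sanity check, the worked example can be reproduced with a simplified model of that formula. (Real CAS converges on this through the <code>CapacityProviderReservation</code> metric and step scaling; this sketch only captures the arithmetic.)</p>

```python
import math

def desired_instances(pending_tasks: int, task_cpu: float, task_mem_gb: float,
                      inst_cpu: float, inst_mem_gb: float,
                      target_capacity_pct: float = 100.0) -> int:
    """Simplified CAS model: size for pending demand, then inflate by the
    headroom implied by a targetCapacity below 100%."""
    by_cpu = (pending_tasks * task_cpu) / inst_cpu
    by_mem = (pending_tasks * task_mem_gb) / inst_mem_gb
    return math.ceil(max(by_cpu, by_mem) / (target_capacity_pct / 100.0))

# Worked example: 4 pending tasks, 0.5 vCPU / 1 GB each, t4g.medium hosts
print(desired_instances(4, 0.5, 1.0, 2.0, 4.0, 100.0))  # -> 1
print(desired_instances(4, 0.5, 1.0, 2.0, 4.0, 80.0))   # -> 2
```

Dropping the target to 80% asks for a second instance for the same four tasks: that is exactly the buffer that absorbs the next spike while a new host is still booting.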
<h3 id="heading-key-concepts-from-ecs-cas-internals">Key Concepts from ECS CAS Internals</h3>
<ul>
<li><p><strong>ECS checks task placement every 15 seconds</strong></p>
</li>
<li><p>If it can’t place tasks, they go into the <strong>provisioning state</strong> (not failed)</p>
</li>
<li><p>CAS calculates how many EC2s are needed based on task resource demand</p>
</li>
<li><p><strong>Up to 100 tasks can be in provisioning</strong> per cluster</p>
</li>
<li><p><strong>Provisioning timeout</strong> is 10–30 minutes before the task is stopped</p>
</li>
</ul>
<h2 id="heading-daemon-vs-non-daemon-tasks-what-matters-for-scaling">Daemon vs Non-Daemon Tasks: What Matters for Scaling</h2>
<p>Daemon Task</p>
<ul>
<li><p>Scheduled to run on <strong>every EC2 instance</strong></p>
</li>
<li><p>Used for agents, log forwarders, metrics collectors</p>
</li>
<li><p>ECS <strong>ignores these</strong> when calculating scale-out/scale-in</p>
</li>
</ul>
<p>Non-Daemon Task</p>
<ul>
<li><p>Your real app workloads (Flask, Socket, Workers)</p>
</li>
<li><p>These <strong>determine whether EC2s are needed or idle</strong></p>
</li>
</ul>
<h2 id="heading-how-ecs-decides-how-many-ec2s-to-run">How ECS Decides How Many EC2s to Run</h2>
<p>Let’s say:</p>
<ul>
<li><p><code>N</code> = current EC2s</p>
</li>
<li><p><code>M</code> = desired EC2s (CAS output)</p>
</li>
</ul>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Condition</td><td>Outcome</td></tr>
</thead>
<tbody>
<tr>
<td>No pending tasks, all EC2s used</td><td>M = N</td></tr>
<tr>
<td>Pending tasks present</td><td>M &gt; N (scale out)</td></tr>
<tr>
<td>Idle EC2s (only daemon tasks)</td><td>M &lt; N (scale in)</td></tr>
</tbody>
</table>
</div><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1746975766705/0f02da64-a678-4cc0-bc99-bd7552e2558f.png" alt class="image--center mx-auto" /></p>
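<p>The decision table can be encoded as a small sketch (real CAS computes the exact step size from the resource math above; this only captures the direction of each decision):</p>

```python
def desired_from_state(n_current: int, pending_tasks: int, idle_instances: int) -> int:
    """Direction of the CAS decision: pending work grows the cluster,
    instances running only daemon tasks shrink it, otherwise hold steady."""
    if pending_tasks > 0:
        return n_current + 1              # M > N: scale out
    if idle_instances > 0:
        return n_current - idle_instances  # M < N: scale in idle hosts
    return n_current                       # M = N: steady state
```

The important asymmetry: daemon tasks never keep an instance alive, so a host running only log forwarders and agents counts as idle and is eligible for scale-in.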
<h2 id="heading-ecs-in-real-life-before-and-after-cas">ECS in Real Life: Before and After CAS</h2>
<p>Once we added <strong>CAS</strong>:</p>
<ul>
<li><p>We linked services to a capacity provider</p>
</li>
<li><p>Enabled managed scaling (target = 100%)</p>
</li>
<li><p>Switched placement to <code>binpack</code></p>
</li>
</ul>
<p>Finally, tasks scaled. And so did the infra. No more 502s.</p>
<h2 id="heading-what-if-youre-launching-a-new-app-and-dont-know-the-load-yet">What If You're Launching a New App and Don’t Know the Load Yet?</h2>
<p>Start lean. Scale for learnings.</p>
<p>When launching something new—like we are with NuShift—it’s often unclear what kind of user load or traffic patterns to expect. In such cases, make decisions based on expected concurrency, your framework’s behaviour, and instance characteristics.</p>
<p>Here are some tips to guide early capacity planning:</p>
<ul>
<li><p><strong>Estimate concurrency</strong>: If you expect 50–100 concurrent users, and each user triggers multiple API calls, try to estimate peak call concurrency.</p>
</li>
<li><p><strong>Understand your app behaviour</strong>: Flask or FastAPI-based apps usually work well with 0.25 vCPU and 512MB, especially if I/O bound (e.g., API calls, DB reads). If your app does image processing or CPU-intensive work, start with 0.5 vCPU.</p>
</li>
<li><p><strong>Choose your EC2 wisely</strong>: We use <code>t4g.medium</code> (2 vCPU, 4GB RAM) for its cost-efficiency and support for multiple small tasks (6–8 per instance).</p>
</li>
<li><p><strong>Monitor early patterns</strong>: Let metrics shape your scaling curve—track <code>CPUUtilization</code>, <code>MemoryUtilization</code>, and task startup times.</p>
</li>
</ul>
<p>Example initial config:</p>
<ul>
<li><p>Flask API: 1–3 tasks (0.25 vCPU, 512 MB)</p>
</li>
<li><p>WebSocket: 1–2 tasks (depends on socket concurrency)</p>
</li>
<li><p>EC2: t4g.medium in an ASG with ECS capacity provider</p>
</li>
<li><p>CAS: enabled with 80% targetCapacity for buffer</p>
</li>
</ul>
<p>Use New Relic, CloudWatch, or X-Ray to track CPU, memory, latency, and pending counts.</p>
<h2 id="heading-final-thought">Final Thought</h2>
<p>Scaling your application is easy to talk about. But infrastructure scaling is where things quietly break.</p>
<p>If you’re only watching task counts and CPU graphs, you might miss deeper issues:</p>
<ul>
<li><p>PENDING tasks with nowhere to run</p>
</li>
<li><p>EC2s running agents, not apps</p>
</li>
<li><p>Cold starts caused by infra lag</p>
</li>
</ul>
<blockquote>
<p><strong>Auto scaling isn’t just about adding containers—it’s about giving them somewhere to live</strong></p>
</blockquote>
<h2 id="heading-references">References</h2>
<ul>
<li><p><a target="_blank" href="https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/">Deep Dive on Amazon ECS Cluster Auto Scaling (Official AWS Blog)</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-placement-strategies.html">ECS Task Placement Strategies</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Behind the Scenes of AWS Community Days Bengaluru 2025]]></title><description><![CDATA[The lights were bright, the crowd buzzing, and everything looked perfect — but just hours before the show began, I stood still backstage, feeling a familiar knot in my stomach. Not out of fear, but from the weight of a moment that had been years — ma...]]></description><link>https://www.internetkatta.com/behind-the-scenes-of-aws-community-days-bengaluru-2025</link><guid isPermaLink="true">https://www.internetkatta.com/behind-the-scenes-of-aws-community-days-bengaluru-2025</guid><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Thu, 29 May 2025 13:22:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748524222771/5081e6fc-0493-45cc-80ba-d2e0c57f5ffe.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748524732072/6ac37ac2-a023-4bda-bc9a-92fd3acb3208.png" alt class="image--center mx-auto" /></p>
<p>The lights were bright, the crowd buzzing, and everything looked perfect — but just hours before the show began, I stood still backstage, feeling a familiar knot in my stomach. Not out of fear, but from the weight of a moment that had been years — maybe a decade — in the making. I didn’t fully understand why it meant so much, not yet.</p>
<p>That clarity came later. Like all good stories — this one started with a spark, but the meaning would only unfold with time.</p>
<blockquote>
<p><strong>Great teams aren’t built on agreement — they’re built on shared purpose.</strong></p>
</blockquote>
<p>That one line sums up our journey toward AWS Community Day Bengaluru 2025.</p>
<p>What started in December as a simple conversation soon turned into months of collaboration, debates, design chaos, speaker curation, and pure execution energy.  We came in with diverse roles — some of us focusing on speakers, some on partnerships, others on content, marketing, logistics, or tech. We didn’t always agree, but we never lost sight of the goal: delivering a memorable experience for the community.</p>
<h3 id="heading-the-start-vision-to-action">The Start: Vision to Action</h3>
<p>The vision was simple — make this the most meaningful and elevated version of ACD Bengaluru yet. Bigger in scale, deeper in content, richer in community vibes. But turning that vision into action meant asking tough questions and making bold decisions.</p>
<p>We explored venue options, discussed format changes, and debated everything — from the layout of tracks to the type of workshops we wanted to host. It all started with a WhatsApp group and a few Google Meet calls. Weekend working sessions were filled with passionate arguments — and what's more magical is that the entire team never met in person, not even once. Yet, the synergy and symphony were real, because our goal was always clear and shared.</p>
<p>This was my first time organizing an event at such a large scale. Last year, I had organized <strong>Mautic Conference India</strong> in Pune — a proud moment with about 100 participants <a target="_blank" href="https://www.internetkatta.com/from-fan-to-organizer-my-incredible-journey-with-mautic-conference-india">shared here</a>. And I had volunteered for <strong>ACD BLR 2024</strong>, which was hosted at the Amazon office. But stepping up to co-lead the effort at a grand venue like <strong>Conrad Bengaluru</strong> was a completely different level. The stakes were higher, the pressure was real — but so was the passion.</p>
<p>That moment brought back an old, bittersweet memory. Back in engineering college, I had one dream: to become the <strong>CSR (Cultural Representative)</strong> — the person who would run the college fest, rally the teams, and own the spotlight. But college politics had other plans. I never got that role. That unfulfilled dream stayed tucked away — quietly lingering.</p>
<p>Yet, life has its own timeline. Organizing ACD BLR 2025, in front of hundreds of people, with a team I admire — it felt like that dream found its way back to me, not in the place I first imagined, but in a space where I was truly meant to lead, contribute, and shine. A heartfelt thanks to <strong>Bhuvana</strong>, <strong>Jones</strong>, and <strong>Ayyanar</strong> for trusting and believing in me with this opportunity — your confidence made all the difference.</p>
<p>Sometimes, you don’t get to live that dream in the place you imagined. But you get something better — a stage that truly values your intent, a platform where your efforts shine, and a team that believes in your vision. <strong>This was that moment for me.</strong></p>
<h3 id="heading-the-build-up-decisions-deadlines-and-dilemmas">The Build-Up: Decisions, Deadlines, and Dilemmas</h3>
<p>As we moved into execution, the real challenges began. From finalising keynote speakers to rolling out creative ticket sale campaigns, each step required hustle, agility, and relentless problem-solving. We tested virtual photo booths, tweaked LinkedIn promos, crafted FOMO posts, created session posters, curated cue cards for emcees, and pulled off a social media blitz powered by our volunteers.</p>
<p>We also kickstarted two powerful community-led initiatives:</p>
<ul>
<li><p><strong>Blogathon</strong>: A platform for community members to share their AWS learnings, stories, and hands-on experiments. It amplified new voices and celebrated the diverse experiences across our builder ecosystem.</p>
</li>
<li><p><strong>Voice of AWS UG Bengaluru</strong>: A new storytelling series where we spotlight the journeys, challenges, and achievements of our local AWS User Group members — giving a voice to the builders who power our community.</p>
</li>
</ul>
<p>Two moments stood out for me as serendipitous reminders of how everything comes full circle:</p>
<p>The first was when my engineering college friend <strong>Sagar</strong>, who now runs an event management company called StudioB3, showed up at ACD. We hadn’t met in over 12 years. He’s been doing events for more than a decade — something I had completely forgotten. But someone once told me, <em>"When your time comes, the right people will show up to help you."</em> Reconnecting with Sagar at this moment felt like that kind of magic.</p>
<p>The second was when <strong>Talvinder Singh</strong>, founder of <a target="_blank" href="https://zop.dev/">zop.dev</a>, reached out via my official email regarding AWS cloud services. I could’ve replied formally, but instead, I sent him a message from my personal email — pitching the idea of sponsorship. And guess what? Talvinder turned out to be a former client from my time at Sodel Solutions. A full-circle moment, reminding me that when you build good relationships, people remember you.</p>
<p>Both moments reminded me that success is never solo — it's shaped by people, timing, and the relationships you nurture along the way.</p>
<p>It wasn’t perfect. And that’s what made it special.</p>
<h3 id="heading-a-moment-to-recharge-before-show-time">A Moment to Recharge Before Show Time</h3>
<p>Just before the final days of prep for ACD BLR 2025, a last-minute yet incredibly meaningful opportunity came our way — the inauguration of the <strong>first AWS Cloud Club in Bengaluru</strong> at <strong>Ramaiah Institute of Technology</strong>. The college had invited AWS UG leaders to attend the event alongside our keynote speaker <strong>Vivek Raja P S</strong>. Though <strong>Jason</strong>, Senior Program Manager of AWS Community Builders, couldn't join due to unforeseen reasons, his spirit was very much present.</p>
<p>Spending half a day there, celebrating the new generation of cloud enthusiasts, was a refreshing break — and a reminder of <em>why</em> we do this in the first place. Right after the ceremony, we were back in execution mode — refining ACD details, tying up loose ends, and getting ready to welcome hundreds of builders to Bengaluru.</p>
<h3 id="heading-the-days-goosebumps-high-fives-and-happy-faces">The Day(s): Goosebumps, High-Fives, and Happy Faces</h3>
<p>This time, I was doing something I’d never done before — speaking in front of the entire audience in the combined auditorium. Over the years, I’ve spoken in tracks and breakout rooms, but this was different. Bigger. A little scary. And deeply exciting. What made it even more special was sharing that stage with <strong>Vivek Raja P S</strong>, someone I deeply respect.</p>
<p>I wasn’t prepared the way I usually like to be — there were just too many moving parts in the days leading up to the event. But I reminded myself: I’ve been on stage more than 30 times. I knew I could pull through.</p>
<p>We tried something different — a conversational-style talk instead of a standard slide-heavy presentation. And to our surprise, many people came up to us afterward saying it helped them understand complex topics more easily. That made us smile. And just like that, the nervousness faded.</p>
<p>I didn’t know how the moment would unfold. Maybe we’d make mistakes. Maybe something wouldn’t go as planned. But I knew this for sure — we had built this event with heart.</p>
<ul>
<li><p>For every learner.</p>
</li>
<li><p>Every builder.</p>
</li>
<li><p>Every dreamer walking into ACD Bengaluru.</p>
</li>
</ul>
<p>Walking into the venue, seeing everything come together — the tracks, booths, branding, swag, and smiles — was an emotional moment for many of us. The reactions from attendees, speakers, and sponsors reminded us that all the hard work had meaning.</p>
<p>Whether it was a packed Exhibition Lounge, a selfie with Jason (yes, that happened), or just the quiet moment of seeing a community member feel seen and celebrated — those were the wins we carried home.</p>
<h3 id="heading-what-made-it-work">What Made It Work?</h3>
<ul>
<li><p><strong>Diverse opinions, united by purpose</strong></p>
</li>
<li><p><strong>Community-first thinking</strong> in every decision</p>
</li>
<li><p><strong>A willingness to do the grunt work</strong></p>
</li>
<li><p><strong>Volunteers who went above and beyond</strong>, without expecting spotlight</p>
</li>
</ul>
<h3 id="heading-faces-behind-the-magic">Faces Behind the Magic</h3>
<ul>
<li><p><strong>Ansh, Isha, and Yashavi</strong> – The spotlight of the event as emcees. Their energy, clarity, and confidence brought the sessions to life and kept the momentum flowing throughout the day.</p>
</li>
<li><p><strong>Chetan's wife</strong> – A silent force of support who chipped in without hesitation and helped the team stay grounded.</p>
</li>
<li><p><strong>Yogesh and his friend</strong> – Our young champions, who contributed like the squirrel did in the Ramayana.</p>
</li>
<li><p><strong>Mehnaz &amp; Nabhanyua</strong> – Helped manage event-day logistics, always ready to lend a hand where needed. Their roles may have been short, but their impact was lasting.</p>
</li>
<li><p><strong>Bhuvana</strong> – The soul of AWS UG Bengaluru, her calm energy kept everything grounded. She’s the glue that held people and priorities together.</p>
</li>
<li><p><strong>Jones</strong> – The finance ministry and collaboration expert of our team. Always patient, always composed — he ensured everything behind the scenes ran with clarity and calm. Whether it was budget planning or community outreach, Jones brought structure, insight, and an unwavering willingness to help.</p>
</li>
<li><p><strong>Vivek Raja</strong> – The ML brain with a builder’s heart. Whether it was on-stage storytelling or behind-the-scenes mentoring, Vivek added thought leadership with humility.</p>
</li>
<li><p><strong>Srushith</strong> – The brain behind so many tactics that made this event impactful. From growth strategies to last-mile execution, his input often shaped the outcome — even when he stayed behind the curtain.</p>
</li>
<li><p><strong>Ayyanar</strong> – Deep tech wizard. His AI/ML depth and thoughtful insights shaped the content backbone of the event.</p>
</li>
<li><p><strong>Poobalan</strong> – Always open to taking up any challenge that came his way. From building the website — which was beautifully done — to managing social media seamlessly on event day, his openness and commitment truly stood out. A community strategist with a heart of gold and a can-do spirit.</p>
</li>
<li><p><strong>Logesh</strong> – A shadow for me in all things design, Logesh quietly helped shape the visual tone of the event. But on event day, he stepped up and made sure everything ran smoothly on the ground — from handling AV to ensuring speaker content displayed correctly.</p>
</li>
<li><p><strong>Chetan</strong> – The quiet executor. If something needed doing, it was already done. No noise, just results.</p>
</li>
<li><p><strong>Harsha</strong> – Our perfectionist — always making sure our content and write-ups were polished and error-free. Unfortunately, he couldn’t be with us on event day due to a personal commitment, but his contributions in the lead-up were truly valuable and deeply appreciated. A silent force who made a loud impact.</p>
</li>
<li><p><strong>Ramya</strong> – The swags master. From ideating to coordinating distribution, she ensured every attendee felt valued with thoughtful takeaways.</p>
</li>
<li><p><strong>Vijaya Nirmala</strong> – Supporting us from outside India, she showed that distance doesn’t matter when the intent is strong. Always present in spirit and helping wherever possible.</p>
</li>
<li><p><strong>Koti</strong> – The voice of calm in chaos. A true community leader, he brings diverse insights from his experience running PyCon India. Whenever things went haywire, Koti was there with a plan and a laugh.</p>
</li>
<li><p><strong>Ayyanar &amp; Srushith</strong> – The creative minds behind the AI speaker invention visuals that caught everyone’s attention. Their imaginative execution and attention to detail brought an extra layer of innovation to the event's design language.</p>
</li>
</ul>
<p>Thanks also to the <a target="_blank" href="https://konfhub.com/"><strong>Konfhub</strong></a> <strong>team</strong>, especially <strong>Hari</strong> and <strong>Ganesh</strong>, for their incredible support in making this event a smooth experience — from handling ticketing operations seamlessly to sharing valuable insights whenever we hit a snag. Their behind-the-scenes help played a big part in the overall experience.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1748523668864/2a8ffd07-f02c-42d4-ae16-0560d17105a9.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-life-behind-the-curtain">Life Behind the Curtain</h3>
<p>They say,</p>
<blockquote>
<p><em>"A smooth sea never made a skilled sailor."</em></p>
</blockquote>
<p>And ACD BLR 2025 wasn’t just a journey of coordination — it was a test of balance, resilience, and purpose.</p>
<p>In the midst of all the planning and execution, life threw its own challenges. I was navigating some unexpected health issues — a gastrointestinal scare that pushed me to completely rethink my lifestyle and embrace fitness and clean eating. Just when I was finding a rhythm, my wife had to undergo a sudden surgery. Managing doctor visits, home responsibilities, a full-time job, and community commitments — it felt like everything hit at once.</p>
<p>But through it all, my family stood like a rock. They never once said, “Pause the community work.” They encouraged it. They believed in it. They believed in me. And that quiet support — the kind that doesn’t need applause — is what carried me through the chaos.</p>
<h3 id="heading-a-personal-note">A Personal Note</h3>
<p>As someone who’s been part of many community events, ACD BLR 2025 felt different. It was intense, emotional, and fulfilling. It proved once again that when builders, doers, and dreamers come together with clarity of purpose — magic happens.</p>
<h3 id="heading-to-the-team">To the Team:</h3>
<p>Thank you. You weren’t just volunteers — you were visionaries, warriors, and the glue that held it all together. This one’s for you.</p>
<p><a target="_blank" href="https://youtube.com/shorts/i6WMuXA4wMM">https://youtube.com/shorts/i6WMuXA4wMM</a></p>
<h3 id="heading-references">References</h3>
<ul>
<li><p>AWS UG BLR meetup group - <a target="_blank" href="http://meetup.com/awsugblr">http://meetup.com/awsugblr</a></p>
</li>
<li><p>Stories from the AWS Community Day Bengaluru Blogathon - <a target="_blank" href="https://acdblr-blogathon.devpost.com/project-gallery">https://acdblr-blogathon.devpost.com/project-gallery</a></p>
</li>
<li><p><a target="_blank" href="https://konfhub.com/">Konfhub</a> - our ticketing platform</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Angular Standalone Component Gotcha I Didn’t See Coming]]></title><description><![CDATA[Hello Devs,
You know that moment when everything looks fine — no errors, no warnings — but the UI just... doesn’t do what it’s supposed to?That’s how this story begins.
But let me add some background first...

🧳 Returning to Angular After 5 Years
I ...]]></description><link>https://www.internetkatta.com/the-angular-standalone-component-gotcha-i-didnt-see-coming</link><guid isPermaLink="true">https://www.internetkatta.com/the-angular-standalone-component-gotcha-i-didnt-see-coming</guid><category><![CDATA[Angular]]></category><category><![CDATA[standalone-components]]></category><category><![CDATA[angular developer]]></category><category><![CDATA[Developer]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Mon, 28 Apr 2025 11:32:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745495825047/fc4dccc8-c425-4a4c-9f13-6fd88d319db9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>You know that moment when everything looks fine — no errors, no warnings — but the UI just... doesn’t do what it’s supposed to?<br />That’s how this story begins.</p>
<p>But let me add some background first...</p>
<hr />
<h2 id="heading-returning-to-angular-after-5-years">🧳 Returning to Angular After 5 Years</h2>
<p>I recently jumped back into Angular after almost <strong>five years</strong> away.</p>
<p>Things I used to know by heart?<br />Gone or evolved.</p>
<p>There’s <code>standalone: true</code> now. Modules are optional. Components feel more like islands.<br />It’s Angular — but... different.</p>
<p>So when I decided to build a simple standalone navbar component with a dropdown filter, I felt ready to dive in.</p>
<p>After all, how hard could <code>*ngFor</code> be?</p>
<p>Turns out, harder than I expected — if you're doing it blindly.</p>
<hr />
<h2 id="heading-the-setup">🚧 The Setup</h2>
<p>I was building a <code>home-navbar</code> component — a simple filter with values like “Latest”, “Trending”, and “Following”.</p>
<p>In <code>home-navbar.component.ts</code>, I had:</p>
<pre><code class="lang-javascript">filterOptions = [<span class="hljs-string">'latest'</span>, <span class="hljs-string">'trending'</span>, <span class="hljs-string">'following'</span>];
</code></pre>
<p>And in the template:</p>
<pre><code class="lang-bash">&lt;ul&gt;
  &lt;li *ngFor=<span class="hljs-string">"let option of filterOptions"</span>&gt;{{ option }}&lt;/li&gt;
&lt;/ul&gt;
</code></pre>
<p>So clean. So elegant.<br />So... <strong>not working</strong>.</p>
<hr />
<h2 id="heading-debugging-the-invisible">😵‍💫 Debugging the Invisible</h2>
<p>No errors. No warnings. Just an empty <code>&lt;ul&gt;</code>.<br />The kind of bug where Angular doesn’t even bother complaining — it just shrugs.</p>
<p>Here’s what I tried:</p>
<ul>
<li><p>✅ <code>console.log(filterOptions)</code> — the array was there.</p>
</li>
<li><p>✅ <code>{{ filterOptions }}</code> in the template — rendered just fine.</p>
</li>
<li><p>❌ <code>*ngFor</code> — ignored. Like it didn’t exist.</p>
</li>
</ul>
<p>I even replaced it with:</p>
<pre><code class="lang-bash">&lt;li *ngFor=<span class="hljs-string">"let item of ['a', 'b', 'c']"</span>&gt;{{ item }}&lt;/li&gt;
</code></pre>
<p>Still nothing.</p>
<p>Then I tried this:</p>
<pre><code class="lang-bash">&lt;div *ngIf=<span class="hljs-string">"true"</span>&gt;Hello ngIf&lt;/div&gt;
</code></pre>
<p>And when <strong>that</strong> didn’t render...<br />Something snapped.</p>
<hr />
<h2 id="heading-the-realization">💡 The Realization</h2>
<p>After spiralling a bit, I checked the docs. Then it hit me:</p>
<blockquote>
<p><strong>This is a standalone component. It doesn’t inherit anything — not even Angular’s own core directives.</strong></p>
</blockquote>
<p>And that’s when I realised what I had missed all along:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { CommonModule } <span class="hljs-keyword">from</span> <span class="hljs-string">'@angular/common'</span>;

@Component({
  <span class="hljs-attr">standalone</span>: <span class="hljs-literal">true</span>,
  ...
  imports: [CommonModule, ...] <span class="hljs-comment">// ← The missing piece</span>
})
</code></pre>
<p>That one tiny line. That was the reason everything was failing — silently.</p>
<hr />
<h2 id="heading-why-it-happens">🔍 Why It Happens</h2>
<p>In Angular:</p>
<ul>
<li><p>If you're using regular components inside an <code>NgModule</code>, importing <code>CommonModule</code> once covers all declared components.</p>
</li>
<li><p>But if you're building <strong>standalone components</strong>, they’re completely self-contained.</p>
</li>
<li><p>That means <strong>you must import</strong> <code>CommonModule</code> yourself, or you won’t get <code>*ngIf</code>, <code>*ngFor</code>, <code>| date</code>, <code>| currency</code>, etc.</p>
</li>
</ul>
<hr />
<h2 id="heading-lesson-from-the-drama">🤦 Lesson from the Drama</h2>
<p>I was so focused on building the UI and making things work that I forgot to stop and ask:</p>
<blockquote>
<p>"Wait, where do structural directives even come from?"</p>
</blockquote>
<p>This was a great reminder:<br /><strong>Even as someone experienced, it's easy to fall into habits — and overlook small things that matter.</strong></p>
<p>The truth is, I wasn't "doing Angular" — I was doing muscle memory.</p>
<p>And Angular? It has changed.</p>
<hr />
<h2 id="heading-the-fix">✅ The Fix</h2>
<p>This one line saved my sanity:</p>
<pre><code class="lang-javascript">imports: [CommonModule]
</code></pre>
<p>Once I added that, <code>*ngFor</code> worked. <code>*ngIf</code> worked. Everything came back to life.</p>
<hr />
<h2 id="heading-update-angular-17-way-without-commonmodule">✨ Update: Angular 17+ Way Without CommonModule</h2>
<p>Starting from Angular 17, there's an even <strong>cleaner</strong> way to handle loops without needing to import <code>CommonModule</code> at all!</p>
<p>✅ You can now use the new <code>@for</code> and <code>@if</code> syntax directly in your templates.</p>
<p>Example:</p>
<pre><code class="lang-javascript">&lt;ul&gt;
  @<span class="hljs-keyword">for</span> (option <span class="hljs-keyword">of</span> filterOptions; track $index) {
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">li</span>&gt;</span>{{ option }}<span class="hljs-tag">&lt;/<span class="hljs-name">li</span>&gt;</span></span>
  }
&lt;/ul&gt;
</code></pre>
<p>or conditionally:</p>
<pre><code class="lang-javascript">@<span class="hljs-keyword">if</span> (filterOptions.length &gt; <span class="hljs-number">0</span>) {
  <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">p</span>&gt;</span>We have {{ filterOptions.length }} options!<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span></span>
}
</code></pre>
<h3 id="heading-best-practice">🏆 Best Practice</h3>
<ul>
<li><p><strong>For Angular 16 and below:</strong><br />  Use <code>CommonModule</code>.</p>
</li>
<li><p><strong>For Angular 17 and above:</strong><br />  Prefer <code>@for</code> and <code>@if</code> syntax.</p>
</li>
<li><p><strong>If you need to support older versions</strong>, stick to the <code>*ngFor</code> and <code>*ngIf</code> approach with CommonModule.</p>
</li>
</ul>
<h2 id="heading-takeaways">🔑 Takeaways</h2>
<ul>
<li><p><strong>Standalone components mean standalone responsibilities.</strong><br />  Nothing is auto-included — you explicitly manage what your component needs.</p>
</li>
<li><p><strong>No CommonModule?</strong><br />  Then <strong>no</strong> <code>*ngFor</code>, no <code>*ngIf</code>, no pipes — and no warnings either.<br />  Angular will silently skip them, leaving you scratching your head.</p>
</li>
<li><p><strong>Coming back to a framework after years?</strong><br />  <strong>Don't assume things work like they used to.</strong><br />  Even familiar tools evolve — often in small, silent ways.</p>
</li>
<li><p><strong>Tiny details break things quietly.</strong><br />  Always slow down and verify the basics — they are the first place things go wrong.</p>
</li>
<li><p><strong>Bonus for Angular 17+:</strong><br />  The new <code>@for</code> and <code>@if</code> syntax means even less boilerplate — no CommonModule needed at all!</p>
</li>
</ul>
<h2 id="heading-have-you-had-silent-failures-too">🗯️ Have You Had “Silent Failures” Too?</h2>
<p>Ever spent hours debugging only to realise one import was missing?<br />Or a component didn’t behave because of a silent Angular rule?</p>
<p>Share your moment.<br />Let’s normalise the messy middle part of learning and relearning.</p>
<p>Because sometimes, it’s not about knowing Angular.<br />It’s about knowing what to double check.</p>
]]></content:encoded></item><item><title><![CDATA[Amazon Q Developer CLI Helped Me Clone EC2 in One Prompt — No Console, No YAML]]></title><description><![CDATA[Hello Devs,
you might have heard about the recent release of Amazon Q Developer CLI.

I heard about it too, but like most tools, it sat on my radar until the moment I truly needed it - a real-time use case that made me glad I gave it a shot.
It start...]]></description><link>https://www.internetkatta.com/amazon-q-developer-cli-helped-me-clone-ec2-in-one-prompt-no-console-no-yaml</link><guid isPermaLink="true">https://www.internetkatta.com/amazon-q-developer-cli-helped-me-clone-ec2-in-one-prompt-no-console-no-yaml</guid><category><![CDATA[amazon Q developer CLI ]]></category><category><![CDATA[Amazon Q developers]]></category><category><![CDATA[Amazon Q]]></category><category><![CDATA[AWS]]></category><category><![CDATA[gen ai]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[Developer]]></category><category><![CDATA[developer productivity]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Wed, 16 Apr 2025 02:52:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744771550846/816bcb2a-2214-4622-9a38-e0c4b53fab2f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Hello Devs</strong>,</p>
<p>you might have heard about the recent release of <strong>Amazon Q Developer CLI</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744768321034/f6b446ef-0d65-4bd1-a16f-d3e2d7c07564.png" alt class="image--center mx-auto" /></p>
<p>I heard about it too, but like most tools, it sat on my radar until the moment I truly needed it - a real-time use case that made me glad I gave it a shot.</p>
<h3 id="heading-it-started-with-a-small-request-and-a-potential-disaster">It started with a small request... and a potential disaster.</h3>
<p>One of our developers was testing a machine learning model on an EC2 instance. A regular dev task, nothing fancy.</p>
<p>But a few hours in, he messaged me:</p>
<blockquote>
<p>“Something’s off with the setup. I don’t want to break anything — can I get a clone of this machine to test on?”</p>
</blockquote>
<p>Fair ask. But the instance he was using was one we also used for other internal dev workflows. Any changes could have unintended consequences.</p>
<h3 id="heading-my-default-plan-console-clicks-and-lots-of-waiting">My default plan? Console clicks and lots of waiting.</h3>
<p>Cloning an EC2 isn’t rocket science — but it’s tedious:</p>
<ol>
<li><p>Open AWS Console</p>
</li>
<li><p>Go to EC2</p>
</li>
<li><p>Find the instance</p>
</li>
<li><p>Create an AMI</p>
</li>
<li><p>Wait for the AMI to become available</p>
</li>
<li><p>Launch a new instance from it</p>
</li>
<li><p>Manually select the same instance type, security groups, IAM role...</p>
</li>
<li><p>Hope nothing breaks during this dance</p>
</li>
</ol>
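<p>For comparison, the console dance above maps to roughly these CLI calls. This is a hedged sketch: the instance ID, instance type, security group, and instance profile below are placeholders.</p>
<pre><code class="lang-bash"># 1. Snapshot the source instance into an AMI. --no-reboot avoids downtime,
#    at the cost of filesystem-consistency guarantees.
AMI_ID=$(aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "ml-base-server-clone" \
  --no-reboot \
  --query 'ImageId' --output text)

# 2. Block until the AMI is usable.
aws ec2 wait image-available --image-ids "$AMI_ID"

# 3. Launch the clone with the same type, security group, and IAM role.
aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type t3.xlarge \
  --security-group-ids sg-0123456789abcdef0 \
  --iam-instance-profile Name=ml-dev-role \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ml-base-server-clone}]'
</code></pre>
<p>Knowing and typing all of this is exactly the bookkeeping a one-sentence prompt can replace.</p>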
<p>It’s not hard, but I’ve done it too many times to count — and it always eats up 15–20 minutes, minimum.</p>
<h3 id="heading-this-time-i-tried-something-different-amazon-q-developer-cli">This time, I tried something different. Amazon Q Developer CLI.</h3>
<p>I had recently installed <strong>Amazon Q Developer CLI</strong> after reading about it.<br />A command-line assistant that understands natural language prompts and turns them into real AWS commands? I had to try it.</p>
<p>So instead of diving into the Console, I opened my terminal:</p>
<pre><code class="lang-bash">q chat
</code></pre>
<p>And typed:</p>
<blockquote>
<p>“Create a new EC2 instance from the existing one named <code>ml-base-server</code>. Use the same security group and IAM role.”</p>
</blockquote>
<p>That’s it. One sentence. No flags. No resource IDs.</p>
<h3 id="heading-what-happened-next-blew-me-away">What happened next blew me away.</h3>
<p>Amazon Q Developer CLI:</p>
<ul>
<li><p>Identified the instance from its Name tag (<code>ml-base-server</code>)</p>
</li>
<li><p>Generated an AMI creation command</p>
</li>
<li><p>Waited for the AMI to become available</p>
</li>
<li><p>Created a new EC2 instance using:</p>
<ul>
<li><p>The same instance type</p>
</li>
<li><p>The same security group(s)</p>
</li>
<li><p>The same IAM role</p>
</li>
</ul>
</li>
<li><p>Even prompted me to confirm each step before executing</p>
</li>
</ul>
<p>No scrolling. No searching. Just… done.</p>
<h3 id="heading-the-result-my-dev-got-a-clean-copy-our-base-setup-stayed-untouched">The result? My dev got a clean copy. Our base setup stayed untouched.</h3>
<p>He could now run experiments safely.<br />No stress, no last-minute surprises, no Console hopping.</p>
<p>What would have taken 15+ minutes manually was done in under 3 minutes — right from my terminal.</p>
<h3 id="heading-why-this-matters">Why this matters</h3>
<p>We often think of DevOps automation in terms of scripting or building pipelines.<br />But sometimes, what you really want is a <strong>conversation</strong>.</p>
<p>That’s what Amazon Q Developer CLI gives you:</p>
<ul>
<li><p>You say what you want in plain English</p>
</li>
<li><p>It figures out how to do it</p>
</li>
<li><p>You still keep control with confirmations</p>
</li>
</ul>
<p>For small-but-important infra tasks like this, it’s a <strong>game-changer</strong>.</p>
<h3 id="heading-try-it-yourself">Try it yourself</h3>
<p>If you haven’t used Amazon Q Developer CLI yet, get started here:<br />👉 <a target="_blank" href="https://dev.to/aws/getting-started-with-amazon-q-developer-cli-4dkd">Getting Started with Amazon Q Developer CLI – dev.to</a></p>
<p>Once installed, try this prompt:</p>
<pre><code class="lang-bash">q chat
</code></pre>
<blockquote>
<p>“Create a new EC2 instance from the one named <code>your-instance-name</code>. Keep the same security group and IAM role.”</p>
</blockquote>
<h2 id="heading-heads-up-a-few-post-install-steps-are-missing-from-the-docs-macos">⚠️ Heads Up: A Few Post-Install Steps Are Missing from the Docs (macOS)</h2>
<p>If you’re following the <a target="_blank" href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-installing.html">official Amazon Q CLI installation guide</a>, you’ll notice it stops right after:</p>
<pre><code class="lang-bash">brew install amazon-q
q --version
</code></pre>
<p>But if you're on <strong>macOS</strong>, there are a few <em>extra steps</em> you <strong>must</strong> follow before things actually start working:</p>
<h3 id="heading-steps-you-need-to-complete-after-installing-via-brew">✅ Steps You Need to Complete After Installing via Brew</h3>
<ol>
<li><p><strong>Open the Amazon Q desktop app</strong><br /> After the <code>brew install</code>, you’ll find a new app called <strong>Amazon Q</strong> installed on your system (via Launchpad or Spotlight).</p>
</li>
<li><p><strong>Grant Accessibility Permissions</strong><br /> When you launch the app for the first time, it will prompt you to <strong>enable accessibility access</strong>.<br /> Go to:<br /> <code>System Settings → Privacy &amp; Security → Accessibility</code></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744769890931/0b647f32-9a5d-45f4-8132-407b0cfe5cd2.png" alt class="image--center mx-auto" /></p>
<p> Then allow <strong>Amazon Q (CodeWhisperer)</strong> to control your system.</p>
</li>
<li><p><strong>Login to AWS from the app</strong><br /> You’ll be prompted to authenticate with your AWS account.</p>
</li>
<li><p><strong>Enable Terminal Integration</strong><br /> Go to the <strong>Integrations</strong> tab in the sidebar, and click <strong>“Enable”</strong> for your preferred terminal:</p>
<ul>
<li><p>Terminal.app</p>
</li>
<li><p>iTerm2</p>
</li>
<li><p>Hyper</p>
</li>
<li><p>VS Code Terminal</p>
<p>  <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1744769962353/1ae04890-2162-4adc-87b8-f7a4188919c4.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Verify it’s active</strong><br /> Open your terminal and try typing <code>q chat</code> — you should now see the CLI assistant activate in context.</p>
</li>
</ol>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/CKye8xI0HiU">https://youtu.be/CKye8xI0HiU</a></div>
<p> </p>
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>Feel free to share which manual or scripted tasks you’re planning to move to <strong>Amazon Q Developer CLI</strong>. I’ve built EC2 clones hundreds of times the old way.<br />But this experience of using natural language to handle it felt like the future.</p>
<p>Amazon Q Developer CLI didn’t just save time.<br />It let me focus on solving the actual problem, not navigating a UI maze.</p>
<p>And honestly?<br />I can’t wait to see what other repetitive tasks I can retire next.</p>
<h3 id="heading-references">References :</h3>
<ul>
<li><p><a target="_blank" href="https://dev.to/aws/getting-started-with-amazon-q-developer-cli-4dkd">https://dev.to/aws/getting-started-with-amazon-q-developer-cli-4dkd</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/aws/amazon-q-developer-cli">https://github.com/aws/amazon-q-developer-cli</a></p>
</li>
<li><p><a target="_blank" href="https://www.linkedin.com/posts/ricardosueiras_amazonqdevelopercli-aws-demoscene-activity-7312502995567431680-OocE/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAAycFYBiLynyePvMJ45ZRUDGVPqRgz4AJg">https://www.linkedin.com/posts/ricardosueiras_amazonqdevelopercli-aws-demoscene-activity-7312502995567431680-OocE/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAAycFYBiLynyePvMJ45ZRUDGVPqRgz4AJg</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Engineering for Growth: Why Your First Architecture Won’t Be Perfect]]></title><description><![CDATA[Hello Devs! 👋 Have you ever wondered how platforms like Instagram, LinkedIn, or YouTube handle millions (or even billions) of users seamlessly? Do you think they built their highly scalable, complex architectures from day one? If you assume they had...]]></description><link>https://www.internetkatta.com/engineering-for-growth-why-your-first-architecture-wont-be-perfect</link><guid isPermaLink="true">https://www.internetkatta.com/engineering-for-growth-why-your-first-architecture-wont-be-perfect</guid><category><![CDATA[instagram]]></category><category><![CDATA[engineering]]></category><category><![CDATA[  Building Scalable Web Apps]]></category><category><![CDATA[scaling]]></category><category><![CDATA[Learning Journey]]></category><category><![CDATA[Building Product]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Fri, 07 Mar 2025 07:07:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741331024803/4254fea3-0896-48d5-bc23-b58c0a90bcca.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Hello Devs!</strong> 👋 Have you ever wondered how platforms like Instagram, LinkedIn, or YouTube handle millions (or even billions) of users seamlessly? Do you think they built their highly scalable, complex architectures from day one? If you assume they had everything figured out on the first attempt, you’d be surprised.</p>
<p>The reality is, these platforms evolved <strong>gradually</strong>. They started simple, observed system behaviour, and scaled incrementally based on <strong>real needs</strong>, not assumptions.</p>
<p>As engineers, we often fall into the trap of <strong>over-engineering</strong> too early. When building a new platform, there’s a natural temptation to implement <strong>event-driven architectures, message queues, caching layers, and microservices</strong> from day one. But do we really need all of that upfront? <strong>No one gets architecture perfect on the first try.</strong></p>
<p>Instagram is a perfect example. When they started, they didn't have the complex, scalable infrastructure they do today. They stumbled early on, scaling their system by hand: before weekends, they had to <strong>add more servers manually</strong> just to handle the expected load. It was a journey of learning and evolution. As a <strong>Staff Engineer building platforms at NuShift</strong>, I find myself asking similar questions:</p>
<ul>
<li><p><em>What will be my initial user base?</em></p>
</li>
<li><p><em>Should I build everything for scale now, or should I start simple and evolve over time?</em></p>
</li>
</ul>
<p>The answer is clear: <strong>implement what you need, observe, and then scale smartly.</strong></p>
<hr />
<h2 id="heading-instagrams-evolution-a-lesson-in-scaling"><strong>Instagram’s Evolution: A Lesson in Scaling</strong></h2>
<p>Instagram’s infrastructure journey teaches us a lot about <strong>when and how to scale</strong>:</p>
<ol>
<li><p><strong>Start simple:</strong> Instagram began as a monolithic architecture. No microservices, no complex event-driven design—just a <strong>straightforward</strong> system.</p>
</li>
<li><p><strong>Observe system behaviour:</strong> As their user base grew, they started experiencing bottlenecks in <strong>database queries, image processing, and request handling</strong>.</p>
</li>
<li><p><strong>Scale where needed:</strong> Instead of redesigning everything from scratch, they incrementally <strong>added caching (Redis, Memcached), optimised databases (PostgreSQL sharding), and implemented asynchronous processing (Celery, Kafka).</strong></p>
</li>
<li><p><strong>Continuous Evolution:</strong> Over time, Instagram moved towards a <strong>service-oriented architecture</strong>, but only when it was necessary.</p>
</li>
</ol>
<p>Had they over-engineered from day one, they would have slowed down <strong>feature development</strong>, introduced <strong>unnecessary complexity</strong>, and wasted <strong>engineering time</strong> solving problems they didn’t even have yet.</p>
<hr />
<h2 id="heading-building-for-today-vs-tomorrow-my-approach-at-nushift"><strong>Building for Today vs. Tomorrow: My Approach at NuShift</strong></h2>
<p>At <strong>NuShift</strong>, I often face a similar challenge. I have to decide:</p>
<ul>
<li><p>Should I <strong>preemptively build</strong> for millions of users?</p>
</li>
<li><p>Should I <strong>implement a full-scale event-driven system</strong> before the first release?</p>
</li>
<li><p>Do I need <strong>queue-based processing</strong>, or is synchronous processing enough for now?</p>
</li>
</ul>
<p>Here’s my approach:</p>
<p>✅ <strong>Start with the Basics:</strong> Get the <strong>core functionality</strong> working before adding complexity. Users will tell you where the pain points are.<br />✅ <strong>Observe and Measure:</strong> Use <strong>logging, monitoring, and metrics</strong> to track system behaviour. Performance bottlenecks will become clear over time.<br />✅ <strong>Scale Where Necessary:</strong> If database read operations are slow, introduce <strong>read replicas or caching</strong>. If too many background tasks pile up, introduce <strong>queues like SQS or Kafka</strong>.<br />✅ <strong>Avoid Premature Optimisation:</strong> Not everything needs microservices or Kubernetes from day one. Focus on solving <strong>real-world problems, not hypothetical ones</strong>.</p>
<hr />
<h2 id="heading-other-companies-that-follow-this-approach"><strong>Other Companies That Follow This Approach</strong></h2>
<p>Instagram isn’t the only one that scaled this way:</p>
<ul>
<li><p><strong>Twitter:</strong> Started as a simple Rails app, later introduced queues, caching, and distributed storage as they grew.</p>
</li>
<li><p><strong>Airbnb:</strong> Began with a monolithic architecture, then adopted microservices when their scale demanded it.</p>
</li>
<li><p><strong>Netflix:</strong> Started with on-premise data centers, then moved to AWS cloud infrastructure as demand exploded.</p>
</li>
</ul>
<p>None of these companies built their <strong>final, scaled architecture on day one.</strong> They scaled <strong>as needed, when needed.</strong></p>
<hr />
<h2 id="heading-key-takeaways-for-engineers"><strong>Key Takeaways for Engineers</strong></h2>
<ul>
<li><p><strong>No one architecture fits all.</strong> Start simple and evolve based on real-world demands.</p>
</li>
<li><p><strong>Scaling should be an iterative process.</strong> Monitor your system and fix what needs fixing, instead of blindly implementing every best practice.</p>
</li>
<li><p><strong>Premature optimisation can slow you down.</strong> Building for scale before you have users is an inefficient use of time and resources.</p>
</li>
<li><p><strong>Learn from real-world examples.</strong> Companies like Instagram, Twitter, and Netflix scaled <strong>over time, not overnight.</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-final-thoughts"><strong>Final Thoughts</strong></h2>
<p>If you're building a new platform, don’t get caught up in trying to <strong>perfect everything from day one</strong>. Focus on building <strong>a solid foundation</strong>, listen to your system, and <strong>let real usage guide your scaling decisions</strong>. This is what Instagram did, what major tech giants did, and what we are doing at NuShift.</p>
<p>Scaling is a <strong>journey of learning and evolution</strong>. The best approach? <strong>Implement, observe, refine, repeat.</strong> 🚀</p>
<h2 id="heading-references">References :</h2>
<p><a target="_blank" href="https://blog.bytebytego.com/p/how-instagram-scaled-its-infrastructure?ref=dailydev">https://blog.bytebytego.com/p/how-instagram-scaled-its-infrastructure?ref=dailydev</a></p>
]]></content:encoded></item><item><title><![CDATA[Dynamically Modifying CloudFront Origin for Country-Specific and A/B Testing]]></title><description><![CDATA[Hey Devs,
As a dedicated developer, I'm always on the lookout for more efficient ways to implement feature development. However, it's equally important to ensure that the features I build are genuinely beneficial for users. This is where A/B testing ...]]></description><link>https://www.internetkatta.com/dynamically-modifying-cloudfront-origin-for-country-specific-and-ab-testing</link><guid isPermaLink="true">https://www.internetkatta.com/dynamically-modifying-cloudfront-origin-for-country-specific-and-ab-testing</guid><category><![CDATA[AWS]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[ab testing]]></category><category><![CDATA[reinvent2024]]></category><category><![CDATA[serverless]]></category><category><![CDATA[CloudFront Functions]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Mon, 17 Feb 2025 04:18:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739765715399/95322441-a60a-4c81-9eda-aa2c1a84a6c0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hey Devs,</p>
<p>As a dedicated developer, I'm always on the lookout for more efficient ways to implement feature development. However, it's equally important to ensure that the features I build are genuinely beneficial for users. This is where A/B testing comes into play—it helps validate what truly resonates with users.</p>
<p>Previously, I was familiar with A/B testing using Google Analytics and custom logic to direct traffic to specific features. However, I sought a more seamless and scalable approach that didn't rely on additional infrastructure. As re:Invent approached, my focus wasn't solely on DevOps updates; I was actively searching for innovations that could simplify A/B testing and enhance feature development.</p>
<p>This led me to explore whether AWS had introduced any new capabilities to streamline this process. When I came across the recent update to Amazon CloudFront, allowing origin modifications using CloudFront Functions, I realised its potential in dynamically routing users based on key attributes like location or device type.</p>
<p>Amazon CloudFront recently introduced support for modifying the origin using <strong>CloudFront Functions</strong>, enabling developers to dynamically route requests to different origins based on user attributes such as country, device type, or other request headers. This feature unlocks new possibilities for global content delivery, including country-specific websites, A/B testing, and device-based content optimization.</p>
<p>This update immediately caught my attention as I was in the middle of preparing for the re:Cap event. It felt like the perfect opportunity to explore its real-world applications.</p>
<h2 id="heading-the-story-behind-this-update">The Story Behind This Update</h2>
<p>As part of AWS re:Invent preparations, AWS announced a Pre-re:Invent update on November 21, 2024, introducing new CloudFront capabilities. In preparation for the re:Cap event on January 6, 2025, I explored these new capabilities to showcase them in demos. When I came across this update, I decided to do a deep dive into <strong>CloudFront Functions</strong> and explore how it could be leveraged for <strong>dynamic origin selection</strong>. This led me to experiment with different use cases like country-based content delivery, A/B testing, and device-specific optimisations. Through this exploration, I realized how impactful this update could be for developers optimising global content distribution.</p>
<h2 id="heading-why-modify-cloudfront-origin-dynamically">Why Modify CloudFront Origin Dynamically?</h2>
<p>Traditionally, CloudFront distributions have a fixed origin (an S3 bucket, EC2 instance, or any HTTP endpoint). However, with CloudFront Functions, we can dynamically select the origin based on request attributes, such as:</p>
<ul>
<li><p><strong>User’s country:</strong> Serve country-specific content from different S3 buckets or servers.</p>
</li>
<li><p><strong>A/B Testing:</strong> Route a percentage of traffic to different versions of a website or application.</p>
</li>
<li><p><strong>Device Type:</strong> Serve optimised content for mobile or desktop users.</p>
</li>
</ul>
<h2 id="heading-how-it-works">How It Works</h2>
<p>CloudFront Functions allow lightweight JavaScript-based logic to run at the <strong>viewer request</strong> stage, enabling modifications to the request before it reaches the origin. One key function is <code>request.updateRequestOrigin()</code>, which allows us to change the request origin dynamically.</p>
<h2 id="heading-example-changing-origin-based-on-users-country">Example: Changing Origin Based on User’s Country</h2>
<p>For a country-specific website, we can use the <code>CloudFront-Viewer-Country</code> header to decide which S3 bucket or server should serve the request.</p>
<h3 id="heading-step-1-create-a-cloudfront-function">Step 1: Create a CloudFront Function</h3>
<p>You can create a CloudFront Function from the AWS Management Console or AWS CLI. Below is a sample function to modify the origin based on the user’s country.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739647182927/0fb2f610-d532-4e1a-a586-479d559d0f53.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-step-2-deploy-the-cloudfront-function">Step 2: Deploy the CloudFront Function</h3>
<ol>
<li><p><strong>Go to AWS CloudFront Console</strong> → Select <strong>CloudFront Functions</strong></p>
</li>
<li><p><strong>Create a new function</strong> and name it (e.g., <code>ModifyOriginBasedOnCountry</code>)</p>
<p> You can choose any runtime, but using the latest runtime is preferred.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739646750098/d5fbe98a-965b-4346-8f85-bfe5ca46cfbc.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Paste the JavaScript code</strong> and publish the function</p>
<pre><code class="lang-javascript"> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">handler</span>(<span class="hljs-params">event</span>) </span>{
     <span class="hljs-keyword">var</span> request = event.request;
     <span class="hljs-keyword">var</span> headers = request.headers;

     <span class="hljs-comment">// Extract country code from CloudFront-Viewer-Country header</span>
     <span class="hljs-keyword">var</span> country = headers[<span class="hljs-string">'cloudfront-viewer-country'</span>] ? headers[<span class="hljs-string">'cloudfront-viewer-country'</span>].value : <span class="hljs-string">'US'</span>;

     <span class="hljs-comment">// Define origin mapping based on country</span>
     <span class="hljs-keyword">var</span> origins = {
         <span class="hljs-string">'US'</span>: { <span class="hljs-attr">domainName</span>: <span class="hljs-string">'us-content.example.com'</span> },
         <span class="hljs-string">'IN'</span>: { <span class="hljs-attr">domainName</span>: <span class="hljs-string">'in-content.example.com'</span> },
         <span class="hljs-string">'UK'</span>: { <span class="hljs-attr">domainName</span>: <span class="hljs-string">'uk-content.example.com'</span> }
     };

     <span class="hljs-comment">// Default to US if no specific origin is found</span>
     <span class="hljs-keyword">var</span> newOrigin = origins[country] || origins[<span class="hljs-string">'US'</span>];

     <span class="hljs-comment">// Update the origin</span>
     request.updateRequestOrigin(newOrigin);
     <span class="hljs-keyword">return</span> request;
 }
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739646915865/03a7ad23-c119-48d1-a5b5-0fe523c0a6f9.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Associate the function</strong> with the desired CloudFront distribution at the <strong>viewer request</strong> stage.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739646970454/aa8ccbf9-819b-4db8-a015-fcfbf9599d21.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1739646975116/8e756648-c399-479a-8cf4-33d6cc564b3c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Make sure to publish the function whenever you make changes.</p>
</li>
</ol>
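<p>Before wiring the function to a distribution, you can sanity-check the routing logic locally with Node.js. The harness below is purely a local-testing assumption: it mimics the viewer-request event shape and stubs <code>updateRequestOrigin</code> (which only exists in the real CloudFront Functions runtime) so we can see which origin the handler picks:</p>

```javascript
// Same logic as the deployed function (domains are the example ones above).
function handler(event) {
  var request = event.request;
  var headers = request.headers;
  var country = headers['cloudfront-viewer-country']
    ? headers['cloudfront-viewer-country'].value
    : 'US';
  var origins = {
    US: { domainName: 'us-content.example.com' },
    IN: { domainName: 'in-content.example.com' },
    UK: { domainName: 'uk-content.example.com' }
  };
  request.updateRequestOrigin(origins[country] || origins['US']);
  return request;
}

// Local stub of the viewer-request event; updateRequestOrigin just records
// the chosen origin instead of mutating a real CloudFront request.
function fakeEvent(country) {
  var request = {
    headers: country ? { 'cloudfront-viewer-country': { value: country } } : {},
    chosenOrigin: null,
    updateRequestOrigin: function (origin) { this.chosenOrigin = origin; }
  };
  return { request: request };
}

var e = fakeEvent('IN');
handler(e);
console.log(e.request.chosenOrigin.domainName); // in-content.example.com
```

<p>Running this with a missing or unmapped country header should fall back to the US origin, which matches the default in the function.</p>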
<h3 id="heading-step-3-test-the-function">Step 3: Test the Function</h3>
<p>To test country-specific content delivery, you can use any VPN-based proxy site, such as <a target="_blank" href="https://proxyium.com/#google_vignette">https://proxyium.com</a>, to browse the website from different locations. This lets you verify that the CloudFront function is correctly routing requests based on the user's country. Alternatively, use <code>curl</code> to simulate requests from different countries by setting the <code>CloudFront-Viewer-Country</code> header.</p>
<pre><code class="lang-bash">curl -H <span class="hljs-string">"CloudFront-Viewer-Country: IN"</span> https://your-cloudfront-domain.com
</code></pre>
<p>If implemented correctly, users from different regions will be routed to the appropriate origin. Device-specific routing can be tested the same way, either with the device-emulation mode in your browser's developer tools or on different physical devices.</p>
<h2 id="heading-other-use-cases">Other Use Cases</h2>
<h3 id="heading-1-ab-testing"><strong>1. A/B Testing</strong></h3>
<p>Modify the function to route a percentage of traffic to different origins for testing purposes.</p>
<pre><code class="lang-javascript">var randomValue = Math.random();
if (randomValue &lt; 0.5) {
    request.updateRequestOrigin({ domainName: 'experiment.example.com' });
}
</code></pre>
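<p>One caveat with <code>Math.random()</code>: the same visitor can land in a different variant on every request. A common refinement, sketched here on the assumption that you have a stable identifier such as a session cookie value (the function and domain names are illustrative, not part of the CloudFront API), is to hash that identifier into a bucket so the assignment stays sticky:</p>

```javascript
// Hash a stable user id into [0, 1) so the same id always gets the same
// variant. djb2-style hash; good enough for traffic splitting, not crypto.
function bucket(userId) {
  let h = 5381;
  for (const ch of userId) {
    h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0; // keep it an unsigned 32-bit value
  }
  return h / 2 ** 32; // normalize into [0, 1)
}

// Route ~50% of users to the experiment, deterministically per user.
function chooseOrigin(userId, split = 0.5) {
  return split > bucket(userId)
    ? 'experiment.example.com'
    : 'www.example.com';
}

console.log(chooseOrigin('user-123') === chooseOrigin('user-123')); // true
```

<p>With sticky assignment, a user who lands in the experiment stays there for the whole test, which keeps your A/B metrics clean.</p>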
<h3 id="heading-2-device-based-content-routing"><strong>2. Device-Based Content Routing</strong></h3>
<p>Use the <code>User-Agent</code> header to route requests based on mobile or desktop access.</p>
<pre><code class="lang-javascript">var userAgent = headers['user-agent'].value.toLowerCase();
if (userAgent.includes('mobile')) {
    request.updateRequestOrigin({ domainName: 'mobile-content.example.com' });
}
</code></pre>
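<p>As with the country example, this rule can be exercised locally before deploying. The helper below is an illustrative standalone version of the same check, where returning <code>null</code> means "keep the distribution's default origin":</p>

```javascript
// Stand-alone version of the User-Agent routing rule (hypothetical domains).
function pickOrigin(userAgent) {
  if (userAgent.toLowerCase().includes('mobile')) {
    return { domainName: 'mobile-content.example.com' };
  }
  return null; // no override: CloudFront serves from the default origin
}

const iphone = 'Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) Mobile/15E148';
console.log(pickOrigin(iphone).domainName); // mobile-content.example.com
```

<p>Note that real-world device detection is messier (iPads in desktop mode, for example, may omit the <code>Mobile</code> token), so treat the substring check as a starting point rather than a complete solution.</p>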
<h2 id="heading-live-demo">Live demo</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=P4oI1DcFSNs">https://www.youtube.com/watch?v=P4oI1DcFSNs</a></div>
<p> </p>
<h2 id="heading-summary">Summary</h2>
<p>With the new <code>updateRequestOrigin</code> functionality in CloudFront Functions, developers can build <strong>highly customisable</strong> and <strong>dynamic</strong> content delivery strategies. Whether it's serving country-specific content, conducting A/B tests, or optimising content for different devices, this feature brings greater flexibility to AWS CloudFront.</p>
<p>🚀 <strong>Start leveraging CloudFront Functions today to optimise your content delivery!</strong></p>
<p>I hope this blog helps you to learn. Feel free to reach out to me on my Twitter handle <a class="user-mention" href="https://hashnode.com/@AvinashDalvi_">@AvinashDalvi_</a> or leave comment on the blog. Stay tuned for more learning.</p>
<h2 id="heading-references">References</h2>
<ul>
<li><a target="_blank" href="https://github.com/aws-samples/amazon-cloudfront-functions/tree/main/select-origin-based-on-country">https://github.com/aws-samples/amazon-cloudfront-functions/tree/main/select-origin-based-on-country</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Setting Up Deep Linking in Angular with AWS Amplify Hosting]]></title><description><![CDATA[Hello Devs,
As a developer, we always been in situation where the previous developer has long moved on, and you’re left with no documentation or answers. That was exactly the situation I found myself in when I started working on an Angular project th...]]></description><link>https://www.internetkatta.com/setting-up-deep-linking-in-angular-with-aws-amplify-hosting</link><guid isPermaLink="true">https://www.internetkatta.com/setting-up-deep-linking-in-angular-with-aws-amplify-hosting</guid><category><![CDATA[Amplify Hosting]]></category><category><![CDATA[Angular]]></category><category><![CDATA[amplify]]></category><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Fri, 17 Jan 2025 06:27:21 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737019169491/198a0fe0-1005-48cf-b358-6e4a62b2bdb9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>As developers, we’ve all been in situations where the previous developer has long moved on and you’re left with no documentation or answers. That was exactly the situation I found myself in when I started working on an Angular project that required deep linking for both iOS and Android. The previous developer wasn’t available, and I had to figure out how to make deep linking work in the existing codebase. Even after so many years of experience, deep linking (and how it actually works) was a new concept to me.</p>
<p>I’ll be honest: it wasn’t a smooth start. In one of our standups, my team raised a concern about this, so I took on the challenge of solving it. I was comfortable with Amplify hosting and Angular, but how deep links actually work was new territory for me. With the help of Amazon Q Developer and a lot of trial and error, I eventually managed to set everything up. Amazon Q Developer became my best buddy for solving issues I wasn’t familiar with. I do use ChatGPT, but for most AWS-related queries I turned to Amazon Q Developer because of its AWS expertise.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737020278679/ef1c6c02-b20d-4dd3-9d2c-aaf57fa51bb4.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-understanding-deep-linking-the-social-media-analogy">Understanding Deep Linking — The Social Media Analogy</h2>
<p>Before diving into the technical steps, let’s take a moment to understand <strong>deep linking</strong>. You’ve probably seen it many times when using social media apps. For example, in Instagram, WhatsApp, or Facebook, when you want to share a post or a specific piece of content, you usually find a “Share” button or an option to "Copy Link." When you tap it, you're not just sharing a URL; you’re sharing a <strong>direct link to that specific content</strong> within the app.</p>
<p>This is exactly what deep linking does: it allows an app to open specific content directly through a URL, bypassing the homepage and landing you right where you want to be. So, whether it's sharing a Facebook post, sending a WhatsApp message, or viewing an Instagram story, deep links are everywhere — and they’re crucial for user experience in mobile apps.</p>
<p>For iOS, deep links are managed through the <code>apple-app-site-association</code> file, and for Android, it's handled by the <code>assetlinks.json</code> file. These files need to be hosted properly in the root of your web server, under a <code>.well-known/</code> directory, and that’s where AWS Amplify comes into play if you are using Amplify hosting for your Angular project.</p>
<h2 id="heading-placing-the-apple-app-site-association-file-in-angular">Placing the <code>apple-app-site-association</code> File in Angular</h2>
<p>Now that we understand what deep linking is and why it’s important, let’s get into the technical part. In an Angular project, the <code>apple-app-site-association</code> file should be placed inside the <code>src/assets/.well-known/</code> directory. This allows Angular to copy the file into the build output, ensuring it gets served correctly when the app is deployed.</p>
<p>Here’s the folder structure you need to follow:</p>
<pre><code class="lang-bash">src/
  assets/
    .well-known/
      apple-app-site-association
</code></pre>
<p>And the contents of the <code>apple-app-site-association</code> file should look like this:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"applinks"</span>: {
        <span class="hljs-attr">"apps"</span>: [],
        <span class="hljs-attr">"details"</span>: [
            {
                <span class="hljs-attr">"appID"</span>: <span class="hljs-string">"TEAM_ID.BUNDLE_ID"</span>,
                <span class="hljs-attr">"paths"</span>: [<span class="hljs-string">"*"</span>]
            }
        ]
    }
}
</code></pre>
<p>This file essentially tells iOS which app should handle specific deep links and what paths are allowed. For instance, <code>*</code> means any path within the app can be opened via a deep link.</p>
<h2 id="heading-updating-angularjson-to-copy-the-file-during-build">Updating <code>angular.json</code> to Copy the File During Build</h2>
<p>At this point, the file is in the right location, but we need to make sure Angular knows to copy it to the build output when we run the build command. This can be done by modifying the <code>angular.json</code> file.</p>
<p>In the <code>assets</code> section of your <code>angular.json</code> file, you need to add an entry for the <code>.well-known</code> directory to ensure it’s copied during the build process:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"architect"</span>: {
    <span class="hljs-attr">"build"</span>: {
      <span class="hljs-attr">"options"</span>: {
        <span class="hljs-attr">"assets"</span>: [
          <span class="hljs-string">"src/favicon.ico"</span>,
          <span class="hljs-string">"src/assets"</span>,
          {
            <span class="hljs-attr">"glob"</span>: <span class="hljs-string">"**/*"</span>,
            <span class="hljs-attr">"input"</span>: <span class="hljs-string">"src/assets/.well-known"</span>,
            <span class="hljs-attr">"output"</span>: <span class="hljs-string">"/.well-known/"</span>
          }
        ]
      }
    }
  }
}
</code></pre>
<p>This ensures that when you build the app, the <code>.well-known</code> folder (and everything inside it) is copied into the root of the final build output.</p>
<p>Are we done? Not yet; a few more steps remain.</p>
<h2 id="heading-configuring-rewrite-rules-in-aws-amplify">Configuring Rewrite Rules in AWS Amplify</h2>
<p>Here’s where things got a bit tricky for me. AWS Amplify doesn’t automatically serve files like <code>apple-app-site-association</code> with the correct headers, and it also doesn’t automatically rewrite URLs to <code>.json</code> format when needed.</p>
<p>In my initial attempt, I added the rewrite rule for <code>.well-known/apple-app-site-association</code> to be rewritten to <code>.well-known/apple-app-site-association.json</code>. While this worked fine for that specific URL, the rest of the website went down for about two hours. It took me a while to realize that the order of the rewrite rules was causing the problem.</p>
<h3 id="heading-the-mistake"><strong>The Mistake</strong></h3>
<p>I had added the <code>.well-known</code> rule first, followed by other rules for the rest of the website. However, this sequence made the Amplify rewrite engine get stuck, causing an issue where the website would not load as expected. Although the <code>.well-known</code> URL was working fine, the other pages were not. It was a frustrating situation, and I realized that the sequence of the rewrite rules was the root cause of the issue.</p>
<h3 id="heading-the-solution"><strong>The Solution</strong></h3>
<p>After some troubleshooting and a bit of trial and error, I figured out that the correct order of rewrite rules was essential for everything to work properly. I deleted all the previous rules and started fresh, ensuring the rewrite rules were correctly ordered.</p>
<p>Here’s how I corrected it:</p>
<ol>
<li><p>I placed the <code>.well-known</code> rule <strong>first</strong> in the sequence.</p>
</li>
<li><p>Then, I added the rewrite rules to <strong>exclude</strong> <code>.well-known</code> and include other URLs for the rest of the website.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737018145017/882b3202-7cd7-46be-bab0-04357523c489.png" alt class="image--center mx-auto" /></p>
<p>I added a rewrite rule to ensure that requests to <code>/.well-known/apple-app-site-association</code> are correctly rewritten to <code>/.well-known/apple-app-site-association.json</code>. This step was important because iOS expects the file content to be JSON, but it requests the file without the <code>.json</code> extension.</p>
<p>I added the following rewrite rule to handle that:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"source"</span>: <span class="hljs-string">"/.well-known/apple-app-site-association"</span>, 
  <span class="hljs-attr">"target"</span>: <span class="hljs-string">"/.well-known/apple-app-site-association.json"</span>,
  <span class="hljs-attr">"status"</span>: <span class="hljs-string">"200"</span>
}
</code></pre>
<p>Additionally, I configured a default rewrite rule to handle non-asset URLs and redirect them to the main <code>index.html</code> for proper routing in the Angular app:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"source"</span>: <span class="hljs-string">"/^(?!\.well-known/)(?!.*\.(css|js|jpg|jpeg|png|gif|svg|woff|ttf|eot|ico|mp4|mp3|json)$).*"</span>,
  <span class="hljs-attr">"target"</span>: <span class="hljs-string">"/index.html"</span>,
  <span class="hljs-attr">"status"</span>: <span class="hljs-string">"200"</span>
}
</code></pre>
<p>This ensures that non-asset URLs are routed to the <code>index.html</code> page, which is typical for single-page applications (SPAs) built with Angular.</p>
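<p>Putting the pieces together, the final ordered rule list in the Amplify console ends up looking roughly like the sketch below. The order matters: the <code>.well-known</code> rule comes first, followed by the SPA catch-all. The list of asset extensions is from my setup and may need adjusting for yours:</p>
<pre><code class="lang-json">[
  {
    "source": "/.well-known/apple-app-site-association",
    "target": "/.well-known/apple-app-site-association.json",
    "status": "200"
  },
  {
    "source": "/^(?!\.well-known/)(?!.*\.(css|js|jpg|jpeg|png|gif|svg|woff|ttf|eot|ico|mp4|mp3|json)$).*",
    "target": "/index.html",
    "status": "200"
  }
]
</code></pre>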
<h2 id="heading-deploying-and-verifying">Deploying and Verifying</h2>
<p>Once everything was set up, I deployed the application to AWS Amplify. After deployment, I tested the URL where the <code>apple-app-site-association</code> file should be served, and everything worked as expected:</p>
<pre><code class="lang-bash">https://your-amplify-app-url/.well-known/apple-app-site-association
</code></pre>
<p>I verified that the file was accessible and confirmed that it was being served with the correct content type and caching headers.</p>
<h4 id="heading-ios-verification">iOS Verification</h4>
<p>To verify that the <code>apple-app-site-association</code> file is being served correctly, you can use Apple's verification tool:</p>
<pre><code class="lang-bash">https://app-site-association.cdn-apple.com/a/v1/yourdomain.com
</code></pre>
<p>This URL should return the contents of the <code>apple-app-site-association.json</code> file and show that it's being served with the correct <code>Content-Type</code> (<code>application/json</code>). If it works, your iOS deep linking is correctly configured.</p>
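<p>For reference, the <code>apple-app-site-association.json</code> being verified looks something like the minimal example below. The <code>appID</code> combines your Apple Team ID and the app’s bundle identifier; both values here are placeholders, and the <code>paths</code> pattern should be narrowed to the routes you actually want to deep link:</p>
<pre><code class="lang-json">{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "TEAMID.com.example.yourapp",
        "paths": ["*"]
      }
    ]
  }
}
</code></pre>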
<h2 id="heading-android-deep-linking-a-similar-process">Android Deep Linking — A Similar Process</h2>
<p>While this blog focuses on iOS deep linking with AWS Amplify, it’s worth mentioning that Android also requires a similar configuration. For Android, you’ll need to set up the <code>assetlinks.json</code> file in the same <code>.well-known/</code> directory. The process is very similar to what we’ve done for iOS, and once set up, deep linking will work seamlessly across both platforms.</p>
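<p>For Android, the <code>assetlinks.json</code> placed in the same <code>.well-known/</code> directory follows this general shape. The package name and SHA-256 certificate fingerprint below are placeholders; the real fingerprint comes from your app’s signing keystore:</p>
<pre><code class="lang-json">[
  {
    "relation": ["delegate_permission/common.handle_all_urls"],
    "target": {
      "namespace": "android_app",
      "package_name": "com.example.yourapp",
      "sha256_cert_fingerprints": [
        "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99"
      ]
    }
  }
]
</code></pre>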
<h2 id="heading-conclusion-a-learning-experience">Conclusion: A Learning Experience</h2>
<p>Setting up deep linking wasn’t easy, but it was incredibly rewarding. I learned how to handle iOS and Android deep linking in an Angular app, how to host the necessary files on AWS Amplify, and how to configure the right rewrite rules for proper routing. What seemed like a daunting task at first ended up being a great learning experience.</p>
<p>By following these steps, you should be able to set up deep linking for your Angular app hosted on AWS Amplify. I hope my journey helps you navigate this process more smoothly and efficiently!</p>
<p>Happy coding, and may your deep links always work perfectly!</p>
<h3 id="heading-references">References:</h3>
<ul>
<li><p><a target="_blank" href="https://dev.to/developeralamin/deep-linking-for-andriod-and-apple-in-reactjs-1hjc">https://dev.to/developeralamin/deep-linking-for-andriod-and-apple-in-reactjs-1hjc</a></p>
</li>
<li><p><a target="_blank" href="https://developer.android.com/training/app-links/deep-linking">https://developer.android.com/training/app-links/deep-linking</a></p>
</li>
<li><p><a target="_blank" href="https://yeeply.com/en/blog/mobile-app-development/deep-linking-android-ios-apps/">https://yeeply.com/en/blog/mobile-app-development/deep-linking-android-ios-apps/</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Simplified Data Masking in AWS Lambda with Powertools]]></title><description><![CDATA[Hello Devs,

“Data is the new oil,” they say, but in healthcare and finance, it’s more like nitroglycerin—immensely valuable, yet dangerously explosive if mishandled.”

I recently shared on social media that I've joined the healthcare industry. This ...]]></description><link>https://www.internetkatta.com/simplified-data-masking-in-aws-lambda-with-powertool</link><guid isPermaLink="true">https://www.internetkatta.com/simplified-data-masking-in-aws-lambda-with-powertool</guid><category><![CDATA[lambda]]></category><category><![CDATA[powertools]]></category><category><![CDATA[Python]]></category><category><![CDATA[serverless]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Lambda function]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Fri, 03 Jan 2025 18:44:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736168256271/8963631d-82f3-4b8c-8da0-b20c9eac34bd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<blockquote>
<p>“Data is the new oil,” they say, but in healthcare and finance it’s more like nitroglycerin: immensely valuable, yet dangerously explosive if mishandled.</p>
</blockquote>
<p>I recently shared on social media that I've joined the healthcare industry. This marks a shift from my background in both e-commerce and finance. During my time in finance, I dealt extensively with highly sensitive data like bank statements, KYC information, personal identification details, and other financial records. I vividly recall understanding that even a minor data handling error could have severe repercussions: breaches, fines, and, most importantly, a loss of public trust. This same level of data sensitivity exists in healthcare, where every piece of patient information is crucial and protected by regulations like GDPR and HIPAA. To comply with these regulations and similar ones worldwide, data masking has become essential. In this blog, I’ll break down what Powertools is, how it can be used for data masking in AWS Lambda, and why it’s critical for domains like finance and healthcare. Let's dive into Powertools for AWS Lambda.</p>
<h2 id="heading-what-is-powertools-for-aws-lambda"><strong>What is Powertools for AWS Lambda?</strong></h2>
<p>Think of <strong>Powertools for AWS Lambda</strong> as a Swiss Army knife for serverless applications. It’s an open-source library that helps you write better, more secure, and more maintainable code. Instead of reinventing the wheel every time you need to mask data, log securely, or handle retries, Powertools provides ready-to-use utilities.</p>
<h4 id="heading-key-features-of-powertools-for-aws-lambda"><strong>Key Features of Powertools for AWS Lambda</strong></h4>
<p>Powertools offers a robust set of features to simplify serverless development:</p>
<ul>
<li><p>Tracer</p>
</li>
<li><p>Logger</p>
</li>
<li><p>Metrics</p>
</li>
<li><p>Event Handler</p>
</li>
<li><p>Parameters</p>
</li>
<li><p>Batch Processing</p>
</li>
<li><p>Typing</p>
</li>
<li><p>Validation</p>
</li>
<li><p>Event Source Data Classes</p>
</li>
<li><p>Parser (Pydantic)</p>
</li>
<li><p>Idempotency</p>
</li>
<li><p>Data Masking <em>(Focus of this blog)</em></p>
</li>
<li><p>Feature Flags</p>
</li>
<li><p>Streaming</p>
</li>
<li><p>Middleware Factory</p>
</li>
<li><p>JMESPath Functions</p>
</li>
<li><p>CloudFormation Custom Resources</p>
</li>
</ul>
<p>In this blog, we are going to explore <strong>Data Masking</strong> usage in detail. For data masking, Powertools ensures that only necessary data appears in your logs.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735881923883/882e6bc3-54d6-45e9-a11d-2c92eeab62e0.png" alt class="image--center mx-auto" /></p>
<p><em>Image taken from official website.</em></p>
<h2 id="heading-why-is-data-masking-important-in-aws-lambda"><strong>Why is Data Masking Important in AWS Lambda?</strong></h2>
<p>When dealing with serverless functions, especially in industries like healthcare and finance:</p>
<ul>
<li><p>Logs are often sent to systems like CloudWatch.</p>
</li>
<li><p>Sensitive information (like SSNs, credit card numbers, or medical IDs) can easily end up in plaintext logs.</p>
</li>
<li><p>Compliance standards (HIPAA for healthcare, PCI-DSS for finance) strictly prohibit exposing such data.</p>
</li>
</ul>
<p>This is where <strong>Powertools for AWS Lambda</strong> comes into play.</p>
<h2 id="heading-how-to-use-powertools-for-aws-lambda-for-data-masking"><strong>How to Use Powertools for AWS Lambda for Data Masking</strong></h2>
<p>Let’s break this down step by step. I am using Python here because it is widely used and it is what I work with in my day-to-day activities.</p>
<p><strong>1. Install Powertools</strong></p>
<pre><code class="lang-plaintext">pip install aws-lambda-powertools // use pip3 for Python3 
pip install jsonpath_ng // if you get issue for jsonpath_ng. This mostly require for below example
pip install ply // if you get issue for ply. This mostly require for below example
</code></pre>
<p><strong>2. Use the Data Masking Utility (erase, encrypt, decrypt)</strong></p>
<p>In this example we use the <code>erase</code> method to mask data. Powertools for AWS Lambda offers three primary functions for data masking:</p>
<ul>
<li><p><strong>erase</strong>: Removes sensitive data fields completely.</p>
</li>
<li><p><strong>encrypt</strong>: Encrypts sensitive data fields.</p>
</li>
<li><p><strong>decrypt</strong>: Decrypts previously encrypted fields.</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> __future__ <span class="hljs-keyword">import</span> annotations

<span class="hljs-keyword">from</span> aws_lambda_powertools <span class="hljs-keyword">import</span> Logger
<span class="hljs-keyword">from</span> aws_lambda_powertools.utilities.data_masking <span class="hljs-keyword">import</span> DataMasking
<span class="hljs-keyword">from</span> aws_lambda_powertools.utilities.typing <span class="hljs-keyword">import</span> LambdaContext

logger = Logger()
data_masker = DataMasking()


<span class="hljs-meta">@logger.inject_lambda_context</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lambda_handler</span>(<span class="hljs-params">event: dict, context: LambdaContext</span>) -&gt; dict:</span>
    data: dict = event.get(<span class="hljs-string">"body"</span>, {})

    logger.info(<span class="hljs-string">"Erasing fields: email, address.street, company_address, aadhar, diagnosis, blod_group"</span>)

    erased = data_masker.erase(data, fields=[<span class="hljs-string">"email"</span>, <span class="hljs-string">"address.street"</span>, <span class="hljs-string">"company_address"</span>, <span class="hljs-string">"aadhar"</span>, <span class="hljs-string">"diagnosis"</span>, <span class="hljs-string">"blod_group"</span>])

    <span class="hljs-keyword">return</span> erased
</code></pre>
<p>In this example, sensitive fields like the Aadhaar number, blood group, and diagnosis are masked while the logs remain useful for debugging.</p>
<pre><code class="lang-json">Response:
{
  <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
  <span class="hljs-attr">"name"</span>: <span class="hljs-string">"Avinash Dalvi"</span>,
  <span class="hljs-attr">"age"</span>: <span class="hljs-number">30</span>,
  <span class="hljs-attr">"email"</span>: <span class="hljs-string">"*****"</span>,
  <span class="hljs-attr">"address"</span>: {
    <span class="hljs-attr">"street"</span>: <span class="hljs-string">"*****"</span>,
    <span class="hljs-attr">"city"</span>: <span class="hljs-string">"Bengaluru"</span>,
    <span class="hljs-attr">"state"</span>: <span class="hljs-string">"KA"</span>,
    <span class="hljs-attr">"zip"</span>: <span class="hljs-string">"211311"</span>
  },
  <span class="hljs-attr">"diagnosis"</span>: <span class="hljs-string">"*****"</span>,
  <span class="hljs-attr">"blod_group"</span>: <span class="hljs-string">"*****"</span>,
  <span class="hljs-attr">"aadhar"</span>: <span class="hljs-string">"*****"</span>,
  <span class="hljs-attr">"company_address"</span>: <span class="hljs-string">"*****"</span>
}
</code></pre>
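<p>To make the idea concrete, here is a minimal pure-Python sketch of what field erasure does conceptually. This is <em>not</em> the Powertools implementation (which resolves fields via <code>jsonpath_ng</code> and supports richer expressions); it only handles plain keys and dotted paths into nested dicts, and the <code>MASK</code> constant is my own placeholder:</p>
<pre><code class="lang-python">MASK = "*****"

def erase(data, fields):
    # Copy the top level, and copy nested dicts one level deep,
    # so the caller's input is not mutated.
    result = {k: (dict(v) if isinstance(v, dict) else v) for k, v in data.items()}
    for field in fields:
        parts = field.split(".")
        node = result
        # Walk intermediate keys for dotted paths like "address.street".
        for part in parts[:-1]:
            node = node.get(part, {}) if isinstance(node, dict) else {}
        # Replace the final key with the mask only if it exists.
        if isinstance(node, dict) and parts[-1] in node:
            node[parts[-1]] = MASK
    return result

record = {
    "name": "Avinash",
    "email": "a@example.com",
    "address": {"street": "MG Road", "city": "Bengaluru"},
}
masked = erase(record, ["email", "address.street"])
# masked["email"] and masked["address"]["street"] are now "*****";
# "city" and the original record are left untouched.
</code></pre>
<p>Running this against a payload shaped like the response above masks only the listed fields, which is exactly the behavior the real <code>erase</code> call gives you without the hand-rolled code.</p>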
<p>To use <code>encrypt</code> method refer this official documentation <a target="_blank" href="https://docs.powertools.aws.dev/lambda/python/latest/utilities/data_masking/#encrypting-data">https://docs.powertools.aws.dev/lambda/python/latest/utilities/data_masking/#encrypting-data</a> and for <code>decrypt</code> method use this <a target="_blank" href="https://docs.powertools.aws.dev/lambda/python/latest/utilities/data_masking/#decrypting-data">https://docs.powertools.aws.dev/lambda/python/latest/utilities/data_masking/#decrypting-data</a></p>
<p>In this way, Powertools does more than mask data: its metrics and tracing utilities also help you demonstrate compliance with industry standards like GDPR and HIPAA.</p>
<h2 id="heading-real-world-use-case-healthcare-application-logs"><strong>Real-World Use Case: Healthcare Application Logs</strong></h2>
<p>Imagine a healthcare Lambda function that processes insurance claims. Without masking, logs might show:</p>
<pre><code class="lang-python">INFO: Processing claim <span class="hljs-keyword">for</span> Patient: Avinash Dalvi, Aadhar ID : <span class="hljs-number">1233</span><span class="hljs-number">-2432</span><span class="hljs-number">-2233</span>, Blood Group : O+
</code></pre>
<p>With Powertools:</p>
<pre><code class="lang-python">INFO: Processing claim <span class="hljs-keyword">for</span> Patient: Avinash Dalvi, Aadhar ID : *****, Blood Group : *****
</code></pre>
<p>A tiny change, but one that can save millions in potential fines and, more importantly, protect lives.</p>
<h2 id="heading-how-it-impacts-healthcare-and-finance"><strong>How It Impacts Healthcare and Finance</strong></h2>
<ul>
<li><p><strong>Healthcare:</strong> Ensures compliance with HIPAA by preventing PHI (Protected Health Information) leaks.</p>
</li>
<li><p><strong>Finance:</strong> Aligns with PCI-DSS guidelines to prevent exposure of payment details.</p>
</li>
<li><p><strong>Audit Readiness:</strong> Masked logs are audit-friendly while maintaining transparency for debugging.</p>
</li>
</ul>
<p>In the fast-growing world of serverless architectures, Powertools for AWS Lambda isn’t just a tool; it’s a shield that protects sensitive data and keeps your workloads compliant. Whether you are handling medical records, financial transactions, or personal user data, integrating Powertools ensures you are not just compliant but also responsible.</p>
<p>As I continue building solutions in the healthcare space, Powertools has become my go-to companion, making critical use cases easier, safer, and scalable.</p>
<p><em>Have you faced similar challenges in your Serverless journey? Share your thoughts below, and let’s keep building secure and responsible applications together.</em></p>
<p>I hope this blog helps you learn something new. Feel free to reach out to me on my Twitter handle @AvinashDalvi_ or leave a comment on the blog. Stay tuned for more on data masking using Powertools.</p>
<h2 id="heading-references">References:</h2>
<ul>
<li><p><a target="_blank" href="https://docs.powertools.aws.dev/lambda/python/latest/utilities/data_masking/">https://docs.powertools.aws.dev/lambda/python/latest/utilities/data_masking/</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/aws-powertools/powertools-lambda-python">https://github.com/aws-powertools/powertools-lambda-python</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Manifesting Milestones: My Transformative Journey Through 2024]]></title><description><![CDATA[My journey in 2024 was paved long before the year began; it started in the final months of 2023. I am a firm believer in the power of the subconscious mind, and in those reflective moments, I planted two seeds deep within it: In 2024, I will deliver ...]]></description><link>https://www.internetkatta.com/manifesting-milestones-my-transformative-journey-through-2024</link><guid isPermaLink="true">https://www.internetkatta.com/manifesting-milestones-my-transformative-journey-through-2024</guid><category><![CDATA[journey]]></category><category><![CDATA[2024]]></category><category><![CDATA[learning]]></category><category><![CDATA[Manifestation]]></category><category><![CDATA[Career]]></category><category><![CDATA[Experience ]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Tue, 31 Dec 2024 03:48:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1735616124156/ef0d3524-aaa6-42b0-8a24-34922546f86d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>My journey in 2024 was paved long before the year began; it started in the final months of 2023. I am a firm believer in the power of the subconscious mind, and in those reflective moments, I planted two seeds deep within it: <em>In 2024, I will deliver an international talk, and I will speak at an official AWS event.</em></p>
<p>I nurtured these thoughts daily, visualizing them as already achieved, allowing them to guide my actions and intentions. Alongside this, I began preparing myself for the role of a Staff Engineer. Little did I know, the universe was already listening. By the end of 2024, I found myself not just as a Staff Engineer, but as a <strong>Senior Staff Engineer at NuShift</strong>.</p>
<p>This journey wasn't without its challenges, but every step was purposeful, every setback was a lesson, and every small victory was a milestone on the path I had envisioned. This year wasn't just about professional achievements; it was about believing in the unseen, trusting the process, and showing up every single day with intention and resilience.</p>
<h2 id="heading-the-staff-engineer-dream-a-road-paved-with-failures-and-growth"><strong>The Staff Engineer Dream: A Road Paved with Failures and Growth</strong></h2>
<p>One of my key aspirations for 2024 was to transition into a Staff Engineer role. But the journey wasn't linear. I faced multiple rejections during interviews, and each one felt like a heavy blow to my confidence. I remember sitting quietly after one particularly tough rejection, questioning whether I was truly cut out for this path. But amidst those moments of doubt, I reminded myself why I started—to grow, to learn, and to contribute at a higher level.</p>
<p>To ensure I was prepared, I went through countless blogs from people who had walked this path before me. I read <em>The Staff Engineer's Path</em> and immersed myself in videos, interviews, and success stories. I wanted to understand not just the technical expectations but the mindset, leadership qualities, and the varying interpretations of the role across different companies. I didn't want to step into the role unprepared; I wanted to do justice to it.</p>
<p>Slowly, I began treating each rejection not as a failure, but as feedback. I revisited my preparation, identified weak spots, and sought guidance from mentors who had walked this path before me. Every setback became a stepping stone, every doubt became a question to answer, and every small improvement became a reason to keep going. The road was tough, but it shaped me in ways success never could. I leaned on my experiences, sought mentorship, and slowly started building a roadmap for myself, not just professionally but emotionally. The road is still being paved, but the lessons from those moments will forever remain etched in my journey.</p>
<h2 id="heading-breaking-the-habit-of-context-switching-the-power-of-deep-work"><strong>Breaking the Habit of Context Switching: The Power of Deep Work</strong></h2>
<p>For years, I had a habit of multitasking and frequently switching contexts. While it seemed productive on the surface, it often left me exhausted, overwhelmed, and surrounded by unfinished tasks. But in 2024, I stumbled upon the book <em>Deep Work</em> by Cal Newport, and it completely changed my perspective.</p>
<p>The book taught me the value of focus and uninterrupted work sessions. I'm not saying I've perfected this habit, but something has shifted. I've started noticing improvements in how I manage my tasks, focus on one thing at a time, and make meaningful progress without feeling perpetually drained.</p>
<p>In 2025, I aim to deepen this habit even further—to strike a better balance between my professional and community contributions, and to ensure every effort counts.</p>
<h2 id="heading-mentorship-and-judging-0xday-hackathon-pondicherry"><strong>Mentorship and Judging: 0x.Day Hackathon, Pondicherry</strong></h2>
<p>On December 29th and 30th, I had the incredible opportunity to be a mentor and a final panel judge at the prestigious <a target="_blank" href="https://hack.0x.day/">0x.Day Hackathon</a> in Pondicherry. With over 500 participants and 130 teams, the energy was electric. Personally evaluating 27 teams was no small feat, but it was equally rewarding.</p>
<p>During the hackathon, I was amazed by several projects focused on social and community impact. One team worked on creating sign language translations from YouTube or any video content, breaking barriers for hearing-impaired individuals. Another team built a voice-hearing application for people who can't listen, offering them a new way to interact with the world. A third project aimed to create an inclusive healthcare ecosystem, ensuring accessibility and care for underprivileged communities. There was even an innovative idea that focused on transforming negative or unclear speech into positive, meaningful communication.</p>
<p>Each of these projects left a lasting impression on me, not just because of their technical ingenuity, but because of the compassion and purpose driving their creators. It reminded me that technology isn't just about innovation; it's about creating meaningful change in people's lives. These young minds showed courage, creativity, and a deep sense of responsibility, and I felt incredibly proud to have been a part of their journey.</p>
<p>In moments like these, I was reminded why events like hackathons matter. It's not just about flawless execution; it's about courage, creativity, and the willingness to try. I felt proud to contribute to nurturing the next generation of builders, one team at a time.</p>
<h2 id="heading-international-talks-a-dream-realised-almost"><strong>International Talks: A Dream Realised (Almost)</strong></h2>
<p>2024 also marked a significant milestone: my first international talk at FOSS Asia. Standing on an international stage, sharing knowledge, and interacting with a global audience was an experience unlike any other. Out of 26 talk submissions this year, three were accepted internationally. However, I could only attend one. The missed opportunities weighed on me initially, but I reminded myself that sometimes, the universe has other plans.</p>
<p>On the brighter side, I successfully delivered four impactful talks at conferences, each one carefully crafted with a storytelling approach to ensure every attendee left with not just insights but inspiration.</p>
<h2 id="heading-community-building-amplify-mautic-and-beyond"><strong>Community Building: Amplify, Mautic, and Beyond</strong></h2>
<p>This year was also deeply rooted in community contributions. I played a significant role in organizing the <strong>first-ever Mautic Conference in India</strong>, an event that brought together 100+ attendees to share ideas, best practices, and future possibilities for open-source marketing automation.</p>
<p>In parallel, I served as a <strong>supporting organizer for the 30 Days of Amplify event</strong>, hosted by AWS User Group India. Coordinating sessions, engaging participants, and ensuring seamless execution reaffirmed my love for community building and the ripple effect it creates.</p>
<p>I also stepped into the <strong>Mautic community supporting lead role</strong>, a position I was both excited and anxious about. Despite my best intentions, I often felt that I couldn't give the role the justice it deserved. There were moments when I considered stepping away entirely, but I reminded myself why I took on the responsibility in the first place. I continued to contribute in whatever capacity I could, even if it felt small at times. In 2025, I am determined to show up better, contribute more meaningfully, and truly justify the trust placed in me by the community.</p>
<h2 id="heading-fitness-and-well-being-small-habits-big-changes"><strong>Fitness and Well-being: Small Habits, Big Changes</strong></h2>
<p>This year, I also focused on staying active and maintaining my well-being. Whether it was playing cricket, hitting the gym, walking, or enjoying a quick game of table tennis, these activities didn't just keep me fit—they kept me happy. They became moments of pause in an otherwise busy schedule, reminding me that while work is important, keeping the mind relaxed is even more essential.</p>
<p>I also implemented a few small but impactful habits: setting a consistent bedtime, reading before sleep, and ensuring my mobile internet was off by 9:30 PM. Although there were occasional exceptions, I mostly stuck to these habits, and they made a significant difference.</p>
<p>This focus on fitness and mental well-being stems from a deeply personal realization. After being admitted to the ICU during the COVID-19 pandemic, I promised myself that I would prioritize my health. Over the past few years, these small initiatives have helped me stay active, reduce stress, and bring a better version of myself to both my professional and personal life.</p>
<p>In 2025, I aim to continue these habits and refine them further to maintain a balance between physical health, mental well-being, and professional excellence.</p>
<h2 id="heading-youtube-journey-consistency-and-growth"><strong>YouTube Journey: Consistency and Growth</strong></h2>
<p>In 2024, I focused on building consistency on my <strong>YouTube channel - Learn with Avinash Dalvi</strong>. I committed to uploading videos <strong>every Tuesday and Saturday</strong>, and this routine brought both discipline and growth to my channel.</p>
<p>📊 <strong>2024 YouTube Milestones:</strong></p>
<ul>
<li><p><strong>1,054 New Subscribers</strong></p>
</li>
<li><p><strong>82K Views</strong></p>
</li>
<li><p><strong>101 Uploads</strong></p>
</li>
<li><p><strong>2,272 Likes</strong></p>
</li>
<li><p><strong>58 Comments</strong></p>
</li>
<li><p><strong>546 Shares</strong></p>
</li>
</ul>
<p>This journey wasn't just about numbers; it was about showing up consistently, sharing knowledge, and building a connection with the audience. In 2025, I plan to keep this momentum going and continue delivering valuable content.</p>
<h2 id="heading-aws-reinvent-nominations-a-moment-of-pride"><strong>AWS re:Invent Nominations: A Moment of Pride</strong></h2>
<p>One of the proudest moments of 2024 was being <strong>nominated in two categories at the AWS re:Invent APJ Community Awards Night</strong>—'Invent &amp; Simplify' and 'Deliver Results.' While I couldn't personally attend the event and didn't win the award, the nomination itself felt like a significant achievement.</p>
<p>Being recognized at such a prestigious platform reassured me that my contributions are making an impact. It felt like a signal to keep going, to keep contributing, and to continue sharing knowledge with the community. This nomination wasn't just an accolade; it was a reminder that consistency and dedication never go unnoticed.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1735616739456/cfa59245-aa03-4c88-8c7a-f9b8ce6e50ee.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-reflections-the-emotional-side-of-the-journey"><strong>Reflections: The Emotional Side of the Journey</strong></h2>
<p>Beyond the technical accomplishments, this year was also about embracing my vulnerabilities. I realized the power of pauses while speaking, the importance of balancing ambition with self-care, and the humility in acknowledging limitations.</p>
<p>There were moments of self-doubt, burnout, and disappointment. But every time I felt stuck, I reminded myself of why I started—to build, to share, to contribute, and to inspire.</p>
<h2 id="heading-a-special-thanks-to-my-strong-pillars-trupti-and-advit"><strong>A Special Thanks to My Strong Pillars: Trupti and Advit</strong></h2>
<p>None of this would have been possible without the unwavering support of my wife, <strong>Trupti</strong>, and my child, <strong>Advit</strong>. They are my strongest pillars, my motivation, and the reason I can step out into the world with confidence and purpose. Their encouragement, sacrifices, and love have been the silent force behind every achievement this year.</p>
<h2 id="heading-looking-ahead-a-vision-for-2025"><strong>Looking Ahead: A Vision for 2025</strong></h2>
<p>As I step into 2025, I carry with me the lessons, the wins, and even the scars from 2024. I look forward to growing deeper into my technical expertise and continuing to give back to the community. I am also excited to explore unexplored areas like <strong>AI/ML, Big Data, and Blockchain</strong>. These fields demand more learning, experimentation, and sharing, and I am ready to embrace that challenge.</p>
<p>Stay tuned to my <strong>blog</strong> and <a target="_blank" href="https://www.youtube.com/channel/UC9X-0OwTK4PdbWronkNwXSA/"><strong>YouTube channel</strong></a> for updates as we continue to <strong><em>Learn Together, Grow Together.</em></strong> There are also a few initiatives I'm quietly working on, but I'll reveal them when the time is right.</p>
<p>If there's one takeaway from this year, it's this:</p>
<blockquote>
<p><em>"Success isn't always about the applause at the end; it's about showing up, day after day, even when no one's watching."</em></p>
</blockquote>
<p>Thank you, 2024, for being everything I didn't expect but everything I needed.</p>
<p>Here's to 2025—another year of growth, impact, and storytelling.</p>
<p>What’s your reflection? Share it in the comments; I would love to read your story too. Every person has a story to tell.</p>
]]></content:encoded></item><item><title><![CDATA[A Decade of AWS Lambda and ECS: My Journey of Growth and Gratitude]]></title><description><![CDATA[Hello Devs,
As we approach the 10th anniversary of AWS Lambda and Amazon Elastic Container Service (ECS) on 14th November 2024, I find myself reflecting on how these two revolutionary services have not only transformed cloud computing but have also p...]]></description><link>https://www.internetkatta.com/a-decade-of-aws-lambda-and-ecs-my-journey-of-growth-and-gratitude</link><guid isPermaLink="true">https://www.internetkatta.com/a-decade-of-aws-lambda-and-ecs-my-journey-of-growth-and-gratitude</guid><category><![CDATA[lambda]]></category><category><![CDATA[ECS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AWS Community]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Sat, 09 Nov 2024 00:11:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1731110940158/74168459-ee1c-4ced-af66-cadbd5900781.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>As we approach the 10th anniversary of AWS Lambda and Amazon Elastic Container Service (ECS) on 14th November 2024, I find myself reflecting on how these two revolutionary services have not only transformed cloud computing but have also profoundly shaped my career and passion for sharing knowledge. This milestone feels like the perfect moment to express my gratitude for the journey these services have enabled me to embark on, evolving from curious exploration to becoming an advocate within the AWS community.</p>
<h3 id="heading-discovering-aws-lambda-from-curiosity-to-clarity">Discovering AWS Lambda: From Curiosity to Clarity</h3>
<p>It all began in 2016 when I arrived in Bangalore, excited but unsure of where my tech journey would lead. Coming from a traditional background of managing servers with EC2, the concept of Serverless was completely new. Then came Lambda—a name I initially associated with math, not with revolutionising my approach to development.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731071968365/fc9a2b3a-97c8-4b2a-904f-c33aae03e2d2.png" alt class="image--center mx-auto" /></p>
<p>Lambda was introduced to me as "the way forward" for an upcoming project during a project meeting when I was at KNAB Finance. I remember feeling puzzled: how could we run code without managing servers? But as I dove into Lambda, my confusion turned to clarity. Lambda’s ability to handle massive traffic loads, scale automatically, and charge only for what was used completely changed the way I viewed application development. Suddenly, infrastructure was no longer a limiting factor; instead, it was an enabler, allowing me to focus solely on creating, experimenting, and scaling.</p>
<p>Lambda didn’t just simplify architecture; it shifted my perspective on innovation. This new approach empowered me to experiment with microservices, scaling applications effortlessly. I found myself fascinated by its flexibility and cost-efficiency, and soon enough, I became a regular speaker on Lambda, sharing this newfound knowledge with other developers via blogs. Each session deepened my connection with the AWS community and inspired me to continue exploring what Serverless could do. Lambda became my go-to service for any research.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731110238002/83003302-ffe3-4fc8-b5b6-2581326709f6.jpeg" alt class="image--center mx-auto" /></p>
<h3 id="heading-embracing-ecs-and-the-power-of-containers">Embracing ECS and the Power of Containers</h3>
<p>While Lambda laid the foundation for Serverless architecture, my journey with containers began in 2021 with ECS, when I joined Eagleview. ECS introduced me to container orchestration and allowed me to deploy applications with more control and customisation. Through Fargate, AWS made it possible to run containers without managing servers, aligning with my passion for Serverless solutions.</p>
<p>I began experimenting with ECS for real-world applications, diving deep into concepts like debugging, hosting WordPress, and lazy-loading container images. These experiences led to articles like <a target="_blank" href="https://www.internetkatta.com/debugging-into-aws-ecs-task-containers-what-you-need-to-know">Debugging AWS ECS Task Containers</a>, <a target="_blank" href="https://www.internetkatta.com/host-wordpress-on-aws-ecs-using-fargate">Hosting WordPress on ECS Fargate</a>, and <a target="_blank" href="https://www.internetkatta.com/seekable-oci-lazy-loading-container-images-on-ecs-and-fargate">OCI Lazy Loading with ECS and Fargate</a>. These pieces connected with readers worldwide, and my YouTube video on <a target="_blank" href="https://www.youtube.com/watch?v=fGz5znsEHpE">WordPress on ECS Fargate</a> reached nearly 4,000 views, reinforcing the power of ECS to solve complex challenges simply and effectively.</p>
<p>One of my most memorable sessions at Serverless Days Bengaluru 2024, <strong><em>Serverless Sherlock</em></strong>, showcased ECS Fargate’s Serverless capabilities, allowing me to discuss advanced debugging techniques and troubleshooting strategies with the community. Building on this experience, I am eager to explore further innovations and share more insights on how these technologies can be leveraged to solve complex challenges efficiently.</p>
<h3 id="heading-building-sharing-and-giving-back-to-the-community">Building, Sharing, and Giving Back to the Community</h3>
<p>As my knowledge of Lambda and ECS expanded, so did my commitment to share it. In 2021, I became an AWS Community Builder, a role that has allowed me to share my expertise more widely and connect with other developers and learners. From blogs to Stack Overflow answers, talks, and webinars, I’ve tried to make these technologies more accessible and relatable. My blog post on using <a target="_blank" href="https://www.internetkatta.com/how-to-use-secrets-manager-in-aws-lambda-node-js">AWS Secrets Manager with Lambda</a> gathered over 65,000+ views.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1731109723045/d75610bf-983e-423e-864c-63d706b5ca5b.png" alt class="image--center mx-auto" /></p>
<p>AWS has not only been a set of services—it’s been a supportive community, a source of endless learning, and a platform for growth. The journey has been inspiring, and every new feature or capability AWS releases is an opportunity to learn, share, and give back. With the community's encouragement, my writing took off, and articles like <a target="_blank" href="https://www.internetkatta.com/navigating-streamlined-docker-container-deployment-on-aws">Navigating Docker Container Deployment on AWS</a> resonated with developers looking to simplify their AWS journeys.</p>
<h3 id="heading-a-heartfelt-thanks-to-lambda-ecs-and-aws">A Heartfelt Thanks to Lambda, ECS, and AWS</h3>
<p>To AWS Lambda and ECS: thank you. You have not only simplified complex challenges but have also opened the doors to innovation, allowing me to think bigger and build smarter. Your flexibility, scalability, and the freedom you offer have been instrumental in shaping my career and my commitment to the AWS community.</p>
<p>Here’s to Lambda, ECS, and the endless possibilities they represent. Thank you for being a vital part of my journey and for inspiring countless developers to build, innovate, and share. Here's to the next decade of AWS—and the transformative power it brings to us all.</p>
<h3 id="heading-your-turn-to-share"><strong>Your Turn to Share</strong></h3>
<p>As we celebrate a decade of AWS Lambda and ECS, I invite you to reflect on your own journey with these transformative technologies. Here are a few questions to ponder:</p>
<ul>
<li><p>How has AWS Lambda or ECS impacted your development process?</p>
</li>
<li><p>What challenges have you faced, and how did you overcome them?</p>
</li>
<li><p>What features do you find most beneficial, and why?</p>
</li>
</ul>
<p>I would love to hear from you too. I encourage you to share your thoughts and experiences in the comments below or on social media using the hashtag #MyAWSJourney. Your insights could inspire others and contribute to our growing community of AWS enthusiasts.</p>
<h3 id="heading-what-next">What's next?</h3>
<p>Looking ahead, what do you think the future holds for serverless computing and container orchestration? Share your predictions and let's discuss how these technologies might evolve in the next decade.</p>
<h3 id="heading-references">References:</h3>
<ul>
<li><a target="_blank" href="https://aws.amazon.com/serverless/10th-anniversary/">https://aws.amazon.com/serverless/10th-anniversary/</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Building Resilient GenAI pipeline with Open-source AI Gateway]]></title><description><![CDATA[Hello Devs,
Generative AI is rapidly gaining traction, with companies eager to integrate it into their workflows and drive product innovation. As its popularity soars, numerous LLM models and Generative AI companies are emerging. However, this surge ...]]></description><link>https://www.internetkatta.com/building-resilient-genai-pipeline-with-open-source-ai-gateway</link><guid isPermaLink="true">https://www.internetkatta.com/building-resilient-genai-pipeline-with-open-source-ai-gateway</guid><category><![CDATA[PortKey]]></category><category><![CDATA[AI]]></category><category><![CDATA[genai]]></category><category><![CDATA[Model]]></category><category><![CDATA[multimodel ai]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[gateway]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Mon, 28 Oct 2024 05:45:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730094321106/40a77699-f7ba-485a-b3b4-409c6360f62c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs,</p>
<p>Generative AI is rapidly gaining traction, with companies eager to integrate it into their workflows and drive product innovation. As its popularity soars, numerous LLM models and Generative AI companies are emerging. However, this surge in interest brings its own set of challenges. Companies face difficulties in managing large models on their infrastructure, caching results, handling credentials, and dealing with new-age queries (prompts). Most critically, they struggle with managing a diverse array of AI models, each with its unique structure and communication format. This complexity makes it challenging to establish failover mechanisms and efficiently switch between models, leading to significant time and resource investments.</p>
<h2 id="heading-introducing-portkey-open-source-ai-gateway">Introducing Portkey open-source AI Gateway:</h2>
<p><a target="_blank" href="http://portkey.ai">Portkey AI Gateway</a> is an open-source AI Gateway that simplifies managing generative AI workflows. This powerful tool supports multiple large language models (LLMs) from various providers, along with different data formats (multimodal). By acting as a central hub, Portkey streamlines communication and simplifies integration, improving your application's reliability, cost-effectiveness, and accuracy.</p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcVHdrGL_aWpuuE7-e612P3_CVhah5-EOjX490wwZT5UmJuUxBfjLohrtg3gaNf8vl6LFX6OeoVPQSqbciSsSC9zx0PNh_VPMqNokjw27GrT27Zkzf8bUhCpj2_qifmbPUcljGWkTUnb5w9bg2ukIqKBLc?key=_l91YjqQx1jxUIzcPo4ceQ" alt /></p>
<h2 id="heading-getting-started-with-portkey-ai-gateway">Getting Started with PortKey AI Gateway</h2>
<p>Getting started with PortKey AI Gateway involves a few simple steps.</p>
<ol>
<li><p>Open <a target="_blank" href="https://github.com/Portkey-AI/gateway">https://github.com/Portkey-AI/gateway</a> and follow the instructions, or launch the gateway from your terminal. Make sure Node.js is installed on your machine. </p>
</li>
<li><p>Hit <a target="_blank" href="http://localhost:8787">http://localhost:8787</a> in the browser. </p>
</li>
</ol>
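<p>For reference, the launch command mentioned in step 1 is typically the one-liner below. This is assumed from the Portkey-AI/gateway README; check the repository for the current command, as it may change between releases.</p>
<pre><code class="lang-bash"># Starts the local gateway on http://localhost:8787 (requires Node.js)
npx @portkey-ai/gateway
</code></pre>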
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdPS7O5cTXCLsPSQjm4sTqlHRPSBQl70OF_zqxoUH5jXLJs9WWxp78UlaOy8RJTcKSiroMlUYUAeYRUziWJUYDmXJkiTT_3JPxqKUtbY7hA15f2XIWw7st2tZGRqBNRfexPtSrUb3XmcbcaHxR-M-FB9bA?key=_l91YjqQx1jxUIzcPo4ceQ" alt /></p>
<ol start="3">
<li>That’s it! The server is up; the next step is to test out a Large Language Model (LLM). You can pick any LLM from the <a target="_blank" href="https://github.com/Portkey-AI/gateway?tab=readme-ov-file#supported-providers.">list</a>. </li>
</ol>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdNWPo6ChtrhMXsZ0DMHam_yvsLVGp0n0LeB_rB2BqMprGOb9CXcGHP5ODq4X-Vq9a8Yy8M_JXl9YNSvwowrNQJ6tDcAqiuJFdA_0ygOT3SPXTeq8ZX7cWf1VIL-XLXgHSV8_GMnQU_TEWr55HT3cejGOw-?key=_l91YjqQx1jxUIzcPo4ceQ" alt /></p>
<p>In this blog, I will use the Google PaLM and Gemini models as examples of how to work with the PortKey gateway. </p>
<h2 id="heading-google-palm-and-gemini-with-portkey-ai-gateway">Google PaLM and Gemini with Portkey AI Gateway</h2>
<p>If you already know how to get the API key, you can skip these steps. </p>
<ol>
<li><p>Sign up with your Google account at <a target="_blank" href="https://aistudio.google.com">https://aistudio.google.com</a>, and click “Create new key” in the left navigation menu. </p>
</li>
<li><p>You can choose any AI model from <a target="_blank" href="https://ai.google.dev/tutorials/rest_quickstart">here</a> </p>
</li>
</ol>
<p>To call PortKey Gateway you need two things: </p>
<ul>
<li><p>Your AI model API key or secret key ( Google AI Studio API key ) </p>
</li>
<li><p>The name of the model you would like to test. </p>
</li>
</ul>
<p>PortKey currently hardcodes the supported AI model API versions: for Google Gemini the supported version is v1beta, and for Google PaLM it is v1beta3 (a feature request to make this configurable is in progress). So you need to pick a model that is available under those API versions; otherwise the gateway will throw an error like “Model doesn’t support under API version”. </p>
<p><img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXdHVgLG2f_kQgLu7P7FThBMPELxUZJ9_gFiiTpejZtrv8uU1LEQB03p0vajFyCagYmHOx9wvsB-RBX8_FYSX64fKDVUkGlpHp1BxhhtTMRM-r5JQnY8t-HqoMxwKUvbIqktja8Kx-ahzFWR8GAW8ZUZ7cYB?key=_l91YjqQx1jxUIzcPo4ceQ" alt /></p>
<p>As per the reference document <a target="_blank" href="https://ai.google.dev/tutorials/rest_quickstart">https://ai.google.dev/tutorials/rest_quickstart</a>, the Google Gemini model supported for <code>generateContent</code> is <code>gemini-pro</code>. We will test the <code>/generateContent</code> AI API. The following is the standard example of a CURL request to the PortKey Gateway: </p>
<pre><code class="lang-bash">curl <span class="hljs-string">'127.0.0.1:8787/v1/chat/completions'</span> \
  -H <span class="hljs-string">'x-portkey-provider: openai'</span> \
  -H <span class="hljs-string">"Authorization: Bearer <span class="hljs-variable">$OPENAI_KEY</span>"</span> \
  -H <span class="hljs-string">'Content-Type: application/json'</span> \
  -d <span class="hljs-string">'{"messages": [{"role": "user","content": "Say this is test."}], "max_tokens": 20, "model": "gpt-4"}'</span>
</code></pre>
<p>Here is an API request for the Google Gemini AI model. Replace <strong>$GOOGLE_KEY</strong> with your actual key: </p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'127.0.0.1:8787/v1/chat/completions'</span> \
--header <span class="hljs-string">'x-portkey-provider: google'</span> \
--header <span class="hljs-string">'Authorization: Bearer $GOOGLE_KEY'</span> \
--header <span class="hljs-string">'Content-Type: application/json'</span> \
--data <span class="hljs-string">'{
    "messages": [
        {
            "role": "user",
            "content": "Write a story about a magic backpack."
        }
    ],
    "model": "gemini-pro"
}'</span>
</code></pre>
<p>In exactly the same way, you can use other AI models such as OpenAI, Ollama, etc., with just the model name and the AI API key. </p>
<p>Now that you have seen how to use the PortKey Gateway API with different AI models, let's check out how you can develop a resilient GenAI pipeline with the PortKey AI Gateway. </p>
<p>I will cover the major PortKey features that help make your GenAI pipeline resilient. </p>
<h3 id="heading-build-failover-mechanism-with-portkey-ai-gateway">Build failover mechanism with PortKey AI Gateway</h3>
<p>As we saw earlier in the blog, fallback is a PortKey gateway feature that lets you supply a list of AI model APIs to be used if the primary API fails. You can add any number of AI model APIs to this list. </p>
<p>In the same REST call, under the <code>x-portkey-config</code> header, you can add these params to list LLM model APIs along with their API keys, so that if the primary one fails the gateway falls back to the next. Here's a quick example of a config that falls back to Anthropic's claude-1 if Gemini’s “gemini-pro” fails.</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"strategy"</span>: {
      <span class="hljs-attr">"mode"</span>: <span class="hljs-string">"fallback"</span>
  },
  <span class="hljs-attr">"targets"</span>: [
    {
      <span class="hljs-attr">"virtual_key"</span>: <span class="hljs-string">"google-virtual-key"</span>
    },
    {
      <span class="hljs-attr">"virtual_key"</span>: <span class="hljs-string">"anthropic-virtual-key"</span>,
      <span class="hljs-attr">"override_params"</span>: {
          <span class="hljs-attr">"model"</span>: <span class="hljs-string">"claude-1"</span>
      }
    }
  ]
}
</code></pre>
<p>Here is an example CURL request to the PortKey AI Gateway with config params: </p>
<pre><code class="lang-bash">curl --location <span class="hljs-string">'127.0.0.1:8787/v1/chat/completions'</span> \
--header <span class="hljs-string">'x-portkey-provider: google'</span> \
--header <span class="hljs-string">'Authorization: Bearer $GOOGLE_KEY'</span> \
--header <span class="hljs-string">'Content-Type: application/json'</span> \
--header <span class="hljs-string">'x-portkey-config: {"strategy":{"mode":"fallback"},"targets":[{"provider":"google","api_key":"$GOOGLE_KEY"},{"provider":"openai","api_key":"sk-***"}]}'</span> \
--data <span class="hljs-string">'{
    "messages": [
        {
            "role": "user",
            "content": "Write a story about a magic backpack."
        }
    ],
    "model": "gemini-pro"
}'</span>
</code></pre>
<p>This gives you a fallback mechanism in your AI pipeline. In the config you can also set which status codes of the failing API should trigger the fallback, by adding them under “strategy”:</p>
<pre><code class="lang-json"><span class="hljs-string">"strategy"</span>: {
    <span class="hljs-attr">"mode"</span>: <span class="hljs-string">"fallback"</span>,
    <span class="hljs-attr">"on_status_codes"</span>: [ <span class="hljs-number">429</span> ]
  }
</code></pre>
<p><strong>Note</strong> : You need to ensure while using this fallback mechanism that the LLMs in your fallback list are compatible with your use case. Not all LLMs offer the same capabilities.</p>
<p>Similarly, you can use more capabilities to keep your GenAI pipeline resilient and stable, such as an automatic retry mechanism, caching, and load balancing. These can be configured under the same “config” parameters. More details are given <a target="_blank" href="https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations">here</a>.</p>
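<p>As a rough sketch of what such a config could look like (the field names below follow the Portkey config-object docs as I understand them; verify against the current schema before relying on this, and note that some features such as caching may need extra setup in the open-source gateway):</p>
<pre><code class="lang-json">{
  "retry": { "attempts": 3, "on_status_codes": [429, 500, 503] },
  "cache": { "mode": "simple", "max_age": 60 },
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "google-virtual-key" },
    { "virtual_key": "openai-virtual-key" }
  ]
}
</code></pre>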
<p>This way PortKey AI gateway simplifies managing generative AI workflows by offering multi-model support and easy integration. This translates to improved app performance through features like automatic failover and efficient model switching.</p>
<p>So give the PortKey AI Gateway a try and make your GenAI pipeline resilient. Let us know if you have any queries. </p>
<h2 id="heading-references">References</h2>
<ul>
<li><p>More details about features and how to use them: <a target="_blank" href="https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations">https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Portkey-AI/gateway/blob/27a6ebbc9ddd972ef1a716176fc17d0b2671c366/src/providers/google/api.ts#L4C22-L4C70">https://github.com/Portkey-AI/gateway/blob/27a6ebbc9ddd972ef1a716176fc17d0b2671c366/src/providers/google/api.ts#L4C22-L4C70</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Portkey-AI/gateway/blob/27a6ebbc9ddd972ef1a716176fc17d0b2671c366/src/providers/palm/api.ts#L4C64-L4C71">https://github.com/Portkey-AI/gateway/blob/27a6ebbc9ddd972ef1a716176fc17d0b2671c366/src/providers/palm/api.ts#L4C64-L4C71</a></p>
</li>
<li><p>PortKey Config : https://docs.portkey.ai/docs/api-reference/config-object
</li>
<li><p><a target="_blank" href="https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/fallbacks">https://docs.portkey.ai/docs/product/ai-gateway-streamline-llm-integrations/fallbacks</a></p>
</li>
<li><p>How to add JSON object in header Postman API:  https://community.postman.com/t/get-custom-header-value-as-object-json-stringify/16172</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[How to Test ElastAlert Locally Using LocalStack: A Step-by-Step Guide]]></title><description><![CDATA[Hello Devs ,
I was recently assigned the task of migrating alert notifications from Slack to Microsoft Teams. However, I encountered challenges testing these alerts in a live environment, especially the inability to delete ECS clusters or tasks witho...]]></description><link>https://www.internetkatta.com/how-to-test-elastalert-locally-using-localstack-a-step-by-step-guide</link><guid isPermaLink="true">https://www.internetkatta.com/how-to-test-elastalert-locally-using-localstack-a-step-by-step-guide</guid><category><![CDATA[elastalert]]></category><category><![CDATA[localstack]]></category><category><![CDATA[AWS]]></category><category><![CDATA[elasticsearch]]></category><category><![CDATA[AWS Community]]></category><category><![CDATA[learning]]></category><dc:creator><![CDATA[Avinash Dalvi]]></dc:creator><pubDate>Mon, 30 Sep 2024 10:57:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1727693643075/80e25e7d-42e4-477d-a45c-a0acea0133ff.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello Devs ,</p>
<p>I was recently assigned the task of migrating alert notifications from Slack to Microsoft Teams. However, I encountered challenges testing these alerts in a live environment, especially the inability to delete ECS clusters or tasks without impacting production systems. To address this issue, I looked for a way to simulate the alerting process locally. This search led me to discover LocalStack, a powerful tool that enables you to test AWS services on your local machine. In this blog, I will guide you through setting up LocalStack to test ElastAlert rules, ensuring a smooth alert migration without the risk of disrupting live services.</p>
<h2 id="heading-1-install-prerequisites">1. Install Prerequisites</h2>
<p>Before getting started, ensure you have the following installed:</p>
<ul>
<li><p><strong>Python</strong>: Download and install Python (3.7 or above) from <a target="_blank" href="https://www.python.org/downloads/">python.org</a>.</p>
</li>
<li><p><strong>AWS CLI</strong>: Install the AWS CLI to manage LocalStack resources easily. Follow the <a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html">AWS CLI installation guide</a>.</p>
</li>
<li><p><strong>Docker</strong>: Install Docker, as it is required for running LocalStack.</p>
</li>
</ul>
<h2 id="heading-2-set-up-localstack">2. Set Up LocalStack</h2>
<h3 id="heading-using-the-localstack-desktop-app">Using the LocalStack Desktop App</h3>
<ol>
<li><p>Download the <strong>LocalStack Desktop app</strong> from the <a target="_blank" href="https://localstack.cloud/">LocalStack website</a>.</p>
</li>
<li><p>Install the app by following the provided instructions for your operating system.</p>
</li>
<li><p>Open the LocalStack Desktop app and start the LocalStack service. This will create a LocalStack environment where you can run your AWS services locally.</p>
</li>
</ol>
<h3 id="heading-using-docker-desktop-localstack-extension">Using Docker Desktop LocalStack Extension</h3>
<ol>
<li><p>If you prefer using Docker, open <strong>Docker Desktop</strong>.</p>
</li>
<li><p>Navigate to the <strong>Extensions</strong> section and search for <strong>LocalStack</strong>.</p>
</li>
<li><p>Install the LocalStack extension directly from Docker Desktop.</p>
</li>
<li><p>Once installed, you can start LocalStack by selecting it from the extensions list.</p>
</li>
</ol>
<h2 id="heading-3-create-localstack-elasticsearch-instance">3. Create LocalStack Elasticsearch Instance</h2>
<p>After installing LocalStack (either through the Desktop app or Docker extension), create an Elasticsearch domain. It may take a few minutes for the service to start.</p>
<p>Run the following command to create a LocalStack Elasticsearch domain. The <code>--domain-name</code> can be any word; I used "testinglocally":</p>
<pre><code class="lang-plaintext">aws es create-elasticsearch-domain --domain-name testinglocally --endpoint=http://localhost:4566
</code></pre>
<p>Once you run this command, it will show the endpoint host value. Copy that host value; you will need it in step 4.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"DomainStatus"</span>: {
        <span class="hljs-attr">"DomainId"</span>: <span class="hljs-string">"000000000000/testinglocally"</span>,
        <span class="hljs-attr">"DomainName"</span>: <span class="hljs-string">"locales"</span>,
        <span class="hljs-attr">"ARN"</span>: <span class="hljs-string">"arn:aws:es:us-east-2:000000000000:domain/testinglocally"</span>,
        <span class="hljs-attr">"Created"</span>: <span class="hljs-literal">true</span>,
        <span class="hljs-attr">"Deleted"</span>: <span class="hljs-literal">false</span>,
        <span class="hljs-attr">"Endpoint"</span>: <span class="hljs-string">"testinglocally.us-east-2.es.localhost.localstack.cloud:4566"</span>,
        <span class="hljs-attr">"Processing"</span>: <span class="hljs-literal">true</span>,
        <span class="hljs-attr">"UpgradeProcessing"</span>: <span class="hljs-literal">false</span>,
        <span class="hljs-attr">"ElasticsearchVersion"</span>: <span class="hljs-string">"7.10"</span>,
        <span class="hljs-attr">"ElasticsearchClusterConfig"</span>: {
            <span class="hljs-attr">"InstanceType"</span>: <span class="hljs-string">"m3.medium.elasticsearch"</span>,
            <span class="hljs-attr">"InstanceCount"</span>: <span class="hljs-number">1</span>,
            <span class="hljs-attr">"DedicatedMasterEnabled"</span>: <span class="hljs-literal">true</span>,
            <span class="hljs-attr">"ZoneAwarenessEnabled"</span>: <span class="hljs-literal">false</span>,
            <span class="hljs-attr">"DedicatedMasterType"</span>: <span class="hljs-string">"m3.medium.elasticsearch"</span>,
            <span class="hljs-attr">"DedicatedMasterCount"</span>: <span class="hljs-number">1</span>
        }
    }
}
</code></pre>
<p>Verify the instance by listing the domains:</p>
<pre><code class="lang-plaintext">aws es describe-elasticsearch-domain --domain-name testinglocally --endpoint-url=http://localhost:4566
</code></pre>
<h2 id="heading-4-clone-elastalert">4. Clone ElastAlert</h2>
<p>Clone the ElastAlert repository from GitHub:</p>
<pre><code class="lang-plaintext">git clone https://github.com/jertel/elastalert2
cd elastalert2
</code></pre>
<h3 id="heading-create-configuration-file">Create Configuration File</h3>
<p>Copy the example configuration file and create a new <code>config.yaml</code>:</p>
<pre><code class="lang-plaintext">cp config.yaml.example config.yaml
</code></pre>
<p>You can place this file in the root directory or, if you prefer a structured approach, keep it in an <code>example</code> folder. If you do this, remember to specify the path when running ElastAlert rules. <code>elastalert-test-rule --config &lt;path-to-config-file&gt; example_rules/example_frequency.yaml</code></p>
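<p>For reference, a minimal <code>config.yaml</code> pointed at the LocalStack endpoint might look like the sketch below. These are standard ElastAlert settings; adjust <code>rules_folder</code> and the host/port to match your setup and the endpoint returned in step 3.</p>
<pre><code class="lang-yaml"># Where ElastAlert looks for rule files
rules_folder: examples/rules
# How often ElastAlert queries Elasticsearch
run_every:
  minutes: 1
# How far back each query looks
buffer_time:
  minutes: 15
# LocalStack edge endpoint
es_host: localhost
es_port: 4566
# Index where ElastAlert stores its own state
writeback_index: elastalert_status
alert_time_limit:
  days: 2
</code></pre>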
<h2 id="heading-5-install-elastalert-dependencies">5. Install ElastAlert Dependencies</h2>
<p>Install the required Python packages by running:</p>
<pre><code class="lang-plaintext">pip install -r requirements.txt
python setup.py install
</code></pre>
<h2 id="heading-6-create-an-elastalert-rule">6. Create an ElastAlert Rule</h2>
<p>Navigate to the <code>example/rules</code> folder in the ElastAlert repository. You can either select an existing example rule or create a new one for testing. Here’s an example structure for a new rule:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">login</span> <span class="hljs-string">page</span>
<span class="hljs-attr">include:</span> [<span class="hljs-string">"@timestamp"</span>, <span class="hljs-string">"monitor.name"</span>, <span class="hljs-string">"monitor.status"</span>, <span class="hljs-string">"ev.application.name"</span>]
<span class="hljs-attr">type:</span> <span class="hljs-string">frequency</span>
<span class="hljs-comment"># We want to alert if we have been down for 5 minutes</span>
<span class="hljs-comment"># Heartbeat runs every 60s, so we need 6 failures in 330s to trigger an alert</span>
<span class="hljs-comment"># timeframe = (num_events - 1) * 60s + 60s / 2</span>
<span class="hljs-attr">num_events:</span> <span class="hljs-number">1</span>
<span class="hljs-attr">timeframe:</span>
  <span class="hljs-attr">minutes:</span> <span class="hljs-number">10</span>
<span class="hljs-attr">index:</span> <span class="hljs-string">heartbeat-*</span>
<span class="hljs-attr">filter:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">query:</span>
    <span class="hljs-attr">query_string:</span>
      <span class="hljs-attr">query:</span> <span class="hljs-string">'monitor.name: "login page" AND monitor.status: "down"'</span>
<span class="hljs-attr">alert:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">ms_power_automate</span>
<span class="hljs-attr">ms_power_automate_webhook_url:</span> <span class="hljs-string">"https://webhook.site/e3f965f6-4087-4aad-a1db-7e46db55ae1d"</span>
</code></pre>
<h2 id="heading-7-create-an-index-and-sample-data">7. Create an Index and Sample Data</h2>
<p>Create the index that your rule will use, such as <code>heartbeat</code> or <code>filebeat</code>. You can use either <code>http://localhost:4566</code> or the domain endpoint returned earlier, e.g. <code>locales.us-east-2.es.localhost.localstack.cloud:4566</code>.</p>
<pre><code class="lang-plaintext">curl -X PUT "http://localhost:4566/filebeat-1"
</code></pre>
<h3 id="heading-insert-sample-data">Insert Sample Data</h3>
<p>Use the following <code>PUT</code> request to add sample data:</p>
<p>Note: use UTC timestamps for testing. To find your machine's current UTC time, run <code>date -u</code>; it will display something like <code>Mon Sep 30 11:01:57 UTC 2024</code>. Take that time and use it in the CURL request below.</p>
<pre><code class="lang-powershell"><span class="hljs-built_in">curl</span> <span class="hljs-literal">-X</span> PUT <span class="hljs-string">"http://localhost:4566/filebeat-1/_doc/1"</span> <span class="hljs-literal">-H</span> <span class="hljs-string">'Content-Type: application/json'</span> <span class="hljs-literal">-d</span> <span class="hljs-string">'{
    "message": "Test log entry",
    "@timestamp": "2024-09-30T14:40:00Z",
    "monitor": {
        "name": "Test Monitor",
        "status": "down"
    }
}'</span>
</code></pre>
<h2 id="heading-8-test-the-elastalert-rule">8. Test the ElastAlert Rule</h2>
<p>Run the ElastAlert test command:</p>
<pre><code class="lang-bash">elastalert-test-rule examples/rules/example_error.yaml --alert --config examples/config.yaml
</code></pre>
<p>This command will simulate the alert based on the created rule and the data in your Elasticsearch instance.</p>
<h2 id="heading-9-troubleshooting-and-triage">9. Troubleshooting and Triage</h2>
<p>After running the test, you may want to perform some maintenance and triage tasks. While testing alerts locally, you may also encounter some common errors. Here are some triaging steps you can follow:</p>
<h3 id="heading-deleting-an-index">Deleting an Index</h3>
<p>To delete an index, run the command below. Use whichever index you created; for example, if you created <code>heartbeat-1</code> or <code>filebeat-1</code>, substitute it accordingly:</p>
<pre><code class="lang-bash">curl -X DELETE <span class="hljs-string">"http://localhost:4566/filebeat-1"</span>
</code></pre>
<h3 id="heading-refreshing-the-index">Refreshing the Index</h3>
<p>You can refresh the index to ensure that all operations are performed:</p>
<pre><code class="lang-bash">curl -X POST <span class="hljs-string">"http://localhost:4566/filebeat-1/_refresh"</span>
</code></pre>
<h3 id="heading-searching-the-index">Searching the Index</h3>
<p>You can search the index to verify data:</p>
<pre><code class="lang-bash">curl -X GET <span class="hljs-string">"http://localhost:4566/filebeat-1/_search"</span>
</code></pre>
<h3 id="heading-searching-with-filters">Searching with Filters</h3>
<p>Apply filters to your search for more specific queries:</p>
<pre><code class="lang-bash">curl -X GET <span class="hljs-string">"http://localhost:4566/filebeat-1/_search?q=monitor.status:down"</span>
</code></pre>
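<p>The query-string syntax above is fine for quick checks. For anything more involved, you can send a Query DSL body to the same <code>_search</code> endpoint with <code>curl -d</code>. A sketch of the equivalent query (again, depending on your mapping the field may need to be <code>monitor.status.keyword</code>):</p>
<pre><code class="lang-json">{
  "query": {
    "bool": {
      "filter": [
        { "term": { "monitor.status": "down" } }
      ]
    }
  }
}
</code></pre>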
<h3 id="heading-check-indexes"><strong>Check Indexes</strong>:</h3>
<p>Run the following command to list the indices in Elasticsearch:</p>
<pre><code class="lang-bash">curl -X GET <span class="hljs-string">'http://localhost:4566/_cat/indices?v'</span>
</code></pre>
<h3 id="heading-ensure-correct-timing"><strong>Ensure Correct Timing</strong>:</h3>
<p>Alerts are often time-sensitive. Make sure the <code>@timestamp</code> field in your sample data falls within the range of the query in your rule. If your rule looks at data from a specific time range, ensure that your sample data aligns with this range.</p>
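<p>Rather than hand-editing the timestamp, you can generate one a couple of minutes in the past so it is guaranteed to fall inside a rule's recent-data window. This uses GNU <code>date</code> syntax (on macOS the equivalent is <code>date -u -v-2M +"%Y-%m-%dT%H:%M:%SZ"</code>):</p>
<pre><code class="lang-bash"># Emit an ISO 8601 UTC timestamp from 2 minutes ago, safely inside
# a rule that queries the last few minutes of data (GNU date syntax)
date -u -d '2 minutes ago' +"%Y-%m-%dT%H:%M:%SZ"
</code></pre>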
<h3 id="heading-verbose-mode"><strong>Verbose Mode</strong>:</h3>
<p>If the alerts aren't triggering, run ElastAlert in verbose mode to get more detailed output and see what is going wrong. In my case, I had created a filebeat index but was testing an alert against heartbeat, and it was the verbose flag that surfaced the mismatch.</p>
<pre><code class="lang-bash">elastalert-test-rule examples/rules/example_error.yaml --alert --config examples/config.yaml --verbose
</code></pre>
<h3 id="heading-error-resolution"><strong>Error Resolution</strong>:</h3>
<p>Common errors like <code>index_not_found_exception</code> can be resolved by creating the index first or ensuring the correct configuration file paths.</p>
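<p>One quick way to catch <code>index_not_found_exception</code> up front is to check whether the index exists before running the rule test. A small helper sketch, assuming the same LocalStack endpoint used throughout this post:</p>
<pre><code class="lang-bash"># Succeeds if the given index answers with HTTP 200, fails otherwise
index_exists() {
  curl -s -o /dev/null -w "%{http_code}" "http://localhost:4566/$1" | grep -q '^200$'
}

if index_exists filebeat-1; then
  echo "index filebeat-1 exists"
else
  echo "index filebeat-1 missing - create it before testing the rule"
fi
</code></pre>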
<div class="embed-wrapper"><a class="embed-card" href="https://youtu.be/zmhSuCnv2UA">https://youtu.be/zmhSuCnv2UA</a></div>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By following these steps, you can effectively test ElastAlert locally using LocalStack and simulate web hook alerts. This setup is ideal for development and testing environments where you can validate your alerting rules before deploying them to production.</p>
<p>Happy testing! 🚀</p>
<p>I hope this blog helps you learn. Feel free to reach out to me on my Twitter handle <a target="_blank" href="https://hashnode.com/@AvinashDalvi_">@AvinashDalvi_</a> or leave a comment on the blog.</p>
<h3 id="heading-references">References:</h3>
<ul>
<li><p><a target="_blank" href="https://elastalert.readthedocs.io/en/latest/index.html">ElastAlert Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://docs.localstack.cloud/user-guide/aws/elasticsearch/">LocalStack Elasticsearch Documentation</a></p>
</li>
<li><p><a target="_blank" href="https://hub.docker.com/r/localstack/localstack#installing">LocalStack Docker Image</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/reference/es/describe-elasticsearch-domain.html">AWS CLI: describe-elasticsearch-domain</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>