<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Untitled Publication]]></title><description><![CDATA[Untitled Publication]]></description><link>https://srujanpakanati.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1708969577777/EF44YTX5v.png</url><title>Untitled Publication</title><link>https://srujanpakanati.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 02:21:03 GMT</lastBuildDate><atom:link href="https://srujanpakanati.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[LLM-D: Serving AI Inference at scale]]></title><description><![CDATA[Introduction:

AI inference is the "doing" part of artificial intelligence. It's the moment a trained model stops learning and starts working, turning its knowledge into real-world results.

We all use cutting-edge frontier models in our day-to-day u...]]></description><link>https://srujanpakanati.com/llm-d-serving-ai-inference-at-scale</link><guid isPermaLink="true">https://srujanpakanati.com/llm-d-serving-ai-inference-at-scale</guid><category><![CDATA[llm-d]]></category><category><![CDATA[#kubernetes-for-ai]]></category><category><![CDATA[inference]]></category><category><![CDATA[Gateway API]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 05 Jan 2026 06:14:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/1xE5QnNXJH0/upload/76c8c9faaab7afff6b26422a98852baa.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction:</h2>
<blockquote>
<p>AI inference is the "doing" part of artificial intelligence. It's the moment a trained model stops learning and starts working, turning its knowledge into real-world results.</p>
</blockquote>
<p>We all use cutting-edge frontier models like Gemini and Claude in our day-to-day lives. What we rarely interact with personally are open-source, purpose-built models like Qwen and Llama, which have fewer parameters and are often fine-tuned for specific tasks. For enterprises, however, there is a growing demand to use these types of models internally, whether to serve models fine-tuned on proprietary data or for agentic purposes like tool-calling at scale and intent understanding. This creates a unique use case for cloud-native frameworks like KServe and llm-d, which put forward best practices for serving AI inference.</p>
<h2 id="heading-llm-dhttpsllm-dai"><a target="_blank" href="https://llm-d.ai/">LLM-D</a></h2>
<blockquote>
<p><em>llm-d: a Kubernetes-native high-performance distributed LLM inference framework</em></p>
</blockquote>
<p>llm-d is an open-source inference framework which offers battle-tested configurations to start our model deployment journey. Rather than using vanilla Kubernetes to run vLLM pods on accelerated hardware, llm-d achieves better performance by solving problems like KV-cache offloading, separation of the prefill and decode phases (<a target="_blank" href="https://llm-d.ai/docs/guide/Installation/pd-disaggregation">Prefill/Decode Disaggregation</a>), and intelligent inference routing. These features help us serve LLMs at scale. For example, by separating the prefill and decode phases, we can deploy prefill pods on compute-bound hardware and decode pods on memory-bound hardware. By tweaking the ratio of prefill to decode pods, you can also reduce TTFT (time to first token) and TPOT (time per output token). The llm-d framework has many well-lit paths that solve specific problems depending on the model and the QPS (queries per second) profile of the workload.</p>
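<p>As a back-of-the-envelope illustration (the helper function and numbers here are hypothetical, not part of llm-d), TTFT and TPOT can be derived from a single request’s timing like this:</p>

```python
def ttft_and_tpot(request_start, first_token_time, last_token_time, output_tokens):
    """Compute time-to-first-token and time-per-output-token for one request.

    TTFT covers the prefill phase (prompt processing); TPOT covers the
    decode phase (token generation) -- the two phases that P/D
    disaggregation schedules onto different pods.
    """
    ttft = first_token_time - request_start
    # Decode time is spread over the tokens generated after the first one.
    tpot = (last_token_time - first_token_time) / (output_tokens - 1)
    return ttft, tpot

# Example: request sent at t=0s, first token at t=0.4s,
# and the 101st (last) token arrives at t=2.4s.
ttft, tpot = ttft_and_tpot(0.0, 0.4, 2.4, 101)
print(f"TTFT={ttft:.2f}s TPOT={tpot * 1000:.0f}ms")  # TTFT=0.40s TPOT=20ms
```

<p>Shifting the prefill/decode pod ratio moves these two numbers in opposite directions: more prefill capacity lowers TTFT, more decode capacity lowers TPOT.</p>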
<h2 id="heading-architecture">Architecture</h2>
<p>The llm-d architecture has some novel components like the <a target="_blank" href="https://gateway-api-inference-extension.sigs.k8s.io/">Gateway API Inference Extension (GIE)</a> and the EndPoint Picker (EPP), which use filtering and scoring techniques to let the Gateway API route traffic dynamically to the next best pod.</p>
<p><a target="_blank" href="https://llm-d.ai/docs/architecture"><img src="https://github.com/llm-d/llm-d/raw/main/docs/assets/images/llm-d-arch.svg" alt="llm-d Arch" /></a></p>
<p>Here is a detailed infographic on inference scheduling</p>
<p><img src="https://github.com/llm-d/llm-d-inference-scheduler/raw/main/docs/images/architecture.png" alt="Inference Gateway Architecture" /></p>
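<p>To make the EPP concrete, here is a rough sketch of an InferencePool that fronts a set of vLLM pods and delegates endpoint selection to an EPP service. The field names follow the v1alpha2 Gateway API Inference Extension and may differ in newer API versions; the pod labels and service names are illustrative.</p>

```yaml
# Sketch only: one pool of model-server pods, with per-request
# endpoint selection delegated to an EndPoint Picker (EPP).
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: gaie-inference-scheduling
spec:
  targetPortNumber: 8000                 # port the vLLM pods serve on
  selector:
    app: vllm-decode                     # label on the model-server pods (illustrative)
  extensionRef:
    name: gaie-inference-scheduling-epp  # EPP service that filters and scores pods
```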
<h2 id="heading-testing-inference-scheduling">Testing Inference Scheduling</h2>
<p>I deployed a GKE cluster with an NVIDIA L4 instance and some general compute for all other pods. First, you need to enable the GKE Gateway class so that llm-d can create a Gateway during the helm install; here is a <a target="_blank" href="https://docs.cloud.google.com/kubernetes-engine/docs/how-to/deploy-gke-inference-gateway">how-to</a> for that. Once that is done, install a monitoring stack to observe the metrics on the vLLM pods; the llm-d <a target="_blank" href="https://github.com/llm-d/llm-d/blob/main/docs/monitoring/README.md">GitHub page</a> has a bash script that deploys kube-prometheus-stack. Then I installed the Helm charts that deploy the modelservice (ms) pods on the L4 GPU instance.</p>
<pre><code class="lang-plaintext">llm-d-inference-scheduler   gaie-inference-scheduling-epp-5dc59b4767-dv8tx                    1/1     Running   0          51m
llm-d-inference-scheduler   ms-inference-scheduling-llm-d-modelservice-decode-5b7bb867kxmff   0/2     Pending   0          27m
llm-d-inference-scheduler   ms-inference-scheduling-llm-d-modelservice-decode-5b7bb867zxgnm   2/2     Running   0          32m
llm-d-inference-scheduler   ms-inference-scheduling-llm-d-modelservice-decode-77f5d7b89bc7n   0/2     Pending   0          31m
</code></pre>
<p>Then I deployed an HTTPRoute to connect my InferencePool to the Gateway.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">gateway.networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">HTTPRoute</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">llm-d-inference-scheduling</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">parentRefs:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">gateway.networking.k8s.io</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Gateway</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">infra-inference-scheduling-inference-gateway</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">backendRefs:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">inference.networking.k8s.io</span>
        <span class="hljs-attr">kind:</span> <span class="hljs-string">InferencePool</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">gaie-inference-scheduling</span>
        <span class="hljs-attr">weight:</span> <span class="hljs-number">1</span>
      <span class="hljs-attr">matches:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span>
          <span class="hljs-attr">type:</span> <span class="hljs-string">PathPrefix</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">/</span>
</code></pre>
<p>Then I could run some load tests, through the Gateway’s external IP, on the single vLLM pod that I managed to deploy.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Technologies like llm-d, KServe, and GIE are still going through rapid progression. LLM inference poses unique challenges which can be tricky to solve. In a future where applications consume tokens at massive scale, you cannot keep paying a per-token price for what is just an additional feature in your application (imagine booking an Uber by voice). In those circumstances, frameworks like llm-d are the best alternative.</p>
]]></content:encoded></item><item><title><![CDATA[GitOps based Platform Engineering feat. Crossplane and ArgoCD]]></title><description><![CDATA[Introduction
If you think about it for a minute, once in a lifetime there will be a major invention that profoundly impacts the way we do things. Till late 1990s we all used to shop at physical stores and then internet happened which brought in e-com...]]></description><link>https://srujanpakanati.com/gitops-based-platform-engineering-feat-crossplane-and-argocd</link><guid isPermaLink="true">https://srujanpakanati.com/gitops-based-platform-engineering-feat-crossplane-and-argocd</guid><category><![CDATA[crossplane]]></category><category><![CDATA[gitops]]></category><category><![CDATA[ArgoCD]]></category><category><![CDATA[Platform Engineering ]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Thu, 11 Sep 2025 21:16:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9v1cuPQ5hKM/upload/bce9e496c42c0edc83e584db1fe16375.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>If you think about it for a minute, once in a lifetime there is a major invention that profoundly impacts the way we do things. Until the late 1990s we all shopped at physical stores, and then the internet happened, which brought in e-commerce. Now, with the AI phenomenon, the way we shop is changing again. Every major innovation brings in little process improvements and gives us choice.</p>
<p>We have been provisioning infrastructure since the advent of the internet. First there was physical hardware, which you procured, set up manually, and ran software on. Then came the cloud, which brought in IaC tools like Terraform and CloudFormation. This was an improvement on the existing process: you can quickly set up infrastructure as you need it instead of doing ClickOps, but it also has a lot of drawbacks, like configuration drift over time and managing state files. But then Kubernetes happened…</p>
<p>The growing adoption of Kubernetes led to a plethora of new cloud-native tools, each offering a little process improvement over existing tools and applications. Crossplane is one such tool, offering a better way of creating and managing infrastructure. It prevents configuration drift and allows us to create abstractions for self-service. This helps us shift left so developers can provision the infra they need and manage it as required. Here is an X post contrasting Crossplane and Terraform:</p>
<blockquote>
<p><a target="_blank" href="https://x.com/ianmiell/status/1788973776996028813">https://x.com/ianmiell/status/1788973776996028813</a></p>
</blockquote>
<p>In this article let’s quickly go through the basic terms used in Crossplane. There are five main components.</p>
<p><a target="_blank" href="https://docs.crossplane.io/latest/composition/composite-resources/">Composite Resource(XR)</a> - a bundle of different resources packaged as one resource. If your application needs an IAM role, an IAM policy, and an S3 bucket it can access, all three resources can be combined into one XR.</p>
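<p>As a hypothetical example (the API group, kind, and fields below are invented for illustration), a dev team could request that whole bundle with a single object:</p>

```yaml
# Hypothetical XR: one object that a Composition expands into
# an IAM role, an IAM policy, and an S3 bucket.
apiVersion: example.org/v1alpha1
kind: XAppStorage
metadata:
  name: orders-app-storage
spec:
  parameters:
    bucketName: orders-app-data
    region: us-east-2
```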
<p><a target="_blank" href="https://docs.crossplane.io/latest/composition/composite-resource-definitions/">Composite Resource Definition(XRD)</a> - Composite resource definitions (<code>XRDs</code>) define the schema for a custom API. Users create composite resources (<code>XRs</code>) using the API schema defined by an XRD.</p>
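<p>A minimal XRD sketch (the group, kind, and schema here are invented for illustration and are not from the repo used later in this post) could look like:</p>

```yaml
# Illustrative XRD: the schema users must follow when creating the XR,
# including an enum guardrail on the allowed regions.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xappstorages.example.org
spec:
  group: example.org
  names:
    kind: XAppStorage
    plural: xappstorages
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  bucketName:
                    type: string
                  region:
                    type: string
                    enum: ["us-east-1", "us-east-2"]  # guardrail: only these regions
                required: ["bucketName", "region"]
```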
<p><a target="_blank" href="https://docs.crossplane.io/latest/composition/compositions/">Compositions</a> - a Composition holds the actual definition of the resources to create for the XR that requests them. It holds all the logic.</p>
<p><a target="_blank" href="https://docs.crossplane.io/latest/packages/providers/">Providers</a> - Crossplane providers are extensions that enable Crossplane to manage infrastructure and resources on external services, like cloud providers. Think of them as the bridge that connects your Kubernetes cluster to a third-party API.</p>
<p><a target="_blank" href="https://docs.crossplane.io/latest/packages/functions/">Functions</a> - Functions enable you to dynamically populate resources based on the XR. You can use different <em>composition functions</em> to configure what Crossplane does when someone creates or updates a <a target="_blank" href="https://docs.crossplane.io/latest/composition/composite-resources/">composite resource (XR)</a>.</p>
<p>Here is an image from Crossplane docs to better understand how these work</p>
<p><a target="_blank" href="https://docs.crossplane.io/latest/composition/composite-resources/"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757537067976/2133e350-02ca-4e0c-bd8b-98e8c3c766b7.png" alt class="image--center mx-auto" /></a></p>
<p>The reason to use Crossplane with ArgoCD in a GitOps-based approach is that Git keeps the commit history, so you can quickly review the changes being made to any Crossplane resource. ArgoCD helps you visualize the infrastructure you are creating and also allows you to set up sync waves and more. There are a lot more benefits than what is stated here, but you catch my drift.</p>
<h2 id="heading-crossplane-in-action">Crossplane in action</h2>
<blockquote>
<p>Here is the <a target="_blank" href="https://github.com/HighonAces/crossplane-argocd#">github repo</a> used in this blog</p>
</blockquote>
<p>Let’s take a scenario where a dev team needs a new EKS cluster for some testing. Usually these requests go through Jira to a DevOps team, who create a cluster based on the dev team’s specification and hand it over. This comes with a lot of friction: the dev team does not want to ask the ops team to shut down the cluster for the weekend, because that means raising another Jira on Monday and waiting until the cluster is created again. What if we let the dev team create the cluster as required, without making them deal with HCL or low-level components like VPCs and subnets?</p>
<p>Here are the building blocks of an EKS cluster as a platform for developers.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757536041748/b80cc3df-4700-4e16-b313-fe2175cd2d9c.png" alt class="image--center mx-auto" /></p>
<p>First, the dev team has to create the networking required to deploy the EKS cluster on top of it. So they push the eksnetworking XR file to the given GitHub repository.</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="e5d2d3538b6d1e4d99501c422da38fc3"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/HighonAces/e5d2d3538b6d1e4d99501c422da38fc3" class="embed-card">https://gist.github.com/HighonAces/e5d2d3538b6d1e4d99501c422da38fc3</a></div><p> </p>
<p>Using this XR, the dev team has the ability to configure the region, CIDR range, number of subnets, and so on. Yet it is well guardrailed in terms of which parameters are required and what values they may take. This is controlled by the Composite Resource Definition (XRD). You can see the required sections and enums below.</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="1b0a65e844768b4bbdcea9590d7ff5cb"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/HighonAces/1b0a65e844768b4bbdcea9590d7ff5cb" class="embed-card">https://gist.github.com/HighonAces/1b0a65e844768b4bbdcea9590d7ff5cb</a></div><p> </p>
<p>The Composition is where all the required resources are configured. It uses one or more functions to fetch the values given in the XR and create the resources. KCL is a popular language of choice, but you can use go-templates or Python as well. Here is the <a target="_blank" href="https://gist.github.com/HighonAces/46e348b57481800f854605d6e3cc7a1a">composition</a> used.</p>
<p>Remember, the XRD and Composition must always be owned by the ops team, who dictate how an XR can be created and what resources will be created as part of it. Here is a quick snapshot of the created resources from ArgoCD.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757537977592/a878a327-edec-4974-91e0-6f6308f7d3d4.png" alt class="image--center mx-auto" /></p>
<p>This gives an overall idea of the resources being created and the overall health of your XR. With the networking in place, we can now create the EKS control plane. It follows a similar pattern of creating an XRD and a Composition. Here is the ekscluster XR for reference.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">srujanpakanati.com/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">EKSCluster</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-eks-cluster</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">parameters:</span>
    <span class="hljs-attr">clusterName:</span> <span class="hljs-string">my-eks-cluster</span>
    <span class="hljs-attr">region:</span> <span class="hljs-string">us-east-2</span>
    <span class="hljs-attr">kubernetesVersion:</span> <span class="hljs-string">"1.33"</span>
    <span class="hljs-attr">accessList:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">developer-role</span>
        <span class="hljs-attr">roleARN:</span> <span class="hljs-string">"arn:aws:iam::xxxxxxxxx:role/eks-dev-role-to-test-crossplane"</span> <span class="hljs-comment"># Replace with your IAM Role ARN</span>
        <span class="hljs-attr">policies:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"</span>
          <span class="hljs-bullet">-</span> <span class="hljs-string">"arn:aws:iam::aws:policy/AmazonEKSServicePolicy"</span>
    <span class="hljs-attr">addons:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">vpc-cni</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">"v1.20.1-eksbuild.3"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">coredns</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">"v1.12.3-eksbuild.1"</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">kube-proxy</span>
        <span class="hljs-attr">version:</span> <span class="hljs-string">"v1.33.3-eksbuild.6"</span>
    <span class="hljs-attr">vpcId:</span> <span class="hljs-string">"vpc-07ddbab7b7c6a6fef"</span> <span class="hljs-comment"># Replace with your VPC ID</span>
    <span class="hljs-attr">subnetIds:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"subnet-0c7420944a320ddff"</span> <span class="hljs-comment"># Replace with your subnet IDs</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"subnet-095925a4561365bf5"</span>
  <span class="hljs-attr">crossplane:</span>
    <span class="hljs-attr">compositionSelector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">provider:</span> <span class="hljs-string">aws</span>
        <span class="hljs-attr">workload:</span> <span class="hljs-string">ekscluster</span>
</code></pre>
<p>Here we can see the components getting created in real time in ArgoCD</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757618940319/7143a6ba-4124-43f3-a4bb-2184ecc6fe48.png" alt class="image--center mx-auto" /></p>
<p>Once the control plane is created, we can go ahead with the NodeGroup creation. Here is my nodegroup XR.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">srujanpakanati.com/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">EKSNodeGroup</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">my-nodegroup</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">parameters:</span>
    <span class="hljs-attr">clusterName:</span> <span class="hljs-string">cluster-my-eks-cluster</span> <span class="hljs-comment"># This must match the name of your EKSCluster resource</span>
    <span class="hljs-attr">region:</span> <span class="hljs-string">us-east-2</span>
    <span class="hljs-attr">nodeGroupName:</span> <span class="hljs-string">my-managed-nodes</span>
    <span class="hljs-attr">instanceTypes:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">"t3.medium"</span>
    <span class="hljs-attr">scalingConfig:</span>
      <span class="hljs-attr">minSize:</span> <span class="hljs-number">1</span>
      <span class="hljs-attr">maxSize:</span> <span class="hljs-number">3</span>
      <span class="hljs-attr">desiredSize:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">subnetIds:</span> 
      <span class="hljs-bullet">-</span> <span class="hljs-string">subnet-0c7420944a320ddff</span>
  <span class="hljs-attr">crossplane:</span>
    <span class="hljs-attr">compositionSelector:</span>
      <span class="hljs-attr">matchLabels:</span>
        <span class="hljs-attr">provider:</span> <span class="hljs-string">aws</span>
        <span class="hljs-attr">workload:</span> <span class="hljs-string">eksnodegroup</span>
</code></pre>
<p>Here is the same XR in ArgoCD.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757621184804/0a215c73-c3d2-4ca6-8e86-acf0f99d2d6b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757621834599/9a5cfae0-0999-41fd-ab1a-3d433e2fbeb2.png" alt class="image--center mx-auto" /></p>
<p>If we take a step back for a minute and look at what we have created: it is not just a cluster and a node group. We also created a template from which as many clusters as required can be created. 0→1 is done; now 1→n is easy. And it’s not just new control planes: you can add node groups to existing EKS clusters. This is how we create self-service platforms for developer teams and prevent infra-related bottlenecks.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Crossplane and ArgoCD are quintessential tools of the cloud-native era. Teams iterate in cycles, and these tools offer a significant improvement to that cycle, pushing developer experience and cost efficiency further. The returns will be multi-fold, and the wildest thing is that these tools are open source. If your workloads run on Kubernetes, then it is time to adopt the appropriate tools for the job.</p>
]]></content:encoded></item><item><title><![CDATA[Gateway API: The better way to Ingress]]></title><description><![CDATA[Introduction
While all our existing workloads work well with Ingress and we don’t want to touch what is working just fine. Everyone have to migrate from Ingress to Gateway API at some point. I always felt that ingress is too rigid and complex with a ...]]></description><link>https://srujanpakanati.com/gateway-api-the-better-way-to-ingress</link><guid isPermaLink="true">https://srujanpakanati.com/gateway-api-the-better-way-to-ingress</guid><category><![CDATA[Gateway API]]></category><category><![CDATA[ingress]]></category><category><![CDATA[envoy]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Wed, 03 Sep 2025 17:37:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/xcrI6CPkkJs/upload/ee10b3e1cbd521c2da8b6aaf490893c3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>While all our existing workloads work well with Ingress and we don’t want to touch what is working just fine, everyone has to migrate from Ingress to Gateway API at some point. I always felt that Ingress is too rigid and complex, with a lot of nested YAML and a bulky annotations section. Gateway API splits the responsibilities of the different people on a team and dedicates a YAML resource to each of them, like “GatewayClass“, “Gateway“ and “HTTPRoute“.</p>
<h2 id="heading-why-ingress-sucks">Why Ingress sucks?</h2>
<p>Ingress was an initial solution to the problem, designed at a time when the majority had no clarity on where the Kubernetes project would end up. It became GA in 1.19, which is very early in Kubernetes’ life, and it also lacked a lot of features natively. There are many other drawbacks with Ingress that we can discuss later.</p>
<h2 id="heading-gatewayapi">GatewayAPI</h2>
<p>Gateway API is a truly unopinionated and loosely coupled way of routing outside traffic to your services. The developer, cluster administrator, and infrastructure engineer each have their own tasks to do and YAMLs to manage. This enables great support for multi-tenancy and portability from one vendor to another. Here is an <a target="_blank" href="https://gateway.envoyproxy.io/docs/tasks/quickstart/">example</a> of how Envoy Gateway can be tested.</p>
<p>First you need to helm install Envoy Gateway and its CRDs.</p>
<pre><code class="lang-bash">helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.5.0 -n envoy-gateway-system --create-namespace
</code></pre>
<p>This deploys envoy gateway deployment with following config</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">~</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">cm</span> <span class="hljs-string">envoy-gateway-config</span> <span class="hljs-string">-n</span> <span class="hljs-string">envoy-gateway-system</span> <span class="hljs-string">-o</span> <span class="hljs-string">yaml</span> 
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">data:</span>
  <span class="hljs-attr">envoy-gateway.yaml:</span> <span class="hljs-string">|
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    extensionApis: {}
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    logging:
      level:
        default: info
    provider:
      kubernetes:
        rateLimitDeployment:
          container:
            image: docker.io/envoyproxy/ratelimit:3e085e5b
          patch:
            type: StrategicMerge
            value:
              spec:
                template:
                  spec:
                    containers:
                    - imagePullPolicy: IfNotPresent
                      name: envoy-ratelimit
        shutdownManager:
          image: docker.io/envoyproxy/gateway:v1.5.0
      type: Kubernetes
</span><span class="hljs-attr">kind:</span> <span class="hljs-string">ConfigMap</span>
</code></pre>
<p>This configuration essentially defines the <code>controllerName</code> and the images to be used for the shutdown manager and rate limiting.</p>
<p>Following this, you need to apply quickstart.yaml to get all the necessary components for the Gateway and the application. Practically speaking, the GatewayClass and Gateway will already be installed as part of bootstrap before applications are installed. Here are the GatewayClass and Gateway YAML files.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">gateway.networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">GatewayClass</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">eg</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">controllerName:</span> <span class="hljs-string">gateway.envoyproxy.io/gatewayclass-controller</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">gateway.networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Gateway</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">eg</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">gatewayClassName:</span> <span class="hljs-string">eg</span>
  <span class="hljs-attr">listeners:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">http</span>
      <span class="hljs-attr">protocol:</span> <span class="hljs-string">HTTP</span>
      <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
</code></pre>
<p>Here in the GatewayClass, the <code>controllerName</code> must match the one we provided to EnvoyGateway. That way, you can use different controllers for different GatewayClasses in the same cluster. The <code>gatewayClassName</code> in the Gateway must match the GatewayClass.</p>
<p>Next, install the application-related components. Applying the entire quickstart.yaml creates the components below.</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">~</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">apply</span> <span class="hljs-string">-f</span> <span class="hljs-string">https://github.com/envoyproxy/gateway/releases/download/v1.5.0/quickstart.yaml</span> <span class="hljs-string">-n</span> <span class="hljs-string">default</span>

<span class="hljs-string">gatewayclass.gateway.networking.k8s.io/eg</span> <span class="hljs-string">created</span>
<span class="hljs-string">gateway.gateway.networking.k8s.io/eg</span> <span class="hljs-string">created</span>
<span class="hljs-string">serviceaccount/backend</span> <span class="hljs-string">created</span>
<span class="hljs-string">service/backend</span> <span class="hljs-string">created</span>
<span class="hljs-string">deployment.apps/backend</span> <span class="hljs-string">created</span>
<span class="hljs-string">httproute.gateway.networking.k8s.io/backend</span> <span class="hljs-string">created</span>
</code></pre>
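<p>The quickstart’s HTTPRoute (created above but not printed) looks roughly like the following; the hostname and backend port match the v1.5.0 quickstart, but treat the exact values as illustrative:</p>

```yaml
# Routes requests for www.example.com arriving at the "eg" Gateway
# to the backend Service on port 3000.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - name: backend
      port: 3000
    matches:
    - path:
        type: PathPrefix
        value: /
```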
<p>You can see that the Gateway has spun up a new pod (via a deployment) and service in the envoy-gateway-system namespace.</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">~</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">pods,svc</span> <span class="hljs-string">-n</span> <span class="hljs-string">envoy-gateway-system</span>
<span class="hljs-string">NAME</span>                                             <span class="hljs-string">READY</span>   <span class="hljs-string">STATUS</span>    <span class="hljs-string">RESTARTS</span>   <span class="hljs-string">AGE</span>
<span class="hljs-string">pod/envoy-default-eg-e41e7b31-55bf69f99d-pl6dl</span>   <span class="hljs-number">2</span><span class="hljs-string">/2</span>     <span class="hljs-string">Running</span>   <span class="hljs-number">0</span>          <span class="hljs-string">2m37s</span>
<span class="hljs-string">pod/envoy-gateway-667545bc7d-dpmmz</span>               <span class="hljs-number">1</span><span class="hljs-string">/1</span>     <span class="hljs-string">Running</span>   <span class="hljs-number">0</span>          <span class="hljs-string">45m</span>

<span class="hljs-string">NAME</span>                                <span class="hljs-string">TYPE</span>           <span class="hljs-string">CLUSTER-IP</span>      <span class="hljs-string">EXTERNAL-IP</span>   <span class="hljs-string">PORT(S)</span>                                            <span class="hljs-string">AGE</span>
<span class="hljs-string">service/envoy-default-eg-e41e7b31</span>   <span class="hljs-string">LoadBalancer</span>   <span class="hljs-number">10.96</span><span class="hljs-number">.222</span><span class="hljs-number">.26</span>    <span class="hljs-number">172.18</span><span class="hljs-number">.0</span><span class="hljs-number">.7</span>    <span class="hljs-number">80</span><span class="hljs-string">:30823/TCP</span>                                       <span class="hljs-string">2m37s</span>
<span class="hljs-string">service/envoy-gateway</span>               <span class="hljs-string">ClusterIP</span>      <span class="hljs-number">10.96</span><span class="hljs-number">.174</span><span class="hljs-number">.248</span>   <span class="hljs-string">&lt;none&gt;</span>        <span class="hljs-number">18000</span><span class="hljs-string">/TCP,18001/TCP,18002/TCP,19001/TCP,9443/TCP</span>   <span class="hljs-string">45m</span>
</code></pre>
<p>And in the default namespace:</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">~</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">pods,svc</span> <span class="hljs-string">-n</span> <span class="hljs-string">default</span>             
<span class="hljs-string">NAME</span>                           <span class="hljs-string">READY</span>   <span class="hljs-string">STATUS</span>    <span class="hljs-string">RESTARTS</span>   <span class="hljs-string">AGE</span>
<span class="hljs-string">pod/backend-869c8646c5-vrrtl</span>   <span class="hljs-number">1</span><span class="hljs-string">/1</span>     <span class="hljs-string">Running</span>   <span class="hljs-number">0</span>          <span class="hljs-string">5m2s</span>

<span class="hljs-string">NAME</span>                 <span class="hljs-string">TYPE</span>        <span class="hljs-string">CLUSTER-IP</span>     <span class="hljs-string">EXTERNAL-IP</span>   <span class="hljs-string">PORT(S)</span>    <span class="hljs-string">AGE</span>
<span class="hljs-string">service/backend</span>      <span class="hljs-string">ClusterIP</span>   <span class="hljs-number">10.96</span><span class="hljs-number">.220</span><span class="hljs-number">.59</span>   <span class="hljs-string">&lt;none&gt;</span>        <span class="hljs-number">3000</span><span class="hljs-string">/TCP</span>   <span class="hljs-string">5m2s</span>
<span class="hljs-string">service/kubernetes</span>   <span class="hljs-string">ClusterIP</span>   <span class="hljs-number">10.96</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>      <span class="hljs-string">&lt;none&gt;</span>        <span class="hljs-number">443</span><span class="hljs-string">/TCP</span>    <span class="hljs-string">48m</span>
</code></pre>
<p>Here is the HTTPRoute for the backend app:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">gateway.networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">HTTPRoute</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">backend</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">parentRefs:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">eg</span>
  <span class="hljs-attr">hostnames:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"www.example.com"</span>
  <span class="hljs-attr">rules:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">backendRefs:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">group:</span> <span class="hljs-string">""</span>
          <span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
          <span class="hljs-attr">name:</span> <span class="hljs-string">backend</span>
          <span class="hljs-attr">port:</span> <span class="hljs-number">3000</span>
          <span class="hljs-attr">weight:</span> <span class="hljs-number">1</span>
      <span class="hljs-attr">matches:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span>
            <span class="hljs-attr">type:</span> <span class="hljs-string">PathPrefix</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">/</span>
</code></pre>
<p>To test the traffic, we can port-forward the Gateway service and curl it.</p>
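<p>As a sketch, using the Envoy service name from the output above (the hash suffix will differ per cluster), forwarding local port 8888 to the Gateway might look like this:</p>
<pre><code class="lang-yaml">➜  ~ kubectl port-forward -n envoy-gateway-system service/envoy-default-eg-e41e7b31 8888:80
</code></pre>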
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">~</span> <span class="hljs-string">curl</span> <span class="hljs-string">--verbose</span> <span class="hljs-string">--header</span> <span class="hljs-string">"Host: www.example.com"</span> <span class="hljs-string">http://localhost:8888/get</span>

<span class="hljs-string">*</span> <span class="hljs-string">Host</span> <span class="hljs-string">localhost:8888</span> <span class="hljs-string">was</span> <span class="hljs-string">resolved.</span>
<span class="hljs-string">*</span> <span class="hljs-attr">IPv6:</span> <span class="hljs-string">::1</span>
<span class="hljs-string">*</span> <span class="hljs-attr">IPv4:</span> <span class="hljs-number">127.0</span><span class="hljs-number">.0</span><span class="hljs-number">.1</span>
<span class="hljs-string">*</span>   <span class="hljs-string">Trying</span> [<span class="hljs-string">::1</span>]<span class="hljs-string">:8888...</span>
<span class="hljs-string">*</span> <span class="hljs-string">Connected</span> <span class="hljs-string">to</span> <span class="hljs-string">localhost</span> <span class="hljs-string">(::1)</span> <span class="hljs-string">port</span> <span class="hljs-number">8888</span>
<span class="hljs-string">&gt;</span> <span class="hljs-string">GET</span> <span class="hljs-string">/get</span> <span class="hljs-string">HTTP/1.1</span>
<span class="hljs-string">&gt;</span> <span class="hljs-attr">Host:</span> <span class="hljs-string">www.example.com</span>
<span class="hljs-string">&gt;</span> <span class="hljs-attr">User-Agent:</span> <span class="hljs-string">curl/8.7.1</span>
<span class="hljs-string">&gt;</span> <span class="hljs-attr">Accept:</span> <span class="hljs-string">*/*</span>
<span class="hljs-string">&gt; 
* Request completely sent off
Handling connection for 8888
&lt; HTTP/1.1 200 OK
&lt; content-type: application/json
&lt; x-content-type-options: nosniff
&lt; date: Wed, 03 Sep 2025 15:34:15 GMT
&lt; content-length: 467
&lt;</span>
</code></pre>
<p>To build a mental map of how the traffic flows, here is a diagram:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1756917432195/d38ba09c-6483-458b-a0f7-0b29ba2ee85d.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Gateway API is maturing, with more vendors offering it as a solution. There are good <a target="_blank" href="https://medium.com/@kkrzywicki/how-to-easily-migrate-ingress-to-gateway-api-1d479639c43e">migration</a> <a target="_blank" href="https://gateway-api.sigs.k8s.io/guides/migrating-from-ingress/">guides</a> as well. Gateway API is a key step towards an easier, more capable and decoupled networking solution.</p>
]]></content:encoded></item><item><title><![CDATA[Guide to Using AL2023 AMI with Amazon EKS]]></title><description><![CDATA[The AL2023 family of EKS-optimized AMIs will become the default AMIs starting from EKS version 1.33. This new AMI changes the way nodes join the EKS cluster, and the default use of IMDSv2 brings additional security features with ...]]></description><link>https://srujanpakanati.com/guide-to-using-al2023-ami-with-amazon-eks</link><guid isPermaLink="true">https://srujanpakanati.com/guide-to-using-al2023-ami-with-amazon-eks</guid><category><![CDATA[al2023]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Sat, 30 Aug 2025 16:31:28 GMT</pubDate><content:encoded><![CDATA[<p>The AL2023 family of EKS-optimized AMIs will become the default AMIs starting from EKS version 1.33. This new AMI changes the way nodes join the EKS cluster, and the default use of IMDSv2 brings additional security features with it.</p>
<h2 id="heading-node-initialization">Node Initialization</h2>
<p>AL2023 no longer uses the bootstrap.sh script to join the cluster; instead, it uses <a target="_blank" href="https://awslabs.github.io/amazon-eks-ami/nodeadm/">nodeadm</a>. nodeadm relies on a YAML file that you provide during start-up. This YAML file, of kind NodeConfig, must include fields such as the cluster name, endpoint and certificate authority. This information can be obtained from the DescribeCluster API call. Here is an example NodeConfig file:</p>
<pre><code class="lang-yaml"><span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">node.eks.aws/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">NodeConfig</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">cluster:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">my-cluster</span>
    <span class="hljs-attr">apiServerEndpoint:</span> <span class="hljs-string">https://example.com</span>
    <span class="hljs-attr">certificateAuthority:</span> <span class="hljs-string">Y2VydGlmaWNhdGVBdXRob3JpdHk=</span>
    <span class="hljs-attr">cidr:</span> <span class="hljs-number">10.100</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
  <span class="hljs-attr">kubelet:</span>
    <span class="hljs-attr">config:</span>
      <span class="hljs-attr">shutdownGracePeriod:</span> <span class="hljs-string">30s</span>
      <span class="hljs-attr">featureGates:</span>
        <span class="hljs-attr">DisableKubeletCloudCredentialProviders:</span> <span class="hljs-literal">true</span>
</code></pre>
<p>Once this file is populated, the node can join the cluster by running the <code>nodeadm init --config-source file://nodeconfig.yaml</code> command. Using the NodeConfig file, you can also modify the kubelet config. This lets you keep all your node-specific changes in one file rather than editing multiple files. Now all you have to worry about is how to automate and generate this file reliably whenever a new node joins your cluster. Here is a Golang-based <a target="_blank" href="https://github.com/HighonAces/go-nodeadm-updater">application</a> that I developed to automatically generate the NodeConfig file reliably during start-up.</p>
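<p>As a sketch of where those values come from, the AWS CLI can return them from the DescribeCluster API (the cluster name <code>my-cluster</code> is an assumed placeholder):</p>
<pre><code class="lang-yaml">➜  ~ aws eks describe-cluster --name my-cluster \
    --query "cluster.{name: name, endpoint: endpoint, certificateAuthority: certificateAuthority.data, serviceCidr: kubernetesNetworkConfig.serviceIpv4Cidr}" \
    --output json
</code></pre>
<p>Each of these fields maps directly onto a field under <code>spec.cluster</code> in the NodeConfig above.</p>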
<h2 id="heading-idmsv2">IMDSv2</h2>
<p>Now that IMDSv2 is the default, pods can no longer use node credentials by default, which provides additional security. To allow pods to use the node role, you need to set <code>HttpPutResponseHopLimit</code> to 2. A better way to give your applications access to AWS resources is to use <a target="_blank" href="https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html">EKS Pod Identity</a> or <a target="_blank" href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html">IRSA</a>.</p>
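<p>For instance, the hop limit can be raised on a running instance with the AWS CLI (the instance ID here is a placeholder):</p>
<pre><code class="lang-yaml">➜  ~ aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-put-response-hop-limit 2
</code></pre>
<p>For launch templates, the equivalent setting lives under <code>MetadataOptions.HttpPutResponseHopLimit</code>.</p>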
<h2 id="heading-conclusion">Conclusion</h2>
<p>There are many other changes that come with AL2023 that can affect your Kubernetes workloads, such as the default version of the VPC CNI and its support for DNF packages. Overall, AL2023 marks a significant step towards a Kubernetes-optimized Linux OS like Talos.</p>
]]></content:encoded></item><item><title><![CDATA[Hit me with a BRIC(S)]]></title><description><![CDATA[If I think of a clay brick, it takes me back to my childhood, where you could see multiple cottage industries running kilns in rural India, making bricks at a tiny scale. The process is very simple: dig up the clay, clean it and mix it with water to make a un...]]></description><link>https://srujanpakanati.com/hit-me-with-a-brics</link><guid isPermaLink="true">https://srujanpakanati.com/hit-me-with-a-brics</guid><category><![CDATA[trade]]></category><category><![CDATA[Geopolitics]]></category><category><![CDATA[Trump]]></category><category><![CDATA[Economy]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Sat, 09 Aug 2025 23:44:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bOGNLYidRXY/upload/cd84b1ef85e4f653aa2007678b5c75a8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If I think of a clay brick, it takes me back to my childhood, where you could see multiple cottage industries running kilns in rural India, making bricks at a tiny scale. The process is very simple: dig up the clay, clean it and mix it with water to make a uniform lump, put the lump in molds and heat it in a kiln. That’s how you make a strong brick out of porous, weak clay.</p>
<p>When it comes to BRICS as an organization, it is neither a political, economic nor military organization. There is no common culture, ideology, or political or economic system among the member countries. Two out of five are authoritarian regimes with communist ideologies. There are border disputes between India and China, and between Russia and China. China has grown into an economic powerhouse, so there is no possibility of creating alternative economic institutions like a common currency or payment system without accepting Chinese hegemony. All the other countries are of the opinion that it is better to deal with a known evil than an unknown one. In essence, no country is interested in putting in the effort to make a uniform clay lump, mould it and heat it to make strong BRICS.</p>
<blockquote>
<p>BRICS is just a bunch of misfits having tea once a year to discuss their hardships with Western systems.</p>
</blockquote>
<p>Here comes Trump, who started out with “using tariffs to re-industrialize the USA” and has now moved to “using tariffs to release Bolsonaro and to stop wars“. This was started by Biden when he weaponized the SWIFT payment system and seized Russian assets, and it is now continued by Trump. Rather than being a neutral marketplace, the USA is choosing sides, undoing years of American progress in creating global financial institutions like the petrodollar and the SWIFT payment system. No one is trying to de-dollarize the world economy except the USA itself.</p>
<p>When President Trump said that he would end the Russia-Ukraine war, everyone thought he would do it through negotiations, not by strong-arming India into stopping its purchases of Russian oil. The USA has to make up its mind on who its biggest enemy is. Just imagine for a second: the USA cannot stop China from buying Russian oil. If India stops buying it, global crude prices would rise while Russia would be forced to sell its oil at an even steeper discount to China. China would and can buy more oil than it needs, and it has the capacity to build additional facilities to store and process crude oil at a breakneck pace. India cannot afford oil on the international market, which results in stifled growth and high inflation, while its competitor China buys cheap oil at 50 cents on the dollar.</p>
<blockquote>
<p>There is no wrath like a Modi scorned.</p>
</blockquote>
<p>Due to all these misadventures by Trump, BRICS now has a purpose to fulfill, an agenda to achieve. You cannot keep bullying countries and expect nothing to happen. You cannot keep insulting visiting heads of state without any repercussions. Modi is being forced to shake hands with China at the upcoming SCO Summit due to unprovoked provocation by Trump. Remember, India did not even respond to the “Elephant-Dragon Tango“ invite extended by Xi to India’s counterpart Draupadi Murmu on April 1st, 2025. There is no choice left for the BRICS nations other than to retaliate against the tariffs in some shape or form. Their options range from all the BRICS nations imposing reciprocal tariffs, either the full 50% or half of it at 25%, to putting tariffs on American service exports to those nations. Trump not only shaped the clay brick but also heated it enough to harden it and handed it over to China, only to get hit in the face.</p>
<blockquote>
<p>An object at rest will remain at rest, and an object in motion will remain in motion at a constant velocity unless acted upon by a net external force.</p>
</blockquote>
<p>This is Newton’s first law of motion, and it applies in geopolitics too. There is a thing called the status quo, which you would want to change only if it is not working for you. Unluckily for the USA, its President wants the only thing working for the US to be removed. As one starts to breathe even before one’s conscience fully develops, one only realizes that air is the most important thing for life when one enters a medium that lacks it. Sometimes people do not actually realize what is working for them until they lack it.</p>
]]></content:encoded></item><item><title><![CDATA[Dynamic Resource Allocation for better device usage efficiency]]></title><description><![CDATA[Introduction
Dynamic Resource Allocation(DRA) is a new feature in Kubernetes to address the pain point of managing assignment of hardware devices like GPUs in a better fashion. This can be understood as higher level generalization of storage allocati...]]></description><link>https://srujanpakanati.com/dynamic-resource-allocation-for-better-device-usage-efficiency</link><guid isPermaLink="true">https://srujanpakanati.com/dynamic-resource-allocation-for-better-device-usage-efficiency</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[GPU]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 26 May 2025 22:03:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/N_lrIeCWgw0/upload/374aef1cc4239a1c4c71d0bf2609fbb5.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Dynamic Resource Allocation (DRA) is a new feature in Kubernetes that addresses the pain point of managing the assignment of hardware devices like GPUs in a better fashion. It can be understood as a higher-level generalization of the storage allocation pattern in K8s, which uses APIs like StorageClass, PV and PVC. Similarly, you’ll have DeviceClass, ResourceClaimTemplate and ResourceClaim. I tried this feature with the help of the <a target="_blank" href="https://github.com/kubernetes-sigs/dra-example-driver/tree/main">dra-example-driver</a> created by kubernetes-sigs.</p>
<h2 id="heading-the-need-for-dra">The Need for DRA</h2>
<p>Since the AI boom, there has been an increased focus in K8s on addressing the pain points of AI/ML workloads. One such problem is K8s&#x27; ability to allocate GPUs, schedule pods on the nodes that have GPUs, share resources between pods, and so on. This feature is not limited to GPUs; it also covers other hardware, like network interfaces, for which one previously had to rely on solutions like Multus.</p>
<p>DRA is a step in the right direction to close the gap between hardware and the pods that use it.</p>
<h2 id="heading-how-it-works">How it works</h2>
<p>Here is how the Kubernetes documentation defines the APIs:</p>
<blockquote>
<p><strong>ResourceClaim</strong></p>
<p>Describes a request for access to resources in the cluster, for use by workloads. For example, if a workload needs an accelerator device with specific properties, this is how that request is expressed. The status stanza tracks whether this claim has been satisfied and what specific resources have been allocated.</p>
<p><strong>ResourceClaimTemplate</strong></p>
<p>Defines the spec and some metadata for creating ResourceClaims. Created by a user when deploying a workload. The per-Pod ResourceClaims are then created and removed by Kubernetes automatically.</p>
<p><strong>DeviceClass</strong></p>
<p>Contains pre-defined selection criteria for certain devices and configuration for them. DeviceClasses are created by a cluster administrator when installing a resource driver. Each request to allocate a device in a ResourceClaim must reference exactly one DeviceClass.</p>
<p><strong>ResourceSlice</strong></p>
<p>Used by DRA drivers to publish information about resources (typically devices) that are available in the cluster.</p>
<p><strong>DeviceTaintRule</strong></p>
<p>Used by admins or control plane components to add device taints to the devices described in ResourceSlices.</p>
</blockquote>
<p>So basically, when the device driver is installed, it also comes with its own DeviceClass (like a StorageClass) and publishes ResourceSlices that track which devices of that DeviceClass are available. Pods then reference either a ResourceClaim or a ResourceClaimTemplate.</p>
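<p>To make this concrete, here is a minimal sketch of a ResourceClaimTemplate and a Pod that consumes it. The DeviceClass name <code>gpu.example.com</code> is assumed to be the one installed by dra-example-driver; the template and claim names are illustrative:</p>
<pre><code class="lang-yaml">apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: ctr
    image: ubuntu:22.04
    command: ["sleep", "infinity"]
    resources:
      claims:
      - name: gpu   # refers to the entry under spec.resourceClaims
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
</code></pre>
<p>Kubernetes creates a per-Pod ResourceClaim from the template, the scheduler allocates a matching device from a ResourceSlice, and the kubelet plugin prepares it for the container.</p>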
<h2 id="heading-testing-dra">Testing DRA</h2>
<p>I created a kind cluster using the latest version, 1.33. We need to explicitly enable the DRA feature. Here is the kind config:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">kind:</span> <span class="hljs-string">Cluster</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">kind.x-k8s.io/v1alpha4</span>
<span class="hljs-attr">featureGates:</span>
  <span class="hljs-attr">DynamicResourceAllocation:</span> <span class="hljs-literal">true</span>
<span class="hljs-attr">containerdConfigPatches:</span>
<span class="hljs-comment"># Enable CDI as described in</span>
<span class="hljs-comment"># https://tags.cncf.io/container-device-interface#containerd-configuration</span>
<span class="hljs-bullet">-</span> <span class="hljs-string">|-
  [plugins."io.containerd.grpc.v1.cri"]
    enable_cdi = true
</span><span class="hljs-attr">nodes:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">role:</span> <span class="hljs-string">control-plane</span>
  <span class="hljs-attr">kubeadmConfigPatches:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">|
    kind: ClusterConfiguration
    apiServer:
        extraArgs:
          runtime-config: "resource.k8s.io/v1beta1=true"
    scheduler:
        extraArgs:
          v: "1"
    controllerManager:
        extraArgs:
          v: "1"
</span>  <span class="hljs-bullet">-</span> <span class="hljs-string">|
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        v: "1"
</span><span class="hljs-bullet">-</span> <span class="hljs-attr">role:</span> <span class="hljs-string">worker</span>
  <span class="hljs-attr">kubeadmConfigPatches:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-string">|
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        v: "1"</span>
</code></pre>
<p>Once the cluster is created, we need to install the dra-example-driver <a target="_blank" href="https://github.com/kubernetes-sigs/dra-example-driver/blob/main/deployments/helm/dra-example-driver/Chart.yaml">Helm chart</a>. This creates a DaemonSet:</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">kind</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">daemonsets.apps</span> <span class="hljs-string">-n</span> <span class="hljs-string">dra-example-driver</span> 
<span class="hljs-string">NAME</span>                               <span class="hljs-string">DESIRED</span>   <span class="hljs-string">CURRENT</span>   <span class="hljs-string">READY</span>   <span class="hljs-string">UP-TO-DATE</span>   <span class="hljs-string">AVAILABLE</span>   <span class="hljs-string">NODE</span> <span class="hljs-string">SELECTOR</span>   <span class="hljs-string">AGE</span>
<span class="hljs-string">dra-example-driver-kubeletplugin</span>   <span class="hljs-number">1</span>         <span class="hljs-number">1</span>         <span class="hljs-number">1</span>       <span class="hljs-number">1</span>            <span class="hljs-number">1</span>           <span class="hljs-string">&lt;none&gt;</span>          <span class="hljs-string">8h</span>
</code></pre>
<p>You can now see the DeviceClass by running the <code>k get deviceclasses</code> command, and you can also list the ResourceSlice available for that DeviceClass:</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">dra-example-driver</span> <span class="hljs-string">git:(main)</span> <span class="hljs-string">kubectl</span> <span class="hljs-string">get</span> <span class="hljs-string">resourceslice</span> <span class="hljs-string">-o</span> <span class="hljs-string">yaml</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">items:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">resource.k8s.io/v1beta1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">ResourceSlice</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-string">"2025-05-25T20:02:27Z"</span>
    <span class="hljs-attr">generateName:</span> <span class="hljs-string">kind-worker-gpu.example.com-</span>
    <span class="hljs-attr">generation:</span> <span class="hljs-number">1</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">kind-worker-gpu.example.com-tkbqm</span>
    <span class="hljs-attr">ownerReferences:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
      <span class="hljs-attr">controller:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">Node</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">kind-worker</span>
      <span class="hljs-attr">uid:</span> <span class="hljs-string">3c2af9d8-c6e1-449d-8869-85bc68131687</span>
    <span class="hljs-attr">resourceVersion:</span> <span class="hljs-string">"1010"</span>
    <span class="hljs-attr">uid:</span> <span class="hljs-string">ffeeda3f-b43b-4be9-bb8f-e6937284907b</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">devices:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">6</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-121b8219-b8d6-015c-b2eb-1e320ee07510</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-6</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">7</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-f270be4e-7cd6-da75-39e7-b707122f9b70</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-7</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">0</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-4cbf87f3-433e-6717-5588-c33e6886832f</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-0</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">1</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-58bd415e-dee8-f0a5-ca03-02d000554b1a</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-1</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">2</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-6ab67185-8eff-3a23-32fd-75bfbe37b488</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-2</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">3</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-6b77fb80-2d68-809d-4bf1-285e5f47dcc5</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-3</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">4</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-417d66cd-4546-0786-59a3-ef7eb54c564d</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-4</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">basic:</span>
        <span class="hljs-attr">attributes:</span>
          <span class="hljs-attr">driverVersion:</span>
            <span class="hljs-attr">version:</span> <span class="hljs-number">1.0</span><span class="hljs-number">.0</span>
          <span class="hljs-attr">index:</span>
            <span class="hljs-attr">int:</span> <span class="hljs-number">5</span>
          <span class="hljs-attr">model:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">LATEST-GPU-MODEL</span>
          <span class="hljs-attr">uuid:</span>
            <span class="hljs-attr">string:</span> <span class="hljs-string">gpu-f0fdf728-dccb-f484-bbf5-33f63a90b820</span>
        <span class="hljs-attr">capacity:</span>
          <span class="hljs-attr">memory:</span>
            <span class="hljs-attr">value:</span> <span class="hljs-string">80Gi</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">gpu-5</span>
    <span class="hljs-attr">driver:</span> <span class="hljs-string">gpu.example.com</span>
    <span class="hljs-attr">nodeName:</span> <span class="hljs-string">kind-worker</span>
    <span class="hljs-attr">pool:</span>
      <span class="hljs-attr">generation:</span> <span class="hljs-number">1</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">kind-worker</span>
      <span class="hljs-attr">resourceSliceCount:</span> <span class="hljs-number">1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">List</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">resourceVersion:</span> <span class="hljs-string">""</span>
</code></pre>
<p>It lists all the GPUs available in the cluster. Next, I ran the example pods from the repo and observed their behaviour.<br />For example, <a target="_blank" href="https://github.com/kubernetes-sigs/dra-example-driver/blob/main/demo/gpu-test1.yaml">gpu-test-1</a>:</p>
<blockquote>
<p>Two pods, one container each</p>
<p>Each container asking for 1 distinct GPU  </p>
</blockquote>
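<p>The manifest for this test is, roughly, a ResourceClaimTemplate plus two pods that each reference it. Here is a condensed sketch based on the repo (API versions and exact field names depend on your Kubernetes release; the image and command are illustrative):</p>
<pre><code class="lang-yaml">apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com   # DeviceClass served by the example driver
---
apiVersion: v1
kind: Pod
metadata:
  name: pod0
spec:
  containers:
  - name: ctr
    image: ubuntu:22.04
    command: ["bash", "-c", "export; sleep 9999"]  # dump env vars, then idle
    resources:
      claims:
      - name: gpu                            # consume the claim declared below
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu    # a fresh claim is generated per pod
</code></pre>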
<p>Once I applied the manifest, here are the ResourceClaims (RCs) and ResourceClaimTemplates (RCTs) that were created:</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">demo</span> <span class="hljs-string">git:(main)</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">resourceclaim</span>
<span class="hljs-string">NAME</span>             <span class="hljs-string">STATE</span>                <span class="hljs-string">AGE</span>
<span class="hljs-string">pod0-gpu-724sp</span>   <span class="hljs-string">allocated,reserved</span>   <span class="hljs-string">106s</span>
<span class="hljs-string">pod1-gpu-kvs27</span>   <span class="hljs-string">allocated,reserved</span>   <span class="hljs-string">106s</span>
<span class="hljs-string">➜</span>  <span class="hljs-string">demo</span> <span class="hljs-string">git:(main)</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">resourceclaimtemplates.resource.k8s.io</span> 
<span class="hljs-string">NAME</span>         <span class="hljs-string">AGE</span>
<span class="hljs-string">single-gpu</span>   <span class="hljs-string">2m4s</span>
</code></pre>
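<p>To see which device a claim was actually bound to, you can inspect the claim's status. The excerpt below is illustrative of the shape of the allocation stanza, not copied from my cluster:</p>
<pre><code class="lang-yaml"># kubectl get resourceclaim pod0-gpu-724sp -o yaml  (status excerpt)
status:
  allocation:
    devices:
      results:
      - request: gpu                 # which request in the claim was satisfied
        driver: gpu.example.com      # driver that provided the device
        pool: kind-worker            # pool (node) the device came from
        device: gpu-6                # the allocated device
</code></pre>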
<p>Here are the logs from one of the pods, showing the GPU it was allocated:</p>
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">demo</span> <span class="hljs-string">git:(main)</span> <span class="hljs-string">k</span> <span class="hljs-string">logs</span> <span class="hljs-string">pod0</span> 
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">DRA_RESOURCE_DRIVER_NAME="gpu.example.com"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">GPU_DEVICE_6="gpu-6"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">GPU_DEVICE_6_RESOURCE_CLAIM="0b66bdc4-7112-4eb7-a371-6df9e0a08167"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">GPU_DEVICE_6_SHARING_STRATEGY="TimeSlicing"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">GPU_DEVICE_6_TIMESLICE_INTERVAL="Default"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">HOME="/root"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">HOSTNAME="pod0"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_NODE_NAME="kind-worker"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_PORT="tcp://10.96.0.1:443"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_PORT_443_TCP="tcp://10.96.0.1:443"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_PORT_443_TCP_ADDR="10.96.0.1"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_PORT_443_TCP_PORT="443"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_PORT_443_TCP_PROTO="tcp"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_SERVICE_HOST="10.96.0.1"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_SERVICE_PORT="443"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">KUBERNETES_SERVICE_PORT_HTTPS="443"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">OLDPWD</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">PWD="/"</span>
<span class="hljs-string">declare</span> <span class="hljs-string">-x</span> <span class="hljs-string">SHLVL="1"</span>
</code></pre>
<p>I tried the other definition files in the repo as well, and everything worked as expected.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Though this is just example testing, I believe DRA will work just as well in the real world. DRA is still in its early stages, so a lot of new features can still be added, and I expect cloud providers to come up with new device classes to make their hardware available to pods.</p>
]]></content:encoded></item><item><title><![CDATA[Is this Liberation?]]></title><description><![CDATA[Howdy folks! Hope everyone is having fun with their Studio Ghibli style photos which are cute. I made some progress in career, hopefully will share the good news soon. Kubecon London has concluded. All my screen time goes to listening to their talks....]]></description><link>https://srujanpakanati.com/is-this-liberation</link><guid isPermaLink="true">https://srujanpakanati.com/is-this-liberation</guid><category><![CDATA[#tariff]]></category><category><![CDATA[Trump]]></category><category><![CDATA[trade]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Tue, 08 Apr 2025 05:17:32 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/bAQH53VquTc/upload/4efeb7e8c3f0e73f14e19aa7808475a3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Howdy folks! Hope everyone is having fun with their Studio Ghibli-style photos, which are cute. I made some progress in my career and hopefully will share the good news soon. <a target="_blank" href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/">KubeCon</a> London has concluded, and all my screen time goes to listening to the talks. Life is good, but there is always room for improvement. In this blog post, I want to dive deep into some topics I have been pondering for quite some time.</p>
<blockquote>
<p>Disclaimer: I am an individual working in the tech sector. I never academically pursued immigration or economics. The data may be incorrect and the arguments may be flawed, but the intent is genuine. AI may have been used to read the source material for this post, but no part of this blog is AI-written (as one can tell from the poor grammar😜).</p>
</blockquote>
<h2 id="heading-free-privileged-speech"><s>Free</s> Privileged Speech</h2>
<p>Yes, we can all agree that the ability to immigrate to a country is a privilege and not a right. The US has always been a diverse country where immigrants, be it Italians, Irish, Chinese, Arabs or Indians, can thrive with hard work and perseverance to become a cornerstone of the nation. But I always thought free speech is a right for everyone who is legally present. As Emma Lazarus’ iconic poem engraved on the Statue of Liberty goes:</p>
<blockquote>
<p>Not like the brazen giant of Greek fame,<br />With conquering limbs astride from land to land;<br />Here at our sea-washed, sunset gates shall stand<br />A mighty woman with a torch, whose flame<br />Is the imprisoned lightning, and her name<br />Mother of Exiles. From her beacon-hand<br />Glows world-wide welcome; her mild eyes command<br />The air-bridged harbor that twin cities frame.<br />“Keep, ancient lands, your storied pomp!” cries she<br />With silent lips. “<mark>Give me your tired, your poor,<br />Your huddled masses yearning to breathe free</mark>,<br />The wretched refuse of your teeming shore.<br />Send these, the homeless, tempest-tost to me,<br />I lift my lamp beside the golden door!”</p>
</blockquote>
<p>Maybe this has to be updated to “yearning to breathe free, as long as you do not criticize the government with that freedom“. Ironically, Emma Lazarus was herself a Jewish American. Given the government’s current acts, if a privilege can be revoked for exercising a right, <strong><em>isn’t the right a privilege now?</em></strong></p>
<h2 id="heading-the-l-day">The L Day</h2>
<p>Moving on to a lighter topic, I have always enjoyed President Trump’s press meets. He is candid, fun and whimsical. I always believed he has the best interests at heart, but his approaches are far from ideal. On April 2nd he liberated Americans from the shackles of free trade by imposing reciprocal tariffs on all countries, including uninhabited islands like <em>Heard and McDonald Islands</em>, which calls into question the amount of thought that went into a matter of such stakes. I have always believed that tariffs are, in principle, an important tool to build and strengthen local industries, which pays off exponentially by keeping money inside the economy. But this sort of implementation begs the question: will this work?</p>
<blockquote>
<p>I would prefer to buy an American-made hat at $50, which may earn $5 in net income for an American worker, over a $25 cap made in China, which may earn $0.50 for a Chinese worker, because…</p>
<ul>
<li><p>Money earned by an American becomes income for at least 6 other people. Imagine company → worker → restaurant employee → gas station employee → coffee shop employee → etc.</p>
</li>
<li><p>The Chinese government takes a major chunk of the sale, which it in turn uses for global dominance: spyware masquerading as weather balloons, <a target="_blank" href="https://www.bbc.com/news/world-asia-62558767">spy vessels</a> masquerading as research ships, etc.</p>
</li>
<li><p>The CCP promises its people some sort of global economic dominance, as it used to be before the “Century of Humiliation“. This forces the CCP to save face in front of its people by doing things like the “Belt and Road“ initiative, “EV supply chain dominance“, etc.</p>
</li>
<li><p>My concern is that the lack of democracy will take the country down the wrong path for prolonged periods of time, and when people start becoming wary of their government, it can try to keep a false sense of nationalism intact by going to war with other nations. Think of the Falklands War, the Kuwait war, etc. The CCP has quite a few targets: Taiwan, India, Japan, the Philippines and Russia.</p>
</li>
</ul>
</blockquote>
<p>I have always thought that the American government left its workers high and dry in the pursuit of cheaper products, using its consumers as a bargaining chip in the geopolitical game for global dominance. This has led to the awful demise of many industries, like auto and manufacturing, rendering the majority of the American middle class reliant on the service sector. Instead of competing in production in global markets, the USA gave up on raising a generation of highly skilled workers, automation and innovation. It is a sad reality that the nation that pioneered the assembly line has given up on its industry for cheaper alternatives.</p>
<h2 id="heading-an-indian-perspective">An Indian Perspective</h2>
<p>I lived in India for 25 years, a country with very high tariffs on electronics and luxury goods, sometimes as much as 100%. I can only justify tariffs in India as a way of raising revenue and narrowing the orifice for the flow of Chinese products, with whom India has been at odds since the Sino-Indian war of 1962. There is a popular way of categorizing Indian consumers into 3 groups: Europe, Brazil and Africa. The Europe category is small, maybe 5 crore (50 million), who are not much worried about tariffs since their purchases are discretionary in nature; they buy luxury goods anyway for their comfort and to garner respect. The African section of the Indian economy comprises around 100 crore (1 billion) who have nothing to do with products of foreign origin. The Brazilian section is the middle class, stuck between a rock and a hard place: they want to (and sometimes have to) buy electronics or luxury goods, suffer due to these high tariffs, and often try to emigrate to western nations for a better quality of life and higher purchasing power.</p>
<p>None of these tariffs moved any production to India until recently. Due to the rise of the middle-class population, access to cheaper skilled labor, and government initiatives like “Make in India“ and “PLI“, there is now growth in production. iPhones have become cheaper in India compared to the USA.</p>
<h2 id="heading-problems-in-current-implementation">Problems in current implementation</h2>
<p>On a bigger picture, the USA does not want to be the global policeman anymore and is trading soft power for prosperity, which leaves a void and leads to the emergence of a new global order. As many problems as there are with the current global order, the USA has played a key role in a relatively peaceful world since WW2, and I am personally grateful for that. Gen Z is struggling to make ends meet and wants to make a decent living. The USA’s changing priorities can help a lot with that.</p>
<p>There are many problems with the current tariff implementation, like…</p>
<ul>
<li><p>Trade deficit should not be the prime driver of tariffs. Things like Italian cheese, Scotch and Champagne will always have better quality at a cheaper price.</p>
</li>
<li><p>There are no long-term plans for the tariffs. Are they going to be here after Trump? Can companies gamble on moving production back to the USA? Are the declining population and a struggling generation a concern?</p>
</li>
<li><p>Tariffs are not industry-targeted. For example, you could try to move the auto and semiconductor industries back by piggybacking on CHIPS Act investments.</p>
</li>
<li><p>Skilled labor is always a concern, and the current higher education system is far from ideal to address this problem. Are there any program-linked incentives for universities?</p>
</li>
</ul>
<h2 id="heading-an-ideal-way">An Ideal way</h2>
<p>The best way, in my opinion, to reduce the trade deficit and make the US an industrial powerhouse again is by categorizing the deficit into 3 groups:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Group</td><td>Industries</td><td>Ideal timeline</td><td>Remarks</td></tr>
</thead>
<tbody>
<tr>
<td>Green (low-hanging fruit)</td><td>Food processing, apparel, etc.</td><td>2-4 years</td><td>Relatively easy to set up and can take advantage of automation, AI, robotics and skilled labor.</td></tr>
<tr>
<td>Yellow (relatively hard)</td><td>Semiconductors, auto and electronics</td><td>4-8 years</td><td>These industries used to be in the USA but moved away for cheaper costs, so they can be moved back relatively easily. This also helps keep Chinese growth at bay. Challenges include the availability of highly skilled labor, where migrants can be leveraged.</td></tr>
<tr>
<td>Red (impossible)</td><td>Traditional specialties</td><td>-</td><td>Good if there are alternatives, but these don’t have to be targeted.</td></tr>
</tbody>
</table>
</div><p>The government should not treat tariffs as a source of income (please do not get addicted) and should pump the money raised back into the respective industries as different forms of incentives. Imagine tax rebates, incentives for universities to run industry-specific programs, and income tax discounts for highly skilled workers participating in inter-state migration.</p>
<p>More tariffs collected for an industry → More money gets pumped as incentives → Faster progression to target</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I am, and always will be, grateful to this country. The people are patriotic, hardworking and warm, and have always treated me with humility. I hate to see them struggle while their government wastes money on pointless pursuits and lacks a clearer vision. You simply cannot perform surgery with an axe. It takes dexterity, perseverance and, most of all… time. There is an old Indian saying I would like to quote in the spirit of globalization.</p>
<blockquote>
<h2 id="heading-vasathhava-katamabma-world-is-one-big-family">वसुधैव कुटुम्बकम् ~ The world is one big family</h2>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[OpenHands🤲: The Flawless Open-Source AI Coding Companion]]></title><description><![CDATA[Boy-oh-boy where do I start, this week has been overwhelming for me. I celebrated my 29th birthday, every new morning is a blessing. Grok3 got released and it keeps knocking my socks off. The best model till date. It has surpassed the Llmarena leader...]]></description><link>https://srujanpakanati.com/openhands-the-flawless-open-source-ai-coding-companion</link><guid isPermaLink="true">https://srujanpakanati.com/openhands-the-flawless-open-source-ai-coding-companion</guid><category><![CDATA[openhands]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[aiagents]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 24 Feb 2025 05:00:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Oy2yXvl1WLg/upload/c4e359d85e1d0ac2fa05488e5aa6e2d4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Boy-oh-boy, where do I start? This week has been overwhelming for me. I celebrated my 29th birthday; every new morning is a blessing. <a target="_blank" href="https://grok.com/">Grok3</a> got released and keeps knocking my socks off: the best model to date. It has surpassed the LMArena <a target="_blank" href="https://lmarena.ai/?gad_source=1&amp;gclid=CjwKCAiAiOa9BhBqEiwABCdG86S5la9pzK1wLjQgm4SsUwEEOpEAg3ary1gQPejIbN9IUWjTNz6pBhoCS7AQAvD_BwE">leaderboard</a>, and rightly so. I never imagined that x.ai would hit it out of the park. I have been tinkering with <a target="_blank" href="https://github.com/All-Hands-AI/OpenHands">OpenHands</a> all week and solved an interesting use case. Let's dive deeper into it.</p>
<h2 id="heading-a-happy-accident">A happy accident</h2>
<p>I started watching a YouTube <a target="_blank" href="https://www.youtube.com/watch?v=WKF__cJTxvg&amp;t=1299s">video</a> from sentdex about his experience using OpenHands, and I was intrigued. Having followed the Devin saga and its ridiculous $500/month price, I was excited to try OpenHands, as it solves the same use case of using a coding agent to accelerate your dev work. I tried it, and it is delightfully good at doing what you ask it to do. If you can articulate clearly what needs to be done (as is the case with every LLM), it can do it for you.</p>
<h2 id="heading-openhands-in-action">Openhands in action</h2>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=RUF51sC8Dxs">https://www.youtube.com/watch?v=RUF51sC8Dxs</a></div>
<p>As you can see from the video, it can get a lot of things done. You have the choice of whichever LLM you want to use to solve the tasks. I am using gemini2.0-flash-exp because it is free and I will not get rate-limited; the ideal candidate would be claude-3.5-sonnet. The agent is only as good as the LLM it is using. You can create a GitHub bot, provide its token, and ask OpenHands to raise PRs so that you can review them later. OpenHands would be an excellent sidekick for every dev out there. Being a DevOps engineer, I would do anything to alleviate their pain points (poor devs😆) by equipping them with the best possible tooling at the least additional cost.</p>
<h2 id="heading-reading-tea-leaves">Reading tea leaves</h2>
<p>Okay, my hunch is that in the future, open-source models will be as good as closed-source ones, like R1 versus o1. And every cloud provider out there will charge more for inference by hosting these models (both open and closed source) than self-hosting would cost. Right now, every company is experimenting with these models as a base layer and building applications on top of them, and cloud providers are offering foundational models for dirt cheap. Once the applications reach scale and cloud providers raise their prices, that is when companies will realize that self-hosting models would have been the better option, and they will end up doing significant code changes and rewiring. If companies explore options to self-host from the get-go, they can save tons in the coming years.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Model</td><td>Hosting</td><td>Implication</td><td>Example</td></tr>
</thead>
<tbody>
<tr>
<td>closed source model</td><td>only hosted on their hardware</td><td>no other choice but to use them</td><td>Think Grok3</td></tr>
<tr>
<td>closed source model</td><td>neutral cloud providers can host</td><td>You have limited choices.</td><td>Think claude-sonnet</td></tr>
<tr>
<td>open source model</td><td>neutral cloud providers offering as a service</td><td>Here is where you have to think of self hosting it on cloud as Kubernetes cluster</td><td>Think llama3.3 on Groq</td></tr>
<tr>
<td>open source model</td><td>self hosting on cloud</td><td>more control and freedom of choice.</td><td>Think llmariner or Kuberay</td></tr>
<tr>
<td>open source model</td><td>own hardware</td><td>C’mon let’s be practical😁</td><td></td></tr>
</tbody>
</table>
</div><p>We are in the subscription era, both as consumers and as businesses. Imagine you run a company where you chose Datadog and Splunk over OTel/Prometheus, the ELK stack and Grafana. During an upward trajectory everything looks fine, but what if your business plateaus and you are looking to cut costs? What if you are on a downward trajectory and have to cut costs? You cannot hire devs at that point to rewrite apps just so you can cut costs. You should not grab for leaves after your hands are already burnt (a poorly translated idiomatic expression in Telugu).</p>
<blockquote>
<p>I know, I digress a lot😅. Let’s get back to OpenHands</p>
</blockquote>
<h2 id="heading-my-use-case">My use-case</h2>
<p>Currently, you can run OpenHands locally using the docker run command:</p>
<pre><code class="lang-plaintext">sudo docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands-state:/.openhands-state \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.24
</code></pre>
<p>But what if I want to run it as a Deployment in my Kubernetes cluster and serve developers from there? If I run models in the same cluster using <a target="_blank" href="https://github.com/llmariner/llmariner">llmariner</a> (an excellent open-source solution to self-host, fine-tune and train GenAI models; please do check it out), it would cut my latency by a lot. So I raised a <a target="_blank" href="https://github.com/All-Hands-AI/OpenHands/issues/6864#issuecomment-2673508015">bug</a> on the OpenHands GitHub page requesting a pod definition file, and here is the response I got from the maintainers:</p>
<blockquote>
<p>Hi <a class="user-mention" href="https://hashnode.com/@https://github.com/HighonAces">@HighonAces</a> <a target="_blank" href="https://github.com/HighonAces">, we don't</a> have an open source version of deploying OpenHands on a Kubernetes cluster, but at All Hands we have a (paid) solution for deploying OpenHands to larger teams. If you'd be interested in having us help you deploy to a team please jump on the <a target="_blank" href="https://join.slack.com/t/openhands-ai/shared_invite/zt-2ypg5jweb-d~6hObZDbXi_HEL8PDrbHg">OpenHands</a> slack and ping me <a target="_blank" href="https://join.slack.com/t/openhands-ai/shared_invite/zt-2ypg5jweb-d~6hObZDbXi_HEL8PDrbHg"></a>and Rob*** and we could discuss more.</p>
</blockquote>
<p>Honestly, I have no qualms about it. They developed a product, the developers from All Hands might have contributed a ton, and they have every right to steer the path of the OpenHands project as they see fit.</p>
<h2 id="heading-the-kubehustle">The kubehustle</h2>
<p>So I took things into my own hands. The challenging thing about deploying OpenHands on K8s is that every time you initiate an agent session in the browser, it creates a new Docker container to serve as the agent, which is different from typical use cases. It also directly mounts /var/run/docker.sock into the container, which is a strict no-go from a security perspective. It took me multiple attempts to get it right. Here is the pod definition:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-app-v2</span>  <span class="hljs-comment"># Changed to avoid conflict</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">volumes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">docker-socket</span>
      <span class="hljs-attr">hostPath:</span>
        <span class="hljs-attr">path:</span> <span class="hljs-string">/var/run/docker.sock</span>
        <span class="hljs-attr">type:</span> <span class="hljs-string">Socket</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-state</span>
      <span class="hljs-attr">persistentVolumeClaim:</span>
        <span class="hljs-attr">claimName:</span> <span class="hljs-string">openhands-state-pvc</span>

  <span class="hljs-attr">securityContext:</span>
    <span class="hljs-attr">fsGroup:</span> <span class="hljs-number">42420</span>

  <span class="hljs-attr">containers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-app</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">docker.all-hands.dev/all-hands-ai/openhands:0.24</span>
      <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">Always</span>
      <span class="hljs-attr">securityContext:</span>
        <span class="hljs-attr">privileged:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3000</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SANDBOX_RUNTIME_CONTAINER_IMAGE</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik"</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">LOG_ALL_EVENTS</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"true"</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SANDBOX_HOST</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"172.17.0.1"</span>  <span class="hljs-comment"># Replace with your host’s Docker bridge IP</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SANDBOX_PORT</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"32315"</span>       <span class="hljs-comment"># Explicitly set the port</span>
      <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">docker-socket</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/run/docker.sock</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-state</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/openhands-state</span>  <span class="hljs-comment"># Adjusted to a typical path</span>

  <span class="hljs-comment"># Optional: Use hostNetwork to simplify access</span>
  <span class="hljs-attr">hostNetwork:</span> <span class="hljs-literal">true</span>
</code></pre>
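<p>The definition above references a PVC named openhands-state-pvc for the state volume; a minimal sketch of that claim could look like this (the storage size and access mode are assumptions, adjust them to your setup):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openhands-state-pvc    # referenced by the pod's openhands-state volume
spec:
  accessModes:
  - ReadWriteOnce              # assumption: single-node access is enough
  resources:
    requests:
      storage: 1Gi             # assumption: size to your expected state
</code></pre>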
<p>This pod definition runs a single container and does not address the security concerns. If I were to run this in my company, I would run it on isolated nodes to decrease the threat surface. I was also working on another solution that runs DinD (Docker-in-Docker) as a sidecar container to mitigate the security risks, but it is not working right now.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Pod</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-app-dind</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">volumes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">docker-run</span>
      <span class="hljs-attr">emptyDir:</span> {}  <span class="hljs-comment"># Mount at /var/run for socket</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-state</span>
      <span class="hljs-attr">persistentVolumeClaim:</span>
        <span class="hljs-attr">claimName:</span> <span class="hljs-string">openhands-state-pvc-v2</span>  <span class="hljs-comment"># Updated PVC name for v2</span>

  <span class="hljs-attr">securityContext:</span>
    <span class="hljs-attr">fsGroup:</span> <span class="hljs-number">42420</span>

  <span class="hljs-attr">hostAliases:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">ip:</span> <span class="hljs-string">"127.0.0.1"</span>
      <span class="hljs-attr">hostnames:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">"host.docker.internal"</span>  <span class="hljs-comment"># Map host.docker.internal to localhost</span>

  <span class="hljs-attr">containers:</span>
    <span class="hljs-comment"># Main application container</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-app</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">docker.all-hands.dev/all-hands-ai/openhands:0.24</span>
      <span class="hljs-attr">imagePullPolicy:</span> <span class="hljs-string">Always</span>
      <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">3000</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SANDBOX_RUNTIME_CONTAINER_IMAGE</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"docker.all-hands.dev/all-hands-ai/runtime:0.24-nikolaik"</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">LOG_ALL_EVENTS</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"true"</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DOCKER_HOST</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"unix:///var/run/docker.sock"</span>  <span class="hljs-comment"># Use DinD’s socket</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">SANDBOX_PORT</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">"32315"</span>      <span class="hljs-comment"># Match the error’s port (verify if correct)</span>
      <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">docker-run</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/run</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">openhands-state</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/openhands-state</span>

    <span class="hljs-comment"># DinD sidecar container</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">dind</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">docker:dind</span>
      <span class="hljs-attr">securityContext:</span>
        <span class="hljs-attr">privileged:</span> <span class="hljs-literal">true</span>
      <span class="hljs-attr">args:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">"--host=unix:///var/run/docker.sock"</span>  <span class="hljs-comment"># Disable TCP</span>
      <span class="hljs-attr">volumeMounts:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">docker-run</span>
          <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/var/run</span>
      <span class="hljs-attr">env:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">DOCKER_TLS_CERTDIR</span>
          <span class="hljs-attr">value:</span> <span class="hljs-string">""</span>
      <span class="hljs-attr">resources:</span>
        <span class="hljs-attr">requests:</span>
          <span class="hljs-attr">memory:</span> <span class="hljs-string">"512Mi"</span>
          <span class="hljs-attr">cpu:</span> <span class="hljs-string">"500m"</span>
        <span class="hljs-attr">limits:</span>
          <span class="hljs-attr">memory:</span> <span class="hljs-string">"1Gi"</span>
          <span class="hljs-attr">cpu:</span> <span class="hljs-string">"1"</span>
</code></pre>
<p>This is not working as expected. Even if it did work, you would be trading latency for security, since we would be running nested runtimes. Another concern is that Kubernetes has moved from Docker to containerd as the de facto runtime, so I still have to test this solution with containerd and update the post.</p>
<h2 id="heading-outro">Outro</h2>
<p>The future looks bright for humanity as we usher in the AI era. Increased productivity has generally led to an increased quality of life for everyone. I hope OpenHands becomes the de facto agent in the developer world.</p>
]]></content:encoded></item><item><title><![CDATA[Democratization of coding using Github Copilot]]></title><description><![CDATA[Intro
Hope everyone is doing well. We are ushering into a new era where using AI tools are becoming part and parcel of everyone’s lives. There are promises of increased efficiency or fears of job loss etc. We often read the news of CEOs saying that 5...]]></description><link>https://srujanpakanati.com/democratization-of-coding-using-github-copilot</link><guid isPermaLink="true">https://srujanpakanati.com/democratization-of-coding-using-github-copilot</guid><category><![CDATA[GitHub]]></category><category><![CDATA[copilot]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Sun, 15 Dec 2024 01:32:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/wX2L8L-fGeA/upload/9628a9bcbc31c9c36a092f9d591a9b24.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-intro">Intro</h3>
<p>Hope everyone is doing well. We are ushering into a new era where AI tools are becoming part and parcel of everyone’s lives. There are promises of increased efficiency and fears of job loss. We often read news of <a target="_blank" href="https://fortune.com/2024/10/30/googles-code-ai-sundar-pichai/#:~:text=Over%2025%25%20of%20Google's%20code%20is%20now%20written%20by%20AI,says%20it's%20just%20the%20start&amp;text=Alphabet%20CEO%20Sundar%20Pichai.">CEOs</a> saying that 50% of new code is being written by AI, or that developers are getting 30% more efficient, or Jensen Huang's <a target="_blank" href="https://www.linkedin.com/posts/michaeljkanaan_nvidia-ceo-predicts-the-death-of-coding-activity-7170778383788810240-eXQH#:~:text=Speaking%20at%20the%20World%20Government,on%20tasks%20like%20education%20and">statement</a> that coding will become obsolete. I never really thought about how this would affect me, until it did.</p>
<p>This weekend I got a chance to use GitHub Copilot, and it is darn amazing. As a DevOps Engineer, I am not a very good programmer because my daily work doesn’t require me to code, and it is very difficult to keep up with changes in programming trends and languages. I recently learnt Golang, but I cannot keep my Go skills sharp because I write a piece of code and then nothing for weeks or months altogether. Typically my pace is unbearably slow: I have to open 50 tabs to write 10 lines of code, and I can easily get distracted or lose interest altogether. In general, reading code is easy, but writing it has a very steep learning curve. GitHub Copilot changed everything for me. I can ask for a function, and because it is aware of the context and the file structure, it writes a very accurate function for my needs and also tells me what changes I need to make.</p>
<h3 id="heading-project-fundata">Project Fundata</h3>
<p>If you are a long-term investor, you need access to a company’s financial data, like income statements and cash flow. This is called fundamental data. I used to use TIKR, which costs $20/month. The idea is to get the data from the AlphaVantage API for free, put it in MongoDB, and visualize it using Grafana or something equivalent by writing queries. I wrote a README along these lines and asked Copilot for a project structure. It read my README file and gave me a project structure exactly like I wanted.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734224283543/dfdca779-3452-456f-aa5c-d18959b2eb6c.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-yaml"><span class="hljs-attr">cmd/myapp/main.go:</span> <span class="hljs-string">Entry</span> <span class="hljs-string">point</span> <span class="hljs-string">of</span> <span class="hljs-string">the</span> <span class="hljs-string">application.</span>
<span class="hljs-attr">internal/api/handler.go:</span> <span class="hljs-string">Handles</span> <span class="hljs-string">HTTP</span> <span class="hljs-string">requests.</span>
<span class="hljs-attr">internal/db/mongo.go:</span> <span class="hljs-string">MongoDB</span> <span class="hljs-string">connection</span> <span class="hljs-string">and</span> <span class="hljs-string">operations.</span>
<span class="hljs-attr">internal/models/data.go:</span> <span class="hljs-string">Data</span> <span class="hljs-string">models.</span>
<span class="hljs-attr">internal/services/alpha_vantage.go:</span> <span class="hljs-string">Interactions</span> <span class="hljs-string">with</span> <span class="hljs-string">the</span> <span class="hljs-string">AlphaVantage</span> <span class="hljs-string">API.</span>
<span class="hljs-attr">pkg/utils/utils.go:</span> <span class="hljs-string">Utility</span> <span class="hljs-string">functions.</span>
<span class="hljs-attr">go.mod and go.sum:</span> <span class="hljs-string">Go</span> <span class="hljs-string">modules</span> <span class="hljs-string">files.</span>
<span class="hljs-attr">README.md:</span> <span class="hljs-string">Project</span> <span class="hljs-string">documentation.</span>
</code></pre>
<p>It also gave me this test to help me understand what goes where.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734224595700/8346386c-9058-49fd-9451-cda8e634d6b0.png" alt class="image--center mx-auto" /></p>
<p>In this example, I did not tell it what the struct should look like, but it took that from my README and understood what I was trying to do. This is from my README:</p>
<blockquote>
<p>Currently I have subscribed to TIKR which costs me $20/month to get fundamental data like earnings, cashflow etc. This project is to bring that cost down by having a self-implemented solution using AlphaVantage API and my impeccable coding skills in golang with the help of github copilot.</p>
<ul>
<li><p>Write an application which takes a POST request with a JSON payload like <code>{"symbol": "META", "function": "cash_flow", "api_key":"xxxxx"}</code></p>
</li>
<li><p>Store the data in mongo-db which will be deployed in the same cluster</p>
</li>
<li><p>Use Grafana to visualize the data stored in mongo-db  </p>
</li>
</ul>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734224821305/c971694a-bcdd-4d4e-9104-6a3a2f1f588c.png" alt class="image--center mx-auto" /></p>
<p>Initially, for testing, I put everything in the same file. When I gave this prompt, it split the code into two different files to tidy it up.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734224987491/5db2c78d-e7d4-4aad-a137-e2b46954d0ed.png" alt class="image--center mx-auto" /></p>
<blockquote>
<p>Ensure that the handler function in handler.go is exported (i.e., its name should start with an uppercase letter):</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225073008/3cee4d77-cb54-4409-a2b7-43744e58c3a3.png" alt class="image--center mx-auto" /></p>
<p>It flagged that using hard-coded credentials is a vulnerability.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225164038/eaa0c6ba-7e64-494a-823d-28ff31fb487b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225199280/6ac16f81-609d-4a08-bf92-5e8ab436ab04.png" alt class="image--center mx-auto" /></p>
<p>Initially, the struct was not accurate because Copilot did not know what the incoming data looked like (duh!). So I gave another prompt with the JSON structure, and then it produced an accurate struct.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225346771/a37c563a-5950-419d-9100-c987ddd98438.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225377984/16113bd7-a336-4a4c-a933-65a7adae274b.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225429823/9c988c68-5e92-4fa5-a202-4c3d3fed3967.png" alt class="image--center mx-auto" /></p>
<p>I had never written code to connect to MongoDB in Golang. This was the first time I saw it, and it worked without any errors.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225602049/ffe4e1a9-ce74-4bf0-9ead-818e0f019f21.png" alt class="image--center mx-auto" /></p>
<p>I can ask for explanations.</p>
<blockquote>
<p>Make sure to replace "your_database_name" with the actual name of your MongoDB database.</p>
<p>This code updates the GetDividendInformation function to call the insertDividendData function, which inserts the dividend data into the MongoDB collection.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225784939/0e1a6ca2-b0ec-4859-a4e4-cb52a13a80ff.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225831290/f59eec7a-4975-4337-9c6c-a802e0284663.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734225851754/512ede9e-5dd0-42f2-939a-d47d87c0440c.png" alt class="image--center mx-auto" /></p>
<p>And then I asked it to get the URI from a secret.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1734226181713/2277c30f-a965-4e3f-a96d-7467f5e8dd8e.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-summary">Summary</h3>
<p>I could never have dreamt of writing an entire project in 2 hours. GitHub Copilot can make us extremely efficient and language-agnostic. This opens up a whole new set of possibilities.</p>
]]></content:encoded></item><item><title><![CDATA[Crossplane, Replacing Terraform?]]></title><description><![CDATA[In the previous blog post I have dipped into basics of crossplane and ways to work with it. Since then the project have come up with many new features. Let’s dig into some of the new ways we can use crossplane to make Infra management a little better...]]></description><link>https://srujanpakanati.com/crossplane-replacing-terraform</link><guid isPermaLink="true">https://srujanpakanati.com/crossplane-replacing-terraform</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[crossplane]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 09 Dec 2024 05:32:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1733603280837/3e8efd91-18c9-4784-b0d7-0eb4770787f7.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the previous <a target="_blank" href="https://hashnode.com/post/clt0ppgw3000009l2ajbt7oxn">blog</a> post, I dipped into the basics of Crossplane and ways to work with it. Since then, the project has added many new features. Let’s dig into some of the new ways we can use Crossplane to make infra management a little better. The core advantage of Crossplane is that it not only helps you create infrastructure with ease, it also prevents configuration drift through continuous reconciliation. Another advantage is that you can manage custom API groups using RBAC policies, similar to native Kubernetes APIs.</p>
<h2 id="heading-core-components">Core Components</h2>
<p>Rather than devs raising a Jira ticket to create cloud resources, what if they could create them themselves with a simple YAML file? Just as you write a definition file for a Pod, what if you could write a definition file for a database? Let’s see how it can be done, but before that I have to introduce you to some concepts and definitions. The table below briefly summarizes them.</p>
<table><tbody><tr><td><p><strong><em>Component</em></strong></p></td><td><p><strong><em>Abbv</em></strong></p></td><td><p><strong><em>Scope</em></strong></p></td><td><p><strong><em>Summary</em></strong></p></td><td><p><strong><em>Example</em></strong></p></td></tr><tr><td><p>Provider</p></td><td><p></p></td><td><p>cluster</p></td><td><p>Creates new Kubernetes Custom Resource Definitions for an external service.</p></td><td><p>provider-aws-ec2, provider-aws-eks, provider-gcp-gke</p></td></tr><tr><td><p>ProviderConfig</p></td><td><p>PC</p></td><td><p>cluster</p></td><td><p>Applies settings for a Provider.</p></td><td><p></p></td></tr><tr><td><p>Managed Resource</p></td><td><p>MR</p></td><td><p>cluster</p></td><td><p>A Provider resource created and managed by Crossplane inside the Kubernetes cluster.</p></td><td><p>Bucket, dynamoDB, AMI, VPC </p></td></tr><tr><td><p>Composition</p></td><td><p></p></td><td><p>cluster</p></td><td><p>A template for creating multiple managed resources at once.</p></td><td><p><a target="_self" href="http://xsqlinstances.aws.platform.upbound.io">xsqlinstances.aws.platform.upbound.io</a></p></td></tr><tr><td><p>Composite Resources</p></td><td><p>XR</p></td><td><p>cluster</p></td><td><p>Uses a Composition template to create multiple managed resources as a single Kubernetes object.</p></td><td><p></p></td></tr><tr><td><p>CompositeResourceDefinitions</p></td><td><p>XRD</p></td><td><p>cluster</p></td><td><p>Defines the API schema for Composite Resources and Claims</p></td><td><p>XIRSA, xSQLInstances</p></td></tr><tr><td><p>claims</p></td><td><p>XC</p></td><td><p>namespace</p></td><td><p>Like a Composite Resource, but namespace scoped.</p></td><td><p></p></td></tr></tbody></table>

<h2 id="heading-crossplane-functions">Crossplane Functions</h2>
<p>Crossplane supports multiple <a target="_blank" href="https://marketplace.upbound.io/functions">functions</a>. Functions are essential for writing concise steps and removing redundancy. They also let us use conditionals and loops when creating resources (imagine creating multiple subnets for a VPC). A Composition can be written as a pipeline of function steps, where each step creates different Managed Resources. You can write functions in many programming languages, such as Python and Golang. Alongside those, there are configuration languages such as <a target="_blank" href="https://www.kcl-lang.io/">KCL</a> and <a target="_blank" href="https://cuelang.org/">CUE</a> which can be a great help in writing compositions.</p>
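<p>A pipeline-style Composition roughly takes this shape. This is a hedged sketch, not the composition used in this post: the metadata name, step name, and resource details are illustrative, and it assumes the <code>function-patch-and-transform</code> Function package has been installed in the cluster.</p>

```yaml
# Illustrative pipeline-mode Composition skeleton.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: dbnetwork-aws          # hypothetical name
spec:
  compositeTypeRef:
    apiVersion: db.srujanpakanati.com/v1alpha1
    kind: XDBNetwork
  mode: Pipeline
  pipeline:
    - step: create-vpc         # each step invokes one function
      functionRef:
        name: function-patch-and-transform
      input:
        apiVersion: pt.fn.crossplane.io/v1beta1
        kind: Resources
        resources:
          - name: vpc
            base:
              apiVersion: ec2.aws.upbound.io/v1beta1
              kind: VPC
              spec:
                forProvider:
                  region: us-east-2
                  cidrBlock: 192.168.0.0/16
```

<p>Additional steps can then render the subnets, or an entire step can be swapped for a KCL or Go function when loops and conditionals are needed.</p>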
<h2 id="heading-an-example">An Example</h2>
<p>Enough with the theory; let’s see how we can leverage these to create infrastructure. Take a scenario where I want to let users create a VPC with subnets. Here is how my Composition looks:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="93a977a12aa7460c939576a02b2357c4"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/HighonAces/93a977a12aa7460c939576a02b2357c4" class="embed-card">https://gist.github.com/HighonAces/93a977a12aa7460c939576a02b2357c4</a></div><p> </p>
<p>And here is my CompositeResourceDefinition (XRD) to serve this Composition. It lays out an OpenAPI schema for the XRD: the required parameters and their format.</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="bdfcc6c2ad46be4884627515caf51abf"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/HighonAces/bdfcc6c2ad46be4884627515caf51abf" class="embed-card">https://gist.github.com/HighonAces/bdfcc6c2ad46be4884627515caf51abf</a></div><p> </p>
<p>These two files lay out the template for resource creation. Now I can create multiple Composite Resources or Claims (namespace scoped) using the above two components. If I wanted to create a claim in the default namespace, here is how it would look:</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="42501bb4615e52588e5b419aff333f64"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/HighonAces/42501bb4615e52588e5b419aff333f64" class="embed-card">https://gist.github.com/HighonAces/42501bb4615e52588e5b419aff333f64</a></div><p> </p>
<p>A few things to note in my claim file: <code>compositionRef</code> is not necessary. I could use something like <code>compositionSelector</code> instead and pick the appropriate Composition by matching labels. As soon as I apply these definitions, I can see all the resources reflected in AWS. I can also track the resource creation from the CLI using commands like <code>crossplane beta trace</code> (the Crossplane CLI must be installed).</p>
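<p>For illustration, a claim using <code>compositionSelector</code> instead of <code>compositionRef</code> might look like this. The group, kind, and names mirror the <code>DBNetwork/storage-network</code> objects in this post; the <code>provider: aws</code> label is a hypothetical example, and would have to match a label actually set on the Composition.</p>

```yaml
# Claim that selects a Composition by label instead of by name.
apiVersion: db.srujanpakanati.com/v1alpha1
kind: DBNetwork
metadata:
  name: storage-network
  namespace: default            # claims are namespace scoped
spec:
  compositionSelector:
    matchLabels:
      provider: aws             # hypothetical label on the Composition
```

<p>This indirection lets the platform team swap or version Compositions behind the label without users changing their claims.</p>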
<pre><code class="lang-yaml"><span class="hljs-string">➜</span>  <span class="hljs-string">network</span> <span class="hljs-string">crossplane</span> <span class="hljs-string">beta</span> <span class="hljs-string">trace</span> <span class="hljs-string">dbnetworks.db.srujanpakanati.com</span> <span class="hljs-string">storage-network</span> 
<span class="hljs-string">NAME</span>                                       <span class="hljs-string">SYNCED</span>   <span class="hljs-string">READY</span>   <span class="hljs-string">STATUS</span>
<span class="hljs-string">DBNetwork/storage-network</span> <span class="hljs-string">(default)</span>        <span class="hljs-attr">True     False   Waiting:</span> <span class="hljs-string">Claim</span> <span class="hljs-string">is</span> <span class="hljs-string">waiting</span> <span class="hljs-string">for</span> <span class="hljs-string">composite</span> <span class="hljs-string">resource</span> <span class="hljs-string">to</span> <span class="hljs-string">become</span> <span class="hljs-string">Ready</span>
<span class="hljs-string">└─</span> <span class="hljs-attr">XDBNetwork/storage-network-c5sqw        True     False   Creating:</span> <span class="hljs-string">...s-east-2b-192-168-64-0-18-private,</span> <span class="hljs-string">and</span> <span class="hljs-string">storage-network-c5sqw-vpc</span>
   <span class="hljs-string">├─</span> <span class="hljs-string">Subnet/storage-network-c5sqw-2zscc</span>   <span class="hljs-literal">True</span>     <span class="hljs-literal">False</span>   <span class="hljs-string">Creating</span>
   <span class="hljs-string">├─</span> <span class="hljs-string">Subnet/storage-network-c5sqw-lcf4k</span>   <span class="hljs-literal">True</span>     <span class="hljs-literal">True</span>    <span class="hljs-string">Available</span>
   <span class="hljs-string">└─</span> <span class="hljs-string">VPC/storage-network-c5sqw-qk9wj</span>      <span class="hljs-literal">True</span>     <span class="hljs-literal">True</span>    <span class="hljs-string">Available</span>

<span class="hljs-string">➜</span>  <span class="hljs-string">network</span> <span class="hljs-string">crossplane</span> <span class="hljs-string">beta</span> <span class="hljs-string">trace</span> <span class="hljs-string">dbnetworks.db.srujanpakanati.com</span> <span class="hljs-string">storage-network</span> 
<span class="hljs-string">NAME</span>                                       <span class="hljs-string">SYNCED</span>   <span class="hljs-string">READY</span>   <span class="hljs-string">STATUS</span>
<span class="hljs-string">DBNetwork/storage-network</span> <span class="hljs-string">(default)</span>        <span class="hljs-attr">True     False   Waiting:</span> <span class="hljs-string">Claim</span> <span class="hljs-string">is</span> <span class="hljs-string">waiting</span> <span class="hljs-string">for</span> <span class="hljs-string">composite</span> <span class="hljs-string">resource</span> <span class="hljs-string">to</span> <span class="hljs-string">become</span> <span class="hljs-string">Ready</span>
<span class="hljs-string">└─</span> <span class="hljs-attr">XDBNetwork/storage-network-c5sqw        True     False   Creating:</span> <span class="hljs-string">...s-east-2b-192-168-64-0-18-private,</span> <span class="hljs-string">and</span> <span class="hljs-string">storage-network-c5sqw-vpc</span>
   <span class="hljs-string">├─</span> <span class="hljs-string">Subnet/storage-network-c5sqw-2zscc</span>   <span class="hljs-literal">True</span>     <span class="hljs-literal">True</span>    <span class="hljs-string">Available</span>
   <span class="hljs-string">├─</span> <span class="hljs-string">Subnet/storage-network-c5sqw-lcf4k</span>   <span class="hljs-literal">True</span>     <span class="hljs-literal">True</span>    <span class="hljs-string">Available</span>
   <span class="hljs-string">└─</span> <span class="hljs-string">VPC/storage-network-c5sqw-qk9wj</span>      <span class="hljs-literal">True</span>     <span class="hljs-literal">True</span>    <span class="hljs-string">Available</span>



<span class="hljs-string">➜</span>  <span class="hljs-string">network</span> <span class="hljs-string">k</span> <span class="hljs-string">describe</span> <span class="hljs-string">vpc</span> <span class="hljs-string">storage-network-c5sqw-qk9wj</span> 
<span class="hljs-attr">Name:</span>         <span class="hljs-string">storage-network-c5sqw-qk9wj</span>
<span class="hljs-attr">Namespace:</span>    
<span class="hljs-attr">Labels:</span>       <span class="hljs-string">crossplane.io/claim-name=storage-network</span>
              <span class="hljs-string">crossplane.io/claim-namespace=default</span>
              <span class="hljs-string">crossplane.io/composite=storage-network-c5sqw</span>
<span class="hljs-attr">Annotations:  crossplane.io/composition-resource-name:</span> <span class="hljs-string">storage-network-c5sqw-vpc</span>
              <span class="hljs-attr">crossplane.io/external-create-pending:</span> <span class="hljs-number">2024-12-07T21:01:18Z</span>
              <span class="hljs-attr">crossplane.io/external-create-succeeded:</span> <span class="hljs-number">2024-12-07T21:01:18Z</span>
              <span class="hljs-attr">crossplane.io/external-name:</span> <span class="hljs-string">vpc-094642de4b2a114f6</span>
<span class="hljs-attr">API Version:</span>  <span class="hljs-string">ec2.aws.upbound.io/v1beta1</span>
<span class="hljs-attr">Kind:</span>         <span class="hljs-string">VPC</span>
<span class="hljs-attr">Metadata:</span>
  <span class="hljs-attr">Creation Timestamp:</span>  <span class="hljs-number">2024-12-07T21:01:16Z</span>
  <span class="hljs-attr">Finalizers:</span>
    <span class="hljs-string">finalizer.managedresource.crossplane.io</span>
  <span class="hljs-attr">Generate Name:</span>  <span class="hljs-string">storage-network-c5sqw-</span>
  <span class="hljs-attr">Generation:</span>     <span class="hljs-number">3</span>
  <span class="hljs-attr">Owner References:</span>
    <span class="hljs-attr">API Version:</span>           <span class="hljs-string">db.srujanpakanati.com/v1alpha1</span>
    <span class="hljs-attr">Block Owner Deletion:</span>  <span class="hljs-literal">true</span>
    <span class="hljs-attr">Controller:</span>            <span class="hljs-literal">true</span>
    <span class="hljs-attr">Kind:</span>                  <span class="hljs-string">XDBNetwork</span>
    <span class="hljs-attr">Name:</span>                  <span class="hljs-string">storage-network-c5sqw</span>
    <span class="hljs-attr">UID:</span>                   <span class="hljs-string">753881fe-b9fe-4953-8007-1a14114b520f</span>
  <span class="hljs-attr">Resource Version:</span>        <span class="hljs-number">42765</span>
  <span class="hljs-attr">UID:</span>                     <span class="hljs-string">e8fee79f-4d34-4078-af3f-f14cff6f1518</span>
<span class="hljs-attr">Spec:</span>
  <span class="hljs-attr">Deletion Policy:</span>  <span class="hljs-string">Delete</span>
  <span class="hljs-attr">For Provider:</span>
    <span class="hljs-attr">Cidr Block:</span>            <span class="hljs-number">192.168</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
    <span class="hljs-attr">Enable Dns Hostnames:</span>  <span class="hljs-literal">true</span>
    <span class="hljs-attr">Enable Dns Support:</span>    <span class="hljs-literal">true</span>
    <span class="hljs-attr">Instance Tenancy:</span>      <span class="hljs-string">default</span>
    <span class="hljs-attr">Region:</span>                <span class="hljs-string">us-east-2</span>
    <span class="hljs-attr">Tags:</span>
      <span class="hljs-attr">Name:</span>                         <span class="hljs-string">storage-network-c5sqw</span>
      <span class="hljs-attr">Crossplane - Kind:</span>            <span class="hljs-string">vpc.ec2.aws.upbound.io</span>
      <span class="hljs-attr">Crossplane - Name:</span>            <span class="hljs-string">storage-network-c5sqw-qk9wj</span>
      <span class="hljs-attr">Crossplane - Providerconfig:</span>  <span class="hljs-string">default</span>
  <span class="hljs-attr">Init Provider:</span>
  <span class="hljs-attr">Management Policies:</span>
    <span class="hljs-string">*</span>
  <span class="hljs-attr">Provider Config Ref:</span>
    <span class="hljs-attr">Name:</span>  <span class="hljs-string">default</span>
<span class="hljs-attr">Status:</span>
  <span class="hljs-attr">At Provider:</span>
    <span class="hljs-attr">Arn:</span>                                   <span class="hljs-string">arn:aws:ec2:us-east-2:135227014767:vpc/vpc-094642de4b2a114f6</span>
    <span class="hljs-attr">assignGeneratedIpv6CidrBlock:</span>          <span class="hljs-literal">false</span>
    <span class="hljs-attr">Cidr Block:</span>                            <span class="hljs-number">192.168</span><span class="hljs-number">.0</span><span class="hljs-number">.0</span><span class="hljs-string">/16</span>
    <span class="hljs-attr">Default Network Acl Id:</span>                <span class="hljs-string">acl-051e434241f655bd1</span>
    <span class="hljs-attr">Default Route Table Id:</span>                <span class="hljs-string">rtb-0ea107329a3ddc22b</span>
    <span class="hljs-attr">Default Security Group Id:</span>             <span class="hljs-string">sg-0c3510da7998ad5ae</span>
    <span class="hljs-attr">Dhcp Options Id:</span>                       <span class="hljs-string">dopt-016e84c858d19736f</span>
    <span class="hljs-attr">Enable Dns Hostnames:</span>                  <span class="hljs-literal">true</span>
    <span class="hljs-attr">Enable Dns Support:</span>                    <span class="hljs-literal">true</span>
    <span class="hljs-attr">Enable Network Address Usage Metrics:</span>  <span class="hljs-literal">false</span>
    <span class="hljs-attr">Id:</span>                                    <span class="hljs-string">vpc-094642de4b2a114f6</span>
    <span class="hljs-attr">Instance Tenancy:</span>                      <span class="hljs-string">default</span>
    <span class="hljs-attr">ipv6AssociationId:</span>                     
    <span class="hljs-attr">ipv6CidrBlock:</span>                         
    <span class="hljs-attr">ipv6CidrBlockNetworkBorderGroup:</span>       
    <span class="hljs-attr">ipv6IpamPoolId:</span>                        
    <span class="hljs-attr">ipv6NetmaskLength:</span>                     <span class="hljs-number">0</span>
    <span class="hljs-attr">Main Route Table Id:</span>                   <span class="hljs-string">rtb-0ea107329a3ddc22b</span>
    <span class="hljs-attr">Owner Id:</span>                              <span class="hljs-number">135227014767</span>
    <span class="hljs-attr">Tags:</span>
      <span class="hljs-attr">Name:</span>                         <span class="hljs-string">storage-network-c5sqw</span>
      <span class="hljs-attr">Crossplane - Kind:</span>            <span class="hljs-string">vpc.ec2.aws.upbound.io</span>
      <span class="hljs-attr">Crossplane - Name:</span>            <span class="hljs-string">storage-network-c5sqw-qk9wj</span>
      <span class="hljs-attr">Crossplane - Providerconfig:</span>  <span class="hljs-string">default</span>
    <span class="hljs-attr">Tags All:</span>
      <span class="hljs-attr">Name:</span>                         <span class="hljs-string">storage-network-c5sqw</span>
      <span class="hljs-attr">Crossplane - Kind:</span>            <span class="hljs-string">vpc.ec2.aws.upbound.io</span>
      <span class="hljs-attr">Crossplane - Name:</span>            <span class="hljs-string">storage-network-c5sqw-qk9wj</span>
      <span class="hljs-attr">Crossplane - Providerconfig:</span>  <span class="hljs-string">default</span>
  <span class="hljs-attr">Conditions:</span>
    <span class="hljs-attr">Last Transition Time:</span>  <span class="hljs-number">2024-12-07T21:01:33Z</span>
    <span class="hljs-attr">Reason:</span>                <span class="hljs-string">Available</span>
    <span class="hljs-attr">Status:</span>                <span class="hljs-literal">True</span>
    <span class="hljs-attr">Type:</span>                  <span class="hljs-string">Ready</span>
    <span class="hljs-attr">Last Transition Time:</span>  <span class="hljs-number">2024-12-07T21:01:18Z</span>
    <span class="hljs-attr">Reason:</span>                <span class="hljs-string">ReconcileSuccess</span>
    <span class="hljs-attr">Status:</span>                <span class="hljs-literal">True</span>
    <span class="hljs-attr">Type:</span>                  <span class="hljs-string">Synced</span>
    <span class="hljs-attr">Last Transition Time:</span>  <span class="hljs-number">2024-12-07T21:01:30Z</span>
    <span class="hljs-attr">Reason:</span>                <span class="hljs-string">Success</span>
    <span class="hljs-attr">Status:</span>                <span class="hljs-literal">True</span>
    <span class="hljs-attr">Type:</span>                  <span class="hljs-string">LastAsyncOperation</span>
<span class="hljs-attr">Events:</span>
  <span class="hljs-string">Type</span>    <span class="hljs-string">Reason</span>                   <span class="hljs-string">Age</span>    <span class="hljs-string">From</span>                                          <span class="hljs-string">Message</span>
  <span class="hljs-string">----</span>    <span class="hljs-string">------</span>                   <span class="hljs-string">----</span>   <span class="hljs-string">----</span>                                          <span class="hljs-string">-------</span>
  <span class="hljs-string">Normal</span>  <span class="hljs-string">CreatedExternalResource</span>  <span class="hljs-string">5m50s</span>  <span class="hljs-string">managed/ec2.aws.upbound.io/v1beta1,</span> <span class="hljs-string">kind=vpc</span>  <span class="hljs-string">Successfully</span> <span class="hljs-string">requested</span> <span class="hljs-string">creation</span> <span class="hljs-string">of</span> <span class="hljs-string">external</span> <span class="hljs-string">resource</span>


<span class="hljs-string">➜</span>  <span class="hljs-string">network</span> <span class="hljs-string">k</span> <span class="hljs-string">get</span> <span class="hljs-string">dbnetworks.db.srujanpakanati.com</span>                                 
<span class="hljs-string">NAME</span>              <span class="hljs-string">SYNCED</span>   <span class="hljs-string">READY</span>   <span class="hljs-string">CONNECTION-SECRET</span>   <span class="hljs-string">AGE</span>
<span class="hljs-string">storage-network</span>   <span class="hljs-literal">True</span>     <span class="hljs-literal">True</span>                       <span class="hljs-string">14m</span>
</code></pre>
<p>As you can see, my network components have finally come up. To put everything in a nutshell:<br />XRD → lays down the schema of the API</p>
<p>Composition → a blueprint that tells Crossplane how to create composite resources for that API (like a class)</p>
<p>Claim → like an object: a request that gets a concrete copy of that Composition</p>
<blockquote>
<p>Please note that I originally wrote the composition to create SQLInstances, hence the API extension dbnetworks.db.srujanpakanati.com. It is just a naming convention, nothing more.</p>
</blockquote>
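<p>To make the XRD → Composition → Claim relationship concrete, a claim against the API above could look something like the following. This is a minimal sketch: the exact spec fields and the composition-selector label depend on how the XRD and Composition were written, so treat the field names here as illustrative:</p>
<pre><code class="lang-yaml">apiVersion: db.srujanpakanati.com/v1alpha1
kind: DBNetwork
metadata:
  name: storage-network
  namespace: default
spec:
  compositionSelector:
    matchLabels:
      provider: aws   # assumed label; must match a label on your Composition
</code></pre>
<p>Applying a claim like this is what kicks off the reconciliation shown above: Crossplane picks the matching Composition and starts creating the managed resources (VPC and the rest of the network stack).</p>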
<h2 id="heading-conclusion">Conclusion</h2>
<p>This is a very basic use case of Crossplane. There are many more features, such as importing existing resources, managementPolicies, writeConnectionSecretToRef, and using an ExternalSecretStore. Crossplane can be a good transition from Terraform if you are already on Kubernetes.</p>
]]></content:encoded></item><item><title><![CDATA[The "Kalki" Phenomenon]]></title><description><![CDATA[Disclaimer: I penned this blog post long ago but releasing it on the occasion of the Kalki 2898A.D OTT release

It's been almost 5 days since I watched the movie but some elements remain in my mind reverberating at a constant hum. Hence I am putting ...]]></description><link>https://srujanpakanati.com/the-kalki-phenomenon</link><guid isPermaLink="true">https://srujanpakanati.com/the-kalki-phenomenon</guid><category><![CDATA[karna]]></category><category><![CDATA[Kalki 2898 AD]]></category><category><![CDATA[Mahabharata]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Thu, 22 Aug 2024 23:20:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719886388785/c42eef9c-2c54-46d1-9480-eb592873ba41.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>Disclaimer: I penned this blog post long ago but releasing it on the occasion of the Kalki 2898A.D OTT release</p>
</blockquote>
<p>It's been almost 5 days since I watched the movie, but some elements remain in my mind, reverberating at a constant hum. Hence I am putting my review into words so that I can get some closure (in a positive way 😜) with the movie.</p>
<h2 id="heading-the-review">The Review</h2>
<p>The movie starts at the end of the Mahabharata, where Lord Krishna confronts Aswathama on the battlefield of Kurukshetra. The visuals in these sequences are exceptionally stunning. The stories I have heard and read my entire life take a two-dimensional form in front of my eyes, more flamboyant than I expected. The shot where the white horse runs behind Lord Krishna horripilated me. And as all good things come to an end, these scenes finish and the movie takes us 6,000 years ahead in a time machine.</p>
<p>The dystopian world of Kashi city is well-built. The production design is thorough, but the first half falls flat a few minutes after Prabhas' introduction. The unending fight sequences, the cameos that make you say "But why??🫤", and the song with Disha Patani in The Complex test your patience and make you wonder when the main story starts, or whether we got scammed again (remember Adipurush?). I feel the reason for this slow pace is the way the story has to split between the first and second parts. Some more informative scenes about the world of Kashi (its weapons, economy, food, etc.) could have been added instead of pointless cameos. But once Mr. Amitabh enters, the story shifts gears.</p>
<p>Before Mr. Amitabh, we must talk about Kamal Haasan. He looked like a Buddhist monk in the process of becoming <a target="_blank" href="https://en.wikipedia.org/wiki/Sokushinbutsu">Sokushinbutsu</a>. His appearance and performance are a sight for sore eyes in the first half. Especially when he says "భయపడకు మరొప్రపంచం వస్తుంది" ("Don't be afraid, another world is coming"), it is an eargasm for Sri Sri fanatics. The fight sequence between Amitabh and Prabhas is very well choreographed, ending the first half on a high note.</p>
<p>The city of Shambala is a stark contrast to Kashi. Director Nag Ashwin took good care to build something different from Kashi. In the second half, a song slows the screenplay again, but the climax war sequences are a visual treat, looking like something out of Hollywood. Prabhas being revealed as Karna is unexpected.</p>
<p>Overall, the movie definitely has some flaws, but it is a step in the right direction. For international audiences who do not know anything about the Mahabharata, this movie would not click. But it opens pathways for bigger movies to come. All the actors are aptly cast except Vijay Devarakonda; Arjuna being an important character in the epic, the audience was expecting someone more nuanced. Music by Santosh Narayanan is lit!! but I felt he left a lot of opportunities on the table.</p>
<hr />
<h2 id="heading-glorifying-karna">Glorifying Karna</h2>
<p>Prabhas being cast as Karna has reinvigorated the discussion about whether Karna is good or bad. The Mahabharata, epic that it is, has all shades of grey characters in it. Sometimes we tend to judge a person by only some aspects rather than looking at everything as a whole. Karna is one such character. All his life was driven by hatred and envy towards the Pandavas because he believed he belonged somewhere he could never be. I agree that Kunti shouldn't have abandoned him, but she wasn't planning to give birth that morning either. As fate would have it, she read the mantra to test its effectiveness in invoking Lord Surya, without thinking of the unintended consequences.</p>
<p>Coming back to our discussion: Karna was raised by a stable family and had everything he needed, but he grew up to be an angry, envious man. Everything he learnt or achieved, he did out of spite towards the Pandavas. One cannot simply be counted among the "Dushta Chatushtaya" without reason; the Vyasa Mahabharata is very clear about it. There were lots of other versions of the Mahabharata, like the one by <a target="_blank" href="https://www.exoticindiaart.com/book/details/mahabharata-composed-by-villiputhurar-in-tamil-set-of-10-volumes-in-7-books-uaa405/">Villiputhurar</a>, which glorify Karna. Members of Tamil intellectual society took the story of Karna and tried to portray it as higher-caste atrocities on lower-caste individuals like Karna. The Tamil industry made the movie <a target="_blank" href="https://g.co/kgs/J8s5eio">Karnan</a> taking Villiputhurar's Mahabharata as a reference, and NTR made <a target="_blank" href="https://en.wikipedia.org/wiki/Daana_Veera_Soora_Karna">Daana Veera Shoora Karna</a>, which also carries these misrepresentations because of Kondaveeti Venkatakavi.</p>
<p>My problem is that Kalki continues the trend of whitewashing Karna. One generation is already disillusioned by watching movies like Karnan and DVSK; there is no need for the next generation to go down the same path. What if people in 2898 AD believe that Hitler was actually a good leader who brought Germany back from WW1 with electrifying growth and prosperity and gave quotes like "<a target="_blank" href="https://en.wikipedia.org/wiki/Arbeit_macht_frei#:~:text=Use%20by%20the%20Nazis,-Gross%2DRosen%20KZ&amp;text=The%20slogan%20Arbeit%20macht%20frei,number%20of%20Nazi%20concentration%20camps."><strong><em>Arbeit macht frei</em></strong></a><strong><em>" (Work makes one free)</em></strong>, and that the genocide never happened??</p>
<p>One who fails to understand history is doomed to repeat it. I am all for the creative liberties being taken, and they have every right to take them. But our society should be able to tell truth from lies.</p>
<p>Ref:</p>
<p><a target="_blank" href="https://swarajyamag.com/culture/why-karna-was-not-the-hero-modern-retellings-of-mahabharata-make-him-out-to-be">https://swarajyamag.com/culture/why-karna-was-not-the-hero-modern-retellings-of-mahabharata-make-him-out-to-be</a></p>
<p><a target="_blank" href="https://talageri.blogspot.com/2021/10/karna-and-yudhisthira-in-mahabharata.html">https://talageri.blogspot.com/2021/10/karna-and-yudhisthira-in-mahabharata.html</a></p>
<p><a target="_blank" href="https://www.tamilbrahmins.com/threads/was-karna-good-or-bad.19842/">https://www.tamilbrahmins.com/threads/was-karna-good-or-bad.19842/</a></p>
]]></content:encoded></item><item><title><![CDATA[Using K8s Self Hosted Runners for GitHub Actions]]></title><description><![CDATA[As the popularity for GitHub Actions for CI increasing these days, so is the the bill for GitHub hosted Runners. As this article explains, the typical costs for GHR(GitHub hosted Runners) is 23x of AWS spot instances for the same compute. As it is co...]]></description><link>https://srujanpakanati.com/using-k8s-self-hosted-runners-for-github-actions</link><guid isPermaLink="true">https://srujanpakanati.com/using-k8s-self-hosted-runners-for-github-actions</guid><category><![CDATA[GitHub]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Tue, 16 Jul 2024 20:52:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/MAYEkmn7G6E/upload/063ca412b3eefa19023322affe57857c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As the popularity for GitHub Actions for CI increasing these days, so is the the bill for GitHub hosted Runners. As this <a target="_blank" href="https://vishnudeva.medium.com/cost-effective-github-actions-9409fa7b2147">article</a> explains, the typical costs for GHR(GitHub hosted Runners) is 23x of AWS spot instances for the same compute. As it is comparatively easy😜 to setup a managed (or self-managed) K8s cluster with Karpenter and spot-instances nodepool setup for GHA than to use GHR. Moreover if you have to have complete control over the CI process for compliance purposes. It is a no brainer to use SHR.</p>
<p>In order to set up a K8s cluster as a self-hosted environment, you need the Actions Runner Controller (ARC) installed. It is a K8s operator that manages the SHR lifecycle, and it comes with its own CRDs.</p>
<h2 id="heading-arc-installation">ARC Installation</h2>
<p>In order to install ARC, you first need to choose an authentication method. I used the GitHub App approach: create an app, grant it the required permissions, and install the App on the required repositories.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721001467774/a0ec76ca-0ee3-4669-b1c1-76410c91d9f9.png" alt class="image--center mx-auto" /></p>
<p>ARC needs to authenticate with GitHub. To do that, you create a secret with the App details so ARC can pick them up during installation.</p>
<pre><code class="lang-json">➜  k8s-gha k get secrets -n actions controller-manager -o json |jq '.data | keys'
[
  <span class="hljs-string">"github_app_id"</span>,
  <span class="hljs-string">"github_app_installation_id"</span>,
  <span class="hljs-string">"github_app_private_key"</span>
]
</code></pre>
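<p>If you have not created that secret yet, a manifest with those three keys is enough. This is a sketch with placeholder values; the App ID, installation ID, and private key come from your GitHub App settings page:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Secret
metadata:
  name: controller-manager   # name ARC's authSecret config points at
  namespace: actions
stringData:
  github_app_id: "123456"              # placeholder: your App ID
  github_app_installation_id: "7890"   # placeholder: your installation ID
  github_app_private_key: |            # placeholder: the App's PEM private key
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
</code></pre>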
<p>Now you can install ARC in the cluster using Helm</p>
<pre><code class="lang-plaintext">➜  ~ helm install actions-runner-controller actions-runner-controller/actions-runner-controller --namespace=actions --version=0.22.0 -f runner-controller.yaml
NAME: actions-runner-controller
LAST DEPLOYED: Sun Jul 14 16:36:39 2024
NAMESPACE: actions
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace actions -l "app.kubernetes.io/name=actions-runner-controller,app.kubernetes.io/instance=actions-runner-controller" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace actions $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace actions port-forward $POD_NAME 8080:$CONTAINER_PORT
</code></pre>
<p>Here is what the runner-controller.yaml values file looks like:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">replicaCount:</span> <span class="hljs-number">1</span>
<span class="hljs-attr">webhookPort:</span> <span class="hljs-number">9443</span>
<span class="hljs-attr">syncPeriod:</span> <span class="hljs-string">1m</span>
<span class="hljs-attr">defaultScaleDownDelay:</span> <span class="hljs-string">5m</span>
<span class="hljs-attr">enableLeaderElection:</span> <span class="hljs-literal">true</span>

<span class="hljs-attr">authSecret:</span>
  <span class="hljs-attr">enabled:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">create:</span> <span class="hljs-literal">false</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">"controller-manager"</span>

<span class="hljs-attr">image:</span>
  <span class="hljs-attr">repository:</span> <span class="hljs-string">"summerwind/actions-runner-controller"</span>
  <span class="hljs-attr">actionsRunnerRepositoryAndTag:</span> <span class="hljs-string">"summerwind/actions-runner:ubuntu-20.04"</span>
  <span class="hljs-attr">dindSidecarRepositoryAndTag:</span> <span class="hljs-string">"docker:dind"</span>
  <span class="hljs-attr">pullPolicy:</span> <span class="hljs-string">IfNotPresent</span>

<span class="hljs-attr">serviceAccount:</span>
  <span class="hljs-attr">create:</span> <span class="hljs-literal">true</span>

<span class="hljs-attr">service:</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">ClusterIP</span>
  <span class="hljs-attr">port:</span> <span class="hljs-number">443</span>

<span class="hljs-attr">certManagerEnabled:</span> <span class="hljs-literal">true</span>

<span class="hljs-attr">logFormat:</span> <span class="hljs-string">text</span>

<span class="hljs-attr">githubWebhookServer:</span>
  <span class="hljs-attr">enabled:</span> <span class="hljs-literal">false</span>
</code></pre>
<p>Now you are all set-up 🎉.</p>
<p>Below are the CRDs that come with ARC:</p>
<pre><code class="lang-plaintext">➜  k8s-gha k get crds |grep summer
horizontalrunnerautoscalers.actions.summerwind.dev    2024-07-14T21:36:26Z
runnerdeployments.actions.summerwind.dev              2024-07-14T21:36:27Z
runnerreplicasets.actions.summerwind.dev              2024-07-14T21:36:28Z
runners.actions.summerwind.dev                        2024-07-14T21:36:29Z
runnersets.actions.summerwind.dev                     2024-07-14T21:36:32Z
</code></pre>
<p>This follows the same pattern as K8s workloads, where Runners are equivalent to Pods and RunnerDeployments to Deployments. You need to create a RunnerDeployment with the image name and labels so GHA can pick up runners from this deployment based on those labels.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">actions.summerwind.dev/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">RunnerDeployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">karpenter.sh/do-not-evict:</span> <span class="hljs-string">"true"</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">self-hosted-runner-deployment</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">actions</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">repository:</span> <span class="hljs-string">HighonAces/actions-1</span>
      <span class="hljs-attr">image:</span> <span class="hljs-string">summerwind/actions-runner:ubuntu-20.04</span>
      <span class="hljs-attr">resources:</span>
        <span class="hljs-attr">requests:</span>
          <span class="hljs-attr">cpu:</span> <span class="hljs-string">1500m</span>
          <span class="hljs-attr">memory:</span> <span class="hljs-string">2000Mi</span>
</code></pre>
<p>You also need to create HorizontalRunnerAutoscaler to scale this deployment.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">items:</span>
<span class="hljs-bullet">-</span> <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">actions.summerwind.dev/v1alpha1</span>
  <span class="hljs-attr">kind:</span> <span class="hljs-string">HorizontalRunnerAutoscaler</span>
  <span class="hljs-attr">metadata:</span>
    <span class="hljs-attr">annotations:</span>
      <span class="hljs-attr">kubectl.kubernetes.io/last-applied-configuration:</span> <span class="hljs-string">|
        {"apiVersion":"actions.summerwind.dev/v1alpha1","kind":"HorizontalRunnerAutoscaler","metadata":{"annotations":{},"name":"self-hosted-runner-deployment-autoscaler","namespace":"actions"},"spec":{"maxReplicas":30,"minReplicas":0,"scaleTargetRef":{"kind":"RunnerDeployment","name":"self-hosted-runner-deployment"},"scaleUpTriggers":[{"duration":"30m","githubEvent":{"workflowJob":{}}}]}}
</span>    <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-string">"2024-07-14T21:52:23Z"</span>
    <span class="hljs-attr">generation:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">self-hosted-runner-deployment-autoscaler</span>
    <span class="hljs-attr">namespace:</span> <span class="hljs-string">actions</span>
    <span class="hljs-attr">resourceVersion:</span> <span class="hljs-string">"9689"</span>
    <span class="hljs-attr">uid:</span> <span class="hljs-string">41291b4a-cbd8-4af2-bfe3-02c2b29d8262</span>
  <span class="hljs-attr">spec:</span>
    <span class="hljs-attr">maxReplicas:</span> <span class="hljs-number">30</span>
    <span class="hljs-attr">minReplicas:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">scaleTargetRef:</span>
      <span class="hljs-attr">kind:</span> <span class="hljs-string">RunnerDeployment</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">self-hosted-runner-deployment</span>
    <span class="hljs-attr">scaleUpTriggers:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">duration:</span> <span class="hljs-string">30m</span>
      <span class="hljs-attr">githubEvent:</span>
        <span class="hljs-attr">workflowJob:</span> {}
  <span class="hljs-attr">status:</span>
    <span class="hljs-attr">desiredReplicas:</span> <span class="hljs-number">2</span>
    <span class="hljs-attr">lastSuccessfulScaleOutTime:</span> <span class="hljs-string">"2024-07-14T22:28:46Z"</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">List</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">resourceVersion:</span> <span class="hljs-string">""</span>
</code></pre>
<p>This continuously keeps 2 replicas running, as I have not given any metrics definition. By making use of PercentageRunnersBusy, you can scale up or scale down based on how many runners are busy. Here is the metrics definition from the ARC documentation:</p>
<pre><code class="lang-yaml">  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">PercentageRunnersBusy</span>
    <span class="hljs-attr">scaleUpThreshold:</span> <span class="hljs-string">'0.75'</span>
    <span class="hljs-attr">scaleDownThreshold:</span> <span class="hljs-string">'0.25'</span>
    <span class="hljs-attr">scaleUpFactor:</span> <span class="hljs-string">'2'</span>
    <span class="hljs-attr">scaleDownFactor:</span> <span class="hljs-string">'0.5'</span>
</code></pre>
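<p>That metrics block goes under a <code>metrics:</code> key in the HorizontalRunnerAutoscaler spec. A sketch of the autoscaler above with the busy-percentage metric added (thresholds as in the ARC docs) would look like:</p>
<pre><code class="lang-yaml">spec:
  minReplicas: 0
  maxReplicas: 30
  scaleTargetRef:
    kind: RunnerDeployment
    name: self-hosted-runner-deployment
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'   # scale up when more than 75% of runners are busy
    scaleDownThreshold: '0.25' # scale down when fewer than 25% are busy
    scaleUpFactor: '2'         # double the replica count on scale-up
    scaleDownFactor: '0.5'     # halve it on scale-down
</code></pre>
<p>With <code>minReplicas: 0</code> the pool can scale to zero when idle, which is where most of the cost savings come from.</p>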
<p>Once the minimum number of runners is up, you can see the corresponding pods in the cluster and the same runners on the GitHub runners page.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1721002803495/aaa5b594-42c9-4a85-b193-996aefa75653.png" alt class="image--center mx-auto" /></p>
<p>These runners will pick up jobs whenever you have workflows configured with the parameter <code>runs-on: self-hosted-linux</code>.</p>
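<p>On the workflow side, a minimal job targeting these runners could look like the following. The repository contents and steps are placeholders; the <code>runs-on</code> label must match the label your RunnerDeployment registers with GitHub:</p>
<pre><code class="lang-yaml">name: ci
on: [push]
jobs:
  build:
    runs-on: self-hosted-linux   # routes the job to the ARC-managed runners
    steps:
      - uses: actions/checkout@v4
      - run: echo "running on a self-hosted K8s runner"
</code></pre>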
<h2 id="heading-conclusion">Conclusion</h2>
<p>This setup is rather easy and customizable. It is a very good alternative to GHR, and I cannot wait to see what the future holds for K8s-based self-hosted runners.</p>
<p>Source: <a target="_blank" href="https://medium.com/simform-engineering/how-to-setup-self-hosted-github-action-runner-on-kubernetes-c8825ccbb63c">https://medium.com/simform-engineering/how-to-setup-self-hosted-github-action-runner-on-kubernetes-c8825ccbb63c</a></p>
<p><a target="_blank" href="https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller">https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/about-actions-runner-controller</a></p>
<p><a target="_blank" href="https://github.com/actions/actions-runner-controller">https://github.com/actions/actions-runner-controller</a></p>
]]></content:encoded></item><item><title><![CDATA[Understanding KEDA]]></title><description><![CDATA[What is KEDA?
KEDA is Kubernetes-based Event Driven Autoscaler. It can be used to scale any container based resource like Deployments, StatefulSets etc whenever an event occurs and then scale back to zero when it is not necessary or it can be used to...]]></description><link>https://srujanpakanati.com/understanding-keda</link><guid isPermaLink="true">https://srujanpakanati.com/understanding-keda</guid><category><![CDATA[keda]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 15 Jul 2024 14:03:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/eUMEWE-7Ewg/upload/65a943cd641361c85be9dc1e974e6e8f.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-keda">What is KEDA?</h2>
<p>KEDA is a Kubernetes-based Event-Driven Autoscaler. It can scale any container-based resource, such as Deployments and StatefulSets, whenever an event occurs, and scale back to zero when the resource is no longer needed; it can also create Kubernetes Jobs in response to events. It can be used to scale both horizontally and vertically.</p>
<h2 id="heading-need-for-keda">Need for KEDA</h2>
<p>KEDA is an important player in maximising your resource-usage efficiency. You simply cannot keep deployments running in the hope that traffic or an event occurs. You should be able to create resources at the moment of need and terminate them as soon as their purpose is served. This is especially useful for Jobs. KEDA can ingest metrics from many sources (currently 68 scalers are available). Here are some examples:</p>
<ul>
<li><p>Vertical scaling of pods based on average resource-usage metrics from Prometheus or Datadog.</p>
</li>
<li><p>Scaling based on ALB metrics.</p>
</li>
<li><p>Scaling jobs based on Kafka or ActiveMQ queues.</p>
</li>
<li><p>Scaling based on query results from databases like MongoDB, DynamoDB, and MySQL.</p>
</li>
<li><p>Scaling GitHub runners based on the number of jobs pending in GHA.</p>
</li>
</ul>
<p>These are just a few examples; you can use almost any event to scale your resources.</p>
<h2 id="heading-keda-architecture">KEDA Architecture</h2>
<p>Let's first see all the CRDs that KEDA offers:</p>
<pre><code class="lang-plaintext">➜  keda k get crds |grep keda
cloudeventsources.eventing.keda.sh                    2024-07-12T02:20:03Z
clustertriggerauthentications.keda.sh                 2024-07-12T02:20:03Z
scaledjobs.keda.sh                                    2024-07-12T02:20:03Z
scaledobjects.keda.sh                                 2024-07-12T02:20:03Z
triggerauthentications.keda.sh                        2024-07-12T02:20:03Z
</code></pre>
<h3 id="heading-scaledobjects">ScaledObjects</h3>
<p>A ScaledObject defines which resource you want to scale and based on which triggers. The resource is typically a Deployment or a StatefulSet.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">spec:</span>
  <span class="hljs-attr">scaleTargetRef:</span>
    <span class="hljs-attr">apiVersion:</span>    {<span class="hljs-string">api-version-of-target-resource</span>}         <span class="hljs-comment"># Optional. Default: apps/v1</span>
    <span class="hljs-attr">kind:</span>          {<span class="hljs-string">kind-of-target-resource</span>}                <span class="hljs-comment"># Optional. Default: Deployment</span>
    <span class="hljs-attr">name:</span>          {<span class="hljs-string">name-of-target-resource</span>}                <span class="hljs-comment"># Mandatory. Must be in the same namespace as the ScaledObject</span>
</code></pre>
<p>A trigger is the external source based on which you want to scale the object. ScaledJobs are similar to ScaledObjects; the only difference is that you create/scale Jobs based on the external source.</p>
<h3 id="heading-triggerauthentication">TriggerAuthentication</h3>
<p>As you define external triggers, you also need a way to authenticate to that source. For this purpose you need a TriggerAuthentication. ClusterTriggerAuthentication is the cluster-scoped equivalent.</p>
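<p>For example, a TriggerAuthentication that pulls a MongoDB connection string out of a Kubernetes Secret might look like this. The secret name and key are illustrative; the parameter name must be the one the scaler you use expects:</p>
<pre><code class="lang-yaml">apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: mongodb-trigger-auth
  namespace: default
spec:
  secretTargetRef:
  - parameter: connectionString   # parameter the MongoDB scaler expects
    name: mongodb-secret          # assumed name of the K8s Secret
    key: connStr                  # assumed key inside that Secret
</code></pre>
<p>A trigger in a ScaledObject then references it via an <code>authenticationRef</code> pointing at <code>mongodb-trigger-auth</code>.</p>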
<h3 id="heading-cloudeventsource">CloudEventSource</h3>
<p>A CloudEventSource resource can be used in KEDA to emit KEDA-related events to a user-defined CloudEvent sink.</p>
<p>Here's how it all works together: the ScaledObject first checks whether the TriggerAuthentication works and whether it can get the required metrics from the external source. It also checks that the object to be scaled exists. It then creates a Horizontal Pod Autoscaler (HPA) for the object and starts passing it metrics derived from the external source.</p>
<h2 id="heading-demo-using-mongodb-to-scale-deployment">Demo: Using MongoDB to scale Deployment</h2>
<p>Here I am trying to scale a deployment based on MongoDB query results. I have created a deployment for nginx with one replica, pretty standard:</p>
<p><img src="https://pbs.twimg.com/media/EbneVfrXgAAsYhb?format=png&amp;name=small" alt class="image--center mx-auto" /></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">nginx-dep</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-dep</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">nginx-dep</span>
  <span class="hljs-attr">strategy:</span> {}
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">creationTimestamp:</span> <span class="hljs-literal">null</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">nginx-dep</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">image:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">nginx</span>
        <span class="hljs-attr">resources:</span> {}
<span class="hljs-attr">status:</span> {}
</code></pre>
<p>My goal is to scale this Deployment down to zero when the "enabled" field in my MongoDB document is set to "no", and scale it back up whenever the value is "yes". So my ScaledObject definition looks something like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">keda.sh/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ScaledObject</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-so</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">scaleTargetRef:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">nginx-dep</span>
  <span class="hljs-attr">cooldownPeriod:</span>  <span class="hljs-number">30</span>
  <span class="hljs-attr">minReplicaCount:</span> <span class="hljs-number">0</span>
  <span class="hljs-attr">maxReplicaCount:</span> <span class="hljs-number">5</span>

  <span class="hljs-attr">triggers:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">mongodb</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">dbName:</span> <span class="hljs-string">"drinks"</span>
      <span class="hljs-attr">collection:</span> <span class="hljs-string">"softdrinks"</span>
      <span class="hljs-attr">query:</span> <span class="hljs-string">'{ "enabled": "yes" }'</span>
      <span class="hljs-attr">queryValue:</span> <span class="hljs-string">"1"</span>
    <span class="hljs-attr">authenticationRef:</span> 
      <span class="hljs-attr">name:</span> <span class="hljs-string">mongodb-trigger</span>
<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">keda.sh/v1alpha1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">TriggerAuthentication</span>
<span class="hljs-attr">metadata:</span> 
  <span class="hljs-attr">name:</span> <span class="hljs-string">mongodb-trigger</span>
<span class="hljs-attr">spec:</span> 
  <span class="hljs-attr">secretTargetRef:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">parameter:</span> <span class="hljs-string">connectionString</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">mongodb-secret</span>
      <span class="hljs-attr">key:</span> <span class="hljs-string">connect</span>
</code></pre>
<p>You need to create a Secret named <code>mongodb-secret</code> with the key <code>connect</code>, whose value is the base64-encoded MongoDB connection string <code>mongodb+srv://:@</code><a target="_blank" href="http://cluster0.p3d97pg.mongodb.net/"><code>******.*****.mongodb.net/</code></a><code>&lt;dbname&gt;</code></p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Secret</span>
<span class="hljs-attr">metadata:</span> 
  <span class="hljs-attr">name:</span> <span class="hljs-string">mongodb-secret</span>
<span class="hljs-attr">type:</span> <span class="hljs-string">Opaque</span>
<span class="hljs-attr">data:</span> 
  <span class="hljs-attr">connect:</span> <span class="hljs-string">bW9uZ29kYitzcnY6Ly86QCoqKioqKi4qKioqKi5tb25nb2RiLm5ldC8K==</span>
</code></pre>
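<p>If you prefer not to base64-encode the value by hand, Kubernetes also accepts the plain string under <code>stringData</code> and encodes it for you on write. A sketch with a placeholder connection string (the credentials and host below are hypothetical):</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
stringData:
  # plain-text value; the API server stores it base64-encoded under .data
  connect: "mongodb+srv://user:pass@cluster0.example.mongodb.net/drinks"
</code></pre>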
<p>Now we have everything in place. Once you create the ScaledObject and TriggerAuthentication, the ScaledObject creates an HPA and converts the query result into a metric the HPA understands.</p>
<p>Here I have changed the document to <code>{"enabled": "no"}</code>. Now when you describe the ScaledObject you will see:</p>
<pre><code class="lang-plaintext">➜  keda k get so nginx-so                
NAME       SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   TRIGGERS   AUTHENTICATION    READY   ACTIVE   FALLBACK   PAUSED    AGE
nginx-so   apps/v1.Deployment   nginx-dep         0     5     mongodb    mongodb-trigger   True    False    False      Unknown   28m
</code></pre>
<pre><code class="lang-plaintext">Name:         nginx-so
Namespace:    default
Labels:       scaledobject.keda.sh/name=nginx-so
Annotations:  &lt;none&gt;
API Version:  keda.sh/v1alpha1
Kind:         ScaledObject
Metadata:
  Creation Timestamp:  2024-07-13T06:44:57Z
  Finalizers:
    finalizer.keda.sh
  Generation:        1
  Resource Version:  144637
  UID:               9b385df8-88a4-4d0a-bc68-4e7ffea7f8f7
Spec:
  Cooldown Period:    30
  Max Replica Count:  5
  Min Replica Count:  0
  Scale Target Ref:
    Name:  nginx-dep
  Triggers:
    Authentication Ref:
      Name:  mongodb-trigger
    Metadata:
      Collection:   softdrinks
      Db Name:      drinks
      Query:        { "enabled": "yes" }
      Query Value:  1
    Type:           mongodb
Status:
  Conditions:
    Message:  ScaledObject is defined correctly and is ready for scaling
    Reason:   ScaledObjectReady
    Status:   True
    Type:     Ready
    Message:  Scaling is not performed because triggers are not active
    Reason:   ScalerNotActive
    Status:   False
    Type:     Active
    Message:  No fallbacks are active on this scaled object
    Reason:   NoFallbackFound
    Status:   False
    Type:     Fallback
    Status:   Unknown
    Type:     Paused
  External Metric Names:
    s0-mongodb-softdrinks
  Health:
    s0-mongodb-softdrinks:
      Number Of Failures:  0
      Status:              Happy
  Hpa Name:                keda-hpa-nginx-so
  Last Active Time:        2024-07-13T07:01:28Z
  Original Replica Count:  0
  Scale Target GVKR:
    Group:            apps
    Kind:             Deployment
    Resource:         deployments
    Version:          v1
  Scale Target Kind:  apps/v1.Deployment
Events:
  Type     Reason                      Age   From           Message
  ----     ------                      ----  ----           -------
  Warning  KEDAScalerFailed            29m   keda-operator  failed to parsing mongoDB metadata, because of missing required field in scaler config: no host given
  Warning  ScaledObjectCheckFailed     29m   keda-operator  failed to ensure HPA is correctly created for ScaledObject
  Normal   KEDAScalersStarted          29m   keda-operator  Scaler mongodb is built.
  Normal   KEDAScalersStarted          29m   keda-operator  Started scalers watch
  Normal   ScaledObjectReady           29m   keda-operator  ScaledObject is ready for scaling
  Normal   KEDAScaleTargetActivated    28m   keda-operator  Scaled apps/v1.Deployment default/nginx-dep from 0 to 1, triggered by mongoDBScaler
  Normal   KEDAScaleTargetDeactivated  12m   keda-operator  Deactivated apps/v1.Deployment default/nginx-dep from 1 to 0
</code></pre>
<p>The statements<br /><code>Message: Scaling is not performed because triggers are not active</code> and <code>Normal KEDAScaleTargetDeactivated 12m keda-operator Deactivated apps/v1.Deployment default/nginx-dep from 1 to 0</code> show that the trigger is not active, so the Deployment was scaled to zero.</p>
<p>The HPA reflects the same:</p>
<pre><code class="lang-plaintext">➜  keda k get hpa
NAME                REFERENCE              TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-nginx-so   Deployment/nginx-dep   &lt;unknown&gt;/1 (avg)   1         5         0          32m
➜  keda k describe hpa keda-hpa-nginx-so 
Name:                                              keda-hpa-nginx-so
Namespace:                                         default
Labels:                                            app.kubernetes.io/managed-by=keda-operator
                                                   app.kubernetes.io/name=keda-hpa-nginx-so
                                                   app.kubernetes.io/part-of=nginx-so
                                                   app.kubernetes.io/version=2.14.0
                                                   scaledobject.keda.sh/name=nginx-so
Annotations:                                       &lt;none&gt;
CreationTimestamp:                                 Sat, 13 Jul 2024 01:44:58 -0500
Reference:                                         Deployment/nginx-dep
Metrics:                                           ( current / target )
  "s0-mongodb-softdrinks" (target average value):  &lt;unknown&gt; / 1
Min replicas:                                      1
Max replicas:                                      5
Deployment pods:                                   0 current / 0 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    SucceededGetScale   the HPA controller was able to get the target's current scale
  ScalingActive   False   ScalingDisabled     scaling is disabled since the replica count of the target is zero
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
</code></pre>
<p>Now if I change <code>enabled</code> back to <code>yes</code>:</p>
<pre><code class="lang-plaintext">Name:         nginx-so
Namespace:    default
Labels:       scaledobject.keda.sh/name=nginx-so
Annotations:  &lt;none&gt;
API Version:  keda.sh/v1alpha1
Kind:         ScaledObject
Metadata:
  Creation Timestamp:  2024-07-13T06:44:57Z
  Finalizers:
    finalizer.keda.sh
  Generation:        1
  Resource Version:  146171
  UID:               9b385df8-88a4-4d0a-bc68-4e7ffea7f8f7
Spec:
  Cooldown Period:    30
  Max Replica Count:  5
  Min Replica Count:  0
  Scale Target Ref:
    Name:  nginx-dep
  Triggers:
    Authentication Ref:
      Name:  mongodb-trigger
    Metadata:
      Collection:   softdrinks
      Db Name:      drinks
      Query:        { "enabled": "yes" }
      Query Value:  1
    Type:           mongodb
Status:
  Conditions:
    Message:  ScaledObject is defined correctly and is ready for scaling
    Reason:   ScaledObjectReady
    Status:   True
    Type:     Ready
    Message:  Scaling is performed because triggers are active
    Reason:   ScalerActive
    Status:   True
    Type:     Active
    Message:  No fallbacks are active on this scaled object
    Reason:   NoFallbackFound
    Status:   False
    Type:     Fallback
    Status:   Unknown
    Type:     Paused
  External Metric Names:
    s0-mongodb-softdrinks
  Health:
    s0-mongodb-softdrinks:
      Number Of Failures:  0
      Status:              Happy
  Hpa Name:                keda-hpa-nginx-so
  Last Active Time:        2024-07-13T07:19:58Z
  Original Replica Count:  0
  Scale Target GVKR:
    Group:            apps
    Kind:             Deployment
    Resource:         deployments
    Version:          v1
  Scale Target Kind:  apps/v1.Deployment
Events:
  Type     Reason                      Age                From           Message
  ----     ------                      ----               ----           -------
  Warning  KEDAScalerFailed            35m                keda-operator  failed to parsing mongoDB metadata, because of missing required field in scaler config: no host given
  Warning  ScaledObjectCheckFailed     35m                keda-operator  failed to ensure HPA is correctly created for ScaledObject
  Normal   KEDAScalersStarted          35m                keda-operator  Scaler mongodb is built.
  Normal   KEDAScalersStarted          35m                keda-operator  Started scalers watch
  Normal   ScaledObjectReady           35m                keda-operator  ScaledObject is ready for scaling
  Normal   KEDAScaleTargetDeactivated  18m                keda-operator  Deactivated apps/v1.Deployment default/nginx-dep from 1 to 0
  Normal   KEDAScaleTargetActivated    34s (x2 over 33m)  keda-operator  Scaled apps/v1.Deployment default/nginx-dep from 0 to 1, triggered by mongoDBScaler
</code></pre>
<p>and the HPA looks like this:</p>
<pre><code class="lang-plaintext">Name:                                              keda-hpa-nginx-so
Namespace:                                         default
Labels:                                            app.kubernetes.io/managed-by=keda-operator
                                                   app.kubernetes.io/name=keda-hpa-nginx-so
                                                   app.kubernetes.io/part-of=nginx-so
                                                   app.kubernetes.io/version=2.14.0
                                                   scaledobject.keda.sh/name=nginx-so
Annotations:                                       &lt;none&gt;
CreationTimestamp:                                 Sat, 13 Jul 2024 01:44:58 -0500
Reference:                                         Deployment/nginx-dep
Metrics:                                           ( current / target )
  "s0-mongodb-softdrinks" (target average value):  1 / 1
Min replicas:                                      1
Max replicas:                                      5
Deployment pods:                                   1 current / 1 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from external metric s0-mongodb-softdrinks(&amp;LabelSelector{MatchLabels:map[string]string{scaledobject.keda.sh/name: nginx-so,},MatchExpressions:[]LabelSelectorRequirement{},})
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:           &lt;none&gt;
</code></pre>
<p>Sources:</p>
<p><a target="_blank" href="https://devtron.ai/blog/introduction-to-kubernetes-event-driven-autoscaling-keda/#keda-architecture-and-components">https://devtron.ai/blog/introduction-to-kubernetes-event-driven-autoscaling-keda/#keda-architecture-and-components</a></p>
<p><a target="_blank" href="https://medium.com/@mohammadsaquib.ee/mongodb-powered-autoscaling-harnessing-keda-to-scale-applications-dynamically-based-on-database-f38a68e71db6">https://medium.com/@mohammadsaquib.ee/mongodb-powered-autoscaling-harnessing-keda-to-scale-applications-dynamically-based-on-database-f38a68e71db6</a></p>
<p><a target="_blank" href="https://keda.sh/docs/2.14/scalers/mongodb/">https://keda.sh/docs/2.14/scalers/mongodb/</a></p>
]]></content:encoded></item><item><title><![CDATA[10 Kubernetes Interview Questions]]></title><description><![CDATA[As Kubernetes celebrated its 10th birthday bash on the 6th of June, I decided to write a post about 10 interview questions that I would ask a potential employee to gauge their skills on Kubernetes and its eco-system.
1. Why do we need Gateway API when t...]]></description><link>https://srujanpakanati.com/10-kubernetes-interview-questions</link><guid isPermaLink="true">https://srujanpakanati.com/10-kubernetes-interview-questions</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[interview questions]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Wed, 12 Jun 2024 03:28:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/80VTQEkRh1c/upload/44358b839507b7a44b4dc8667f1f546d.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As Kubernetes celebrated its 10th birthday bash on the 6th of June, I decided to write a post about 10 interview questions that I would ask a potential employee to gauge their skills on Kubernetes and its ecosystem.</p>
<h2 id="heading-1-why-do-need-gateway-api-when-there-is-ingress">1. Why do we need Gateway API when there is Ingress?</h2>
<p>Ingress is used to manage external traffic into the cluster. Rather than creating individual load balancers for each Service, Ingress lets one load balancer serve traffic to all of them. That said, the Ingress design has several drawbacks. Ingress only supports L7 HTTP traffic; it does not support other L7 protocols or L4 protocols like TCP and UDP. There is also no native support for advanced features like rate limiting, A/B testing, or header-based routing, which pushes users toward vendor-specific annotations and leads to vendor lock-in. These drawbacks are why the Gateway API was introduced.</p>
<p>Gateway API is a modern, extensible traffic-management API for Kubernetes that supports both L4 and L7 protocols. It fully embraces the extensibility of Kubernetes by introducing new custom resources like <code>Gateway</code>, <code>GatewayClass</code> and <code>HTTPRoute</code>. This establishes a clear boundary for each resource, making implementations replaceable. It supports advanced features like request mirroring, fine-grained traffic metrics, and more. Gateway API also standardizes the definitions of its API objects, so one vendor can be replaced easily with another, which is a pain point with Ingress.</p>
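<p>To illustrate the role separation, here is a minimal <code>HTTPRoute</code> sketch that attaches to a cluster-operator-owned <code>Gateway</code>; the gateway, hostname, and service names are hypothetical:</p>
<pre><code class="lang-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: shared-gateway        # Gateway owned by the cluster operator
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
      headers:                  # header-based routing, a gap in plain Ingress
      - name: x-canary
        value: "true"
    backendRefs:
    - name: app-canary
      port: 80
</code></pre>
<p>Because the route and the gateway are separate resources, app teams can own their routes while the platform team owns the listener and load balancer.</p>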
<h2 id="heading-2-how-do-you-manage-secrets-in-k8s">2. How do you manage secrets in K8s?</h2>
<p>Secrets are an interesting aspect of Kubernetes. The built-in Kubernetes Secrets are not encrypted at rest, only base64-encoded, so relying on them alone is a serious security risk: if your etcd data is compromised, all your secrets can be read in plain text. One way to mitigate this is to enable encryption at rest for etcd data.</p>
<p>One can use cloud-specific secret stores to manage secrets, together with tools like the External Secrets Operator or the Secrets Store CSI Driver. There are also alternatives like HashiCorp Vault with its sidecar injector, which inserts secrets directly into pods and bypasses Kubernetes Secrets entirely. This is among the most secure ways of managing secrets.</p>
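<p>As a rough sketch of the External Secrets Operator approach (the store, secret, and key names here are hypothetical), an <code>ExternalSecret</code> pulls a value from a cloud secret store and materializes it as a regular Kubernetes Secret:</p>
<pre><code class="lang-yaml">apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h            # re-sync from the external store hourly
  secretStoreRef:
    name: aws-secrets-manager    # a (Cluster)SecretStore configured separately
    kind: ClusterSecretStore
  target:
    name: db-credentials         # the Kubernetes Secret to create
  data:
  - secretKey: password          # key inside the created Secret
    remoteRef:
      key: prod/db               # entry in the external store
      property: password
</code></pre>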
<h2 id="heading-3-how-do-you-debug-applications-in-k8s">3. How do you debug applications in K8s?</h2>
<p>Debugging apps in a K8s environment is challenging because pods do not (and, rightfully for security reasons, should not) include debugging tools like curl or nc. This can be addressed with ephemeral containers, for example attaching a busybox container via the <code>kubectl debug</code> command. This helps with troubleshooting pods, services, etc. and understanding where the issue lies:</p>
<p><code>kubectl debug -it ephemeral-demo --image=busybox:1.28 --target=ephemeral-demo</code></p>
<p>Apart from that, you can analyze K8s events, use <code>kubectl describe</code> to get a detailed view of the application's state, analyze logs with <code>kubectl logs</code>, or port-forward for further debugging.</p>
<h2 id="heading-4-what-are-custom-resources">4. What are Custom Resources?</h2>
<p>Beyond the built-in API, Kubernetes lets you extend it by defining new CustomResourceDefinitions (CRDs) as your requirements demand, along with custom controllers to manage them, using frameworks like Operator SDK or Kubebuilder. This feature is what makes K8s extensible and programmable.</p>
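<p>For illustration, a minimal CRD looks roughly like this (the group and kind are made up); once applied, <code>kubectl get widgets</code> works like any built-in resource:</p>
<pre><code class="lang-yaml">apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com      # must be &lt;plural&gt;.&lt;group&gt;
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:           # validation schema for the custom objects
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
</code></pre>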
<h2 id="heading-5-when-do-you-not-deploy-in-kubernetes">5. When do you not deploy in Kubernetes?</h2>
<p>Though K8s tries to appeal to a wide variety of workloads, there are cases where it is not the ideal solution for your problem, for example:</p>
<ol>
<li><p>If you have a small number of identical applications.</p>
</li>
<li><p>If your applications are not optimized for the cloud.</p>
</li>
<li><p>If your app is still monolithic.</p>
</li>
<li><p>If you cannot keep up with K8s releases and migrations.</p>
</li>
<li><p>If you don't need to scale or deploy new applications.</p>
</li>
</ol>
<h2 id="heading-6-how-do-you-scale-k8s-resources">6. How do you scale K8s resources?</h2>
<p>K8s is built with scalability in mind; a single cluster is designed to scale up to 5,000 nodes or 150,000 pods. There are built-in features to scale pods horizontally and vertically, namely HPA and VPA: you set CPU and memory thresholds, and once those numbers are hit the HPA scales the number of pods. There is also <a target="_blank" href="https://keda.sh/">KEDA</a> (Kubernetes-based Event Driven Autoscaler), which lets us scale applications based on custom metrics.</p>
<p>When it comes to scaling the cluster itself, there are tools like Cluster Autoscaler, which can add nodes when there are unschedulable pods, but Cluster Autoscaler treats all nodes the same. That is where Karpenter comes into the picture: you can set up NodePools for different application types, different regions, and so on. Karpenter is one of the best solutions out there for scaling a cluster.</p>
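<p>As a baseline sketch, an <code>autoscaling/v2</code> HPA that scales a Deployment on average CPU utilization (the Deployment name is a placeholder):</p>
<pre><code class="lang-yaml">apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
</code></pre>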
<h2 id="heading-7-how-do-you-handle-logs-in-k8s">7. How do you handle logs in K8s?</h2>
<p>For small clusters, and where you do not need to retain logs, the <code>kubectl logs</code> command is just fine. But if you need to retain, aggregate, and analyze logs, you have to look at logging solutions like Splunk or Logz.io, which are paid services. Alternatively, you can use stacks like EFK, ELK, or Promtail/Loki/Grafana, where one tool collects and forwards the logs (Fluentd, Logstash, Promtail), one acts as the database (Elasticsearch, Loki), and the other is used for querying, visualization, and alerting.</p>
<h2 id="heading-8-why-do-you-need-to-use-servicemesh">8. Why do you need to use ServiceMesh?</h2>
<p>A service mesh can be a great infrastructure add-on to a K8s cluster. Service meshes like Istio, Cilium, and Linkerd offer a lot of features out of the box: mTLS, canary and progressive deployments, traffic splitting, A/B testing, circuit breaking, and more. Service meshes increase the observability, security, and reliability of apps in the cluster.<br />The caveat is that a service mesh brings a lot of complexity and architectural overhead to the cluster. If your workloads are simple and do not need the additional latency (though minimal) or complexity, you can stay away from them.</p>
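<p>As an example of traffic splitting, an Istio <code>VirtualService</code> can weight traffic between two versions of a service (the names and subsets here are hypothetical, and the subsets would be defined in a matching DestinationRule):</p>
<pre><code class="lang-yaml">apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90                 # 90% of traffic to the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 10                 # 10% canary traffic
</code></pre>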
<h2 id="heading-9-how-do-you-ensure-security-of-k8s-cluster">9. How do you ensure security of K8s cluster?</h2>
<p>Though security is a vast topic, briefly: use network policies so apps can restrict access from other namespaces; use RBAC to make sure only users/service accounts with the proper permissions access resources; use policy-as-code tools like OPA or Kyverno to set guardrails around what gets deployed; use mTLS for app-to-app communication; set up threat-monitoring tools like Falco and Pod Security Admission for early detection and mitigation; and run apps handling sensitive data in separate clusters to keep them PCI-DSS-, HIPAA- and GDPR-compliant.</p>
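<p>For instance, a common starting point is a default-deny NetworkPolicy per namespace, after which required traffic is allowed back selectively (the namespace name is a placeholder):</p>
<pre><code class="lang-yaml">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:           # no ingress/egress rules are listed,
  - Ingress              # so all traffic is denied by default
  - Egress
</code></pre>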
<p>If I am using managed K8s like EKS or GKE, I also use cloud security features like KMS, golden AMIs, etc.</p>
<h2 id="heading-10-how-do-you-handle-stateful-applications-in-k8s">10. How do you handle Stateful applications in K8s?</h2>
<p>Traditionally, Kubernetes has a bad reputation for stateful applications like PostgreSQL, because the popular belief is that K8s is only meant for stateless workloads. That is not true at all. StatefulSets are used to maintain the identity of pods across restarts and redeployments. However, I would not back a StorageClass or PVs with storage on the K8s nodes themselves, because of the ephemeral nature of nodes; using something like EBS volumes via a CSI-driver-backed StorageClass and creating PVCs from it is the recommended approach.</p>
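<p>On EKS, for example, such a StorageClass backed by the EBS CSI driver might look like this (the gp3 parameters are one reasonable choice, not the only one):</p>
<pre><code class="lang-yaml">apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer   # bind only once the pod is scheduled
reclaimPolicy: Delete
</code></pre>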
<p>Alternatively, there are open-source solutions like CloudNativePG and StackGres, which offer a better approach to the problem. Running databases on Kubernetes is still an evolving topic that may see many changes in the coming days.</p>
]]></content:encoded></item><item><title><![CDATA[Installing EFK Stack in EKS]]></title><description><![CDATA[Intro
EFK stands for ElasticSearch, Fluentd and Kibana. It is similar to the ELK stack where we are replacing Logstash with Fluentd. This is a free, open-source alternative to Splunk for log aggregation, processing and visualisation. I have been goin...]]></description><link>https://srujanpakanati.com/installing-efk-stack-in-eks</link><guid isPermaLink="true">https://srujanpakanati.com/installing-efk-stack-in-eks</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[elasticsearch]]></category><category><![CDATA[EFK]]></category><category><![CDATA[logging]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Thu, 06 Jun 2024 16:03:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717689637929/5585dd8f-cead-4965-ae9d-9b72c162aa2e.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-intro">Intro</h2>
<p>EFK stands for Elasticsearch, Fluentd and Kibana. It is similar to the ELK stack, with Logstash replaced by Fluentd. It is a free, open-source alternative to Splunk for log aggregation, processing and visualisation. I have been going through the KodeKloud <a target="_blank" href="https://learn.kodekloud.com/user/courses/learn-by-doing-deploying-and-managing-the-efk-stack-on-kubernetes">course</a> on the EFK stack, but there you use a local volume for persistent storage, which goes against Kubernetes principles. Here I am deploying the EFK stack on an EKS cluster using the EBS add-on.</p>
<p>Here is a small primer on tools that the stack is based upon</p>
<h3 id="heading-elasticsearchhttpswwwelasticcoelasticsearch"><a target="_blank" href="https://www.elastic.co/elasticsearch">Elasticsearch</a></h3>
<p>It is a distributed NoSQL database and search-and-analytics engine based on Apache Lucene. I have been using Elasticsearch to store logs since the beginning of my career.</p>
<h3 id="heading-fluentdhttpswwwfluentdorg"><a target="_blank" href="https://www.fluentd.org/">Fluentd</a></h3>
<p>Fluentd is a lightweight log collector and forwarder. It is deployed as a DaemonSet; it collects all the logs and forwards them to Elasticsearch.</p>
<h3 id="heading-kibana">Kibana</h3>
<p>Kibana is a querying and log-visualization dashboard. It can be used to query logs and to monitor applications.</p>
<h2 id="heading-creating-eks-cluster">Creating EKS cluster</h2>
<p>I have used the eksctl command to create the cluster. There are more robust approaches, like writing a ClusterConfig file for more verbose and consistent cluster creation. But stupid me always does Ctrl+R to look up how I created a cluster previously😆</p>
<pre><code class="lang-yaml"><span class="hljs-string">eksctl</span> <span class="hljs-string">create</span> <span class="hljs-string">cluster</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--name</span> <span class="hljs-string">test-efk-stack</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--region</span> <span class="hljs-string">us-east-1</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--version</span> <span class="hljs-number">1.29</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--with-oidc</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--node-type</span> <span class="hljs-string">t3.medium</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--nodes</span> <span class="hljs-number">2</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--managed</span>
</code></pre>
<p>Next, I create an IAM role and associate it with a service account:</p>
<pre><code class="lang-yaml"><span class="hljs-string">eksctl</span> <span class="hljs-string">create</span> <span class="hljs-string">iamserviceaccount</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--name</span> <span class="hljs-string">"ebs-csi-controller-sa"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--namespace</span> <span class="hljs-string">"kube-system"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--cluster</span> <span class="hljs-string">test-efk-stack</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--region</span> <span class="hljs-string">us-east-1</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--attach-policy-arn</span> <span class="hljs-string">$POLICY_ARN</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--role-only</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--role-name</span> <span class="hljs-string">$ROLE_NAME</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--approve</span>
</code></pre>
<p>Now I am installing the EBS CSI driver add-on on the cluster:</p>
<pre><code class="lang-yaml"><span class="hljs-string">eksctl</span> <span class="hljs-string">create</span> <span class="hljs-string">addon</span> <span class="hljs-string">\</span>                                            
  <span class="hljs-string">--name</span> <span class="hljs-string">"aws-ebs-csi-driver"</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--cluster</span> <span class="hljs-string">$EKS_CLUSTER_NAME</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--region=us-east-1</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--service-account-role-arn</span> <span class="hljs-string">$ACCOUNT_ROLE_ARN</span> <span class="hljs-string">\</span>
  <span class="hljs-string">--force</span>
</code></pre>
<p>The EBS CSI driver is now installed:</p>
<pre><code class="lang-powershell">➜  ~ k get pods <span class="hljs-literal">-n</span> kube<span class="hljs-literal">-system</span> 
NAME                                  READY   STATUS    RESTARTS   AGE
aws<span class="hljs-literal">-node</span><span class="hljs-literal">-bpfkn</span>                        <span class="hljs-number">2</span>/<span class="hljs-number">2</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">21</span>m
aws<span class="hljs-literal">-node</span><span class="hljs-literal">-nbwm6</span>                        <span class="hljs-number">2</span>/<span class="hljs-number">2</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">21</span>m
coredns<span class="hljs-literal">-54d6f577c6</span><span class="hljs-literal">-4ghdt</span>              <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">28</span>m
coredns<span class="hljs-literal">-54d6f577c6</span><span class="hljs-literal">-hm5dj</span>              <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">28</span>m
ebs<span class="hljs-literal">-csi</span><span class="hljs-literal">-controller</span><span class="hljs-literal">-86b8d8bb96</span><span class="hljs-literal">-256dg</span>   <span class="hljs-number">6</span>/<span class="hljs-number">6</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>m1s
ebs<span class="hljs-literal">-csi</span><span class="hljs-literal">-controller</span><span class="hljs-literal">-86b8d8bb96</span><span class="hljs-literal">-kfzhb</span>   <span class="hljs-number">6</span>/<span class="hljs-number">6</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>m1s
ebs<span class="hljs-literal">-csi</span><span class="hljs-literal">-node</span><span class="hljs-literal">-5jqfm</span>                    <span class="hljs-number">3</span>/<span class="hljs-number">3</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>m1s
ebs<span class="hljs-literal">-csi</span><span class="hljs-literal">-node</span><span class="hljs-literal">-6rgjg</span>                    <span class="hljs-number">3</span>/<span class="hljs-number">3</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">3</span>m1s
kube<span class="hljs-literal">-proxy</span><span class="hljs-literal">-wl4z7</span>                      <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">21</span>m
kube<span class="hljs-literal">-proxy</span><span class="hljs-literal">-wxcls</span>                      <span class="hljs-number">1</span>/<span class="hljs-number">1</span>     Running   <span class="hljs-number">0</span>          <span class="hljs-number">21</span>m
</code></pre>
<p>Note: I followed parts of this <a target="_blank" href="https://joachim8675309.medium.com/eks-ebs-storage-with-eksctl-3e526f534215">blog post</a> to create the cluster.</p>
<hr />
<h2 id="heading-installing-elasticsearch">Installing Elasticsearch</h2>
<p>We install Elasticsearch, the datastore, first, and then Kibana and Fluentd.</p>
<p>You can install Elasticsearch using the Helm chart described in this <a target="_blank" href="https://medium.com/@tech_18484/simplifying-kubernetes-logging-with-efk-stack-158da47ce982">blog</a>, and I tried that first. The problem is that by default the chart provisions 3 pods, each claiming a 30GB volume, which is more than this setup needs. I could have supplied my own values, but I went another way and installed a StatefulSet and Service directly, based on this <a target="_blank" href="https://github.com/MerNat/elk-stack-kubernetes/tree/master">GitHub</a> repo. I had to change a few things around, but it eventually worked.</p>
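<p>As an aside, if you do prefer to stay with the Helm chart, its defaults can be overridden with a small values file instead of editing manifests. A sketch, assuming the official <code>elastic/elasticsearch</code> chart (the values shown are illustrative):</p>
<pre><code class="lang-yaml"># values.yaml -- shrink the chart's default footprint
replicas: 1
minimumMasterNodes: 1
volumeClaimTemplate:
  resources:
    requests:
      storage: 5Gi
</code></pre>
<p><code>helm install elasticsearch elastic/elasticsearch -n efk -f values.yaml</code></p>
<p>I went with the StatefulSet below instead.</p>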
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">StatefulSet</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">es-cluster</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">efk</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">serviceName:</span> <span class="hljs-string">elasticsearch</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">1</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">app:</span> <span class="hljs-string">elasticsearch</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">app:</span> <span class="hljs-string">elasticsearch</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">elasticsearch</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">docker.elastic.co/elasticsearch/elasticsearch:8.5.1</span>
          <span class="hljs-attr">resources:</span>
            <span class="hljs-attr">limits:</span>
              <span class="hljs-attr">cpu:</span> <span class="hljs-string">1000m</span>
            <span class="hljs-attr">requests:</span>
              <span class="hljs-attr">cpu:</span> <span class="hljs-string">100m</span>
          <span class="hljs-attr">ports:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">9200</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">rest</span>
              <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">9300</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">inter-node</span>
              <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
          <span class="hljs-attr">volumeMounts:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">data</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/elasticsearch/data</span>
          <span class="hljs-attr">env:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">cluster.name</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">k8s-logs</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">network.host</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"0.0.0.0"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">node.name</span>
              <span class="hljs-attr">valueFrom:</span>
                <span class="hljs-attr">fieldRef:</span>
                  <span class="hljs-attr">fieldPath:</span> <span class="hljs-string">metadata.name</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">discovery.seed_hosts</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"es-cluster-0.elasticsearch"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">discovery.type</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">single-node</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">xpack.license.self_generated.type</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"trial"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">xpack.security.enabled</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"true"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">xpack.monitoring.collection.enabled</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"true"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ES_JAVA_OPTS</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"-Xms256m -Xmx256m"</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">ELASTIC_PASSWORD</span>
              <span class="hljs-attr">value:</span> <span class="hljs-string">"elasticpassword"</span>
      <span class="hljs-attr">initContainers:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">fix-permissions</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
          <span class="hljs-attr">command:</span>
            [<span class="hljs-string">"sh"</span>, <span class="hljs-string">"-c"</span>, <span class="hljs-string">"chown -R 1000:1000 /usr/share/elasticsearch/data"</span>]
          <span class="hljs-attr">securityContext:</span>
            <span class="hljs-attr">privileged:</span> <span class="hljs-literal">true</span>
          <span class="hljs-attr">volumeMounts:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">data</span>
              <span class="hljs-attr">mountPath:</span> <span class="hljs-string">/usr/share/elasticsearch/data</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">increase-vm-max-map</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
          <span class="hljs-attr">command:</span> [<span class="hljs-string">"sysctl"</span>, <span class="hljs-string">"-w"</span>, <span class="hljs-string">"vm.max_map_count=262144"</span>]
          <span class="hljs-attr">securityContext:</span>
            <span class="hljs-attr">privileged:</span> <span class="hljs-literal">true</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">increase-fd-ulimit</span>
          <span class="hljs-attr">image:</span> <span class="hljs-string">busybox</span>
          <span class="hljs-attr">command:</span> [<span class="hljs-string">"sh"</span>, <span class="hljs-string">"-c"</span>, <span class="hljs-string">"ulimit -n 65536"</span>]
          <span class="hljs-attr">securityContext:</span>
            <span class="hljs-attr">privileged:</span> <span class="hljs-literal">true</span>
  <span class="hljs-attr">volumeClaimTemplates:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">metadata:</span>
        <span class="hljs-attr">name:</span> <span class="hljs-string">data</span>
        <span class="hljs-attr">labels:</span>
          <span class="hljs-attr">app:</span> <span class="hljs-string">elasticsearch</span>
      <span class="hljs-attr">spec:</span>
        <span class="hljs-attr">accessModes:</span> [<span class="hljs-string">"ReadWriteOnce"</span>]
        <span class="hljs-attr">storageClassName:</span> <span class="hljs-string">gp2</span>
        <span class="hljs-attr">resources:</span>
          <span class="hljs-attr">requests:</span>
            <span class="hljs-attr">storage:</span> <span class="hljs-string">5Gi</span>
</code></pre>
<pre><code class="lang-yaml"><span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">elasticsearch</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">efk</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">elasticsearch</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">app:</span> <span class="hljs-string">elasticsearch</span>
  <span class="hljs-attr">ports:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">9200</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">rest</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">port:</span> <span class="hljs-number">9300</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">inter-node</span>
  <span class="hljs-attr">type:</span> <span class="hljs-string">LoadBalancer</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1717641924153/a27d43d7-d09d-4cd9-afb6-bbb0c9de6bb5.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-installing-kibana-and-fluentd">Installing Kibana and Fluentd</h2>
<p>Kibana can be installed with a simple Helm command:</p>
<p><code>helm install kibana --set service.type=LoadBalancer elastic/kibana -n efk</code></p>
<p>After installing Kibana, I installed Fluentd and modified its configuration file to ship logs to the Elasticsearch service.</p>
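<p>As a rough sketch of what that modification looks like, the output section of the Fluentd config points at the Elasticsearch service. This assumes the <code>fluent-plugin-elasticsearch</code> output plugin and the <code>elasticsearch</code> Service in the <code>efk</code> namespace created above; the credentials are the illustrative ones from the StatefulSet:</p>
<pre><code class="lang-plaintext">&lt;match kubernetes.**&gt;
  @type elasticsearch
  host elasticsearch.efk.svc.cluster.local
  port 9200
  scheme http
  user elastic
  password elasticpassword
  logstash_format true
&lt;/match&gt;
</code></pre>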
<h2 id="heading-conclusion">Conclusion</h2>
<p>EFK is a powerful open-source alternative to Splunk. This cost-effective stack works well for small to mid-size enterprises with restricted budgets. In future articles we'll explore scaling and securing the EFK stack.</p>
]]></content:encoded></item><item><title><![CDATA[Simplify Kubernetes Secret Management with External Secrets Operator]]></title><description><![CDATA[Intro
Managing secrets in Kubernetes is tricky as the "Secrets" are not really Secrets but just encoded configMaps. You cannot store Secrets definitions in Helm on source code or you cannot rotate these secrets. So there must be a better way of manag...]]></description><link>https://srujanpakanati.com/simplify-kubernetes-secret-management-with-external-secrets-operator</link><guid isPermaLink="true">https://srujanpakanati.com/simplify-kubernetes-secret-management-with-external-secrets-operator</guid><category><![CDATA[awssecretsmanager]]></category><category><![CDATA[ESO ]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[cloud native]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 20 May 2024 21:34:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/9cXMJHaViTM/upload/faf6edc3a876b9dc39ddb97e334faedd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-intro">Intro</h2>
<p>Managing secrets in Kubernetes is tricky: the built-in "Secrets" are not really secret, just base64-encoded ConfigMaps. You cannot safely store Secret definitions in Helm charts or source control, and rotating them is painful. So there must be a better way of managing secrets. You might already keep secrets somewhere else, such as Vault or AWS Secrets Manager, that you want to access in the cluster. One way of addressing this is the External Secrets Operator (ESO).</p>
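<p>To see why Secrets offer no real protection on their own, note that their values are merely base64-encoded, so anyone who can read the Secret can decode them (illustrative values):</p>
<pre><code class="lang-plaintext">$ echo -n 'supersecret' | base64
c3VwZXJzZWNyZXQ=
$ echo -n 'c3VwZXJzZWNyZXQ=' | base64 -d
supersecret
</code></pre>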
<h2 id="heading-eso-architecture">ESO Architecture</h2>
<p><a target="_blank" href="https://external-secrets.io/latest/pictures/diagrams-high-level-simple.png"><img src="https://external-secrets.io/latest/pictures/diagrams-high-level-simple.png" alt="ESO Arch" /></a></p>
<p>ESO extends Kubernetes functionality using Custom Resources. First, install ESO in the cluster using Helm:</p>
<pre><code class="lang-plaintext">helm repo add external-secrets https://charts.external-secrets.io

helm install external-secrets \
   external-secrets/external-secrets \
    -n external-secrets \
    --create-namespace
</code></pre>
<p>This installs a number of CRDs from ESO:</p>
<pre><code class="lang-plaintext">➜  ESOperator k get crds |grep external-secrets
acraccesstokens.generators.external-secrets.io          2024-05-20T16:34:03Z
clusterexternalsecrets.external-secrets.io              2024-05-20T16:34:03Z
clustersecretstores.external-secrets.io                 2024-05-20T16:34:03Z
ecrauthorizationtokens.generators.external-secrets.io   2024-05-20T16:34:03Z
externalsecrets.external-secrets.io                     2024-05-20T16:34:03Z
fakes.generators.external-secrets.io                    2024-05-20T16:34:03Z
gcraccesstokens.generators.external-secrets.io          2024-05-20T16:34:03Z
githubaccesstokens.generators.external-secrets.io       2024-05-20T16:34:03Z
passwords.generators.external-secrets.io                2024-05-20T16:34:03Z
pushsecrets.external-secrets.io                         2024-05-20T16:34:03Z
secretstores.external-secrets.io                        2024-05-20T16:34:03Z
vaultdynamicsecrets.generators.external-secrets.io      2024-05-20T16:34:03Z
webhooks.generators.external-secrets.io                 2024-05-20T16:34:03Z
</code></pre>
<p>Of these, the important ones for understanding the architecture are:</p>
<h3 id="heading-secretstore-secretstoresexternal-secretsio">SecretStore <code>secretstores.external-secrets.io</code></h3>
<p>A SecretStore points at a centralized location where your secrets live, such as Vault, AWS Secrets Manager, or Azure Key Vault. All supported providers are listed in the <a target="_blank" href="https://external-secrets.io/latest/provider/aws-secrets-manager/">documentation</a>. You configure it with a resource definition file. For AWS Secrets Manager, I created a new IAM user, gave it permissions on Secrets Manager, and configured the SecretStore with the definition below:</p>
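<p>The <code>awssm-secret</code> referenced in the definition below must already exist in the cluster; it holds the IAM user's keys and can be created with something like this (the key values are placeholders):</p>
<pre><code class="lang-plaintext">kubectl create secret generic awssm-secret \
  --from-literal=access-key=AKIAXXXXXXXXXXXXXXXX \
  --from-literal=secret-access-key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
</code></pre>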
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">external-secrets.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">SecretStore</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">secretstore-sample</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">provider:</span>
    <span class="hljs-attr">aws:</span>
      <span class="hljs-attr">service:</span> <span class="hljs-string">SecretsManager</span>
      <span class="hljs-attr">region:</span> <span class="hljs-string">us-east-2</span>
      <span class="hljs-attr">auth:</span>
        <span class="hljs-attr">secretRef:</span>
          <span class="hljs-attr">accessKeyIDSecretRef:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">awssm-secret</span>
            <span class="hljs-attr">key:</span> <span class="hljs-string">access-key</span>
          <span class="hljs-attr">secretAccessKeySecretRef:</span>
            <span class="hljs-attr">name:</span> <span class="hljs-string">awssm-secret</span>
            <span class="hljs-attr">key:</span> <span class="hljs-string">secret-access-key</span>
</code></pre>
<p>Once you configure it, make sure it reports Ready:</p>
<pre><code class="lang-plaintext">➜  ESOperator k get secretstores.external-secrets.io        
NAME                 AGE     STATUS   CAPABILITIES   READY
secretstore-sample   3h42m   Valid    ReadWrite      True
</code></pre>
<p>Now I can access all the secrets from Secrets Manager in the cluster. To materialize one as a Kubernetes Secret, you write an ExternalSecret definition.</p>
<h3 id="heading-externalsecret-externalsecretsexternal-secretsio">ExternalSecret <code>externalsecrets.external-secrets.io</code></h3>
<p>As the name suggests, an ExternalSecret represents a secret from an external source; it relies on a SecretStore to fetch it. I used the definition below to fetch the Secrets Manager secret named <code>DB_CREDENTIALS</code>:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">external-secrets.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">ExternalSecret</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">example</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">refreshInterval:</span> <span class="hljs-string">10m</span>
  <span class="hljs-attr">secretStoreRef:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">secretstore-sample</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">SecretStore</span>
  <span class="hljs-attr">target:</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">kube-secret</span>
    <span class="hljs-attr">creationPolicy:</span> <span class="hljs-string">Owner</span>
  <span class="hljs-attr">dataFrom:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">extract:</span>
      <span class="hljs-attr">key:</span> <span class="hljs-string">DB_CREDENTIALS</span>
</code></pre>
<p>Here <code>target.name</code> is the name of the Kubernetes Secret you want created. As you can see, the secret has been created:</p>
<pre><code class="lang-plaintext">NAME           TYPE     DATA   AGE
awssm-secret   Opaque   2      4h32m
kube-secret    Opaque   2      3h52m
</code></pre>
<p>Note that the resulting Secret is still not encrypted at rest. You have to enable <a target="_blank" href="https://techexpertise.medium.com/encrypting-the-secret-data-at-etcd-store-on-a-minikube-k8s-cluster-2338c68263a5">etcd encryption at rest</a> to protect the data. Using these two together makes your cluster more secure and robust.</p>
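<p>For reference, etcd encryption at rest is enabled by pointing the kube-apiserver's <code>--encryption-provider-config</code> flag at an <code>EncryptionConfiguration</code> file. A minimal sketch (the key material is a placeholder you must generate yourself):</p>
<pre><code class="lang-yaml">apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: &lt;base64-encoded-32-byte-key&gt;
      - identity: {}
</code></pre>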
<h2 id="heading-conclusion">Conclusion</h2>
<p>ESO is extensible and can be used in many scenarios. It is a compelling alternative to Vault sidecar injection, especially when you take cost into consideration.</p>
]]></content:encoded></item><item><title><![CDATA[Istio v1.22: Trying latest Ambient mode]]></title><description><![CDATA[Need for Istio
Istio is a service mesh for Kubernetes clusters. It offers many features like zero trust network using mTLS, Traffic management, Routing, observability using Kiali etc. Well how does this do it? By inserting a lightweight sidecar conta...]]></description><link>https://srujanpakanati.com/istio-v122-trying-latest-ambient-mode</link><guid isPermaLink="true">https://srujanpakanati.com/istio-v122-trying-latest-ambient-mode</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[#istio]]></category><category><![CDATA[istio service mesh]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Mon, 20 May 2024 15:34:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716219688251/51c5ba16-05bd-4a4a-a189-c3977f98f112.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-need-for-istio">Need for Istio</h2>
<p>Istio is a service mesh for Kubernetes clusters. It offers many features: a zero-trust network using mTLS, traffic management, routing, observability with Kiali, and more. How does it do this? By injecting a lightweight sidecar proxy container into every pod, so that traffic flows to and from these proxies. In layman's terms, if a Kubernetes cluster is a town with paved roads, Istio installs highways with traffic signals, CCTV cameras, and a centralized traffic HQ.</p>
<p><a target="_blank" href="https://istio.io/latest/blog/2022/introducing-ambient-mesh/traditional-istio.png"><img src="https://istio.io/latest/blog/2022/introducing-ambient-mesh/traditional-istio.png" alt="Istio’s traditional model deploys Envoy proxies as sidecars within the workloads’ pods" /></a></p>
<h2 id="heading-limitations-with-istio">Limitations with Istio</h2>
<p>The problem with Istio is that the sidecar containers need dedicated resources; typically 128Mi of memory and 100m of CPU are allotted to each sidecar. That is minimal per pod, but summed over every pod running in a cluster it becomes a significant amount of resources.</p>
<p>The other problem is that traffic between pods makes two extra hops, to and from the sidecar proxies, which adds latency. Yes, it is minimal (~10-20ms), but it adds up. This <a target="_blank" href="https://istio.io/latest/blog/2022/introducing-ambient-mesh/">article</a> explores this further.</p>
<h2 id="heading-what-is-istio-ambient-mesh">What is Istio Ambient Mesh</h2>
<p>On the surface, it is simply a new way of proxying without injecting sidecar containers. This ups Istio's game, bringing it closer to Cilium, which uses eBPF to manipulate traffic. Istio now separates the concerns of L4 and L7 proxying by using two different mechanisms, which changes the way Istio is deployed and operated.</p>
<p>For L4 proxying, Istio uses a node proxy called ztunnel. It is a DaemonSet deployed on every node that handles bidirectional traffic and secures it with mTLS.</p>
<p>For L7 proxying, Istio uses the waypoint proxy. It is deployed not inside application pods but as stand-alone pods running in parallel with the workloads, typically one per namespace or service account.</p>
<h2 id="heading-testing-istio-in-ambient-mode">Testing Istio in Ambient Mode</h2>
<p>For this I am trying out the test setup for ambient mode written in Istio's <a target="_blank" href="https://istio.io/latest/blog/2022/get-started-ambient/">blog</a>  </p>
<p>I have created a Kind cluster</p>
<pre><code class="lang-plaintext">➜  terraform_projects git:(main) ✗ kind create cluster --config=- &lt;&lt;EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ambient
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
Creating cluster "ambient" ...
 ✓ Ensuring node image (kindest/node:v1.29.2) 🖼 
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-ambient"
You can now use your cluster with:

kubectl cluster-info --context kind-ambient

Thanks for using kind! 😊
➜  terraform_projects git:(main) ✗ k get nodes
NAME                    STATUS   ROLES           AGE     VERSION
ambient-control-plane   Ready    control-plane   5m58s   v1.29.2
ambient-worker          Ready    &lt;none&gt;          5m34s   v1.29.2
ambient-worker2         Ready    &lt;none&gt;          5m34s   v1.29.2
</code></pre>
<p>Now to install istio, I am using istioctl</p>
<pre><code class="lang-plaintext">➜  ~ istioctl install --set profile=ambient

This will install the Istio 1.21.2 "ambient" profile (with components: Istio core, Istiod, CNI, and Ztunnel) into the cluster. Proceed? (y/N) y
✔ Istio core installed                                                                                                                                                                  
✔ Istiod installed                                                                                                                                                                      
✔ CNI installed                                                                                                                                                                         
✔ Ztunnel installed                                                                                                                                                                     
✔ Installation complete                                                                                                                                                                Made this installation the default for injection and validation.
</code></pre>
<p>If we check the pods installed in istio-system,</p>
<pre><code class="lang-plaintext">➜  ~ k get pods -n istio-system 
NAME                     READY   STATUS    RESTARTS   AGE
istio-cni-node-blxdv     1/1     Running   0          8m2s
istio-cni-node-k5q9p     1/1     Running   0          8m2s
istio-cni-node-nmg86     1/1     Running   0          8m2s
istiod-ff94b9d97-6tzfp   1/1     Running   0          8m17s
ztunnel-dsfdh            1/1     Running   0          7m28s
ztunnel-kwnz7            1/1     Running   0          7m28s
ztunnel-z78gf            1/1     Running   0          7m28s
</code></pre>
<p>Both ztunnel and the Istio CNI plugin are installed as DaemonSets. ztunnel intercepts traffic flowing between nodes, while the CNI plugin intercepts traffic between pods on the same node and redirects it through ztunnel, effectively forcing all traffic through the ambient data plane.</p>
<p>To test it, I have installed bookinfo application, sleep and notsleep pods as well.</p>
<pre><code class="lang-plaintext">➜  ~ k apply -f https://raw.githubusercontent.com/istio/istio/release-1.22/samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
➜  ~ k get pods
NAME                             READY   STATUS              RESTARTS   AGE
details-v1-cf74bb974-8bmhv       0/1     ContainerCreating   0          32s
productpage-v1-87d54dd59-2mxrt   1/1     Running             0          32s
ratings-v1-7c4bbf97db-cxmps      1/1     Running             0          32s
reviews-v1-5fd6d4f8f8-sfzv4      0/1     ContainerCreating   0          32s
reviews-v2-6f9b55c5db-9wh4k      0/1     ContainerCreating   0          32s
reviews-v3-7d99fd7978-jm5sr      0/1     ContainerCreating   0          32s
➜  ~ kubectl apply -f https://raw.githubusercontent.com/linsun/sample-apps/main/sleep/sleep.yaml
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created
➜  ~ kubectl apply -f https://raw.githubusercontent.com/linsun/sample-apps/main/sleep/notsleep.yaml


serviceaccount/notsleep created
service/notsleep created
deployment.apps/notsleep created
</code></pre>
<p>Ambient mode is enabled per namespace, and we haven't enabled it yet, so I can hit other pods from the sleep pod and get a response directly:</p>
<pre><code class="lang-plaintext">➜  ~ kubectl exec deploy/sleep -- curl -s http://productpage:9080/           
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;title&gt;Simple Bookstore App&lt;/title&gt;
&lt;meta charset="utf-8"&gt;
&lt;meta http-equiv="X-UA-Compatible" content="IE=edge"&gt;
&lt;meta name="viewport" content="width=device-width, initial-scale=1"&gt;
</code></pre>
<p>After enabling ambient mode with <code>kubectl label namespace default istio.io/dataplane-mode=ambient</code>, I could see traffic flowing through ztunnel:</p>
<pre><code class="lang-plaintext">➜  ~ kubectl label namespace default istio.io/dataplane-mode=ambient
namespace/default labeled
➜  ~ kubectl exec deploy/sleep -- curl -s http://productpage:9080/ | head -n1
&lt;!DOCTYPE html&gt;
➜  ~ kubectl exec deploy/sleep -- curl -s http://productpage:9080/ | head -n5
&lt;!DOCTYPE html&gt;
&lt;html&gt;
  &lt;head&gt;
    &lt;title&gt;Simple Bookstore App&lt;/title&gt;
&lt;meta charset="utf-8"&gt;
</code></pre>
<p>Here are the corresponding ztunnel logs:</p>
<pre><code class="lang-plaintext">2024-05-17T20:25:18.143503Z     info    access  connection complete     src.addr=10.244.1.4:42786 src.workload=sleep-97576b68f-ftqsh src.namespace=default src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.2.8:15008 dst.hbone_addr=10.244.2.8:9080 dst.service=productpage.default.svc.cluster.local dst.workload=productpage-v1-87d54dd59-jvj79 dst.namespace=default dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=79 bytes_recv=1918 duration="128ms"
2024-05-17T20:25:19.755188Z     info    access  connection complete     src.addr=10.244.1.4:42796 src.workload=sleep-97576b68f-ftqsh src.namespace=default src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.2.8:15008 dst.hbone_addr=10.244.2.8:9080 dst.service=productpage.default.svc.cluster.local dst.workload=productpage-v1-87d54dd59-jvj79 dst.namespace=default dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=79 bytes_recv=1918 duration="5ms"
2024-05-17T20:25:21.033103Z     info    access  connection complete     src.addr=10.244.1.4:42810 src.workload=sleep-97576b68f-ftqsh src.namespace=default src.identity="spiffe://cluster.local/ns/default/sa/sleep" dst.addr=10.244.2.8:15008 dst.hbone_addr=10.244.2.8:9080 dst.service=productpage.default.svc.cluster.local dst.workload=productpage-v1-87d54dd59-jvj79 dst.namespace=default dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" direction="outbound" bytes_sent=79 bytes_recv=1918 duration="7ms"
</code></pre>
<p>Beyond mTLS, you can set authorization policies for different workloads:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">security.istio.io/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">AuthorizationPolicy</span>
<span class="hljs-attr">metadata:</span>
 <span class="hljs-attr">name:</span> <span class="hljs-string">productpage-viewer</span>
 <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
 <span class="hljs-attr">selector:</span>
   <span class="hljs-attr">matchLabels:</span>
     <span class="hljs-attr">app:</span> <span class="hljs-string">productpage</span>
 <span class="hljs-attr">action:</span> <span class="hljs-string">ALLOW</span>
 <span class="hljs-attr">rules:</span>
 <span class="hljs-bullet">-</span> <span class="hljs-attr">from:</span>
   <span class="hljs-bullet">-</span> <span class="hljs-attr">source:</span>
       <span class="hljs-attr">principals:</span> [<span class="hljs-string">"cluster.local/ns/default/sa/sleep"</span>, <span class="hljs-string">"cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"</span>]
</code></pre>
<pre><code class="lang-plaintext">➜  ~ kubectl exec deploy/sleep -- curl -s http://productpage:9080/ | head -n1
&lt;!DOCTYPE html&gt;
➜  ~ kubectl exec deploy/notsleep -- curl -s http://productpage:9080/ | head -n1


command terminated with exit code 56
</code></pre>
<p>Using an Istio gateway, you can bring up a waypoint proxy to do L7 load balancing. This creates waypoint proxy pods that are not attached to the application pods but run as stand-alone pods.</p>
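<p>As a rough sketch (based on the upstream ambient docs; field values may differ between Istio versions), a waypoint is declared as a Kubernetes Gateway resource using the <code>istio-waypoint</code> gateway class, or generated for you by <code>istioctl x waypoint apply</code>:</p>
<pre><code class="lang-yaml">apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: default
  labels:
    istio.io/waypoint-for: service   # serve L7 policy for services in this namespace
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
</code></pre>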
<h3 id="heading-trying-on-eks">Trying on EKS</h3>
<p>I also tried the same demo on an EKS cluster, but the ztunnel DaemonSet did not come up.</p>
<pre><code class="lang-plaintext">➜  ~ k get pods -n istio-system 
NAME                      READY   STATUS    RESTARTS   AGE
istio-cni-node-26bn9      1/1     Running   0          5m27s
istio-cni-node-kzd82      1/1     Running   0          5m27s
istio-cni-node-x99n2      1/1     Running   0          5m27s
istiod-5888647857-d9vdh   0/1     Pending   0          5m22s
ztunnel-7qr8l             0/1     Pending   0          5m22s
ztunnel-pgpsn             0/1     Pending   0          5m22s
ztunnel-z2kvc             0/1     Running   0          5m22s
</code></pre>
<p>The logs show that the pods are failing with an <code>XDS client connection error</code> (note that <code>istiod</code> itself is stuck in <code>Pending</code>, so there is no XDS server for ztunnel to connect to):</p>
<pre><code class="lang-plaintext">2024-05-17T17:33:58.934625Z     warn    xds::client:xds{id=29}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:34:13.939490Z     warn    xds::client:xds{id=30}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:34:28.943546Z     warn    xds::client:xds{id=31}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:34:43.947796Z     warn    xds::client:xds{id=32}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:34:58.950405Z     warn    xds::client:xds{id=33}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:35:13.955870Z     warn    xds::client:xds{id=34}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:35:28.960054Z     warn    xds::client:xds{id=35}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:35:43.964064Z     warn    xds::client:xds{id=36}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:35:58.968351Z     warn    xds::client:xds{id=37}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:36:13.972471Z     warn    xds::client:xds{id=38}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:36:28.975409Z     warn    xds::client:xds{id=39}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
2024-05-17T17:36:43.979856Z     warn    xds::client:xds{id=40}  XDS client connection error: gRPC connection error:status: Unknown, message: "client error (Connect)", source: tcp connect error: Connection refused (os error 111), retrying in 15s
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I believe this is Istio's attempt to stay relevant and address customer concerns, and it hits the nail on the head. I am looking forward to testing it further.</p>
]]></content:encoded></item><item><title><![CDATA[Understanding Zero Trust Networks, mTLS and SPIFFE]]></title><description><![CDATA[Need for ZTN
Firstly, Zero Trust Network is where no network entity is trusted by default or with respect to its position in the network. Every workload, node, machine needs to continuously identify itself.
Shortcomings of traditional network archite...]]></description><link>https://srujanpakanati.com/understanding-zero-trust-networks-mtls-and-spiffe</link><guid isPermaLink="true">https://srujanpakanati.com/understanding-zero-trust-networks-mtls-and-spiffe</guid><category><![CDATA[spiffe]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[spire]]></category><category><![CDATA[mTLS]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Tue, 30 Apr 2024 19:32:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1714505446961/259a2309-6ae5-4b6d-b5af-98b0c63b07f4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-need-for-ztn">Need for ZTN</h2>
<p>A Zero Trust Network is one where no network entity is trusted by default or by virtue of its position in the network. Every workload, node, and machine needs to continuously identify itself.</p>
<h3 id="heading-shortcomings-of-traditional-network-architecture">Shortcomings of traditional network architecture.</h3>
<p>Traditionally, IT companies had a network behind a firewall, and all entities within that network were trusted by default. This is also called the castle-and-moat architecture. We have all used VPNs to connect to such networks. The problem with this approach is that once attackers gain access to the network, they have free rein over it and can access whatever data they can reach. Nowadays, networks span multiple clouds, so the traditional network architecture no longer works.</p>
<h3 id="heading-additional-risks-due-to-microservices">Additional risks due to microservices</h3>
<p>The emergence of the microservices architecture means that sensitive data now flows continuously between all the microservices, adding the risk of spoofing attacks on them. You can use application-specific mechanisms to hold the certificates of other workloads, like a Java truststore, but the lack of a coherent solution across all the different workloads is a problem.</p>
<h3 id="heading-ephemeral-nature-of-underlying-hardware-in-the-cloud-native-ecosystem">Ephemeral nature of underlying hardware in the cloud-native ecosystem.</h3>
<p>When using Kubernetes or similar container orchestration tools, the underlying nodes are replaced continuously, so you cannot keep whitelisting IP addresses or instance IDs as the nodes change; it is simply not practical.</p>
<h2 id="heading-mtls-and-spiffe">mTLS and SPIFFE</h2>
<p>As we all know, the modern internet runs on TLS certificates. mTLS means using TLS certificates to mutually authenticate both sides of a connection, and it is a proven way of implementing a Zero Trust Network: a CA issues certificates to all the required workloads, and the workloads then authenticate each other with those certificates.</p>
<p><a target="_blank" href="https://spiffe.io/">SPIFFE</a> is</p>
<blockquote>
<p><strong>SPIFFE</strong>, the Secure Production Identity Framework for Everyone, is a set of open-source standards for securely identifying software systems in dynamic and heterogeneous environments. Systems that adopt SPIFFE can easily and reliably mutually authenticate wherever they are running.</p>
</blockquote>
<p>SPIFFE sets the standard for how a ZTN can be implemented. It works by running a Workload API on all the nodes, which issues SVIDs (SPIFFE Verifiable Identity Documents) that workloads can use to authenticate themselves. These documents are short-lived, constantly rotated, and cryptographically signed, making them more secure.</p>
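<p>For example, an SVID carries a SPIFFE ID, a URI of the form <code>spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;</code>; for a Kubernetes workload this typically encodes the namespace and service account (the names below are illustrative):</p>
<pre><code class="lang-plaintext">spiffe://cluster.local/ns/default/sa/my-service
</code></pre>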
<p><a target="_blank" href="https://spiffe.io/docs/latest/spire-about/spire-concepts/">SPIRE</a> is</p>
<blockquote>
<p>SPIRE is a production-ready implementation of the <a target="_blank" href="https://spiffe.io/docs/latest/spiffe/overview/"><strong>SPIFFE APIs</strong></a> that performs node and workload attestation in order to securely issue SVIDs to workloads, and verify the SVIDs of other workloads, based on a predefined set of conditions.</p>
</blockquote>
<p>The SPIFFE project has created both the standards and an implementation of them, so that companies can implement mTLS. It works similarly to HashiCorp Vault or Istio: a centrally located SPIRE Server works with SPIRE Agents that run on all the nodes. This <a target="_blank" href="https://spiffe.io/docs/latest/spire-about/spire-concepts/#a-day-in-the-life-of-an-svid">part</a> of the docs details how the entire process works. SPIFFE and SPIRE were created primarily for cloud-native workloads but can work with traditional VMs as well. There are multiple ways to set up the SPIRE Server and Agent; the easiest is to have both inside the same Kubernetes <a target="_blank" href="https://spiffe.io/docs/latest/try/getting-started-k8s/">cluster</a>.</p>
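<p>As an illustrative sketch (the trust domain, IDs, and selectors below are hypothetical), a workload is registered with the SPIRE Server by creating a registration entry that maps attestation selectors to a SPIFFE ID:</p>
<pre><code class="lang-plaintext">spire-server entry create \
    -parentID spiffe://example.org/ns/spire/sa/spire-agent \
    -spiffeID spiffe://example.org/ns/default/sa/my-service \
    -selector k8s:ns:default \
    -selector k8s:sa:my-service
</code></pre>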
<h3 id="heading-conclusion">Conclusion</h3>
<p>Though SPIFFE is not the only option, it is a compelling way to approach a Zero Trust Network with mTLS in cloud-native environments.</p>
]]></content:encoded></item><item><title><![CDATA[Introducing Uwubernetes i.e. Kubernetes v1.30 ❤️]]></title><description><![CDATA[Kubernetes project is progressing with a great pace and is being widely adopted by different sectors for wide variety of workloads. Hence there are lot of new features being added to the project to make it more robust and stable. This project also ha...]]></description><link>https://srujanpakanati.com/introducing-uwubernetes-ie-kubernetes-v130</link><guid isPermaLink="true">https://srujanpakanati.com/introducing-uwubernetes-ie-kubernetes-v130</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Fri, 19 Apr 2024 17:36:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1713544980898/3c3578c9-5e67-401e-8ade-255faf2703f4.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Kubernetes project is progressing at a great pace and is being widely adopted across different sectors for a wide variety of workloads. Hence, a lot of new features are being added to make the project more robust and stable. Each release introduces stable, beta, and alpha features. You can find more about Uwubernetes (I know, it is cute and weird at the same time 😜) <a target="_blank" href="https://kubernetes.io/blog/2024/04/17/kubernetes-v1-30-release/">here</a></p>
<h2 id="heading-stable-features">Stable Features</h2>
<ul>
<li><p>The volume manager is refactored to get more information about how volumes are mounted after a kubelet restart</p>
</li>
<li><p>Prevent unauthorized volume mode conversion during volume restore. From 1.30, only a ServiceAccount with the right permissions can change the volume mode while restoring a snapshot to a PV. More information <a target="_blank" href="https://kubernetes.io/docs/concepts/storage/volume-snapshots/#convert-volume-mode">here</a></p>
</li>
<li><p>Using <code>.spec.schedulingGates</code>, you can now control when a pod is considered ready for scheduling. This spares the scheduler from repeatedly trying to place a pod that cannot yet be placed due to resource constraints or similar issues. You can find more about SchedulingGates <a target="_blank" href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-scheduling-readiness/">here</a>.</p>
</li>
<li><p>You can now use the <code>minDomains</code> parameter in PodTopologySpread constraints as it graduates. PodTopologySpread is an interesting feature for distributing pods across a cluster or regions; it comes with a number of constraints and requires significant learning. Read about it <a target="_blank" href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/">here</a>.</p>
</li>
<li><p>Kubernetes now uses <a target="_blank" href="https://www.kubernetes.dev/blog/2024/03/19/go-workspaces-in-kubernetes/">Go workspaces</a>. This does not affect end users but does affect developers.</p>
</li>
</ul>
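<p>As a quick sketch of the <code>schedulingGates</code> feature above (the gate name is illustrative), a pod created with a non-empty gate list stays in the <code>SchedulingGated</code> state until a controller removes the gate:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulingGates:
  - name: example.com/capacity-check   # scheduler waits until this gate is removed
  containers:
  - name: pause
    image: registry.k8s.io/pause:3.9
</code></pre>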
<h2 id="heading-beta-features">Beta Features</h2>
<ul>
<li><p>You can now get logs from individual nodes using kubelet features like <code>NodeLogQuery</code>, <code>enableSystemLogHandler</code> and <code>enableSystemLogQuery</code>. See more information <a target="_blank" href="https://kubernetes.io/docs/concepts/cluster-administration/system-logs/#log-query">here</a>.</p>
</li>
<li><p>CRD validation ratcheting, behind the <code>CRDValidationRatcheting</code> feature gate.</p>
</li>
<li><p>Contextual logging. With this, developers can add contextual details to logs using <code>WithValues</code> and <code>WithName</code>.</p>
</li>
<li><p>Make Kubernetes aware of the LoadBalancer behaviour. Read more <a target="_blank" href="https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-ip-mode">here</a>.</p>
</li>
</ul>
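<p>For instance, with the log-query feature enabled on a node, kubelet service logs can be fetched through the API server proxy (the node name here is just an example):</p>
<pre><code class="lang-plaintext">kubectl get --raw "/api/v1/nodes/node-1.example/proxy/logs/?query=kubelet"
</code></pre>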
<p>There are a ton of <a target="_blank" href="https://kubernetes.io/blog/2024/04/17/kubernetes-v1-30-release/#new-alpha-features">alpha features</a> being added to Uwubernetes 😂. The strong developer community and the wide variety of use cases are making Kubernetes the de-facto platform for the next two decades.</p>
]]></content:encoded></item><item><title><![CDATA[Checking out HAProxy Ingress Controller]]></title><description><![CDATA[Introduction
For me HAProxy is reminiscent of the days when I am still figuring out my way around software world. I started my career while working with HAProxy load-balancer. Back then we used to use automated shell scripts to dynamically configure ...]]></description><link>https://srujanpakanati.com/checking-out-haproxy-ingress-controller</link><guid isPermaLink="true">https://srujanpakanati.com/checking-out-haproxy-ingress-controller</guid><category><![CDATA[Haproxy]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ingress]]></category><category><![CDATA[Ingress Controllers]]></category><category><![CDATA[Load Balancing]]></category><dc:creator><![CDATA[Srujan Reddy]]></dc:creator><pubDate>Thu, 28 Mar 2024 21:15:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1711660487852/0f42eee8-d44a-4a95-aeb1-832152a64109.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>For me, HAProxy is reminiscent of the days when I was still figuring out my way around the software world. I started my career working with the HAProxy load balancer. Back then, we used automated shell scripts to dynamically configure backends for load balancing. Both of us have come a long way since then 😀</p>
<h2 id="heading-yet-another-ingress-controller">Yet another Ingress Controller</h2>
<p>In an ecosystem overcrowded with Ingress controllers, HAProxy brings an enterprise-focused approach and innovations like rootless containers and the QUIC protocol. Coming from a legacy of load balancing on VMs, all the existing features are extended to the Ingress Controller as well. <a target="_blank" href="https://github.com/haproxytech/kubernetes-ingress/blob/master/documentation/controller.md">Here</a> is the list of all the ConfigMap options that are available.</p>
<h2 id="heading-installation">Installation</h2>
<p>The HAProxy Ingress Controller installation is straightforward. Here I have created a simple EKS cluster and installed HAProxy using Helm:</p>
<pre><code class="lang-markdown"><span class="hljs-section">## Cluster</span>
➜  ~ k get nodes
NAME                                       STATUS   ROLES    AGE     VERSION
ip-10-0-1-221.us-east-2.compute.internal   Ready    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">none</span>&gt;</span></span>   2m15s   v1.27.9-eks-5e0fdde
ip-10-0-2-143.us-east-2.compute.internal   Ready    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">none</span>&gt;</span></span>   2m21s   v1.27.9-eks-5e0fdde

<span class="hljs-section">## HAProxy installation</span>
➜  ~ helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
  --create-namespace \
  --namespace haproxy-controller
NAME: haproxy-kubernetes-ingress
LAST DEPLOYED: Tue Mar 26 16:31:10 2024
NAMESPACE: haproxy-controller
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
HAProxy Kubernetes Ingress Controller has been successfully installed.

Controller image deployed is: "haproxytech/kubernetes-ingress:1.11.2".
Your controller is of a "Deployment" kind. Your controller service is running as a "NodePort" type.
RBAC authorization is enabled.
Controller ingress.class is set to "haproxy" so make sure to use same annotation for
Ingress resource.

Service ports mapped are:
<span class="hljs-bullet">  -</span> name: http
<span class="hljs-code">    containerPort: 8080
    protocol: TCP
  - name: https
    containerPort: 8443
    protocol: TCP
  - name: stat
    containerPort: 1024
    protocol: TCP
  - name: quic
    containerPort: 8443
    protocol: UDP
</span>
<span class="hljs-section">## Components Installed</span>

➜  learn<span class="hljs-emphasis">_haproxy_</span>ingress k get all -n haproxy-controller                                         
NAME                                              READY   STATUS      RESTARTS   AGE
pod/haproxy-kubernetes-ingress-6b9d5f976c-bvqfd   1/1     Running     0          18s
pod/haproxy-kubernetes-ingress-6b9d5f976c-v2lb4   1/1     Running     0          18s
pod/haproxy-kubernetes-ingress-crdjob-1-qzr5p     0/1     Completed   0          18s

NAME                                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                  AGE
service/haproxy-kubernetes-ingress   NodePort   10.110.187.218   <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">none</span>&gt;</span></span>        80:32563/TCP,443:30875/TCP,443:30875/UDP,1024:32400/TCP,6060:31395/TCP   18s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/haproxy-kubernetes-ingress   2/2     2            2           18s

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/haproxy-kubernetes-ingress-6b9d5f976c   2         2         2       18s
</code></pre>
<h2 id="heading-testing-with-sample-application">Testing with sample application</h2>
<p><em>On a side note, I am also trying the Ingress controller in minikube, where it runs as a NodePort service. I have to port-forward these ports using the command:</em></p>
<pre><code class="lang-markdown">➜  learn<span class="hljs-emphasis">_haproxy_</span>ingress minikube service haproxy-kubernetes-ingress -n haproxy-controller --url
http://127.0.0.1:50314
http://127.0.0.1:50315
http://127.0.0.1:50316
http://127.0.0.1:50317
http://127.0.0.1:50318
</code></pre>
<p><em>Now you have to figure out which host port forwards to which service port of the Ingress. I found that 50317 is HTTP (80) and 50315 is HTTPS (443). If you try this yourself, your ports may be the same or may differ.</em></p>
<p>For trying the Ingress, I have followed their book "<a target="_blank" href="https://www.haproxy.com/content-library/ebooks/haproxy-in-kubernetes-supercharge-your-ingress-routing">HAProxy in Kubernetes - Supercharge Your Ingress Routing</a>".</p>
<p>I deployed the sample application using the definition below:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">app</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">replicas:</span> <span class="hljs-number">2</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">matchLabels:</span>
      <span class="hljs-attr">run:</span> <span class="hljs-string">app</span>
  <span class="hljs-attr">template:</span>
    <span class="hljs-attr">metadata:</span>
      <span class="hljs-attr">labels:</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">app</span>
    <span class="hljs-attr">spec:</span>
      <span class="hljs-attr">containers:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
        <span class="hljs-attr">image:</span> <span class="hljs-string">jmalloc/echo-server</span>
        <span class="hljs-attr">ports:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">containerPort:</span> <span class="hljs-number">8080</span>

<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Service</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">labels:</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">app</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-attr">haproxy.org/check:</span> <span class="hljs-string">"enabled"</span>
    <span class="hljs-attr">haproxy.org/forwarded-for:</span> <span class="hljs-string">"enabled"</span>
    <span class="hljs-attr">haproxy.org/load-balance:</span> <span class="hljs-string">"roundrobin"</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">selector:</span>
    <span class="hljs-attr">run:</span> <span class="hljs-string">app</span>
  <span class="hljs-attr">ports:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">port-1</span>
    <span class="hljs-attr">port:</span> <span class="hljs-number">80</span>
    <span class="hljs-attr">protocol:</span> <span class="hljs-string">TCP</span>
    <span class="hljs-attr">targetPort:</span> <span class="hljs-number">8080</span>

<span class="hljs-meta">---</span>
<span class="hljs-attr">apiVersion:</span> <span class="hljs-string">networking.k8s.io/v1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">web-ingress</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">foo.bar</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
          <span class="hljs-attr">pathType:</span> <span class="hljs-string">Prefix</span>
          <span class="hljs-attr">backend:</span>
            <span class="hljs-attr">service:</span>
              <span class="hljs-attr">name:</span> <span class="hljs-string">app</span>
              <span class="hljs-attr">port:</span> 
                <span class="hljs-attr">number:</span> <span class="hljs-number">80</span>
</code></pre>
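<p>With the Ingress applied, the routing can be checked by sending the expected <code>Host</code> header to the forwarded HTTP port (your port number will likely differ):</p>
<pre><code class="lang-plaintext">curl -H "Host: foo.bar" http://127.0.0.1:50317/
</code></pre>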
<h2 id="heading-features">Features</h2>
<p>The HAProxy Ingress Controller comes with all the features HAProxy has. It also supports creating TLS certificates on the fly for individual services as required.</p>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">extensions/v1beta1</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">Ingress</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">annotations:</span>
    <span class="hljs-comment"># add an annotation indicating the issuer to use</span>
    <span class="hljs-attr">cert-manager.io/cluster-issuer:</span> <span class="hljs-string">letsencrypt-staging</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">mysite-ingress</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">default</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">rules:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">host:</span> <span class="hljs-string">mysite.com</span>
    <span class="hljs-attr">http:</span>
      <span class="hljs-attr">paths:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">path:</span> <span class="hljs-string">/</span>
        <span class="hljs-attr">backend:</span>
          <span class="hljs-attr">serviceName:</span> <span class="hljs-string">mysite-service</span>
          <span class="hljs-attr">servicePort:</span> <span class="hljs-number">80</span>
  <span class="hljs-attr">tls:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">secretName:</span> <span class="hljs-string">mysite-cert</span>
    <span class="hljs-attr">hosts:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">mysite.com</span>
</code></pre>
<p>It also allows you to define backend, global, and defaults groups for fine-grained control over settings like the load-balancing <code>algorithm</code>, <code>keep-alive</code>, and <code>forwardfor</code>.</p>
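<p>As an illustrative sketch (verify the exact key names against the controller documentation linked above), such options are typically set cluster-wide through the controller's ConfigMap:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
data:
  load-balance: "leastconn"    # default balancing algorithm for all backends
  timeout-connect: "5s"
  forwarded-for: "enabled"     # add X-Forwarded-For to proxied requests
</code></pre>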
<h2 id="heading-conclusion">Conclusion</h2>
<p>HAProxy Ingress is a compelling choice for companies looking for enterprise-supported tooling for safety and compliance, and for those who already use the HAProxy load balancer in their legacy systems.</p>
]]></content:encoded></item></channel></rss>