KEDA, Scale Your Kubernetes Workload on Your Own Terms

Kubernetes is a powerful platform for hosting various kinds of workloads, and these workloads vary in their need to scale. Kubernetes has built-in functionality to scale workloads based on their resource consumption, such as CPU and memory. However, there is no built-in way to scale workloads based on events that happen outside the cluster, e.g. the length of a storage queue in the cloud. KEDA fills this gap with a variety of built-in scalers, and you can also write your own scaler that responds to your own events and needs. In this session we will understand what KEDA is, how it works, and how we can build our own scaler that scales our workloads according to our own events and needs.
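As an illustration of the idea, a minimal KEDA ScaledObject might look like the sketch below, scaling a deployment on the length of an Azure Storage queue; the deployment name, queue name, and environment variable are hypothetical placeholders:

```yaml
# Hypothetical ScaledObject: scales the "orders-processor" Deployment
# based on the length of an Azure Storage queue.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-processor-scaler
spec:
  scaleTargetRef:
    name: orders-processor        # Deployment to scale (placeholder name)
  minReplicaCount: 0              # KEDA can scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders                   # queue to watch (placeholder)
        queueLength: "5"                    # target messages per replica
        connectionFromEnv: STORAGE_CONNSTR  # env var holding the connection string
```

A custom scaler plugs into the same `triggers` list, exposing its own metric to KEDA instead of one of the built-in sources.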

    Emad Alashi
    Lead Consultant, Telstra Purple

    Emad Alashi is a software developer whose main interests are web development, software architecture, software management, and the human interaction caught in between.

    Emad speaks regularly at conferences and user groups, including NDC Sydney and Microsoft Ignite Australia, and at local user groups and code camps like Vic.Net, Azure Meetup, Azure Bootcamps, and Alt.Net in Melbourne, Australia.

    He is a four-time ASP.NET/IIS MVP, he hosts the DotNetArabi technical podcast, and in whatever time is left he writes on his blog at emadashi.com.

    Emad currently works as a Lead Consultant at Telstra Purple and can be found on Twitter at @emadashi.
