Google wants to make Kubernetes even easier to use for developers

New Autopilot mode will automatically scale clusters based on workload requirements


A new feature will allow users of Google Kubernetes Engine (GKE) to offload the provisioning and management of their container infrastructure to an automated process.

The new mode, dubbed Autopilot, is designed to automatically provision and take care of the cluster’s underlying infrastructure, including its nodes and node pools.

“Autopilot can help, allowing businesses to embrace Kubernetes and simplifying operations by managing the cluster infrastructure, control plane, and nodes,” observed Drew Bradstock, the Group Product Manager of GKE.

Optimum resource utilization

The company added that GKE is Google’s managed Kubernetes offering, similar to the Amazon EKS and Azure AKS services from Amazon and Microsoft respectively.

While all platforms make it substantially easier to provision and manage nodes, the new Autopilot mode will help GKE users by automatically rolling out clusters based on the required workload.

In other words, Autopilot will provision and scale the underlying compute infrastructure to match the workload, automatically adjusting resources as requirements change.
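To make that concrete, here is a minimal sketch of the kind of workload definition Autopilot works from: the per-container resource requests tell it how much compute to provision. It uses the official Kubernetes Python client; the image, names, request values, and replica count are illustrative assumptions, not anything specified by Google.

```python
# Minimal sketch: a Deployment whose per-container resource requests give
# Autopilot the information it needs to size the underlying nodes.
# Image, names, and request values are illustrative placeholders.
from kubernetes import client, config


def build_deployment() -> client.V1Deployment:
    container = client.V1Container(
        name="web",
        image="nginx:1.21",  # hypothetical example image
        resources=client.V1ResourceRequirements(
            # Autopilot provisions (and bills) based on these requests.
            requests={"cpu": "500m", "memory": "1Gi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=spec,
    )


if __name__ == "__main__":
    # Assumes your kubeconfig already points at a GKE Autopilot cluster.
    config.load_kube_config()
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=build_deployment()
    )
```

The point of the sketch is that with Autopilot the developer only declares what the workload needs; deciding which nodes to create, and how many, is left to GKE.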

According to GKE’s website, Autopilot clusters come pre-configured with an optimized setup that is ready for production workloads. Speaking to TechCrunch, Bradstock said that with the Autopilot mode, Google is letting users take advantage of the best practices of its site reliability engineering (SRE) teams, who have been running GKE clusters in production inside the company for years.


In terms of costs, Google says you’re billed per second for the vCPU, memory and disk resources your workloads request while they are running, which could make deployments more efficient in both resource use and cost.
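As a rough illustration of that billing model, the sketch below multiplies a workload’s resource requests by per-second rates; the rates, request values, and runtime are made-up placeholders, not Google’s published pricing.

```python
# Rough cost sketch for per-second billing on resource requests.
# The rates below are hypothetical placeholders, not Google's actual pricing.

HYPOTHETICAL_RATE_PER_VCPU_SECOND = 0.0000125  # assumed $ per vCPU-second
HYPOTHETICAL_RATE_PER_GIB_SECOND = 0.0000014   # assumed $ per GiB-second of memory


def estimate_cost(vcpu_request: float, memory_gib_request: float,
                  runtime_seconds: int, replicas: int = 1) -> float:
    """Estimate spend for a workload billed per second on its resource requests."""
    per_replica_per_second = (
        vcpu_request * HYPOTHETICAL_RATE_PER_VCPU_SECOND
        + memory_gib_request * HYPOTHETICAL_RATE_PER_GIB_SECOND
    )
    return per_replica_per_second * runtime_seconds * replicas


# Example: 3 replicas requesting 0.5 vCPU and 1 GiB each, running for one hour.
print(f"${estimate_cost(0.5, 1.0, 3600, replicas=3):.4f}")
```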

Via TechCrunch

With almost two decades of writing and reporting on Linux, Mayank Sharma would like everyone to think he’s TechRadar Pro’s expert on the topic. Of course, he’s just as interested in other computing topics, particularly cybersecurity, cloud, containers, and coding.
