Using containers, I can easily ship applications between machines and begin to think of my cluster as a single computer. Each machine contributes additional CPU cores capable of executing my applications. Every machine runs an operating system, but the goal is not to interact with the locally installed OS directly; instead, we want to treat the local OS as firmware for the underlying hardware resources.
Now we just need a good scheduler.
The Linux kernel does a wonderful job of scheduling applications on a single host. Chances are that if we run multiple applications on a single system, the kernel will use as many CPU cores as possible to ensure our various applications run in parallel.
When it comes to a cluster of machines, scheduling applications becomes an exercise for the operations team. Today, in many organizations, scheduling is handled by the fine folks on that team. Unfortunately, relying on a human scheduler means humans must keep track of where each application is running. Sometimes this means using complicated, error-prone spreadsheets, or a configuration management tool such as Puppet. Either way, these tools don’t offer the robust scheduling necessary to react to real-time events like machine failures. This is where Kubernetes fits in.
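To sketch what automated scheduling looks like in practice, here is a minimal Kubernetes Deployment manifest (the app name and image are illustrative, not from the original post). Rather than a human deciding which machine runs what, we declare the desired state and let the Kubernetes scheduler place the workload:

```yaml
# Illustrative example: declare three replicas of an app and let the
# Kubernetes scheduler decide which nodes run them. If a node fails,
# the control plane reschedules the affected pods automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example.com/hello:1.0   # hypothetical container image
        ports:
        - containerPort: 8080
```

The key idea is that the manifest records intent, not placement: no spreadsheet tracks which machine runs which replica, because the scheduler continuously reconciles the cluster toward the declared state.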
The inspiration for this post came from Kelsey Hightower (@kelseyhightower).