Run ads IRL with AdQuick

With AdQuick, you can plan, deploy, and measure real-world ad campaigns just as easily as digital ones, making them a no-brainer to add to your team’s toolbox.

You can learn more at www.AdQuick.com

ResearchAudio Weekly

Containers vs VMs: Understanding the Architecture That Powers Modern Cloud Infrastructure

A visual, beginner-friendly guide to how isolation works, when to use each approach, and navigating the container ecosystem

10 min read • Visual diagrams • Real-world examples

WHAT YOU'LL LEARN:

✓ The core problem both technologies solve
✓ How VMs and containers actually work
✓ When to choose each one
✓ Container runtime options explained

Part 1

The Problem: Why We Need Isolation

Let's start with a real scenario. You have one powerful server. You want to run three applications on it:

📦 App A: needs Python 2.7
📦 App B: needs Python 3.11
📦 App C: needs Java 8 + specific libraries

The problem: you can't make Python 2.7 and Python 3.11 both the default Python on the same system. App C's libraries might conflict with what App A needs. And if App B misbehaves and gobbles up memory or CPU, it can starve the others.

You need isolation—a way to make each application believe it has its own dedicated computer, even though they're sharing one physical machine.

💡 The Apartment Building Analogy

Virtual Machines = Separate Houses

Each house has its own foundation, plumbing, electrical, heating, and walls. Completely independent. If one house has a plumbing problem, others are unaffected. But building each house is expensive and slow.

Containers = Apartments in One Building

Apartments share the foundation, plumbing, and electrical (the kernel), but each has its own interior space and locks. Much faster and cheaper to add apartments. But if the building's foundation has a problem, everyone is affected.

Part 2

How Virtual Machines Work

A virtual machine is exactly what it sounds like: a fake computer running inside a real computer. Special software called a hypervisor creates this illusion.

The hypervisor takes your physical CPU, RAM, and storage, and divides them up. Each VM gets a slice and believes it has its own dedicated hardware.

WHAT HAPPENS WHEN YOU START A VM:

1. Hypervisor allocates resources: carves out CPU cores, RAM, and disk space for the VM
2. BIOS initializes: just like a real computer, the VM runs through its startup routines
3. Full OS boots: the entire guest operating system loads (Ubuntu, Windows, etc.)
4. Your app finally runs: after 30-60 seconds, your application can start
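If you want to see those steps from the management side, here is a minimal sketch using the libvirt Python bindings (pip install libvirt-python) against a local KVM host. The domain name, memory size, and disk image path are hypothetical placeholders, and you would need a real bootable qcow2 image at that path; the timer only captures the hypervisor's share of the work, since the guest OS keeps booting inside the VM long after the call returns.

import time
import libvirt

# A hypothetical, minimal KVM domain definition. The name, sizes, and
# disk path are placeholders for illustration only.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='GiB'>2</memory>     <!-- step 1: RAM carved out for the VM -->
  <vcpu>2</vcpu>                    <!-- step 1: CPU cores allocated -->
  <os>
    <type arch='x86_64'>hvm</type>  <!-- step 2: virtual BIOS/firmware boot -->
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo.qcow2'/>  <!-- placeholder image -->
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
start = time.time()
dom = conn.createXML(DOMAIN_XML, 0)     # define and start a transient VM
print(f"hypervisor reports '{dom.name()}' running after {time.time() - start:.1f}s")
# Step 3 (the full guest OS boot) continues *inside* the VM for tens of
# seconds more before your application (step 4) can actually start.
conn.close()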

Virtual Machine Architecture

VM 1: Your App → Libraries → Full Guest OS (Ubuntu 22.04), ~1-2 GB RAM
VM 2: Your App → Libraries → Full Guest OS (CentOS 8), ~1-2 GB RAM
VM 3: Your App → Libraries → Full Guest OS (Windows Server), ~2-4 GB RAM

HYPERVISOR: manages and isolates VMs • Examples: VMware, Hyper-V, KVM

PHYSICAL SERVER: CPU, RAM, Storage, Network

Notice: Each VM runs a complete operating system. That's 1-4 GB of RAM per VM before your app even starts.

Part 3

How Containers Work

Containers take a completely different approach. Instead of simulating hardware and booting a full OS, they use features already built into the Linux kernel to create isolated spaces.

Think of it this way: the kernel is like a building manager. Containers are like telling the manager "give this tenant their own mailbox, their own apartment number, and don't let them see other tenants"—but they're all still in the same building.

Two kernel features make this possible:

👁 Namespaces: what a process can SEE

Each container gets its own isolated view of process IDs, network interfaces, filesystem mounts, user IDs, and hostnames. Container A cannot see Container B's processes.

📈 Cgroups: what a process can USE

Limits how much CPU, memory, disk I/O, and network bandwidth each container can consume. Prevents one container from hogging all resources.
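To make the namespace idea concrete, here is a minimal sketch (Linux only, run as root) that gives a child process its own UTS namespace, the namespace that holds the hostname. The hostname string is an arbitrary example; the point is that the child's change never leaks back to the host.

import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000                        # kernel flag: new hostname namespace
libc = ctypes.CDLL("libc.so.6", use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: step out of the host's UTS namespace into a private one.
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")
    socket.sethostname("container-demo")         # visible only inside this namespace
    print("child sees hostname:", socket.gethostname())
    os._exit(0)

os.waitpid(pid, 0)
print("host still sees hostname:", socket.gethostname())   # unchanged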

WHAT HAPPENS WHEN YOU START A CONTAINER:

1. Create namespaces: the kernel creates isolated views (takes milliseconds)
2. Set cgroup limits: the kernel sets resource boundaries (takes milliseconds)
3. Your app runs immediately: no OS boot needed. Total time: under 1 second.
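The cgroup half of step 2 is just as approachable: on cgroup v2 it is literally writing numbers into files. A minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup with the memory controller enabled (the default on modern distributions) and that you are root; the group name and the 256 MB cap are arbitrary. Container runtimes do essentially this on your behalf.

import os
from pathlib import Path

# Assumes cgroup v2 at /sys/fs/cgroup with the memory controller enabled.
# Requires root.
cg = Path("/sys/fs/cgroup/demo-limit")               # arbitrary group name
cg.mkdir(exist_ok=True)

(cg / "memory.max").write_text("268435456\n")        # 256 MB, expressed in bytes
(cg / "cgroup.procs").write_text(str(os.getpid()))   # move this process into the group

# From here on, the kernel refuses (or OOM-kills) allocations that would
# push this process group past the 256 MB cap.
print("memory limit now:", (cg / "memory.max").read_text().strip())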

Container Architecture

Container 1: Node.js + libs only
Container 2: Python + libs only
Container 3: Redis + libs only
Container 4: Nginx + libs only

CONTAINER RUNTIME: Docker, containerd, Podman

SHARED LINUX KERNEL: all containers use the SAME kernel • no separate OS per container

PHYSICAL SERVER

Notice: No Guest OS layer. Containers only contain apps and their dependencies. Overhead per container: ~5 MB instead of 1-2 GB.
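You can see the shared-kernel point for yourself. Here is a minimal sketch using the Docker SDK for Python (pip install docker), assuming a running Docker daemon and that the alpine and debian images are available or pullable: every container reports the host's kernel release, because there is only one kernel.

import platform
import docker

client = docker.from_env()
print("host kernel:     ", platform.release())

# Each container runs `uname -r`; the output matches the host because the
# containers share the host's kernel instead of booting their own.
for image in ("alpine", "debian"):
    out = client.containers.run(image, "uname -r", remove=True)
    print(f"{image} container:", out.decode().strip())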

Part 4

When to Use Each: Decision Guide

Both technologies are still widely used—often together. Here's how to decide:

Use Virtual Machines when you need:

• Different operating systems (like running Windows applications on Linux hosts)
• Strong security boundaries for compliance requirements in banking or healthcare
• Specific kernel versions for legacy applications
• Isolation for running untrusted code from external users
• VM-level separation explicitly mandated by regulation

Use Containers when you need:

• Fast startup and rapid scaling (going from 3 to 30 instances in seconds)
• Microservices architectures with many small services working together
• Efficient CI/CD pipelines with quick build-test-deploy cycles
• Consistent environments where development matches staging and production
• High workload density with hundreds of services per host
• Kubernetes orchestration for cloud-native applications

In practice, many organizations use both. A common pattern is running containers inside VMs—you get the security isolation of VMs at the infrastructure level, while still benefiting from container density and speed for your applications. Cloud providers like AWS, Google Cloud, and Azure all run customer containers inside VMs for this reason.

Part 5

Container Runtimes: Your Options

"Container" describes the concept. Multiple tools implement it, organized in three layers where higher layers provide more features and lower layers do the actual work.

Layer 1: User Tools. These are what you type commands into. Docker is the most common choice for development—it provides the full platform with CLI, build tools, and Docker Compose. Podman is a Docker-compatible alternative that runs without a daemon and can run containers as a regular user (rootless). nerdctl is a Docker-compatible CLI that talks directly to containerd.

Layer 2: CRI Runtime. This is what Kubernetes talks to. containerd is the industry standard—it powers EKS, GKE, and AKS. CRI-O is the OpenShift default, built specifically for Kubernetes with nothing extra.

Layer 3: OCI Runtime. This is what actually creates the container by setting up namespaces and cgroups. runc is the default (written in Go). crun is a faster alternative written in C. gVisor intercepts system calls and runs them in userspace, providing extra sandboxing for untrusted code.
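If you have Docker installed, you can watch these layers line up on your own machine. A minimal sketch with the Docker SDK for Python (pip install docker), assuming a running daemon: the user tool asks the daemon for its configuration, and the daemon reports which OCI runtime (typically runc) it delegates the actual container creation to.

import docker

client = docker.from_env()
info = client.info()                         # same data as `docker info`
print("server version:    ", info.get("ServerVersion"))
print("default runtime:   ", info.get("DefaultRuntime"))        # typically "runc"
print("available runtimes:", ", ".join(info.get("Runtimes", {})))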

Which should you use? For local development, Docker Desktop or Podman. For Kubernetes in production, your cluster likely already uses containerd. If you need rootless containers without a daemon, choose Podman. If you're running untrusted user code, consider gVisor for the extra isolation layer.

Key Takeaway

Virtual machines virtualize hardware, giving you complete isolation but with heavy overhead. Containers virtualize the operating system, sharing the kernel for lightweight and fast isolation. Most modern applications use containers for speed and density, while VMs handle workloads needing different operating systems or stronger security boundaries. In practice, they often run together—containers inside VMs—for defense in depth.

Found this helpful?

Forward this to colleagues learning about infrastructure or Kubernetes.

Sources

Docker Documentation, Kubernetes Documentation, CNCF Container Runtime Landscape, Open Container Initiative, Linux Kernel Documentation (namespaces, cgroups)

ResearchAudio

Technical concepts explained, weekly.

