
kae3g 9501: What Is Compute? Cloud, P2P, and Networked Power

Phase 1: Foundations & Philosophy | Week 1 | Reading Time: 12 minutes

What You'll Learn

Prerequisites

From "A Computer" to "Compute"

In Essay 9500, we defined a computer as a single, universal machine.

But modern computing transcends individual machines:

Compute (noun): Processing power as a fungible resource, distributed across networks, rented by the second, accessed from anywhere.

The shift:

Then: "I own a computer"
Now:  "I rent compute"

Then: "Where's my machine?"
Now:  "Where's my data? Where's my process running? I don't know—and I don't care."

This is the cloudification of computing: treating CPU cycles, memory, and storage as commodities like electricity or water.

The Three Models of Compute

1. Centralized Compute (The Cloud)

Definition: Large datacenters owned by companies (AWS, Google, Microsoft) rent processing power to users.

Architecture:

Your Laptop/Phone
      ↓ (Internet)
Cloud Datacenter (Virginia, Oregon, Ireland, Singapore...)
├─ Millions of servers
├─ Your VMs/containers running here
└─ You pay per second/hour

Pros:

Cons:

Economic Model:

Examples:

2. Peer-to-Peer Compute (P2P)

Definition: Computation distributed across participant machines, with no central authority.

Architecture:

Your Computer ←→ Peer A
     ↕              ↕
   Peer B  ←→    Peer C
     ↕              ↕
   Peer D  ←→    Peer E

(No central server: peers connect directly to each other)

Pros:

Cons:

Historical Examples:

Modern Applications:

3. Edge Compute

Definition: Processing at the "edge" of the network, close to data sources.

Architecture:

Central Cloud (Virginia)
      ↕
Regional Edge (San Francisco)
      ↕
Local Edge (Your ISP's datacenter)
      ↕
Device Edge (Your phone/laptop)
      ↕
Sensors/IoT (Your smartwatch, car, thermostat)

Why Edge Matters:

  1. Latency: Physical distance = delay
    • Light travels at ~300,000 km/s in vacuum (only ~200,000 km/s in fiber)
    • San Francisco ↔ Virginia: ~4,000 km = 13ms absolute minimum one way (~20ms in fiber)
    • Round trip: 26ms+ before any processing even starts
    • Too slow for real-time applications (gaming, AR/VR, autonomous vehicles)
  2. Bandwidth: Moving data is expensive
    • Self-driving car generates 4 TB/day of sensor data
    • Sending all data to cloud: impractical
    • Process locally, send only insights
  3. Privacy: Keep sensitive data local
    • Medical devices: process on device, never send raw data
    • Security cameras: detect motion locally, only upload alerts
  4. Reliability: Work offline
    • If internet dies, edge devices keep working
    • Airplane mode: your phone still processes photos, plays music
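These back-of-envelope numbers are easy to check yourself. A minimal sketch (the speed of light and the 4 TB/day figure come from the list above; the rest is arithmetic):

```python
# Back-of-envelope checks for the latency and bandwidth claims above.

C_VACUUM_KM_S = 300_000  # speed of light in vacuum, km/s

def min_one_way_latency_ms(distance_km: float) -> float:
    """Absolute minimum one-way latency imposed by the speed of light."""
    return distance_km / C_VACUUM_KM_S * 1000

# San Francisco <-> Virginia, ~4,000 km
print(round(min_one_way_latency_ms(4_000), 1))      # ~13.3 ms one way
print(round(2 * min_one_way_latency_ms(4_000), 1))  # ~26.7 ms round trip

# Self-driving car: 4 TB/day of sensor data as a sustained upload rate
tb_per_day = 4
mb_per_s = tb_per_day * 1_000_000 / 86_400  # decimal units: 1 TB = 10^6 MB
print(round(mb_per_s))  # ~46 MB/s sustained upload -- impractical over cellular
```

No network, no clever routing: physics alone sets the floor, which is why edge placement is the only way under it.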

Examples:

The Compute Continuum

Modern systems use all three models in combination:

Example: A Self-Driving Car

┌─────────────────────────────────────────────────────────────┐
│ Cloud Compute (Datacenter)                                  │
│ - Train ML models on fleet data (100,000 GPUs)             │
│ - Map updates, traffic patterns                            │
│ - Over-the-air software updates                            │
└─────────────────────────────────────────────────────────────┘
                          ↑ ↓
                      (4G/5G Network)
                          ↑ ↓
┌─────────────────────────────────────────────────────────────┐
│ Edge Compute (In Car)                                       │
│ - Real-time sensor fusion (cameras, lidar, radar)          │
│ - Path planning, obstacle avoidance                        │
│ - Inference on pretrained models (100ms response)          │
│ - Works offline (in tunnels, rural areas)                  │
└─────────────────────────────────────────────────────────────┘
                          ↑ ↓
                   (CAN bus, internal)
                          ↑ ↓
┌─────────────────────────────────────────────────────────────┐
│ Device Compute (Sensors)                                    │
│ - Camera: compress images                                   │
│ - Lidar: range finding                                      │
│ - Radar: velocity detection                                 │
└─────────────────────────────────────────────────────────────┘

The division of labor is strategic:

The Economic Shift: Compute as Commodity

Traditional Computing (Pre-2006)

Want to run a website?
1. Buy servers ($10,000)
2. Install in your office
3. Pay for power, cooling, internet
4. Maintain hardware yourself
5. Over-provision (what if you get popular?)

Capital-intensive, slow to scale, risky.

Cloud Computing (2006+)

Want to run a website?
1. Write code
2. Deploy to AWS/Vercel/Fly.io
3. Pay $5/month (or $0.50 if low traffic)
4. Scale automatically
5. Someone else handles hardware

Low upfront cost, fast scaling, pay-as-you-go.

The Pricing Model

AWS EC2 (example):

Implications:

Location, Location, Location

Why Geography Matters

1. Latency (Speed of light is fixed!)

User in Tokyo → Server in Virginia:
- Distance: 11,000 km
- Speed-of-light minimum: 37ms (one way)
- Actual internet route: 150-200ms (indirect paths through routers and undersea cables)
- User perceives lag

User in Tokyo → Server in Tokyo:
- Distance: <100 km
- Latency: 5-10ms
- Feels instant

2. Data Sovereignty (Where data lives = which laws apply)

EU user data on EU servers:
- Must comply with GDPR (strict privacy)
- EU government can subpoena

EU user data on US servers:
- Must comply with GDPR AND US CLOUD Act
- Both EU and US governments can subpoena
- More legal complexity

3. Availability (Network failures are geographic)

US-EAST-1 (Virginia) goes down:
- Affects US East Coast users most
- Multi-region deployments stay up
- But: routing can cascade failures

Example: Cloudflare's approach

The P2P Alternative: Urbit's Vision

Problem with Cloud: You rent someone else's computer. They have all the power.

Problem with P2P: Hard to program, unpredictable performance.

Urbit's synthesis:

You own a "planet" (your personal server)
- Could be running on your hardware at home
- Could be hosted by a third party (but YOU own it)
- Other planets connect directly to yours (P2P)
- But with clean abstractions (not raw P2P chaos)

Key insight: Ownership of compute, not just rental.

(We'll explore Urbit deeply in Phase 4 - it's a radical reimagining of networked computing)

Hands-On: Understanding Latency

Exercise 1: Ping the World

Open a terminal and run:

# Ping Google's DNS (usually very close)
ping 8.8.8.8

# Typical results:
# - Same city: 5-15ms
# - Same continent: 20-50ms
# - Across ocean: 100-200ms
# - Satellite internet: 500-700ms

Insight: Every network request has this baseline latency.

For an API call:

Design implication: Minimize round-trips. Batch requests. Cache aggressively.
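To see why round-trips dominate, here's a toy model (the 100ms RTT and 1ms per-request processing time are illustrative numbers, not measurements):

```python
# Why batching matters: total time is dominated by round-trips, not payload.

def total_time_ms(n_requests: int, rtt_ms: float, per_request_ms: float = 1.0) -> float:
    """Sequential requests each pay one full round-trip plus processing."""
    return n_requests * (rtt_ms + per_request_ms)

RTT = 100.0  # e.g., calling a cross-ocean API endpoint

# 50 items fetched one at a time vs. one batched request
print(total_time_ms(50, RTT))  # 5050.0 ms -- five seconds of mostly waiting
print(total_time_ms(1, RTT))   # 101.0 ms -- same data, one round trip
```

Same data either way; the 50x difference is pure latency tax.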

Exercise 2: Calculate Cloud Costs

Scenario: You run a web app

Monthly cost:

Servers: 2 × $0.0168 × 24 × 30 = $24.19
Database: $0.034 × 24 × 30 = $24.48
Storage: $10
Bandwidth: $45
Total: ~$104/month

Compare to:

Trade-offs everywhere!
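The monthly total above is just rate × hours. A sketch that reproduces it (the hourly rates are the scenario's, loosely modeled on small cloud instances):

```python
# Reproduce the monthly cost estimate from the scenario above.

HOURS_PER_MONTH = 24 * 30

def monthly(hourly_rate: float, count: int = 1) -> float:
    """Monthly cost of `count` always-on resources billed per hour."""
    return hourly_rate * HOURS_PER_MONTH * count

servers   = monthly(0.0168, count=2)  # 2 small instances
database  = monthly(0.034)
storage   = 10.0
bandwidth = 45.0

total = servers + database + storage + bandwidth
print(round(servers, 2), round(database, 2), round(total, 2))
# 24.19 24.48 103.67  -> "~$104/month"
```

Note the pattern: always-on resources cost the same whether anyone visits or not, which is exactly what serverless pricing (later in this essay) tries to fix.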

Exercise 3: Explore P2P in Action

Try BitTorrent (legal torrents only!):

  1. Download a Linux ISO via torrent (e.g., Ubuntu)
  2. Watch the peer list: you're connected to dozens of strangers
  3. Notice: no central server, yet the download works
  4. Each peer uploads to others (P2P reciprocity)

Insight: P2P scales beautifully for popular content (more peers = faster). But for rare content, you need seeders (someone hosting).
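A toy model of that scaling (the upload and download rates are hypothetical; real swarms are messier, with choking, piece rarity, and churn):

```python
# Toy model: in a P2P swarm, aggregate upload capacity grows with peer count,
# so for popular content your own link becomes the bottleneck.

def swarm_download_mb_s(n_peers: int, peer_upload_mb_s: float, my_download_mb_s: float) -> float:
    """You can't download faster than peers collectively upload, or than your own link."""
    return min(n_peers * peer_upload_mb_s, my_download_mb_s)

# Hypothetical: each peer uploads 0.5 MB/s, your link downloads 12 MB/s
print(swarm_download_mb_s(2, 0.5, 12.0))    # 1.0  -- rare content, few seeders: slow
print(swarm_download_mb_s(100, 0.5, 12.0))  # 12.0 -- popular swarm: your link is the limit
```

Contrast with a central server, whose fixed upload capacity is *divided* among downloaders as popularity grows.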

The Philosophical Implications

Centralization vs Decentralization

Centralized (Cloud):

Decentralized (P2P):

The Valley's Position:

We embrace selective centralization. Use cloud for convenience, but design systems that could run P2P. Avoid lock-in. Own your data. Choose your dependencies consciously.

Sovereignty and Compute

Who controls your compute?

Scenario 1: Your app on AWS

Scenario 2: Your app on your hardware

Scenario 3: Your app on a P2P network

The middle ground:

We'll explore this deeply in Phase 5 (Synthesis & Integration).

The Future: Compute Trends

1. Serverless Everywhere

Current: You manage VMs/containers
Future: You write functions, cloud runs them
Example: Cloudflare Workers, AWS Lambda@Edge

Pros: Pay per request (can be $0 for low-traffic sites)
Cons: Vendor lock-in, cold start latency, less control
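The programming model really is just a function. A minimal sketch in the AWS Lambda Python style (the handler signature is Lambda's convention; the event fields here are made up for illustration):

```python
# Serverless model: you write a function, the platform invokes it per request
# and bills per invocation. Event shape below is hypothetical.
import json

def handler(event, context=None):
    """Lambda-style handler: dict in, HTTP-ish response dict out."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally it's just a function call; in the cloud, the platform calls it.
resp = handler({"queryStringParameters": {"name": "kae3g"}})
print(resp["statusCode"], resp["body"])
```

No server process, no port, no VM to manage: the trade is control (and portability) for operational simplicity.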

2. Edge AI

Current: ML models run in cloud
Future: Models run on device (phone, laptop, IoT)
Example: Apple Silicon's Neural Engine, Google Tensor

Why: Privacy (data never leaves device), latency (instant), offline (works anywhere)

3. Quantum Computing (Still Early)

Current: Classical computers (bits: 0 or 1)
Future: Quantum computers (qubits: superposition of 0 and 1)
Status: Experimental (not ready for general use)

What it's good for: Optimization, simulation, cryptography
What it's NOT good for: General-purpose computing (your laptop won't be quantum)

4. Homomorphic Encryption

Problem: To process data in cloud, you must decrypt it (cloud provider sees it)
Future: Compute on encrypted data, cloud never sees plaintext
Status: Mathematically proven, but still orders of magnitude too slow for general use today

If this works: True "zero-knowledge" cloud computing (ultimate privacy)
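A taste of the idea, using textbook RSA's multiplicative homomorphism: the "cloud" multiplies two ciphertexts and the result decrypts to the product of the plaintexts. (A toy with tiny, insecure parameters; real homomorphic schemes support far richer computation.)

```python
# Toy demo of computing on encrypted data: textbook RSA is multiplicatively
# homomorphic -- Enc(a) * Enc(b) decrypts to a * b. Tiny insecure key!
p, q = 61, 53
n = p * q            # 3233
e, d = 17, 2753      # public / private exponents for this classic toy key

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 5, 7
c_product = (enc(a) * enc(b)) % n   # the "cloud" sees and multiplies ciphertexts only
print(dec(c_product))               # 35 -- a * b, computed without ever seeing a or b
```

The server performed useful work on data it could not read; fully homomorphic schemes extend this to arbitrary computation, at the cost of the overhead noted above.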

Try This

Exercise 1: Deploy to the Cloud (Beginner-Friendly)

Option A: Static site (free!)

  1. Create an HTML file: index.html
  2. Push to GitHub
  3. Enable GitHub Pages
  4. Your site is now live! (Uses GitHub's compute/bandwidth)

Option B: Full backend (free tier)

  1. Write a simple API in Python/Node.js
  2. Deploy to Render.com or Fly.io (free tier: 512MB RAM)
  3. Your API is now accessible worldwide

Insight: You're using compute you don't own, accessed via URLs.

Exercise 2: Measure Your Compute Usage

On your laptop, run:

# macOS/Linux
top

# See:
# - CPU usage (how much compute you're using)
# - Memory usage
# - Running processes

Now think:

Exercise 3: Design a Distributed System

Scenario: Build a messaging app (like Signal, WhatsApp)

Questions:

  1. Where do messages live?
    • Centralized server? (Easy, but operator sees everything)
    • P2P? (Private, but how do you send to offline users?)
    • Hybrid? (Server as relay for offline delivery, but encrypted end-to-end?)
  2. Where does compute happen?
    • Encrypt/decrypt in browser/app? (Privacy, but slower)
    • Server-side? (Fast, but server sees plaintext)
  3. How do you find users?
    • Central directory? (Phone number → user ID mapping)
    • Distributed hash table? (P2P, but harder to implement)

Real-world answer: Signal uses hybrid:

Trade-offs everywhere! No perfect solution.
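The hybrid answer can be sketched as a toy: clients encrypt end-to-end, and the server only stores and relays ciphertext for offline delivery. (XOR with a pre-shared random key stands in for real cryptography like the Signal protocol; key exchange is out of scope here.)

```python
# Toy hybrid messaging: the server relays ciphertext for offline delivery but
# never sees plaintext. XOR with a pre-shared key stands in for real E2E
# crypto -- do not use this for anything real.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, key))

shared_key = secrets.token_bytes(64)   # assume Alice and Bob already share this

# Alice encrypts locally, hands ciphertext to the relay server
ciphertext = xor(b"meet at noon", shared_key)

server_inbox = {"bob": [ciphertext]}   # server holds bytes it cannot read

# Bob comes online, fetches, decrypts locally
plaintext = xor(server_inbox["bob"].pop(), shared_key)
print(plaintext.decode())  # meet at noon
```

The server gets the centralized benefits (offline delivery, one place to find users) while the compute that matters for privacy (encryption) stays at the edge, on the clients.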

Going Deeper

Related Essays

External Resources

For the Technically Curious

Reflection Questions

  1. If "compute" is a commodity, who owns it? (AWS owns the hardware, you rent cycles—but what about data sovereignty?)
  2. Is P2P inevitable, or will centralized cloud dominate? (Network effects favor centralization, but privacy concerns push toward P2P)
  3. What does it mean to "own" your compute? (Hardware? The right to run programs? Access to internet? All three?)
  4. How much latency is acceptable? (For chat: 100ms OK. For gaming: 20ms max. For autonomous vehicles: 1ms required)
  5. Should compute be free? (Like roads/libraries? Or like electricity? Or purely market-based?)

Summary

Compute is:

Key Insights:

In the Valley:

Next: We'll explore the Unix philosophy—how to structure computation simply, regardless of where it runs.

Navigation:
← Previous: 9500 (what is a computer) | Phase 1 Index | Next: 9502 (ode to nocturnal time)

Bridge to Narrative: For a character-driven take on distributed systems, see 9960 (The Grainhouse) - our vision for sovereign computing!

Metadata:

Copyright © 2025 kae3g | Dual-licensed under Apache-2.0 / MIT
Competitive technology in service of clarity and beauty


← back to index