Why docker pull Freezes While Chrome Still Loads Fast
Picture the usual scene on a constrained or divided network: Firefox reaches GitHub without drama, your IDE downloads plugins, yet docker pull alpine:latest sits forever on Pulling fs layer or collapses with TLS handshake timeout, i/o timeout, or similarly vague transport errors. That split is not mystical. Browsers largely honor the operating system proxy table that Clash can populate when you toggle System Proxy, and they can also carry their own SOCKS-aware stacks. The Docker daemon, by contrast, is a long-running service that opens outbound connections from a different context. Unless you explicitly configure engine-level HTTP proxies or place traffic on a visible route before user-space libraries get involved, registry requests may still march straight toward a path that congests, rate-limits, or simply never completes TLS within the timeout budget.
Developers then try quick fixes—exporting HTTPS_PROXY in an interactive shell—which helps curl but often leaves docker pull unchanged, because the CLI merely talks to the daemon over a local socket and the daemon performs the real HTTPS work. On Docker Desktop you may have a separate proxy panel; on Linux you may need /etc/docker/daemon.json or systemd drop-ins. Each path is valid, but it is easy to create a half-working setup where human tools behave while the engine does not. Clash TUN mode offers a complementary strategy: transparent interception at the network layer routes those daemon connections through the same ordered rules: section that already steers browsers, which makes Docker Hub pull failure symptoms easier to reason about from connection logs instead of chasing environment-variable whack-a-mole.
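A quick way to see that split in practice is to compare what your shell exports with what the daemon itself reports. This is a minimal sketch assuming a reasonably recent engine whose docker info output exposes proxy fields; empty values mean the daemon ignores your terminal's exports entirely.

```bash
# What the interactive shell will hand to curl and friends
env | grep -i _proxy

# What the daemon itself is configured with (fields exist on recent engines);
# empty values here mean registry traffic never sees your shell exports
docker info --format 'HTTP: {{.HTTPProxy}}  HTTPS: {{.HTTPSProxy}}  NoProxy: {{.NoProxy}}'
```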
If you landed here after searching for TLS handshake timeout alongside Docker, you are probably looking for a repeatable chain—DNS, TCP, TLS, and bulk data—that you can bisect. TUN gives you a single observation lens: does the flow hit your intended outbound policy before you blame Docker Hub itself?
Typical Symptoms: Stuck Pulling, Partial Blobs, and Cryptic TLS Errors
Failures rarely arrive as a single canonical string across clients. Compose might stall after the first service image while others succeed. CI runners built on the same laptop work because they inherit different proxies. Kubernetes nodes behind corporate firewalls sometimes only break for multi-arch manifests. Still, a compact pattern emerges: the client reaches the manifest stage, begins fetching layers, then either idles with no throughput or throws transport-layer errors that reference TLS or deadlines. When the root cause is path-related rather than an outage at the registry, repeating the pull eventually shows identical timing fingerprints—minutes of silence, abrupt reset, or handshake retry loops.
Layered failures are worth naming. Authentication to auth.docker.io might succeed while blob downloads from CDN fronts crawl. IPv6 paths occasionally flap when dual-stack policies disagree with your rule bundles. Rate limiting presents as HTTP 429-class responses rather than TLS faults, yet engineers still file them under “Docker pull stuck” in their notes. Collecting one verbose trace (docker pull --progress=plain on recent CLI releases where available, or enabling client debug where documented) distinguishes manifest issues from data-plane stalls and saves you from toggling unrelated knobs.
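One low-effort way to capture that trace on a systemd-based Linux host, assuming the engine runs as docker.service, is to tail the daemon log while bracketing a single pull with timestamps:

```bash
# Terminal 1: daemon-side view of the failing window
journalctl -u docker.service --since now -f

# Terminal 2: one pull, bracketed so stalls and resets are measurable
date -u; docker pull alpine:latest; date -u
```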
What You Should Have Before Touching Docker Again
Prerequisites are intentionally boring because skipped basics cause false negatives. Install a maintained Mihomo-compatible desktop client—examples such as Clash Verge Rev receive regular core updates and expose TUN toggles with clearer guardrails than abandoned forks. Load a profile that passes ordinary browsing tests in Rule mode, refresh stale RULE-SET providers when domestic versus overseas buckets matter, and avoid enabling experimental features you do not understand yet. If corporate policy forbids altering routes, stop and follow internal guidance; this article assumes lawful network use aligned with contracts you accepted.
Hardware virtualization and competing VPN clients deserve mention. Full-tunnel VPN software often owns the default route, which can mute TUN until you adjust split settings. Hypervisors and nested environments may isolate Docker’s data plane from host interception. Expect to iterate: validate host-level reachability first, then map the same constraints inward. That sequencing prevents chasing Clash DNS tweaks when the real issue is a Hyper-V switch or WSL virtual NIC sitting off the diverted route.
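On Linux hosts, a quick route inspection makes the ownership question concrete before any Docker command runs; the destination address below is arbitrary and only serves to exercise route selection:

```bash
# Which interface would carry a packet to an arbitrary public address right now?
ip route get 1.1.1.1

# Full table: a competing full-tunnel VPN usually appears as a default route
# with a lower metric than the TUN device your proxy client created.
ip route show
```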
Step 1: Confirm the Profile—not Just the GUI Icon—Is Healthy
Open your client dashboard and perform ordinary latency tests on the nodes you actually route through. Glance at recent logs for red errors about permission denial, outdated geodata, or provider download failures. If subscription refresh endpoints cannot be reached without manual intervention, fix that baseline before expecting registry traffic to behave. Pay attention to MATCH fallbacks: an overly aggressive catch-all that sends domestic CDN nodes through a distant relay inflates latency until TLS deadlines fire, which mimics classic timeout symptoms even when the proxy chain is “technically up.”
Temporarily unset shell exports such as HTTP_PROXY, HTTPS_PROXY, and ALL_PROXY while you test TUN. Duplicate chaining—environment variables pointing at the same mixed port your stack already handles—can confuse diagnosis and sometimes double-wrap CONNECT attempts. Once TUN stabilizes, you can reintroduce per-tool overrides for edge cases deliberately rather than inheriting a decade-old .bashrc block that nobody audits.
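A minimal way to do that for the current shell only (the variables return on the next login unless you edit whatever rc file sets them):

```bash
# See what login scripts exported, then clear it for this session
env | grep -i -E '^(http|https|all|no)_proxy=' || echo "no proxy variables set"
unset http_proxy https_proxy all_proxy no_proxy \
      HTTP_PROXY HTTPS_PROXY ALL_PROXY NO_PROXY
```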
Step 2: Enable TUN Mode and Approve the OS Prompts Once
Navigate to the TUN section of your graphical client, enable it, and grant the privileged operations the OS requests. On Windows this commonly involves Wintun-style drivers and UAC consent; on macOS you will see privacy language reminiscent of VPN profiles; on Linux, helper capabilities vary by packaging. After activation, confirm the UI reports TUN online without repeated failure loops. If the adapter flaps, fix that instability before blaming Docker—flapping routes produce intermittent TLS errors that look deceptively like remote faults.
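Adapter names differ by platform and client (utun numbers on macOS, a Wintun adapter on Windows, often Meta or tun0 on Linux), so the grep patterns in this Linux sketch are placeholders; the goal is simply to confirm the device and its routes survived the toggle:

```bash
# Did the TUN device come up, and does it hold routes?
ip -brief link | grep -i -E 'tun|meta'
ip route show | grep -i -E 'tun|meta'
```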
Keep Rule mode engaged unless you are performing a controlled experiment. Global mode can be useful to prove reachability quickly, but it hides mis-tagged destinations that will bite you later when you return to split policies. When Global pull succeeds while Rule fails, you have strong evidence that a specific rule line needs tuning for a registry hostname or CDN edge rather than that your engine is broken.
Lawful use: Respect local telecommunications regulations and workplace acceptable-use policies. Modifying system routing is powerful; use it only where you have clear authorization, and do not bypass security controls your organization mandates.
Step 3: Map Docker Hub Hostnames Into Your Rule Mental Model
Docker Hub is not a single IP painted on a wall. Client workflows touch registry-1.docker.io for manifests and layer metadata, auth.docker.io for token exchange, and frequently hit CDN fronts such as production.cloudflare.docker.com during blob transfer. Educational articles sometimes oversimplify by saying “allow docker.io” while forgetting sibling hostnames that still determine success. Inspect your sorted rules: each of these should classify to a policy you can explain out loud—domestic direct, overseas relay, or a dedicated low-latency group—not accidentally fall through to a REJECT leftover from an aggressive ad block list import.
If you run split routing for mainland optimization, order matters. Put narrow DOMAIN-SUFFIX entries for registry infrastructure near the top, above broad GEOIP buckets that might mis-tag an edge server after BGP changes. When diagnosing, temporarily mirror those lines into a dedicated PROXY policy to see if throughput normalizes; then tighten back toward the minimal effective configuration. The goal is predictable container development proxy behavior: images pull reliably during standups, not only when Global mode is on.
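A hedged sketch of what those narrow entries can look like; PROXY is a placeholder for your own policy group, and the lines belong near the top of the profile's existing rules: list, above GEOIP and MATCH:

```bash
# Illustrative only: splice lines like these into the profile's rules: list,
# ahead of broad GEOIP buckets. PROXY is a placeholder policy name.
cat <<'EOF'
  - DOMAIN,registry-1.docker.io,PROXY
  - DOMAIN,auth.docker.io,PROXY
  - DOMAIN,production.cloudflare.docker.com,PROXY
  - DOMAIN-SUFFIX,docker.io,PROXY
  - DOMAIN-SUFFIX,docker.com,PROXY
EOF
```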
Step 4: Prove Host TLS Before Arguing With the Daemon
From a terminal on the Docker host with proxy variables cleared, run conservative HTTPS probes such as curl -Iv https://registry-1.docker.io/v2/ and watch for clean certificate presentation plus HTTP 401 Unauthorized—expected when unauthenticated—rather than stalled handshakes. In parallel, watch your client’s live connections table during the probe. You should see flows classified according to your YAML, not mysteriously absent. If curl fails identically while TUN claims to be active, your problem lives outside Docker entirely; fix routing first.
Optional checks include resolving each hostname with the same resolver path Docker will use. Mismatch between systemd-resolved, Docker’s embedded DNS, and Clash’s dns stanza can yield subtle delays. When image pull timeout traces line up with DNS retry storms, add explicit nameserver-policy mappings or align fake-ip usage with documented patterns for your core version rather than guessing.
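A small Linux sketch for that resolver comparison; fake-ip style answers (commonly in 198.18.0.0/16) appearing in one path but not another tell you Clash DNS answered some lookups while the system resolver answered others:

```bash
HOST=registry-1.docker.io
getent hosts "$HOST"              # the libc path most daemons follow
resolvectl query "$HOST" || true  # systemd-resolved view, if that stub is in use
```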
Step 5: Align Docker Desktop or Linux Engine Settings With the Route You Chose
On Docker Desktop for Windows or macOS, review Resources → Proxies (wording shifts between releases). If you already route the host via TUN, manual proxy fields may be redundant; nevertheless, some builds behave more predictably when engine-level proxy mirrors the intended egress. If you populate fields, ensure schemes and bypass lists match internal docs—miswritten no-proxy ranges are a classic source of “works in browser, fails in daemon” tales.
On native Linux daemons, consult distribution documentation for systemd service overrides or daemon.json proxies blocks when you need explicit engine awareness beyond kernel routing. TUN may still be enough when the daemon’s packets originate on the routed host interface, but rootless setups, user-defined bridges, or snap confinement can diverge. When in doubt, observe from ss -tnp during a pull which local address initiates outbound SYNs and whether they traverse the tunnel interface metrics you expect.
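For hosts that do need explicit engine awareness, the systemd drop-in pattern from Docker's documentation looks roughly like the sketch below; 127.0.0.1:7890 is an assumed mixed-port listener you must replace with your client's real one, and the NO_PROXY entries are placeholders:

```bash
# Engine-level proxy via a systemd drop-in (adjust address/port to your client)
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker

# Confirm which local addresses actually open outbound connections during a pull
sudo ss -tnp | grep -E 'docker|containerd'
```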
WSL2, Colima, and Nested VMs: Where Host TUN Stops Short
Developers increasingly execute docker pull inside WSL distributions or lightweight VMs such as Colima. Those environments often perform their own NAT, so packets never touch the host TUN path you painstakingly validated. Remedies include configuring inner proxies, sharing Windows host network settings per vendor guides, or switching WSL networking modes when platforms offer mirrored bridges. Write this down in your team runbooks: host proof, then inner proof. Skipping the second step burns hours blaming Clash for a bridge that never saw interception in the first place.
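A minimal inner-proof sketch, run from inside the WSL distribution rather than the Windows host; gateway and resolver details depend on the networking mode in use:

```bash
# Inside the WSL distribution, not the Windows host
ip route show default            # which NAT gateway the distro egresses through
cat /etc/resolv.conf             # which resolver WSL injected
curl -sS -o /dev/null -w 'registry probe: %{http_code} in %{time_total}s\n' \
  https://registry-1.docker.io/v2/
```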
Similarly, Kubernetes workers that pull images on boot exhibit different systemd ordering; nodes might fetch before your user session launches the graphical Clash client. Servers therefore lean on daemon-level proxies or permanently configured routes rather than ad-hoc GUI toggles. The conceptual takeaway still holds—align engine egress with policy—but the automation surface changes.
A Practical Verification Checklist You Can Reuse
- Disable temporary proxy exports in your shell to isolate TUN effectiveness for interactive tests.
- Confirm the client shows TUN active and logs no permission or driver errors on fresh boot.
- Run curl TLS probes to registry hosts and watch live flow classifications in the UI.
- Pull a tiny image such as alpine:latest before multi-gigabyte development stacks (a compact script combining these probes follows this list).
- Compare Rule versus Global outcomes to separate routing mistakes from upstream outages.
- Capture one verbose pull trace when failures recur so timestamps align with connection logs.
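The checklist condenses into a small script worth keeping in a runbook; the hostnames and the test image are the obvious knobs, and the 15-second curl deadline is an arbitrary choice:

```bash
#!/usr/bin/env bash
# Pull-path sanity check; run with shell proxy variables cleared.
set -u
unset HTTP_PROXY HTTPS_PROXY ALL_PROXY http_proxy https_proxy all_proxy

for host in registry-1.docker.io auth.docker.io production.cloudflare.docker.com; do
  # Any fast status code (401, 404) proves DNS, TCP, and TLS; a hang is the signal.
  curl -sS -o /dev/null -m 15 -w "$host -> %{http_code} in %{time_total}s\n" \
    "https://$host/v2/" || echo "$host -> FAILED (timeout or TLS error)"
done

date -u
docker pull alpine:latest && echo "pull OK" || echo "pull FAILED"
date -u
```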
Troubleshooting When TLS Timeouts Persist
Path MTU or middlebox interference
Sometimes TLS appears to “hang” when large server certificates traverse links with constrained MTU and black-hole ICMP. Mihomo family cores expose advanced knobs for MSS-related tuning documented upstream; consider them when WAN links are fragile even though browsers occasionally mask the issue by retrying differently.
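One way to test the path-MTU hypothesis from a Linux host (iputils syntax; macOS flags differ) is a do-not-fragment ping sweep; 1472 bytes of ICMP payload plus headers fills a standard 1500-byte frame, and replies only arriving at much smaller sizes hint at a black hole worth tuning for:

```bash
# 1472-byte payload + 28 bytes of headers = a full 1500-byte Ethernet frame
ping -c 3 -M do -s 1472 registry-1.docker.io
# If that fails, step the size down; the largest size that gets replies
# approximates the usable path MTU toward the registry edge.
ping -c 3 -M do -s 1400 registry-1.docker.io
```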
IPv6 egress surprises
If dual-stack clients race toward IPv6 paths your rules undervalue, handshake delays can stack. Either align IP-CIDR6 coverage thoughtfully or verify whether Docker and the OS prefer the same address family during the failing window.
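A quick comparison forces each address family separately; if one completes fast and the other stalls, the preference order rather than the registry is the suspect:

```bash
curl -4 -sS -o /dev/null -w 'IPv4: %{http_code} in %{time_total}s\n' https://registry-1.docker.io/v2/
curl -6 -sS -o /dev/null -w 'IPv6: %{http_code} in %{time_total}s\n' https://registry-1.docker.io/v2/
```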
Stale authentication or corporate filtering
Interception appliances that break TLS without trusting corporate roots may poison registry sessions selectively. Those cases are not fixed by Clash; they require trust store alignment or approved exempt paths negotiated with IT.
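You can confirm that situation by checking who actually signed the certificate you received; a corporate inspection root in the issuer field explains TLS failures that no routing change will cure:

```bash
openssl s_client -connect registry-1.docker.io:443 \
  -servername registry-1.docker.io </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates
```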
Frequently Asked Questions
Does enabling TUN replace configuring Docker’s own proxy fields? Often for host-native daemons, but not always for isolated VMs. Treat them as complementary tools: TUN for unified split routing, explicit Docker proxy settings when the engine’s network namespace truly sits off-host.
Will pulls become slower when everything traverses the tunnel? Properly tuned Rule mode should direct only overseas registry edges through relays. Domestic mirrors—if you maintain them—can remain DIRECT, sometimes improving throughput versus accidental foreign detours.
Can I automate this for a headless server? Desktop TUN workflows center on interactive permission grants. Servers typically need persistent daemon proxies, routing tables managed by infrastructure tools, or dedicated egress gateways rather than manual GUI toggles.
Why Clash-Style Routing Helps Container Workloads More Than One-Off Proxy Hacks
SaaS VPN clients marketed with cartoon mascots rarely expose the transparent levers developers need: per-domain policies you can read in YAML, live connection rows that map to rule lines, and community maintainers who ship timely core updates when protocols evolve. Browser-only extensions cannot see the daemon that actually pulls layers. Single-purpose HTTP forwarders leave QUIC and bespoke resolver paths opaque. By contrast, a Mihomo-compatible stack with thoughtful Clash TUN usage gives you one place to align browsers, terminals, and—when routes permit—Docker Hub sessions behind the same intentional policy.
Abandoned forks stay frozen on aging transports and leave you guessing whether a Docker Hub pull failure stems from the network or from obsolete dependencies. Maintained Clash-era clients keep integrating modern nodes, scheduler fixes, and diagnostics that shorten the gap between “TLS handshake timeout” on the console and the concrete rule or DNS tweak that fixes it. If you want that workflow without hunting forum attachments with mystery checksums, pull current verified builds from our hub: