TTFB Explained: What's a Good Time to First Byte?
Published 2026-03-29 · Last modified 2026-03-29
Time to First Byte (TTFB) is the single most telling metric for server-side performance. It measures the duration from the moment a client sends an HTTP request to the moment it receives the first byte of the response. A high TTFB means your server is struggling — and every millisecond of delay cascades into slower page loads, worse Core Web Vitals scores, and frustrated users. In this article, we'll break down exactly what TTFB includes, what good and bad values look like, and how to systematically reduce yours.
What TTFB Actually Measures
TTFB is often misunderstood as purely a "server speed" metric, but it actually encompasses three distinct phases:
- DNS lookup time — Resolving the hostname to an IP address. For a well-configured domain with low TTL and fast nameservers, this is typically 10–50 ms. If you're seeing 200+ ms here, check your DNS configuration.
- Connection time — Establishing the TCP connection (and TLS handshake for HTTPS). This is largely a function of geographic distance between the client and server. Each round trip adds latency equal to the ping time between the two endpoints.
- Server processing time — The time your server spends generating the response after receiving the full request. This is the component you have the most control over.
When you run a TTFB test on GF.dev, you'll see these phases broken out individually, making it easy to identify which stage is the bottleneck.
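If you want to reproduce this breakdown by hand, the three phases can be timed with a few sockets. A minimal sketch for a plain-HTTP URL (the TLS handshake is omitted for brevity; the function name and the returned field names are our own, not part of any tool):

```python
import socket
import time

def measure_ttfb(host: str, port: int = 80, path: str = "/") -> dict:
    """Time the phases that make up TTFB for a plain-HTTP request.

    Returns per-phase durations in milliseconds. For HTTPS you would
    wrap the socket with ssl after connecting and time that step too.
    """
    t0 = time.perf_counter()
    # Phase 1: DNS lookup
    addr = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0][4]
    t_dns = time.perf_counter()

    # Phase 2: TCP connection
    sock = socket.create_connection((addr[0], addr[1]), timeout=10)
    t_connect = time.perf_counter()

    # Phase 3: send the request, then block until the first byte arrives
    request = (f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
               "Connection: close\r\n\r\n").encode()
    sock.sendall(request)
    sock.recv(1)
    t_first_byte = time.perf_counter()
    sock.close()

    return {
        "dns_ms": (t_dns - t0) * 1000,
        "connect_ms": (t_connect - t_dns) * 1000,
        "server_ms": (t_first_byte - t_connect) * 1000,  # processing + transit
        "ttfb_ms": (t_first_byte - t0) * 1000,
    }
```

Note that the "server" phase here still includes one network round trip; only a measurement taken on the server itself can fully separate processing time from transit.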
What's a Good TTFB?
Google's guidance as part of Core Web Vitals classifies TTFB into three buckets:
- Good: under 800 ms — Google considers this acceptable, but for competitive sites you should aim much lower.
- Needs Improvement: 800 ms – 1800 ms — Users will perceive delay at this level, and it will drag down your Largest Contentful Paint (LCP) score.
- Poor: over 1800 ms — Significant user experience degradation. Pages will feel unresponsive.
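Expressed as code, the bucketing is trivial; a sketch using the thresholds above (the function name is our own):

```python
def classify_ttfb(ttfb_ms: float) -> str:
    """Bucket a TTFB measurement using the Core Web Vitals thresholds."""
    if ttfb_ms <= 800:
        return "good"
    if ttfb_ms <= 1800:
        return "needs improvement"
    return "poor"
```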
In practice, here are more actionable benchmarks based on server type:
- Static file / CDN edge hit: 5–50 ms
- Cached dynamic page (Varnish, Redis, or application-level cache): 20–100 ms
- Uncached dynamic page (typical CMS like WordPress): 200–500 ms
- Complex application page (heavy database queries, external API calls): 300–800 ms
If your uncached dynamic pages consistently exceed 500 ms, you have optimization work to do.
Diagnosing High TTFB
Start by isolating which component of TTFB is slow. Run the GF.dev TTFB Test and examine the timing breakdown.
High DNS Time
If DNS resolution accounts for a disproportionate share of TTFB, the fixes are straightforward:
- Switch to a faster DNS provider (Cloudflare DNS, Route 53, or Google Cloud DNS typically resolve in under 20 ms globally).
- Ensure your TTL values aren't absurdly low. While low TTLs help during migrations, a 60-second TTL means more DNS lookups. For stable records, 300–3600 seconds is reasonable.
- Verify there are no CNAME chains adding extra resolution steps. Use our DNS Lookup tool to trace the full resolution path.
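To see resolver latency from a given machine, you can time repeated lookups directly; a quick sketch (the function name is illustrative, and note that samples after the first are often answered from a local or OS-level cache, so the first sample usually reflects real resolver latency):

```python
import socket
import time

def time_dns(hostname: str, samples: int = 5) -> list[float]:
    """Time repeated DNS resolutions for `hostname`, in milliseconds."""
    timings = []
    for _ in range(samples):
        t0 = time.perf_counter()
        socket.getaddrinfo(hostname, 80, type=socket.SOCK_STREAM)
        timings.append((time.perf_counter() - t0) * 1000)
    return timings
```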
High Connection Time
Connection time is dominated by network latency. If your server is in Virginia and your users are in Tokyo, the speed of light in fiber imposes a floor of roughly 100 ms per round trip, and real-world routing typically pushes that to 150–180 ms. Solutions:
- Deploy a CDN — Cloudflare, Fastly, or AWS CloudFront can serve cached content from edge nodes worldwide, effectively eliminating the geographic penalty.
- Enable TLS 1.3 — TLS 1.3 reduces the handshake from two round trips to one (and supports 0-RTT resumption for returning visitors). Check your TLS configuration.
- Use HTTP/2 or HTTP/3 — Multiplexed connections reduce the overhead of subsequent requests. HTTP/3 (QUIC) further improves connection establishment time.
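In Nginx, the connection-level fixes above come down to a few directives. A hypothetical fragment (certificate paths and the hostname are placeholders; `ssl_early_data` enables TLS 1.3 0-RTT resumption and is only safe when your application treats 0-RTT requests as idempotent):

```nginx
server {
    listen 443 ssl http2;                 # HTTP/2 over TLS
    server_name example.com;              # placeholder

    ssl_certificate     /etc/ssl/example.com.pem;  # placeholder path
    ssl_certificate_key /etc/ssl/example.com.key;  # placeholder path
    ssl_protocols       TLSv1.2 TLSv1.3;  # TLS 1.3 handshakes in one round trip
    ssl_early_data      on;               # 0-RTT for returning visitors
}
```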
High Server Processing Time
This is where most TTFB problems live. If the DNS and connection phases are fast but overall TTFB is high, your application code or server configuration is the bottleneck. Common culprits:
- Unoptimized database queries — Enable slow query logging in MySQL (slow_query_log = 1, long_query_time = 0.5) or PostgreSQL (log_min_duration_statement = 500). Look for queries without proper indexes, or N+1 query patterns.
- Missing application caching — If every page request triggers fresh database queries and template rendering, you're wasting CPU cycles. Implement page-level caching with tools like Varnish, or object-level caching with Redis or Memcached.
- PHP without OPcache — If you're running WordPress or Laravel, ensure OPcache is enabled. Without it, PHP re-parses and compiles every script on every request.
- Blocking external API calls — If your page generation depends on a third-party API, that API's latency becomes part of your TTFB. Move these calls to asynchronous background processes where possible.
- Underpowered hardware — Check CPU utilization with top and memory usage with free -m. If your server is consistently above 80% CPU or swapping memory to disk, it's time to scale up or out.
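For reference, the slow-query settings mentioned above as config fragments (file paths are hypothetical; adjust for your distribution):

```ini
; MySQL: my.cnf
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log  ; hypothetical path
long_query_time     = 0.5                      ; seconds: log anything over 500 ms

; PostgreSQL: postgresql.conf
; log_min_duration_statement = 500             ; milliseconds
```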
Reducing TTFB: A Practical Approach
Work through these optimizations in order of impact:
- Enable server-level caching — Add Varnish or Nginx FastCGI caching in front of your application. This alone can drop TTFB from 400 ms to under 20 ms for cached pages.
- Optimize your database — Add missing indexes, cache query results at the application layer (note that MySQL's built-in query cache was removed in MySQL 8.0), and audit your ORM for unnecessary queries. Use EXPLAIN to analyze slow queries.
- Enable OPcache / bytecode caching — A zero-effort win for interpreted languages. For PHP, add opcache.enable=1 and opcache.memory_consumption=128 to your php.ini.
- Tune your web server — Ensure Nginx or Apache worker processes match your hardware. Too few workers means requests queue; too many means context switching overhead. A starting point for Nginx: worker_processes auto; with worker_connections 1024;.
- Deploy a CDN — Even if you only cache static assets, you free your origin server to focus on dynamic requests. If the CDN supports edge caching of dynamic content (e.g., Cloudflare APO for WordPress), the improvement is dramatic.
- Upgrade your TLS configuration — Enable TLS 1.3, configure OCSP stapling, and use an appropriately sized certificate chain. Read more in our TLS deep dive.
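As a concrete example of step 1, here is a hypothetical Nginx FastCGI cache in front of a PHP-FPM backend (the zone name, socket path, and TTLs are illustrative). Cached hits never touch PHP at all:

```nginx
# Define the cache store (100 MB of keys, entries expire after 60 min idle).
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=APPCACHE:100m
                   inactive=60m max_size=1g;

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;   # hypothetical socket path
        include fastcgi_params;

        fastcgi_cache APPCACHE;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10m;               # cache successful pages 10 min
        add_header X-Cache-Status $upstream_cache_status;  # HIT/MISS for debugging
    }
}
```

The `X-Cache-Status` header makes it easy to confirm from a TTFB test whether a given response was served from the cache.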
Measuring TTFB Correctly
A single TTFB measurement is meaningless. You need to account for variability by testing multiple times and from multiple locations. Here's a sound methodology:
- Run the GF.dev TTFB Test at least three times in succession. Discard the first result (it pays DNS and connection warm-up costs) and average the remaining ones.
- Test from different geographic regions to separate server processing time from network latency.
- Test both cached and uncached responses. Append a random query string (?cachebust=12345) to bypass CDN caching and measure true origin TTFB.
- Measure at different times of day. A server that responds in 100 ms at 3 AM and 900 ms at 3 PM has a capacity problem.
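That methodology is easy to script. A sketch using only the standard library (`sample_ttfb` is our own name; `urlopen()` returning once the status line and headers have arrived is a close stand-in for first-byte time):

```python
import time
import urllib.request
from statistics import median

def sample_ttfb(url: str, runs: int = 5) -> float:
    """Approximate TTFB for `url` several times; return the median in ms.

    The first sample is discarded (it pays DNS and connection warm-up
    costs), and the median of the rest resists outliers better than
    the mean.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10):
            pass
        samples.append((time.perf_counter() - t0) * 1000)
    return median(samples[1:])
```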
TTFB and SEO
Google has long confirmed that page speed affects search rankings. TTFB isn't a ranking signal on its own, but it is a foundational component of Largest Contentful Paint (LCP), which is a Core Web Vital. No matter how optimized your frontend is, the page can't start rendering until the first byte arrives, so a high TTFB puts a hard floor under your LCP score.
For a broader view of how server configuration affects your site's health, see our Web Server Performance Troubleshooting pillar guide, which covers HTTP headers, server signature exposure, and network-level diagnostics alongside TTFB.