The Difference Between an API That Works and an API Developers Enjoy Using
by Eric Hanson, Backend Developer at Clean Systems Consulting
What developers actually evaluate
When an engineering team evaluates an API — for a payment provider, a shipping carrier, a communications platform — functional correctness is assumed. If the API cannot do the thing it claims to do, the evaluation ends immediately.
What differentiates the APIs teams build with enthusiastically from the ones they build with reluctantly is a layer above correctness: predictability, ergonomics, honesty about failure, and the absence of friction.
These are not marketing qualities. They translate directly to engineering hours — how long it takes to integrate, how often integrations break, how much defensive code is required.
Predictability: the API that behaves the same everywhere
Predictability means that if you have used one part of the API, you know how to use another part. Naming conventions are consistent. Error shapes are the same. Pagination works the same way on every list endpoint. Authentication is handled the same way everywhere.
The opposite of predictability is what happens when an API was built by multiple teams over multiple years without enforced conventions. Developers discover that /users paginates with cursor but /orders paginates with page and per_page. /customers/{id} returns 404 for missing resources but /invoices/{id} returns 200 with { "found": false }. Every endpoint is its own archaeology project.
The tax on unpredictability is paid every time a developer touches a new part of the API. It is cumulative and invisible until you quantify the debugging time across an organization.
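That tax is easiest to see in client code. A minimal sketch (endpoint names and parameter styles taken from the hypothetical examples above) of the per-endpoint shim an inconsistent API forces on every caller, next to the single function a consistent API would need:

```python
# Defensive pagination shim a client must write when every list endpoint
# paginates differently (hypothetical endpoints from the examples above).
def page_params(endpoint, cursor=None, page=1, per_page=50):
    if endpoint == "/users":      # cursor-based pagination
        return {"cursor": cursor} if cursor else {}
    if endpoint == "/orders":     # offset-based pagination
        return {"page": page, "per_page": per_page}
    raise ValueError(f"unknown pagination style for {endpoint}")

# With enforced conventions, one function covers every list endpoint:
def page_params_consistent(cursor=None, limit=50):
    params = {"limit": limit}
    if cursor:
        params["cursor"] = cursor
    return params
```

The first function grows a branch for every new endpoint a developer touches; the second never changes.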
Honesty about failure
An API that is honest about its failures does three things: it reports errors accurately (the correct HTTP status code), it reports errors specifically (enough detail to diagnose the problem), and it is transparent about what it does not know (a 503 that says "cannot reach payment processor" is more honest than a 500 that hides the cause).
An API that masks errors is worse to integrate with than an API that fails clearly. A 200 response with an error field hidden in the body is a broken contract — the calling code has to check a second, undocumented success condition on every response. Silent data loss (partial operations that claim success) is a class of failure that is extremely hard to detect and debug downstream.
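The cost of that broken contract is concrete: the caller cannot trust the status code alone. A sketch (field names are hypothetical) of the double-check every call site needs when a 200 can still carry a failure:

```python
# The defensive unwrapping a 200-with-error-body contract forces on
# every response. With honest status codes, the first check is enough;
# with masked errors, the caller must also inspect the body.
def unwrap(status, body):
    if status >= 400:
        raise RuntimeError(f"HTTP {status}: {body.get('message', 'unknown error')}")
    # Second, undocumented success condition: a 200 that is really a failure.
    if body.get("error"):
        raise RuntimeError(f"masked failure: {body['error']}")
    return body
```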
Developers build trust in an API by seeing how it fails. An API that fails predictably and clearly is an API they can build reliable systems on. An API that fails opaquely is an API they write layers of defensive code around.
Low time-to-first-successful-call
This is the most concrete measure of API ergonomics: how long from reading the documentation to making a successful API call?
The elements that drive this metric:
Authentication that works on the first try. OAuth flows with unclear redirect URIs, key formats that are not documented, token expiry behaviors that only reveal themselves after successful auth — all of these add friction before the developer has accomplished anything.
A working example for the first real use case. Not "Hello World." The first thing the developer actually needs to do. If they are integrating a payment API, they need a working POST /charges example with a test card number before they care about anything else.
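What such a first-call example might contain, sketched as request construction for a hypothetical payment API (the endpoint, field names, and test card number are all illustrative, not any real provider's):

```python
import json

# Everything a developer needs for the first real call, nothing more:
# auth header, endpoint, and a complete request body with a test card.
def build_charge_request(api_key, amount_cents, currency, card_number):
    return {
        "method": "POST",
        "url": "https://api.example.com/v1/charges",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "amount": amount_cents,      # smallest currency unit
            "currency": currency,
            "card_number": card_number,  # sandbox test card, never a real one
        }),
    }

req = build_charge_request("sk_test_123", 1999, "usd", "4242424242424242")
```

A developer who can paste this, swap in their test key, and see a charge appear in the sandbox dashboard has reached the first successful call; everything else in the documentation can wait.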
A test environment that behaves like production. Sandbox environments that return different error codes, have different rate limits, or are missing endpoints that exist in production destroy developer confidence. They cannot tell if their integration is correct or if the sandbox is broken.
SDKs that work out of the box. A well-maintained SDK for the developer's language eliminates authentication boilerplate, request serialization, error handling, retry logic, and pagination. An outdated SDK that does not support recent features or has known bugs is worse than no SDK — it misleads developers into thinking the integration is simpler than it is.
Designing for progressive discovery
Developers rarely read the full documentation before starting. They start with the endpoint they need, succeed or fail, and expand from there. An API designed for progressive discovery lets a developer accomplish the most common use cases while knowing only a small fraction of the API's surface.
This suggests:
- The most common operations should require the fewest parameters
- Defaults should be sensible (not zero, not null, not something that causes confusing behavior)
- Optional parameters should genuinely be optional — their absence should result in correct, useful behavior, not an error
- The error messages for parameter errors should name the parameter and the constraint, so a developer can fix the call without reading the documentation
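The last point can be made concrete. A sketch of a validation error that lets a developer fix the call without opening the documentation (the response shape is illustrative, not any particular provider's format):

```python
# A parameter error that names the parameter, the constraint, and the
# received value — enough to correct the request from the error alone.
def validate_amount(amount):
    if not isinstance(amount, int) or amount < 1:
        return {
            "error": "invalid_parameter",
            "parameter": "amount",
            "constraint": "must be a positive integer in the smallest currency unit",
            "received": repr(amount),
        }
    return None  # valid
```

Compare this with a bare 400 and "invalid request": the first error is self-correcting, the second sends the developer back to the documentation, or to support.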
The documentation feedback loop
Good documentation is not just complete — it is accurate in the edge cases. The edge cases are where most API documentation fails, because they were not considered during design and not exercised during testing.
Run through your own API documentation with fresh eyes at least twice a year. Attempt to accomplish a common integration task using only the documentation — no internal knowledge. Every place you reach for internal context is a documentation gap. Every place you are confused is either a documentation problem or a design problem. Fix both.
The investment compounds: good documentation reduces support volume, which gives the team more time to improve the product rather than answer the same questions repeatedly.