Interface Driven Development in the Age of AI

Your company is going to use AI for coding. Not a prediction, not a hot take. Just what's happening. The engineers who resist will be slower than those who don't. Same for companies. The economics are too compelling to ignore.

The problem is that AI-generated code gets the job done but the implementation is usually messy. Variable names that don't quite fit. Abstractions that leak. Edge cases handled three different ways in the same file. It works, but you wouldn't want to maintain it.

So how do you capture the productivity without creating a codebase you'll regret?

Interface Driven Development.

The Insight

Interfaces don't care what's behind them. An interface defines a contract. The implementation can be beautiful or ugly, hand-crafted or AI-generated. As long as it fulfils the contract, everything works.

typescript
interface IBlobStorageProvider {
  upload(key: string, data: Buffer): Promise<Result<void, StorageError>>;
  download(key: string): Promise<Result<Buffer, StorageError>>;
  delete(key: string): Promise<Result<void, StorageError>>;
}

That's the contract. The implementation could be S3, Azure Blob Storage, MinIO, or a folder on disk. Could be written by you, by a junior, or by Claude. The rest of the codebase doesn't care.
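
For instance, the folder-on-disk version might look something like this (a minimal sketch, assuming the Result type comes from a library like neverthrow and that StorageError is a plain Error subclass):

typescript
import { promises as fs } from "node:fs";
import * as path from "node:path";
import { ok, err, Result } from "neverthrow"; // assumption: a neverthrow-style Result

// Assumed shape; the real StorageError could carry more detail.
class StorageError extends Error {}

class DiskBlobStorageProvider implements IBlobStorageProvider {
  constructor(private readonly rootDir: string) {}

  async upload(key: string, data: Buffer): Promise<Result<void, StorageError>> {
    try {
      const filePath = path.join(this.rootDir, key);
      await fs.mkdir(path.dirname(filePath), { recursive: true });
      await fs.writeFile(filePath, data);
      return ok(undefined);
    } catch (e) {
      return err(new StorageError(`upload failed: ${String(e)}`));
    }
  }

  async download(key: string): Promise<Result<Buffer, StorageError>> {
    try {
      return ok(await fs.readFile(path.join(this.rootDir, key)));
    } catch (e) {
      return err(new StorageError(`download failed: ${String(e)}`));
    }
  }

  async delete(key: string): Promise<Result<void, StorageError>> {
    try {
      await fs.unlink(path.join(this.rootDir, key));
      return ok(undefined);
    } catch (e) {
      return err(new StorageError(`delete failed: ${String(e)}`));
    }
  }
}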

Isolate the AI-generated code behind interfaces. Let the AI draft implementations. They might be suboptimal. Weird patterns. But if they're isolated, you can come back and fix them when you have time, when AI gets better, or never if they don't cause problems. You can write tests that verify the contract. You can swap implementations without touching anything else.

The implementation becomes a black box. Internal quality matters less than external behaviour.

The Java Patterns Were Right

My formal programming education was in Java, and despite this I spent years dismissing it as over-engineered. All those interfaces. The Hungarian notation. The dependency injection frameworks. The factory factory factories.

Turns out, those patterns exist for a reason. The JavaScript world's wholesale removal of them is a textbook case of tearing down Chesterton's Fence: you should never remove a pattern until you know why it was there in the first place.

Interface segregation means your components depend on contracts, not implementations. Dependency injection means those contracts are passed in, not instantiated internally. Encapsulation means the ugly details stay hidden behind clean boundaries.

These patterns were designed for large teams working on large codebases over long periods. They exist to manage complexity, to let pieces change independently, to enable testing and refactoring.

AI-assisted development has the same properties. Multiple "authors" (you and various AI models), developing at a breakneck pace with varying quality of output. Code that needs to evolve as tools improve. The Java patterns solve exactly these problems.

typescript
class UserService implements IUserService {
  constructor(
    private readonly userRepository: IUserRepository,
    private readonly emailProvider: IEmailProvider,
    private readonly blobStorage: IBlobStorageProvider
  ) {}

  // ...
}

Every dependency is an interface. Every dependency is injected. The UserService doesn't know if those implementations were written by a senior engineer or generated by AI in 30 seconds. Doesn't need to.
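
The concrete implementations do get chosen somewhere, of course: once, at the composition root. Here's a minimal sketch, with hypothetical PostgresUserRepository and SesEmailProvider implementations standing in for whatever you actually use, and an S3-backed blob store like the one sketched later in this post:

typescript
// The composition root is the one place that names concrete classes.
// PostgresUserRepository and SesEmailProvider are hypothetical stand-ins.
const buildUserService = (): IUserService => {
  const blobStorage: IBlobStorageProvider = new S3BlobStorageProvider({
    region: "ap-southeast-2",
    bucket: "user-avatars",
  });
  const userRepository: IUserRepository = new PostgresUserRepository(
    process.env.DATABASE_URL!
  );
  const emailProvider: IEmailProvider = new SesEmailProvider({
    region: "ap-southeast-2",
  });

  return new UserService(userRepository, emailProvider, blobStorage);
};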

Isolate Your Dependencies

This applies to external services too. You don't want to couple to S3. You want to couple to IBlobStorageProvider.

typescript
// Don't do this
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

class UserService {
  private s3 = new S3Client({ region: "ap-southeast-2" });

  uploadAvatar = async (userId: string, data: Buffer) => {
    await this.s3.send(new PutObjectCommand({
      Bucket: "user-avatars",
      Key: `${userId}/avatar.png`,
      Body: data,
    }));
  };
}

typescript
// Do this instead
class UserService {
  constructor(private readonly blobStorage: IBlobStorageProvider) {}

  uploadAvatar = async (userId: string, data: Buffer) => {
    return this.blobStorage.upload(`${userId}/avatar.png`, data);
  };
}

The second version is testable. You can mock the blob storage, swap S3 for R2, run locally against MinIO. The AI doesn't need to understand AWS SDK intricacies to generate the UserService code, because UserService doesn't know about AWS.
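
The AWS knowledge doesn't disappear; it moves into an adapter that implements the interface. Something like this (a minimal sketch, assuming @aws-sdk/client-s3 v3 and the same neverthrow-style Result and StorageError as above):

typescript
// Where the AWS details live: behind the interface, invisible to UserService.
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";
import { ok, err, Result } from "neverthrow";

class S3BlobStorageProvider implements IBlobStorageProvider {
  private readonly s3: S3Client;

  constructor(private readonly config: { region: string; bucket: string }) {
    this.s3 = new S3Client({ region: config.region });
  }

  async upload(key: string, data: Buffer): Promise<Result<void, StorageError>> {
    try {
      await this.s3.send(
        new PutObjectCommand({ Bucket: this.config.bucket, Key: key, Body: data })
      );
      return ok(undefined);
    } catch (e) {
      return err(new StorageError(`upload failed: ${String(e)}`));
    }
  }

  async download(key: string): Promise<Result<Buffer, StorageError>> {
    try {
      const response = await this.s3.send(
        new GetObjectCommand({ Bucket: this.config.bucket, Key: key })
      );
      // transformToByteArray is available on streaming bodies in recent SDK v3 releases
      const bytes = await response.Body!.transformToByteArray();
      return ok(Buffer.from(bytes));
    } catch (e) {
      return err(new StorageError(`download failed: ${String(e)}`));
    }
  }

  async delete(key: string): Promise<Result<void, StorageError>> {
    try {
      await this.s3.send(
        new DeleteObjectCommand({ Bucket: this.config.bucket, Key: key })
      );
      return ok(undefined);
    } catch (e) {
      return err(new StorageError(`delete failed: ${String(e)}`));
    }
  }
}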

This isn't new advice. But it becomes critical when AI is generating your code. AI will happily reach for S3Client directly. It will inline configuration. It will create tight coupling because tight coupling is faster to write.

Your job is to define the boundaries. The AI fills in the gaps.

The Changing Role of Engineers

The uncomfortable truth: the role of software engineers is shifting.

Less time writing implementation code. More time on architecture decisions, interface design, validation, and product specification. Which components exist? What contracts do they expose? What error cases exist? Does the implementation actually meet the requirements? Do the tests pass? Do the tests represent the requirements? What does "done" actually mean?

This isn't a demotion. It's a shift toward higher-leverage work. Implementation details that used to take days now take minutes. What remains is the work that requires judgement: deciding what to build, how to structure it, and whether it's actually correct.

The engineers who thrive will be the ones who can design clean interfaces that AI can implement, write specifications clear enough that AI (and humans) can verify correctness, spot when AI-generated code violates architectural principles, and know when to accept "good enough" and when to demand better.

The Testing Story

Interface-driven development makes testing almost trivial.

Each component has a contract. You write tests against that contract. The implementation can change entirely and the tests still work, because they test behaviour, not implementation.

typescript
describe("IBlobStorageProvider", () => {
  let provider: IBlobStorageProvider;

  beforeEach(() => {
    provider = new S3BlobStorageProvider(testConfig);
  });

  it("should upload and download a file", async () => {
    const data = Buffer.from("test content");
    await provider.upload("test-key", data);
    const result = await provider.download("test-key");
    expect(result.isOk()).toBe(true);
    expect(result.value).toEqual(data);
  });
});

Let AI generate the implementation. Run the tests. If they pass, you're done. If they don't, iterate. The tests are the specification. The implementation is just code that makes them pass.

This is contract testing. Not new, but it becomes essential when you can't trust the implementation details.
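
It also means a single suite can verify every implementation. Here's a minimal sketch of a shared contract suite; the MinIO provider and the config objects are hypothetical:

typescript
// A shared contract suite: any implementation can be plugged in via a factory.
const itBehavesLikeBlobStorage = (
  name: string,
  makeProvider: () => IBlobStorageProvider
) => {
  describe(`${name} fulfils IBlobStorageProvider`, () => {
    it("round-trips a file", async () => {
      const provider = makeProvider();
      const data = Buffer.from("contract test content");
      await provider.upload("contract-key", data);
      const result = await provider.download("contract-key");
      expect(result.isOk()).toBe(true);
    });

    it("returns an error for a missing key", async () => {
      const provider = makeProvider();
      const result = await provider.download("no-such-key");
      expect(result.isErr()).toBe(true);
    });
  });
};

// Hypothetical wiring: the same expectations, different implementations.
itBehavesLikeBlobStorage("S3", () => new S3BlobStorageProvider(testConfig));
itBehavesLikeBlobStorage("MinIO", () => new MinioBlobStorageProvider(minioConfig));
itBehavesLikeBlobStorage("Disk", () => new DiskBlobStorageProvider("/tmp/blob-tests"));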

The TDD Reality

Here's the thing no one talks about: very few companies actually practice Test Driven Development. The textbooks say write tests first. Reality is that most teams write tests after implementation, if they write them at all.

This probably isn't going to change. It's not natural for most people to work this way. You don't know the shape of the solution until you've explored the problem. Writing tests for code that doesn't exist requires a level of upfront clarity that's rare in real-world development.

And that's fine.

Tests written after implementation still serve two purposes: they confirm the implementation meets the contract, and they ensure functionality doesn't regress when someone (or some AI) changes things later. That's valuable even if it's not the TDD ideal.

But if we're not writing tests first, and the tests aren't acting as the up-front specification, then the interface becomes the specification.

The interface is what you design upfront. What you think through carefully. What you commit to before any implementation exists. When you get the interface right, the tests almost write themselves. They're just assertions that the implementation fulfils the contract you already defined.

typescript
// The interface IS the specification
interface IUserService {
  createUser(email: string): Promise<Result<User, CreateUserError>>;
  getUser(id: string): Promise<Result<User, GetUserError>>;
  deleteUser(id: string): Promise<Result<void, DeleteUserError>>;
}

// Tests follow naturally from the interface
it("should create a user with valid email", ...);
it("should return NotFound when user doesn't exist", ...);
it("should delete an existing user", ...);

If you're not doing TDD, interface design becomes even more critical. It's the one artefact you create before implementation. The one thing you can validate without running code. The contract that everything else depends on.

Get the interface wrong, and the tests you write will validate the wrong behaviour. Get the interface right, and implementation quality matters far less.

Fix It Later

This is the part that feels wrong to experienced engineers: sometimes you should ship code you know is suboptimal.

The AI generates an implementation. It works. The tests pass. But the code is ugly. Inefficient. Hard to read.

Ship it anyway.

The interface is clean. The contract is clear. The tests verify correctness. The ugly implementation is isolated behind a boundary. It's not infecting the rest of your codebase.

You can come back later. Ask a better AI model to refactor it. Assign it to an intern who wants to learn. Ignore it forever if it never causes problems.

You have the option. Because it's isolated, you can change it without changing anything else. That's the freedom interfaces buy you.

What This Looks Like in Practice

My workflow when building a new feature:

  1. Define the interfaces first. What components exist? What methods do they expose? What can go wrong?

  2. Write the composition root. Wire up the dependencies. This is where implementations get chosen.

  3. Generate implementations with AI. Give it the interface. Let it fill in the code.

  4. Write tests against the interface. Verify the contract is fulfilled.

  5. Review for architectural violations. Is the implementation reaching outside its boundary? Leaking abstractions? Coupled to things it shouldn't know about?

  6. Ship it. Move on.

  7. Come back when needed. If the implementation becomes a problem, fix it then. Not before.

The junior engineer in me wants to clean everything up before committing. The senior engineer has learned that time is finite and perfection is expensive.

The Uncomfortable Economics

An engineer using AI effectively can produce 3-10x more code than one who doesn't. Quality might be lower per line, but output is dramatically higher. In a competitive market, that's an enormous advantage.

Companies that figure this out will ship faster. Companies that don't will wonder why their competitors seem to move so quickly.

The answer isn't to reject AI. It's to adapt your practices so AI-generated code doesn't become a liability. Interface-driven development is that adaptation. Capture the productivity gains without sacrificing the ability to maintain and evolve your codebase.

Conclusion

The patterns that made large-scale Java development manageable are the same patterns that make AI-assisted development safe. Interfaces define contracts that isolate implementation details. Dependency injection keeps components loosely coupled. Encapsulation hides the ugly parts behind clean boundaries. Contract testing verifies behaviour without knowing implementation.

AI is going to write a lot of your code. Your job is to make sure that code lives in a box where its quality doesn't matter as much. Define the interfaces. Write the tests. Let AI handle the implementation. Come back and fix it when you need to, or don't.

The engineers who adapt will be more productive than ever. The ones who don't will spend their time complaining about AI code quality while their competitors ship features.

The future isn't AI replacing engineers. It's engineers defining architecture and interfaces while AI fills in the implementation. Interface-driven development isn't just good practice anymore; it's critical.