Will Autonomous Coding Systems Use Object Oriented Programming?

Over the years, a succession of programming paradigms have attempted to solve the problem of complexity at scale in software, and none have been as popular as the Object Oriented approach. Whether you learned in a university program, at a bootcamp, or taught yourself, chances are you have encountered Object Oriented Design (OOD).

On its face OOD is an intuitive approach to programming, which makes it easy to teach. The ideas of message passing, encapsulation, and inheritance map nicely onto our understanding of the material world. We get fantastic toy examples that read well, like a naive store where items are abstract objects exposing a price method that a register can invoke, as in the sketch below.
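To make the toy concrete, here is a minimal sketch of that store in Python. The class names (Item, Apple, Register) are illustrative inventions, not from any particular textbook.

```python
from abc import ABC, abstractmethod


class Item(ABC):
    """An abstract store item that exposes a price."""

    @abstractmethod
    def price(self) -> float: ...


class Apple(Item):
    def price(self) -> float:
        return 0.50


class Register:
    """A register that totals items purely by message passing."""

    def total(self, items: list[Item]) -> float:
        return sum(item.price() for item in items)


print(Register().total([Apple(), Apple()]))  # 1.0
```

It reads beautifully, and that is exactly the trap: nothing here hints at where the items actually come from.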

In the real world, however, we must use real dependencies: those “items” are almost certainly instantiated via a database whose API was shaped by years of thought, effort, and trial and error. That means you need to be initiated; you need to understand why its developers made the choices they did. Then you find that your original design is either trivially implemented by your dependency or in hopeless contradiction with the dependency’s use cases. As you implement your software you usually find that, one way or another, you have to compromise your design choices, and this is especially true with OOD.

The reality is that we develop in a hodgepodge world. The paradigm we choose is usually the one best suited to the problem we want to solve. The issue is treating OOD as the default mode: either we compromise OOD principles by mixing in a different paradigm, or we violate that paradigm’s principles by introducing OOD to satisfy a larger codebase.

This isn’t to say that objects, interfaces, and encapsulation have no place in our designs, but rather that we restrict ourselves and our programs if we think purely in terms of OOD. I want to propose a new approach based on my experience programming with Large Language Models (LLMs). In this and the next few articles I will explore using the power of contradiction and LLMs to see whether we can move on from Object Oriented domination and, more importantly, how close I can get to building an autonomous coding engine.

The Emerging Strategy

While my goal here is to define a new formal method for creating software, I do want to substantiate my claims with some informal evidence. I have had conversations with other developers who are beginning to fully embrace LLM development, and in every case we have arrived at similar conclusions about what strategy works best for development with LLMs.

Namely, our process looks something like this (a code sketch of the loop follows the list):

  1. Have some abstract design or idea that we would like to implement.
  2. Ask the LLM to create a design and/or an implementation plan, usually involving phases and a checklist.
  3. Write that plan out to some markdown file or the repository’s wiki so that state can be recovered if the current chat context is lost.
  4. Have the LLM implement the plan using Test Driven Development (TDD) to ensure that the LLM’s code works correctly and has proper test coverage. This is especially important for production codebases.
  5. Once the tests pass, have the LLM document the changes so future LLM invocations can benefit from the implementation learnings.
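Here is a minimal sketch of that loop in Python, under loose assumptions: call_llm is a hypothetical stand-in for whatever agent or API you use, the plan lives in PLAN.md, and tests run via pytest.

```python
import pathlib
import subprocess


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client."""
    raise NotImplementedError


def develop(idea: str, plan_path: str = "PLAN.md", max_rounds: int = 5) -> bool:
    # Steps 1-3: turn the idea into a phased plan and persist it so that
    # state survives a lost chat context.
    plan = call_llm(f"Create a phased implementation plan with a checklist for: {idea}")
    pathlib.Path(plan_path).write_text(plan)

    for _ in range(max_rounds):
        # Step 4: implement the next unchecked item, TDD style.
        call_llm(f"Following {plan_path}, write failing tests for the next "
                 "unchecked item, then implement until they pass.")
        if subprocess.run(["pytest"]).returncode != 0:
            continue  # tests still failing; spend another round on this item
        # Step 5: document the round so future invocations benefit.
        call_llm(f"Check off the finished item in {plan_path} and note what was learned.")
    return subprocess.run(["pytest"]).returncode == 0
```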

For me this process has been incredibly successful in accurately conveying to the LLM the complex behaviors and refactors needed to fully implement well-made software. I created this blog as a homebrew project, using a low-dependency markdown rendering pipeline and Google Cloud Platform for hosting. The blog itself has decent test coverage thanks to this method, although there are some minor differences between the actual implementation and what I intended.

I have been developing in a private monorepo, but I have published the blog’s source code as a standalone repository if you’re curious about what I created!

https://github.com/Avynn/blog

My experience developing this blog has given me reason to believe that there is an underlying theme to using LLMs efficiently, and that by working the informal process developers use today into a formal method we may be able to automate significant portions of the development process.

Resolving Contradictions

In working with LLMs I have found that the primary difficulty is resolving contradictions between intent and result, which usually stem from choices the LLM assumes on its own. For instance, you can ask an agent to implement a feature without any sort of plan and, depending on how complex your task is, you’ll get varying degrees of success.

In bad cases the LLM makes clumsy decisions that lead to subtle bugs the attention mechanism simply misses. Your agent may then spin its wheels in piecemeal debugging, making choices that contradict the original prompt’s intent, or at the very least wasting tokens on a problem an experienced developer could solve easily.

The end result might be what the developer was looking for, but in most cases I have found that the result is messy and deficient: there is some contradiction between what the developer wants and what the LLM produces. The process then becomes taking these contradictions and finding a way to communicate them to the LLM so that it can revise the program to be more in line with the developer’s intent. Sometimes, through exploring the implementation, the developer finds that the issue lies with their own intent, and they need to figure out what the correct intention is.

When programming boils down to providing instructions to LLM agents, with some combination of Retrieval Augmented Generation (RAG) and Cache-Augmented Generation (CAG) hooked into your IDE’s LSP and a variety of MCP servers for code modification and debugging, we have to change our perspective on how we design our features. Even with all these tools an agent’s context is ultimately limited: it forgets things over time, and it makes errors and contradicts itself if left alone for too long. The biggest hurdle to implementation is now the context window. What we’re doing as developers is looking for that sweet spot of abstraction, a chunk just large enough to accomplish significant work while still fitting in the context window.

I believe the context window still matters because it is the only range in which the LLM can detect contradictions. So the above strategy works because we start at a high enough level of abstraction that the entire breadth of the design fits into the LLM’s context. Then, at each step along the way, the LLM only needs to focus on the segment of the design it produced at the higher level.

The mechanic we have, then, is a sort of pogo stick jumping up and down the layers of abstraction. Each time it hits the ground it stamps out another block of code that fits in its context.

We can imagine one jump of the pogo stick looking something like this (a code sketch follows the list):

  • A user works with the LLM on an implementation plan.
  • The LLM reads the first phase of the plan and creates a checklist.
  • The checklist is expanded by further LLM calls that use specialized context from the conversation.
  • Finally, a context is reached where the LLM produces a command for your IDE’s MCP server to write out some code.
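As a sketch of that descent, here is a recursive version in Python. All three helpers (call_llm, fits_in_context, emit_code) are hypothetical stand-ins rather than any specific agent's API, and the token estimate is deliberately crude.

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: your LLM provider's client


def fits_in_context(item: str, budget_tokens: int = 4_000) -> bool:
    return len(item) / 4 < budget_tokens  # crude chars-to-tokens estimate


def emit_code(code: str) -> None:
    raise NotImplementedError  # hypothetical: e.g. an MCP file-write command


def expand(item: str) -> None:
    # Ground level: the item fits in context, so stamp out a code block.
    if fits_in_context(item):
        emit_code(call_llm(f"Implement: {item}"))
        return
    # Higher level: split the item into a checklist and descend into each entry.
    for sub_item in call_llm(f"Break this into a checklist of sub-tasks: {item}").splitlines():
        expand(sub_item)
```

Each recursive call is one bounce: down a layer of abstraction, then back up once the block is stamped out.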

Continuing the metaphor, sometimes we get stuck on a bad assumption at some layer, or a subtle contradiction is introduced where two generated code blocks meet. Every retry forced by broken test coverage is a bit of a failure, because now we have to spend tokens fixing the mistake.

In my experience, starting with an object oriented design amplifies this effect. The reason, I think, is that specifying object oriented designs leaves significant ambiguity and room for error. This effect isn’t unfamiliar, especially if you have worked on a team with other developers: there is always at least a small debate over which pattern to use in which instance, and whether it applies to your specific use case. This leads to a contradiction between intent and implementation that compounds as early design decisions, made autonomously, accumulate over time. Sometimes these decisions grow unnoticed, and a significant amount of quota has to be spent fixing them.

So, zooming back out to the pogo-stick metaphor, what we’re looking for is a method that optimizes our traversal towards the goal.

Optimizing the Emerging Strategy

The optimal pogo-stick path is guided by contradictions. The primary contradiction we’re managing is the one between the software’s current state and behavior and the state and behavior we want. Fortunately, this can be ascertained deterministically via tests. So we have just reinvented Test Driven Development, right?

I would say that “reinvented” is a strong word. What we’re doing here is making the core measurement of contradiction whether our tests pass and coverage reaches some threshold (a sketch of such a gate follows below). A conclusion and a further question follow from this. The conclusion is that once our tests are written, the LLM basically has a worksheet to fill out, with a deterministic method to tell it when it is done. The question, of course, is: what tests do we write?
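For concreteness, here is a minimal version of that gate in Python, assuming the pytest and coverage.py command line tools are installed and picking an arbitrary 80% threshold:

```python
import subprocess


def contradiction_resolved(min_coverage: int = 80) -> bool:
    """Deterministic 'done' signal: tests pass and coverage clears a bar."""
    tests = subprocess.run(["coverage", "run", "-m", "pytest"])
    if tests.returncode != 0:
        return False  # failing tests: intent and implementation still contradict
    report = subprocess.run(["coverage", "report", f"--fail-under={min_coverage}"])
    return report.returncode == 0  # coverage.py exits nonzero below the threshold
```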

The answer, of course, is derived from the high level design, which is in turn informed by the intent of the programmer. What LLMs bring to the table is the ability to evaluate designs semantically, but as with implementations they are limited by context, many lack a mechanism for acknowledging a contradiction between their design and the programmer’s intent, and they rarely ask clarifying questions.
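To see how mechanically a design statement can become a test, take the earlier hypothetical store sketch (assume it lives in a module named store): the statement “a register totals the prices of its items” translates almost directly into a test.

```python
from store import Apple, Register  # hypothetical module holding the earlier sketch


def test_register_totals_item_prices():
    # Three apples at $0.50 each should total $1.50.
    assert Register().total([Apple(), Apple(), Apple()]) == 1.50
```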

So when designing we have to acknowledge some deficiencies in our agent’s self-awareness: it struggles with subtle contradictions, especially when they arise across the borders of context. By addressing these deficiencies in modern agents we can get closer to a method that enables long-running development sessions capable of implementing complex features with relatively little human oversight.

In next week’s article I’ll introduce Hegel, the German idealist philosopher who inspired this work, along with a scaffolding for reasoning based on his ideas that I think can address some of the problems with the agents described above.