
AI on the team

Luke Logan

AI is everywhere. The people who stand to benefit financially from developers and engineers using AI tend to amplify how much AI is being used and the benefits it brings.

The people who do not have a financial stake (us normal people) have a different, boots-on-the-ground kind of experience with AI.

What is the field like out there?

Engineers are all over the place when it comes to AI, and, just like everything in software development, applying it comes with tradeoffs.

One end of the spectrum has the engineers who are all-in on AI. There are great tools out there to generate code (Cursor, Claude Code) and make the development of new features much faster. These engineers feel productive and have a lot of activity on their GitHub accounts. They know which models to use and stay up to date with the crushing weight of news about the newest, next big thing. They tend to be younger than their peers, which also implies less experience, and may work at smaller companies or on greenfield projects.

On the other end of the spectrum are the seasoned engineers with tons of old-school experience. These are the ones with the battle scars of dropping a database or two and living through a couple of code reds. They know what works, and they stick with it. They generally do not use AI; some are interested in it, but they mostly work on legacy, enterprise applications that are slower to adopt new things, and their companies may or may not allow AI to be used.

What does this mean?

Right now there is a clash between these two kinds of people. On a team, some people may rely more on generative code than others. New features can be completed quickly; I have seen it.

And new features written by AI can cause bugs in existing code (I have seen this too). Loads of new code can be added to the codebase via a prompt, and the Jira ticket can be considered "complete". Let's dig deeper into the process for most engineers:

  1. The code is generated by AI to solve a problem or add a feature.
  2. A Pull Request is created.
  3. The code is checked by someone on the team to see if it completes the set of requirements.
  4. The code is merged, and the dreaded Jira ticket is considered "complete".

Does this describe a good chunk of your process? Perfect!

These are the parts I have seen overlooked:

Is the code written by the AI duplicated in the codebase? In my experience, it often is, which takes away from DRY (don't repeat yourself) practices.
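As a rough sketch of what I mean (the component and helper names here are invented for illustration), the generated code often re-derives the same logic in every place it is needed:

```tsx
// Hypothetical sketch: the same price-formatting logic generated twice,
// once per component, instead of being shared.
function CartItem({ priceCents }: { priceCents: number }) {
  const label = `$${(priceCents / 100).toFixed(2)}`; // duplicated logic
  return <span>{label}</span>;
}

function OrderSummary({ totalCents }: { totalCents: number }) {
  const label = `$${(totalCents / 100).toFixed(2)}`; // duplicated again
  return <strong>{label}</strong>;
}

// The DRY version extracts the rule once and reuses it everywhere.
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
```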

Is the code written by the AI following best practices? I worked in a codebase that had an unreal amount of useMemo and useCallback hooks. Per the React docs, this is an anti-pattern: your code should run fine without them. Because the surrounding code leaned on these hooks, the AI would reproduce the anti-pattern everywhere. Whether this matters varies from project to project.
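Here is a minimal sketch of the pattern (names invented); the memoized version adds indirection without buying anything, since the calculation is cheap:

```tsx
import { useMemo } from "react";

// What I kept seeing: useMemo wrapped around a trivial calculation.
function Subtotal({ items }: { items: { price: number }[] }) {
  const subtotal = useMemo(
    () => items.reduce((sum, item) => sum + item.price, 0),
    [items]
  );
  return <span>{subtotal}</span>;
}

// What the React docs suggest as the default: just compute it.
function SubtotalPlain({ items }: { items: { price: number }[] }) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return <span>{subtotal}</span>;
}
```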

Can the code written by the AI be tested? Again, I have worked on teams with and without proper test coverage, and if the code is not testable, what are we doing here?
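One habit that helps, sketched here with invented names: pull the business rule out of the I/O so it can be unit tested without mocking anything.

```ts
// Hypothetical sketch. Before: the rule is tangled with fetch, so a
// unit test has to mock the network just to check the math.
async function getDiscountedTotal(orderId: string): Promise<number> {
  const res = await fetch(`/api/orders/${orderId}`);
  const order: { total: number } = await res.json();
  return order.total >= 100 ? order.total * 0.9 : order.total;
}

// After: the rule is a pure function, trivially testable on its own.
export function applyDiscount(total: number): number {
  return total >= 100 ? total * 0.9 : total;
}
```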

Can the code be explained by the person committing the code? I'll get more into this in a second. But suffice it to say, the person committing the code should be able to explain what the code does and how it solves the problem it is supposed to solve.

Is the code committed flexible, or brittle? Just this week, I fixed a bug where the code wasn't using optional chaining, so the page would break when it tried to render before the data was returned. Just adding a few question marks made everything great again.
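A minimal sketch of that kind of bug (the data shapes here are invented): the first render happens before the fetch resolves, so reaching into the data directly throws.

```tsx
// Brittle: throws on the first render, while user is still undefined.
//   <span>{user.profile.displayName}</span>

// Flexible: a few question marks and a fallback handle the gap.
function Greeting({ user }: { user?: { profile?: { displayName?: string } } }) {
  return <span>{user?.profile?.displayName ?? "Guest"}</span>;
}
```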

At what point in the product lifecycle are you using AI? New projects can use AI in different ways than legacy projects. New projects are laying out the foundations, and it is exciting: there is no context, and everything seems like you are "absolutely right". For legacy projects, other microservices and dependencies may play a bigger role, and pre-existing patterns must be honored. The way we use AI changes between these two.

How does this generated code interact with the other microservices our project interacts with? This is a larger, systems-design concern that needs to be addressed anyway, but changes to a codebase can amplify it. For example, if the code you are working on consumes data from an endpoint, and the payload for that endpoint changes, how will your code be affected? Most importantly, is your LLM even aware of that other codebase? In my experience, it was not.
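One way to contain that risk, sketched with invented names: validate the upstream payload at the boundary instead of trusting that the other service never changes shape.

```ts
// Hypothetical sketch of a runtime check at the service boundary.
interface OrderPayload {
  id: string;
  total: number;
}

function isOrderPayload(data: unknown): data is OrderPayload {
  return (
    typeof data === "object" &&
    data !== null &&
    typeof (data as OrderPayload).id === "string" &&
    typeof (data as OrderPayload).total === "number"
  );
}

async function fetchOrder(id: string): Promise<OrderPayload> {
  const res = await fetch(`/api/orders/${id}`);
  const data: unknown = await res.json();
  if (!isOrderPayload(data)) {
    // Fail loudly here, not deep inside the render tree.
    throw new Error("Unexpected order payload shape");
  }
  return data;
}
```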

What is the best way to use AI right now?

A solid "it depends." You can insert your eye roll here, but whether you are using AI or not, the same rules should apply:

  1. Apply the current best practices for the project you are working on. AI should use the best React hooks for the job, and not overuse useEffect or useState. And if an outside tool or library is being used for state management or data fetching, AI should leverage the best practices for that tool as well. It is up to the engineers themselves to know the current recommended practices, though; remember, the LLM was trained on OLDER CODEBASES, and the newest libraries are changing.
  2. Communicate with the team. Let everyone know what works. Let everyone know what doesn't work. This can happen in lunch and learns, during code reviews, etc. There are plenty of ways to discuss the tradeoffs of any code added to the codebase; I often look forward to these conversations as a chance to learn something new.
  3. Try to apply these practices BEFORE the pull request is created. A simple checklist might be sufficient to keep the process running efficiently and to ensure everyone on the team is aligned on the general goals.
  4. Study the best prompts to use for your LLM, a practice often called "prompt engineering." The LLM is only going to know and do what it is told, so be sure to use the right words and phrases to get a successful outcome. Some things to consider: do we use the current code patterns from the codebase, or different patterns for things like data fetching? Does the code need to stay DRY (can other code be reused)? Are there any code design patterns to be aware of? What architecture does this project use? How will the code handle errors? Will the generated code be legible and easy to reason about?
  5. Also, tests. If the code isn't easy to reason about, the developer will question whether changes will have unintended side effects. Tests should allow a developer to make changes with confidence, knowing that unintended changes will be caught before they can cause any trouble, hopefully on the developer's machine before the changes are committed. Tests are also a good point of accountability for changes created by AI. And if writing tests isn't your thing, no problem! This is the tedious work that AI is really, really good at, and a minimal example follows below.
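To make that concrete, here is a minimal sketch using Vitest (Jest syntax is the same), testing the hypothetical applyDiscount rule from earlier:

```ts
import { describe, expect, it } from "vitest";
import { applyDiscount } from "./pricing"; // hypothetical module path

describe("applyDiscount", () => {
  it("discounts orders of $100 or more by 10%", () => {
    expect(applyDiscount(100)).toBe(90);
  });

  it("leaves smaller orders unchanged", () => {
    expect(applyDiscount(99)).toBe(99);
  });
});
```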