Redsauce's software and cybersecurity blog

Pair programming III: Not everything bright is a gem in the light

Posted by Iván Fernández García


Despite all the aforementioned advantages of using AI as a work companion, when developers rely too heavily on an AI assistant, their critical thinking and problem-solving skills will almost certainly diminish. AI can suggest solutions or even write complete portions of code, but if the programmer does not carefully review what the assistant produces, they may accept suboptimal or faulty suggestions. This creates a dependency loop in which the assistant is trusted more and more while the programmer's own knowledge is increasingly sidelined.

This is reflected in research suggesting that continued use of AI assistants can hinder deep learning for developers, especially those who are less experienced. While useful for speeding up work, these tools may discourage active problem solving or collaborative debate.

Implications for Coworkers

In environments where pair programming is a common practice, introducing an AI assistant can affect relationships between teammates.

For example, if one developer starts relying on AI to solve problems, they may feel less motivated to engage with their partner. This reduces opportunities for constructive discussion and mutual learning, two key pillars of pair programming. Their teammate might feel sidelined or may need to take on extra responsibility by reviewing the code proposed by the AI. This can create tension within the team, as it might seem that one member is doing "less work" or not contributing equally to the task.

One of the benefits of working with another person is the ability to exchange ideas and find creative solutions to complex problems.

If a programmer begins to trust the AI more than their teammate, they may miss out on that brainstorming process that often leads to more innovative outcomes. While AI is excellent for repetitive or technically complex tasks, truly disruptive ideas usually emerge from human interaction.

The Bloody Context

The main issue is that AI is not aware of the context in which the code is being written. Let's imagine we're an AI. We're sleeping deeply when suddenly the alarm goes off and a user asks us a question about a piece of code, or about an algorithm they want to use to display something on screen.

We don't know who this person is, the intricacies of the project they're working on, the available libraries, the stress level... we just have some general memory data and must quickly write a piece of code. Let's say the user asks us to alphabetically sort a response received from the backend. Here's our proposal, which does exactly what was asked:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Simulate a backend response
	results := []string{"Zapato", "apple", "Mango", "banana", "cherry"}

	// Sort alphabetically
	sort.Strings(results)

	fmt.Println("Alphabetically sorted results:")
	for _, r := range results {
		fmt.Println(r)
	}
}

Now, when printing the list, "Zapato" appears before "apple"! How is that possible? Madness! Well, no. What’s missing here is context. For instance, based on the request, we don’t know who’s going to see this list. A human? A machine?

According to ASCII code—that gem of 1960s engineering, courtesy of a committee that believed 7 bits were more than enough because hey, who would ever need more letters than those in English and a few quirky symbols? Long live typewriters!—uppercase letters come before lowercase ones. That’s why any word starting with a capital letter will appear before one starting with a lowercase letter, if sorted by ASCII value.

Providing this kind of context to AI requires forethought. And often, it takes prior knowledge or experience about where things usually go wrong to catch it in time. And this is precisely what we're slowly losing if we continue down this path: the code we get is a rehash of older code, and old bugs come back again and again in the endless cycle of life... like TikTok's absurd trends.

A Delicate Balance

The key is to find a balance between using AI assistants and human collaboration. Programmers must recognize that AI is a tool meant to complement—not replace—this interaction. To avoid dependency, it is recommended to promote responsible use of AI, which means verifying its suggestions, discussing them with your programming partner, and continuing to work on improving your own technical and analytical skills.

In short, AI can be a great ally in pair programming, but it must be used carefully so as not to sacrifice the collaboration, learning, and creativity that typically arise from working together with our generally delightful fellow humans.

About us

You have reached the blog of Redsauce, a team of experts in QA and software development. Here we will talk about agile testing, automation, programming, cybersecurity… Welcome!