The FC/FC Chasm: Why Teaching Programming Still Matters
Background
In Fall 2013, I enrolled in Comp 11, my college’s intro programming course. I had intended to study political science, but “computer” quickly replaced “political” and I’ve been coding ever since. After that first semester, I spent considerable time as a TA for Comp 11 and other courses. By the time I left college, I had spent hundreds of hours working with novice programmers. Being a TA was the best job I’ve ever had.
Fast-forward through a few years of software engineering: from 2021 to 2023, I taught computer science to high school students at a public school outside Boston. ChatGPT came out in the middle of my (short) tenure, and I was blown away. I remember logging on at my desk inside my classroom; I had AP Computer Science A (the Java one) next period, and even GPT-3 could easily solve that day’s exercises. ChatGPT wasn’t mainstream among students until after I left the classroom, but on that day in December 2022, I knew this was a big deal.
I loved the process of learning to code, and I loved sharing that process with others. And now it was all under threat. This essay is a distillation of my thinking on the topic, and it’s intentionally long to combat the quippy LinkedIn-style snippets that I’ve been sending into the void for the past three years.
What is Programming?
Programming is the completion of the following process:
1. The programmer is given (or identifies) a goal and is given (or identifies) a specification for a program that will achieve that goal
2. The programmer translates that specification into source code
3. The computer executes the source code (“runs the program”) with a set of inputs provided by the programmer
4. The creator of the specification interprets the output of the program as either meeting or failing to meet the original goal
This is pretty abstract. Here’s an example to make it more concrete:
1. The programmer is given the goal of building a webpage with a box labeled “Temperature (F)” that converts the entered temperature into Celsius.
2. The programmer writes HTML code for an input box, a button labeled “Convert”, and an output box labeled “Celsius”. They write some code that does the math of the conversion and places the result in the output box.
3. The programmer loads the page, types a temperature into the box, and clicks Convert. A number pops up.
4. The programmer manually checks that the answer is right, possibly repeating Steps 3 and 4 a few times. If something goes wrong, they “debug” (fix) the code they wrote in Step 2 and then repeat Steps 3 and 4.
This is programming.
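For a sense of scale, the heart of the Step 2 code for this task is a single line of arithmetic. A minimal sketch in JavaScript (the function name and the sample inputs are my own, chosen for illustration):

```javascript
// Step 2 for the FC task: translate the specification into code.
// In the full webpage, this function would be wired to the "Convert"
// button, reading from the input box and writing to the output box.
function fahrenheitToCelsius(f) {
  return (f - 32) * 5 / 9;
}

// Steps 3 and 4: run the program with known inputs and check the output.
console.log(fahrenheitToCelsius(32));  // freezing point: 0
console.log(fahrenheitToCelsius(212)); // boiling point: 100
```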
How AI Changes the Process
Even during my first conversation with that early version of ChatGPT, it could easily complete Step 2 when prompted with a description of the Fahrenheit => Celsius (FC) task. Traditional programming instruction focuses almost entirely on the mechanics of Step 2. Students learn syntax and programming constructs as they complete arbitrary tasks, like FC, of incrementally increasing complexity. This is roughly modeled after math instruction. In an algebra class, students solve linear equations, then quadratics, then multivariate equations, etc.
Prior to the advent of ChatGPT, Step 2 was considered a task that required some skill, and in the “Just Learn to Code” era, students were told that simply being fluent in completing Step 2 was their ticket to a six-figure job. If this is your motivation for teaching or learning computer science, AI is certainly a big, big, big problem.
Step 2 Was Never That Hard
Step 2, translating specifications for programs into source code, has been getting easier since the very beginning of computer programming. Advances in programming languages and IDEs have turned a very difficult process into one that can be learned in a reasonably short amount of time. Completing the FC task with punch cards or assembly language takes quite a bit more work than writing a few lines of HTML and JavaScript in VS Code.
It’s important to remember that no-code tools also promised to enable the completion of Step 2 without any special training. The FC task should take someone familiar with Excel about 90 seconds to complete, and there are hundreds of other no-code tools that let you finish it with little to no formal programming expertise.
This is all to say, for a large set of problems, Step 2 was already a solved problem by 2022. While sending ChatGPT one message and receiving working code to complete the task is easier, and wildly impressive the first time you see it, simple programs have been simple to create for a while.
When Step 2 Isn’t That Easy
If Step 2 is simple for simple programs, what about complex ones? Making a website for your online tutoring business can be done on Squarespace without any knowledge of programming, and the site will look great. Alternatively, how easy is it to complete Step 2 for Google Sheets, Gmail, or Final Cut? It turns out, not that easy.
It can be hard to grasp the difference in complexity between something ChatGPT can “one-shot” and the software we use every day. Step 2 for Final Cut is difficult because of the complexity of the desired solution specified in Step 1. The description of every feature could fill hundreds of pages, and it has been edited and revised for years as the product has matured. With each new feature comes the interaction of that feature with every other feature; in mathematics, this is called a “combinatorial explosion”. Complex software just isn’t easy to create, even with modern tools.
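To put a rough number on that explosion: even counting only pairwise interactions, n features can interact in n(n − 1)/2 ways, so the count grows quadratically with the feature list. A quick sketch (the feature counts are made up for illustration, not Final Cut’s actual numbers):

```javascript
// Number of pairwise interactions among n features: "n choose 2".
function pairwiseInteractions(n) {
  return n * (n - 1) / 2;
}

console.log(pairwiseInteractions(3));   // a toy app: 3 interactions to test
console.log(pairwiseInteractions(500)); // a mature product: 124750
```

And that is only pairs; three-way and deeper interactions grow faster still.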
This is all to say, there is a big gap between an AI that can finish the FC task when FC stands for Fahrenheit => Celsius and one that can complete the FC task when FC stands for “Final Cut”. The chasm is so large that it’s almost hard to describe. Let’s call it the FC/FC Chasm.
Crossing the FC/FC Chasm Today
The AI enthusiast’s answer to the FC/FC Chasm today is the “copilot”. They admit that programming is too hard for today’s models to fully create software of the complexity of Final Cut on their own. But, models can make programmers more efficient in their work. This is something that I wholeheartedly agree with and embrace to a degree that probably isn’t clear based on the AI skepticism I project online.
Having a ChatGPT window open while I work has saved me countless hours and has allowed me to build software that would have been well outside my reach in 2021. I happily pay for ChatGPT Plus and would probably pay five times as much if OpenAI made me (based on their finances, they probably should).
There are a million variations of the engineer + AI copilot setup, and they all work reasonably well. This is an amazing accomplishment, and AI companies should be proud.
The “Inevitable” March of AI Progress
How do AI enthusiasts believe their tools will cross the FC/FC Chasm themselves, without humans in the loop? The models will get better.
Should we believe them? Well, maybe. The models have gotten better. GPT-4 was remarkably more capable than GPT-3, and today’s reasoning models (o1, o3, etc.) are more capable still. While the gap between GPT-3 and o3 is large, the FC/FC Chasm is wider. AI enthusiasts know this and have never promised that their current systems can cross the gap today. But they tell us that the gap will be closed. They talk about AGI, Artificial General Intelligence, and when AGI isn’t good enough, they talk about ASI, Artificial Superintelligence.
If ASI comes to fruition, the FC/FC Chasm will be closed. Easily. More importantly, we’ll have unbelievable advancements in all areas of scientific research that will solve most of humanity’s most pressing issues (if it doesn’t kill us, which we’ll skip for now). But, ASI doesn’t exist today, and we still have code to write.
It’s easy to confuse dreams of the future with the reality of today, and AI companies are happy to encourage this confusion. Replit (an AI coding company) has a marketing page that reads: “Turn your ideas into apps. What will you create? The possibilities are endless.” The possibilities for programs are endless! Conveniently missing from that tagline is that endless possibility is invariably tied to endless complexity, complexity that can’t be automated away today, tomorrow, or at any point besides a perpetual “coming soon.”
Well, What Should We Teach?
Despite my near-term skepticism, the FC/FC Chasm may be closed someday, or copilot systems may close an ever-increasing share of it as models and technology progress. If the task of programming will be automated, is it worth learning?
First, let’s consider the list of tasks students learn to complete in school that are automated by today’s computer systems:
- Reading text aloud
- Transcribing spoken words to written text
- Defining words
- Summarizing books
- Addition, multiplication, algebra, calculus
- Translating between human languages
- Recalling historical facts
- Painting landscapes
- Playing the national anthem (recordings are available on Spotify)
- Etc.
I don’t think there’s a huge issue with teaching students any of the above tasks. Students need to learn something, and if we only focus on tasks that can’t be automated, the school day will empty out quickly. No one argues in good faith that we should stop teaching students to read and write (they need to read and write to talk to the models, after all), but programming is easily picked on.
Yes, Yes, Yes, Please, Please, Please, Let’s Still Teach Programming
In 2023, as I left the classroom, I wrote:
Programming is about harnessing the computer to express yourself creatively, systematizing the contents of your imagination until they come alive in the real world.
Regardless of what can or can’t be automated, that seems pretty neat.
Let’s remember our definition of programming from above:
1. The programmer is given (or identifies) a goal and is given (or identifies) a specification for a program that will achieve that goal
2. The programmer translates that specification into source code
3. The computer executes the source code (“runs the program”) with a set of inputs provided by the programmer
4. The creator of the specification interprets the output of the program as either meeting or failing to meet the original goal
These steps teach us about creativity, resolving ambiguity, organizing abstract ideas, attention to detail, and critical thinking.
Most importantly, this entire process is cyclical. And at times, frustrating. It teaches a student the process of building something iteratively, of getting stuck, of getting un-stuck, of collaborating, of asking for help, and of creating something they’re proud of.
Programming requires the creativity of an artist, the precision of a mathematician, and the persistence of an athlete. What more could you ask for in a single activity for a student?
Should We Change How We Teach Programming?
Yes. We should drastically increase our emphasis on Step 1, but we should have done that regardless of the existence of AI. In today’s programming classes, during Step 1, most students are handed a specification for a program, not asked to create one.
But the creative version of Step 1 is where the magic happens. Step 1 is where a student can dream of something that doesn’t exist and become excited to build it. Doing the FC task (the easy one) is really, really boring. And it’s what our curriculum has students build, since contrived programs are easier to turn into code during Step 2, the main focus of today’s curriculum.
We know that Step 2 will be easier for students in the future, so let’s make Step 1 harder. Agency in Step 1 leads to engagement in Step 2, and fuels persistence in Steps 3 and 4. If/when ASI does come and the FC/FC Chasm is crossed, the creative version of Step 1 will be the only step we’ll need.
Conclusion
The world of programming is changing, quickly. It’s easier than ever to build software. With these changes, we need to adjust how we teach programming, not stop altogether. I’m excited about the future today’s students will inhabit.
I’m even more excited to see what they will build.