The AI Traffic Problem: Designing for Depth in the Age of Speed
Why Removing Guardrails Might Be Making AI Problems Worse—And What to Do Instead
Imagine approaching an intersection with no traffic lights, no stop signs, no painted lines—just an open space where cars, bicycles, and pedestrians somehow need to negotiate their way through. Your instinct says this should be dangerous. Yet in several Dutch towns where traffic engineer Hans Monderman implemented exactly this design in the 1980s, accident rates fell dramatically. His “naked streets” experiments revealed something profound about human behavior: when deprived of the illusion of control that signs and signals provide, people naturally slow down. They make eye contact. They navigate with genuine attentiveness rather than blind rule-following.
Monderman’s insight was that too much apparent safety and too many explicit instructions could breed complacency and, ultimately, make systems less safe by preventing the slowing down that genuine attentiveness requires.
I’ve been thinking a lot about Monderman’s counterintuitive insight lately, not in the context of traffic grids but of educational ones. Specifically, how we navigate the sudden, chaotic arrival of generative AI. We have this powerful new tool, and much of the initial response in education has felt like a push toward "naked streets" – removing traditional guardrails and hoping students will naturally develop caution and responsibility when confronted with this new, less-regulated power.
But does the Monderman principle apply here? If removing traffic signals makes drivers slow down and become safer, does removing traditional academic guardrails around AI make students better learners by encouraging them to engage more deeply?
The early evidence, I'd argue, suggests maybe not. And the reasons why tell us something crucial about the fundamental differences between a traffic system and an educational one. Unlike the immediate, visceral feedback of a potential collision, the consequences of misusing AI in an assignment are often delayed, abstract, or simply non-existent in the student's immediate reality. The shared goal in traffic (get home safely) is clear and collective. The perceived goal in many educational settings (get a good grade, complete the task quickly) can be individualistic and focused purely on output, not process. Our current educational system, particularly in places like the U.S., often reinforces individual performance, operates under immense time pressure, and sometimes incentivizes credentialing over deep learning.
In this system, confronted with the power of generative AI, the "naked street" approach often doesn't yield caution and negotiation. It yields optimization. Speed. Acceleration. Students aren't slowing down to make eye contact; they're flooring it through the intersection, focused solely on reaching the destination (submission) as quickly as possible – an optimization for speed and output within a system that rewards exactly that.
This isn't a moral failing of students; it's a systems problem. If the system rewards speed and output above all else, a tool that provides speed and output will be maximally exploited, regardless of the designer's hopes for cautious engagement or genuine slowing down for learning.
Redesigning the Educational Grid for Depth
If removing structure backfires in the age of AI, how, then, do we build the system we need? What mechanisms for change are available to encourage students to slow down and to foster deep learning and responsibility? How do we build systems that value the journey, not just the velocity?
We can begin with design itself – not just adding more rules, but redesigning the learning pathways to build in a slower, more deliberate pace. Think of it like applying sophisticated traffic calming techniques, calibrated for the educational environment: add chicanes, roundabouts, narrower lanes – elements that intrinsically slow things down and require more deliberate navigation and cognitive effort.
This means designing assignments that can't be easily short-circuited by simply asking an AI for the final answer. Assignments that emphasize the messy, iterative process of thinking and writing, with built-in checkpoints for reflection and revision. Iterative assignments function like those traffic roundabouts: they force you to slow down, pay attention to others (or in this case, your own evolving ideas), and navigate multiple options before exiting. They add necessary structure that encourages a slower, more thoughtful pace without bringing everything to a stop.
My own experience and the feedback from students using iterative approaches bear this out. One student reflected after interacting with an AI Writing Tutor: "It asked me questions that made me dive deeper into my subject than I had thought possible... so I had to think harder about the subject I wanted to learn about." The design of the task compelled deeper engagement by making speedy shortcuts less viable and rewarding the process of thinking itself.
And critically, AI itself can be redesigned and integrated into this new infrastructure as a positive force for calibrated speed, not just maximal velocity. Not as a bypass, but as part of the pathway that encourages thoughtful steps. Tools like Socratic bots or Custom GPTs designed to guide students through analytical steps – like the "Unpack Your Golden Line" app I've worked on – aren't about generating answers. They are designed to ask questions. They shape the flow of thought, guiding students from initial reaction to deeper analysis, acting as a structured pace car rather than an accelerator. As one student noted about such a tool: "When going in, I expected it to be more explanatory, but it actively encouraged my participation. By doing this assignment, I learned how AI can be a helpful tool for learning."
But even these tools, designed to be speed bumps and guides rather than acceleration lanes, require careful scaffolding by teachers. Without clear context and integration into a larger learning architecture, they risk becoming just another source of confusion or another subtle shortcut, ultimately failing to help students see the value in slowing down.
The Deeper System Problem: Motivation and the Value of Slowing Down
Yet, design alone, while necessary, isn't sufficient. This brings us to a more fundamental, and perhaps more challenging, systemic issue: meaning. Why would a student choose to slow down if the system doesn't make the value of that deliberate pace clear?
As José Antonio Bowen, author of Teaching with AI: A Practical Guide to a New Era of Human Learning, recently put it, we have a core choice:
"We either need to redesign so that students have to do more than either they or AI can do alone, OR … explain why doing the work yourself is important."
If the educational system, as perceived by students, presents assignments as arbitrary hoops to jump through – if the value is solely in the submission or the grade – then using AI to clear the hoop quickly is not just logical, it's optimal resource allocation for a busy student. AI becomes a rational response to a system that feels meaningless.
But if we can cultivate the sense that assignments are opportunities to build something intrinsically valuable – a voice, a perspective, a skill – then the process itself gains meaning. The "long way," the act of slowing down to engage deeply, becomes the point.
Cultivating this kind of intrinsic motivation within a system often driven by external pressures (grades, competition) is a deep challenge. Students inhabit an AI-saturated world that constantly offers the path of least resistance, the option for maximum speed. If the educational system doesn't offer a compelling reason to take the path of more resistance – if there are no clear boundaries, consequences, or, at minimum, a serious conversation about values – why wouldn't they choose the speediest route?
Building that intrinsic motivation requires more than just telling students the work is important; it requires designing the learning experience itself to feel important and relevant. This might involve using persuasive analogies (like the ones we're exploring here) to frame the value of the process, exploring alternative grading methods that emphasize learning and growth over punitive points, incorporating culturally responsive teaching practices that connect the material to students' lives and identities, or designing authentic assessments that mirror real-world tasks where the process and skills genuinely matter. These strategies help shift the student's focus from merely optimizing for speed and output to genuinely investing in the journey of learning, making the act of slowing down feel inherently valuable.

One student's simple comment about receiving supportive feedback from an AI guide ("Nice job!" "Great thinking!") hints at the psychological dynamics at play: "It was nice reading those messages, and made me want to keep reading the material and answering its questions." Even small design choices can shape engagement and perceived value within the system, subtly encouraging continued effort and a willingness to stay with the material and, yes, to slow down.
But as we're reminded, "[i]n most human endeavors, some accountability structures are important even when we design for intrinsic motivation." Which brings us back to rules, but viewed through a different lens – one focused on fostering conditions where slowing down is possible and valued.
Building a Mosaic: Beyond Binary Approaches
Here’s the core problem: We talk about wanting students to slow down and engage deeply, but are we designing systems that make that possible? Right now, too often, we aren't.
Look at the data: A recent Inside Higher Ed survey paints a stark picture of systemic failure. Three in ten students report being unclear on the rules for using generative AI in their coursework. Is it any wonder? The survey found only 31 percent of professors at four-year publics and a mere 24 percent at two-year publics actually included an AI policy in the syllabus. That’s not a minor communication breakdown; it's a fundamental failure of clarity at the most basic level of a system students must navigate daily. As academic integrity expert David Rettinger notes, "People don't always know where the boundaries are."
When expectations are this fuzzy—when the basic signals aren't even in the syllabus—students are left guessing. This ambiguity, coupled with the speed and unreliability of a tool like AI, creates a high-pressure environment. It's not just about academic rules; it's a deeply flawed system design that often devolves into an unproductive "cops and robbers" dynamic, eroding trust and making rushing feel like the safest bet. This system isn't optimizing for learning; it's optimizing for compliance in a fog of uncertainty.
So, how do we evolve from gatekeeping submissions to designing conditions for meaningful engagement?
The emerging framework that makes the most sense to me right now isn't a single, rigid rule, but a "mosaic approach." Think of it like safety engineering's "Swiss Cheese model": no single layer is perfect, but multiple, imperfect layers of defense and guidance, when stacked, create resilience. This approach explicitly rejects the adversarial game. As I understand it, the mosaic approach builds a varied landscape of signals that make slowing down feel supported. Central to this recalibration is transparency, often facilitated by a simple AI disclosure form. This isn't about catching students; it's a mechanism for honest reporting without fear. As Ostro puts it, if students fully share all the ways they used AI on a writing project, "they will never be honor coded for this." This trust-first signal rebuilds relationships and creates teachable moments about ethical AI use.

Beyond transparency, the "mosaic" includes varied pedagogical terrain designed for different paces of engagement. Some assignments function as carefully calibrated "naked streets" for creative AI exploration, allowing students to experiment at boundaries. Others incorporate explicit "traffic calming" elements—requiring visibility into the drafting process through document history or judicious use of detection tools. Combined with direct AI literacy instruction that cultivates healthy skepticism of AI outputs, this multifaceted approach offers a more robust pathway than any single rule or detection method could provide.
When missteps inevitably occur within this complex system, the response isn't just punishment, but education. As Dr. Tricia Bertram Gallant and David Rettinger argue, violations become opportunities to calibrate the system and guide the user. Keeping our "educator hat on," as Bertram Gallant puts it, allows us to use these moments for reflection and learning.
Ultimately, this is about building trust, not just enforcing rules. And trust thrives not in a vacuum, but when expectations are visible, meaningful, and clearly communicated within a system designed for mutual understanding and educational purpose.
We are still in the early days of calibrating education for the age of AI. There are no easy answers. But the lesson from other complex systems is clear: responsible behavior and deep learning require intentional design. We need to design friction where depth is necessary. We need to authenticate not just authorship, but meaningful engagement. And we need accountability structures grounded in purpose, not surveillance, to create the safety students need to feel slowing down is worthwhile.
Which brings us back to fundamental questions about the system we are building:
Where, precisely, must we design friction into the learning process to encourage necessary depth?
How do we authenticate genuine engagement—the product of deliberate effort?
And how can we establish clear expectations grounded in educational purpose rather than enforcement?
This isn't about policing the edges; it's about redesigning the educational system's core to make deep learning possible in the age of the algorithm.