Beyond the Digital Sandbox: Why CS Needs to Get Physical

The recent humanoid-robot martial-arts performance featuring Unitree robots at China Media Group’s 2026 Spring Festival Gala was simply impressive. Of course, the Internet did what it always does and turned a 4-minute cultural spectacle into a global robotics leaderboard with hot takes, cold takes, and the inevitable comparisons to Boston Dynamics’ Atlas. But I don’t want to write a “who’s winning” post. I want to write an educator’s post.

The real question this performance raises for me isn’t whether one company or one country is ahead. It’s this: why does embodied computing look like the future on a prime-time stage, while so much of our computer science curriculum still treats hardware, control, and robotics as side quests? I’m not a roboticist, and I don’t have access to Unitree’s internal design or control stack, so I can’t (and won’t) pretend to deliver a deep technical review. But we don’t need privileged access to state the obvious: the coordination, timing, and recovery behaviors on display point to serious systems engineering, spanning sensing, control, software, and integration under tight constraints. In my opinion, we should treat moments like this the way previous generations treated early computing demos: not as a reason to chest-thump or sneer, but as a reminder to build broader, more practical pathways. This is especially true at regional public universities, where “real hardware” too often remains scarce, elective, or extracurricular.

Computer Science: Returning to Its Roots

When I taught Computer Systems, one of my favorite examples for showing how the concept of binary control evolved was the Jacquard loom: a chain of punched cards whose holes encode which warp threads should be lifted to weave intricate patterns into the fabric. In a very literal sense, it was a programmable machine. It stored instructions in physical form and executed them reliably, repeatably, and at scale.
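The card chain maps naturally onto bits, which is why the loom works so well as a classroom example. A toy sketch (not a historical model; the card width and the pattern below are invented for illustration):

```python
# Toy model of a Jacquard card chain: each "card" is an integer bitmask
# over 8 warp threads; a set bit lifts that thread for one shuttle pass.
CARD_WIDTH = 8

def weave(cards):
    """Render one text row per card: '#' = thread lifted, '.' = thread down."""
    rows = []
    for card in cards:
        row = "".join(
            "#" if (card >> t) & 1 else "."
            for t in reversed(range(CARD_WIDTH))
        )
        rows.append(row)
    return rows

# A small twill-like pattern, stored the way the machine stores it: as pure data.
pattern = [0b10101010, 0b01010101, 0b11110000, 0b00001111]
for row in weave(pattern):
    print(row)
```

The point for students is that the "program" is nothing but data feeding a fixed mechanism, the same separation of instructions from machinery that modern stored-program computers inherit.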

For much of modern CS, we’ve been able to stay in a comfortable execution pattern: prepare digital inputs, write programs, and let computers produce digital outputs. Even when the world is involved, the messy physical side of speech, images, and video often gets abstracted away behind keyboards, microphones, cameras, and layers of device drivers. It has been possible, for a long time, to live almost entirely on the software side of the house.

But the growing visibility and commercialization of robots like Boston Dynamics’ platforms and Unitree’s humanoids signal a shift back toward computing that acts. Unitree lists the G1 at about $13.5k (as of Feb 18, 2026), which is not cheap, but it is a very different accessibility story than humanoids used to be. Meanwhile, Boston Dynamics has publicly positioned Atlas as a fully electric humanoid aimed at real industrial tasks, and the company has discussed 2026 deployments, with broader scaling timelines still unfolding.

If computing is heading toward systems that sense, decide, and move in the physical world, then the future computing professional can’t be defined only by life in the digital sandbox. That future implies comfort with circuits and electronics, microcontrollers and embedded programming, sensors and actuators, and the unglamorous discipline of debugging real constraints. And if LLMs make some routine software tasks cheaper to produce, that only strengthens the case for expanding CS education toward the parts of computing that still demand deep systems understanding: embedded systems, edge devices, and embodied AI.

Do We Have the Money?

Fully equipped Electrical Engineering (EE) and Computer Engineering (CE) departments at R1 universities are expensive. I still remember working on CPU architecture design using Mentor Graphics and being told that each license cost the university upward of 6,000 dollars. We had twenty machines in the lab with Mentor Graphics! That is certainly a significant number, and it was only justified by the fact that my alma mater had a very strong chip design program and an accompanying spin-off company. A comprehensive EE department at an R1, with subfields in power engineering, optics engineering, and so on, will also need to spend a significant amount on those respective labs. This certainly is not feasible at many regional public universities.

However, from a practical point of view, we don’t have to spend the same amount of money. If we don’t dive deep into the silicon layer and instead hover near the software/hardware border, it is possible to prepare labs at a reasonable cost. For example, a minimum viable lab with 24 UNO basic starter kits and multimeters, six 4-channel oscilloscopes, and twelve portable soldering irons can support 24 students and costs approximately 3,500 dollars. If we want to add some robotic components, a simple mBot kit typically costs between 70 and 100 dollars, the price of a standard textbook.
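The back-of-the-envelope total can be checked in a few lines. The unit prices below are my rough assumptions, not quotes; only the quantities and the ~$3,500 target come from the estimate above:

```python
# Rough lab budget check. Unit prices are ballpark assumptions, not quotes.
items = {
    "UNO starter kit + multimeter": (24, 60),   # one per seat, ~$60/seat assumed
    "4-channel oscilloscope":       (6, 250),   # shared, ~$250 each assumed
    "portable soldering iron":      (12, 40),   # shared, ~$40 each assumed
}

total = sum(qty * price for qty, price in items.values())
print(f"Estimated total: ${total}")  # lands in the ~$3,500 neighborhood
```

Even if individual prices drift, the structure of the estimate holds: per-seat costs dominate, and the shared instruments amortize across every course that uses the lab.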

In other words, R1 chip-design labs are expensive, but embedded literacy labs can be affordable.

Do We Have the Personnel?

OK, this is a difficult issue. At public regional universities, we typically carry a 4/4 load, or, if we are supported, 4/3. We don’t have TAs or GAs. Many of us barely touched hardware from our Bachelor’s through our PhD. I have a Computer Engineering degree, and I distinctly remember learning Digital Design and VHDL, but none of that, even if I can still remember it, is going to help with modern microcontroller and embedded-system knowledge. After all, we all have academic freedom and tenure; no one can tell us what to teach or how to teach it.

Let’s look at this through a pragmatic lens, then. Most, if not all, CS departments are experiencing a reduction in enrollment. What I wrote above regarding the expansion of scope is not something new. A CRA CERP Pulse Survey showed that 62% of the responding academic units saw a decline in enrollment. The areas that saw an increase actually include Computer Engineering. It looks like pure CS is shrinking, and, like it or not, that will impact teaching-oriented CS faculty. Learning new skills is not about losing academic freedom. It’s about keeping our students employable and our programs relevant. After all, if we preach continuous learning to our students, we must model it ourselves.

In conclusion, yes, we might have the personnel.

So What Do We Do?

EE/CE literacy is becoming the new Math/CS. Not because everyone needs to become an electrical engineer, but because everyone who calls themselves a computing professional will increasingly work at the boundary where software meets physics. Building 3D casings for microcontrollers, compiling code and uploading binaries to hardware, designing robotic controls, and the disciplined habit of measuring before guessing are no longer niche skills. They are the new baseline for writing software that touches the real world.

The good news is that the equipment barrier has collapsed: a workable embedded lab can be built from kits that cost about as much as a textbook and scaled with shared instruments over time. The harder part is faculty time and confidence. However, if enrollments are softening and the world is shifting toward embodied computing, then upskilling isn’t a betrayal of academic freedom; it’s a renewal of it.

The path forward doesn’t require a revolution. It requires a series of small, deliberate steps. We can weave small modules into existing courses: a small lab on writing C programs for Arduino inside Computer Systems; a ROS unit for Operating Systems, leveraging Webots or Gazebo if we don’t have the hardware ready; perhaps turning the robotics activities and workshops run by student clubs into an embedded design course. In one or two semesters, we may have enough momentum for a concentration, and eventually fully equipped labs and facilities for a degree program, with potential ABET accreditation, in several years. But we have to take that first step and just learn, research, and teach this new content.
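For a simulator-first ROS unit, the heart of the exercise is a feedback loop. A minimal proportional controller, sketched in plain Python with no ROS dependency so it runs anywhere; in class, this same logic would sit inside an rclpy node reading a sensor topic and publishing velocity commands to a Webots or Gazebo robot (the function names and gains here are illustrative, not from any particular lab):

```python
# Minimal proportional controller: drive a simulated 1-D "robot" toward a target.
def p_controller(position, target, kp=0.5):
    """Return a velocity command proportional to the remaining error."""
    return kp * (target - position)

def simulate(start, target, steps=50, dt=0.1):
    """Integrate the closed loop for a fixed number of timesteps."""
    pos = start
    for _ in range(steps):
        vel = p_controller(pos, target)
        pos += vel * dt  # advance one timestep
    return pos

final = simulate(start=0.0, target=1.0)
print(f"final position: {final:.3f}")  # converges toward the target of 1.0
```

Students can then ask the questions that make control real: what happens when `kp` is too large, when the timestep grows, or when sensor noise is added? Those are exactly the failure modes they will later see on hardware.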

Jacquard’s loom reminds us that computing didn’t begin as “apps”. It began as control. The robots on stage are simply the modern version of that idea, moving at full speed. Our job is to give our students the skills to build it, not just watch it.
