🍜 When I use "biang" to prove my humanity to AI

📺 Inspiration: Prove you're not a robot with one character
Bilibili · Video Source

In a video of a high school composition class, the teacher posed a question: "If you had to use one character to prove you are not a robot, what would you choose?" Students wrote down "Love" (爱), "Hate" (恨), "Dull" (钝), "Slow" (慢): answers carrying warmth and imperfection, like slices cut straight from young hearts. Three years later, as a student majoring in Intelligent Science, facing the same question in the gaps between code and algorithms, my first reaction was the "biang" of Biangbiang noodles: a character so complex that input methods struggle to produce it, its strokes like a maze.

Not because of its cultural metaphor, nor because it symbolizes a tradition not yet digitized, but simply because it is one of the Chinese characters with the most strokes. Choosing it felt less like a decision than a conditioned reflex: at the moment I most needed to prove my humanity, what I subconsciously reached for was complexity, that cold, computable attribute.

The clarity of that moment horrified me. It was as if three years of technical training had quietly reconstructed my cognition: while those younger students were still defining humanity with emotion, confusion, and existential imperfection, I was already answering the same question with "information entropy" and "processing difficulty." From "Dull" to "biang" was not growth but an unconscious technological migration.

We are all rainmakers, yet we are the first to get wet. As learners in the AI field, we are at the center of this storm of change: using AI to spark research inspiration, using Copilot to generate code frameworks, and even delegating general education assignments — those humanistic reflections that should have been breathing spaces outside of technology — to language models for polishing. The gears of efficiency spin at high speed, devouring not just time, but that inefficient, contemplative state of mind that allows for aimless wandering. The tool, in turn, shapes the user: when everything can be optimized, those unoptimizable things appear out of place — whether it's a walk just for the scenery, or a poem read three times to understand only half a sentence.

Then I noticed a classmate who, reading one book a week, had finished more than seventy books in 2025; one morning he rode his bike from Nansu by Taihu Lake to Wuxi for breakfast, then rode back. That low-efficiency richness made me realize: while technology, like differentiable rendering, pursues a world that approaches reality ever more closely, the parts that cannot be differentiated (the shift in wind pressure felt on the skin while riding, glasses fogged by the steam of noodles, the quiet sense of achievement after finishing a non-utilitarian task) are the last strongholds of humanity.

Yes, my relationship with AI is like 3D reconstruction by differentiable rendering: round after round of back-propagation and optimization brings the scene closer to the real world, and through repeated iterations of feedback, AI can approach my intent without limit, but it can never become the me that contains all my bodily memories, emotional fluctuations, and subconscious impulses. It can quickly generate a draft of this article, but it cannot reproduce the cold early-winter wind of Suzhou knocking on the study-room window as I wrote these words; it cannot simulate the anxiety of last night's final-exam review still lingering in my fingertips as I typed; nor can it recreate the sorrowful clarity that rose in me the moment I chose the character "biang", the sorrow of realizing "I am losing my sensitivity."
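The optimization loop the metaphor leans on can be sketched in a few lines. This is a toy analogue, not an actual differentiable renderer: a single parameter is nudged by gradients toward a target, getting arbitrarily close without ever holding anything the loss does not measure. The function name and the squared-error loss are illustrative choices, not anything from the essay.

```python
def optimize(target: float, steps: int = 100, lr: float = 0.1) -> float:
    """Gradient descent on the squared error (x - target)**2."""
    x = 0.0  # initial guess for the "scene parameter"
    for _ in range(steps):
        grad = 2 * (x - target)  # gradient of the loss at x
        x -= lr * grad           # step against the gradient, as in back-propagation
    return x

approx = optimize(3.0)
# approx converges toward 3.0; each step shrinks the remaining error
# by a constant factor, so it approaches the target but only asymptotically
```

Real differentiable rendering does the same thing at scale: scene parameters are updated by gradients of an image loss computed through the renderer, closing in on the reference views iteration by iteration.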

Watching that classmate eating noodles in the morning breeze of Wuxi, what I felt was not even longing, but a distant envy belonging to an ALS patient towards normal limbs — I understand that vivid beauty, but I seem to have lost the organ to touch it.

I am still at the center of this storm: still using AI to spark ideas, still using Copilot to fill in code, and still handing general education assignments to large models to pad out into decent-looking essays. I know this is a cession of the soul, but under the whip of efficiency and the pressure of life, I cannot stop. I am like a driver who clearly sees the cliff ahead yet cannot brake, the inertia too great, watching myself fall along with the torrent of the entire technological era.

Differentiable rendering pursues the infinite optimization of parameters to approach reality. What I fear is that our generation seems to be undergoing a reverse optimization: to adapt to this AI world we built with our own hands, we are actively pruning away those non-differentiable, non-smooth, low-efficiency noises of humanity.

Perhaps, the answer to "proving you are not a robot" is ultimately not a character, but this clear and lingering pain that cannot be eliminated by algorithms.

*This article was assisted by deepseek.*
