Quaintitative

Why I Write to Crickets (A LinkedIn Reflection for 2025)

· 3 min read
reflection

Crickets. Something most folks hate to hear when posting on LinkedIn.

And something I heard a lot of when I first started writing on LinkedIn 6 months ago. ~100 posts later, the crickets are a little quieter, but I can still hear their echoes.

As a quick reflection to round off 2025, here are 5 reasons why I write on LinkedIn, even when all I could hear was crickets.

#1 I write to think.

Writing forces clarity. Confusion becomes mental frameworks.

“When conflation brings you down a rabbit hole” I couldn’t stand the conflation of “deterministic” and “probabilistic”, so I wrote my way to clarity.

“Translation or synthesis?” When I needed to process 18 years at MAS, I wrote about the tension between translation and synthesis. Breadth vs depth, comfort vs discomfort.

“What do beauty and intelligence have in common?” When I explored why generative art and AI research felt similar, I found they both live at the “edge of chaos” where the unexpected emerges.

“If AI is normal technology, then AI agents are … just normal systems” When I could not stand AI agent hype, I tried to explain why they were fundamentally just systems, not entities.

“Frameworks, frameworks, frameworks” When three unrelated things clicked in one week, I realized they were all about frameworks.

#2 I write to experiment. Formats, diagrams, code. See what works better.

“How I Use LLMs. As Fluff, Not Meat” When I needed to explain my AI architectures, I tried using the image of a cute hamburger. LLMs as the fluffy buns, traditional ML as the meat.

“The Diptych series” When I wanted to share what was in common between a 2021 Hinton paper and a 2025 Singapore startup paper, I used a diptych format. Two papers in conversation.

“OpenAI Charges for Words, So I Sent It a Picture Instead” When I wondered if images could be cheaper than text, I ran a weekend experiment and shared it publicly.

“5 lines is all it takes to build an agent” When Andrew Ng shared aisuite, I shared how to build a finance agent in 5 lines. Explaining agents with working code.

“Claude’s usage tracking has awoken the Asian Coder Dad” When Claude showed me unused credits, I couldn’t stand the waste, so I shared how to build things. From obsession to optimization.

#3 I write to introduce. People to papers, repos, ideas that I find useful.

“Where do I even start?” When people kept asking that question about AI governance, I compiled a reading list: General Foundations, Global Finance, Jurisdiction-Specific.

“From accuracy to… the unknown edge” When evaluation got confusing, I organized papers across phases: Foundations, Shifting Boundaries, Reality, and the Experimental (AI as a judge).

“The one Github account to rule all transformers” When I cleaned up my code repos, I realized I needed to introduce one GitHub account that taught me more than any course. Phil Wang (lucidrains).

“Hi AGI. Let me introduce you to Mr Wall” When I read Tim Dettmers on why AGI won’t happen, I had to share it. GPUs maxed out around 2018. Linear progress needs exponential resources.

“What would you do if you were the poor human tasked to oversee an AI?” When I found Liming Zhu’s paper on human oversight, I could not wait to share how it changed how I think about “human-in-the-loop.”

#4 I write to explain. Trying to help simplify complexity.

“Meet 3 fundamental ideas in AI” I wrote about 3 ideas that matter in AI: Meaning, Attention, Hierarchy. How AI turns everything into numbers, why it doesn’t process everything equally, how it breaks things down.

“There’s reasoning, and then there’s reasoning” I explained the difference between all kinds of reasoning in AI and how code agents are different. LLM reasoning traces vs actual operations you can see.

“3 ‘U’s” framework I used 3 U’s (Uncertainty, Unexpectedness, Unexplainability) to explain AI risks. A framework from my work on AI risk management for the financial sector.

“The Many Ways One Can Use Attention in AI” I tried to explain my 300-page dissertation in 3 pages. 8 types of attention: basic, positional, multimodal, hierarchical, knowledge-guided, graph-guided, dynamic, concept-based.

“Oxymorons in AI” When Grab called something a “Task-Specific Foundation Model,” I had to explain why that’s an oxymoron. Foundation models are general-purpose. Task-specific is the opposite.

#5 I write to interact. Asking questions, learning things.

“Which limitation is THE one that impedes you in your daily use of AI?” I asked which AI limitation impedes people most, and learned from the answers. Hard-to-verify tasks, long-horizon errors, meta-awareness, deployment adaptability.

“This was a harder one, so I would certainly appreciate any recommendations on good papers” When human oversight proved hard to research and I asked for paper recommendations, I got new leads. Does adding a human actually make systems safer?

“What space has seen the most versions of you? And is it still there?” When I drew Katong Shopping Centre and asked what spaces hold our past selves, I enjoyed the answers. The mall that will not die.

“Which tension do you see when working with AI?” I shared 5 organizational AI tensions and understood them better when I asked which one resonates most. Experts vs novices, centralized vs decentralized, fast vs slow.

I still don’t have a content strategy. Some posts still flop. But as long as I learn something by writing them, or someone smarter shows up in the comments, I’m kind of … satisfied?

Full breakdown with links to every example in my newsletter.

What’s your reason for writing here?

#AIRiskManagement #ThinkingInFrameworks #WritingToThink