Hunan Cuisine

The cover image was taken at a small Hunan cuisine restaurant I often visit, at the Yintian Jincheng location, within walking distance of the office. The owner is great, everything is fresh stir-fried with plenty of “wok hei” (breath of the wok), and the prices are reasonable. I always order 15 minutes ahead, and a table full of dishes is ready when I arrive. Friends nearby who are interested should give it a try.

Each week I record the down-to-earth trending technologies I come across and publish a curated selection here. If you find it useful, follow this weekly to get update notifications.

Figma “Design for beginners” is worth a look
https://help.figma.com/hc/en-us/sections/30880632542743
This series is worth a look. It suits both people who know Sketch and want to pick up Figma’s finer points, and complete beginners starting to learn design. You’ll design a portfolio website from scratch, covering basics like shapes, text, and frames, then explore more advanced features like auto layout, components, and prototyping. After finishing, you’ll basically be ready for real work.

“MoMoYu” (Slacking Off) Timer is well-made
https://momoyu.app/
This free MoMoYu timer is very well made—a very cute Mac desktop app. To me it actually works better as a simple focus timer: the less you slack off, the better you can focus. Haha, I just use it in reverse.

Unify your existing devices into one powerful GPU
https://github.com/exo-explore/exo
The idea behind exo is quite good: unify your existing devices—iPhone, iPad, Android, Mac, NVIDIA GPUs, and so on—into one powerful GPU. The project supports various open-source models and exposes a ChatGPT-compatible API for other tools to call.
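Since the API is ChatGPT-compatible, calling a local exo node should look like calling any OpenAI-style endpoint. A minimal sketch using only the standard library—the port and model name below are assumptions, so check the repo’s README for your install’s actual defaults:

```python
import json
import urllib.request

# Assumed local exo endpoint and model name; verify both against the exo
# README for your own cluster before running.
EXO_URL = "http://localhost:52415/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3.2-3b") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local exo node."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        EXO_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(prompt: str) -> str:
    """Send the request to a running node and return the reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the response shape follows the OpenAI chat-completions format, any existing client or tool that speaks that format should be able to point at the exo endpoint unchanged.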

Qbot: AI automated quantitative trading robot
https://github.com/UFund-Me/Qbot
Found an interesting project on GitHub: Qbot, an AI-powered automated quantitative trading bot that runs fully locally. The description says Qbot = intelligent trading strategy + backtesting system + automated quantitative trading + visual analysis tools. The UI is average, but the backend logic is well worth studying.

TeachYourselfCS-CN: Self-study Computer Science in Chinese
https://github.com/izackwu/TeachYourselfCS-CN
This self-study CS guide is well suited to “vibe coding” friends who don’t have a CS background. It helps you build up CS fundamentals and avoids the trap of “I want to change something quickly but don’t know why the change works.”

Just Looking Around

Hahaha, I also added the “Eight Honors and Eight Shames” to Claude Code. Useful!
Shame on guessing interfaces; Honor in careful reading.
Shame on vague execution; Honor in seeking confirmation.
Shame on blind business logic; Honor in human confirmation.
Shame on creating interfaces; Honor in reusing existing ones.
Shame on skipping verification; Honor in active testing.
Shame on destroying architecture; Honor in following specifications.
Shame on faking understanding; Honor in honest ignorance.
Shame on blind modification; Honor in cautious refactoring.

Found an interesting tip on Hacker News
When using Google search, add -fu*k after your keywords: ads and AI Overviews disappear, leaving clean search results. Try it out.

Random Thoughts

Comparing Codex and Claude Code in depth over the weekend

Over the weekend, I spent a day comparing ChatGPT’s Codex and Claude Code. On Saturday morning, when I first subscribed to ChatGPT Plus and installed the Codex extension for VS Code, my first impression was that Codex’s product interaction and module display were entirely superior to Claude Code’s: each step was clearly shown, the code-change diffs were comfortable to read, and I could use it for a long time without hitting the “anti-addiction” usage limits found in Claude Code. I wasn’t used to it at first and even thought OpenAI was being truly generous.

However, I found that on hard, messy problems, Claude Code’s clarity of logic and reasoning completely beats Codex. The Claude Code CLI itself looks crude, but that doesn’t hurt its ability to present and solve problems at all. Anthropic may not be great politically and can be a bit blunt with users, but it has real skills. That doesn’t affect my willingness to keep paying, and I plan to buy some Amazon stock at a suitable price.

This matches the state of many current LLM products: the stage where a good product shell alone wins seems to be passing, and the idea that “the model is the product” is being emphasized again. No matter how good the interaction is, what matters most for users’ needs is the capability of the model itself. Interaction becomes the icing on the cake, and the model’s problem-solving ability becomes the key to victory.

For LLM companies, it really has become “whoever masters model training masters the future.” The current AI competition is not like the previous generation’s race for application innovation, which has little left to squeeze out; it is a race for technical innovation. If the underlying model’s capability is lacking, competing on applications makes little sense, and companies without model capability merely “reskinning” AI coding editors makes little sense either.

That said, many friends think Codex is superior, which might come down to different scenarios or to using a low reasoning-effort mode, so I tried it more deeply on Sunday.

Sunday supplement: this time I used GPT-5 on the “high” reasoning setting. For Mac application development (leaning toward Swift), which involves debugging and compiling apps, Claude Code still feels better. That said, “high” is much better than Saturday’s “low,” though also much slower because of the long thinking time. Going forward, I’ll probably prioritize Claude Code and fall back to Codex when I hit Claude Code’s “anti-addiction” limits.