This is a very different feeling from other tasks I’ve “mastered”. If you ask me to write a CLI tool or to debug a certain kind of bug, I know I’ll succeed, and I have a pretty good intuition for how long the task will take me. But when working with AI on a new domain… I just don’t, and I don’t see how I could build that intuition. This is uncomfortable and dangerous. You can try asking the agent for an estimate, and it will give you one, but funnily enough the estimate will be in “human time”, so it won’t mean anything. And when you actually work on the problem, the agent’s stochastic behavior could lead you to a super-quick win or to a dead end that never converges on a solution.
their less-than-ok variants (Lab & LCH) you can get away with even less. Writing
dx := a.x - b.x;
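To make the fragment above concrete, here is a minimal sketch of the kind of per-channel difference it implies: a straight-line (Euclidean) distance between two colors. The `color` struct and the `dist` function are assumptions for illustration; the `x`/`y`/`z` field names simply mirror the `a.x` / `b.x` access in the fragment and stand for the three channels of a Lab-like color.

```go
package main

import (
	"fmt"
	"math"
)

// color is a hypothetical three-channel color; in Lab terms,
// x, y, z would be L, a, b respectively.
type color struct {
	x, y, z float64
}

// dist returns the Euclidean distance between two colors,
// computed channel by channel as in the fragment above.
func dist(a, b color) float64 {
	dx := a.x - b.x
	dy := a.y - b.y
	dz := a.z - b.z
	return math.Sqrt(dx*dx + dy*dy + dz*dz)
}

func main() {
	a := color{50, 10, 10}
	b := color{50, 13, 14}
	fmt.Println(dist(a, b)) // 5 (a 3-4-5 triangle in the a/b plane)
}
```

In a roughly perceptually uniform space like Lab, this plain Euclidean distance is often a good-enough difference metric, which is why such simple arithmetic can get away with so little.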