FastRender and Cursor’s Autonomous Coding Experiment
Cursor described the browser as having a from-scratch rendering engine written in Rust, covering HTML parsing, the CSS cascade, layout, text shaping, paint, and a custom JavaScript virtual machine. In practice, however, the browser only "kind of" works: independent developers who examined the code found that it barely compiles, often fails to run, and leans heavily on existing projects such as Servo and QuickJS, contradicting the "from scratch" claim.
The project reportedly consumed an estimated 10-20 trillion tokens and cost several million dollars, yet recent commits fail to build cleanly and performance is poor, with pages taking roughly a minute to load.
Cursor engineer Wilson Lin acknowledged the difficulty of building a browser from scratch and revealed that parts of the code, including the JavaScript engine, were derived from his personal parser project rather than generated by AI. Servo maintainer Gregory Terzian sharply criticized the code as a tangled, poorly designed mess incapable of supporting a real-world web engine. Despite this, the project was presented as a milestone confirming Cursor's autonomous-agent capabilities rather than as a messy internal experiment, even though it lacked basic engineering standards such as passing continuous integration, reproducible builds, and meaningful benchmarks.
This case exemplifies the current AI hype problem in software development: marketing oversells AI's ability to autonomously deliver complex projects, while practical, reliable results remain elusive. Despite CEOs predicting that AI will soon write most code, many enterprise AI pilots fail to deliver significant returns. Tools like Cursor's AI agents can assist with autocomplete and refactoring, but they are far from replacing human engineers for full project delivery.
The industry is in an “AI uncanny valley,” where excitement outpaces actual capability.