The Search Engine That Almost Never Was: How Larry Page's Dream Machine Broke Stanford's Internet
In 1996, Larry Page built a search engine so powerful it crashed Stanford's network for hours. The university wanted to shut it down. He had to find a way to make it faster — or kill it forever.
The Night Stanford's Internet Died
It was 2 AM on a Tuesday in the spring of 1996, and Stanford's network administrators were furious. For the third time that week, the university's internet had slowed to a crawl. Email wouldn't send. Web pages timed out. Researchers couldn't access databases. Someone was hammering the network with an absurd amount of traffic, and they'd finally traced it to a graduate student working out of the Gates Computer Science Building.
Larry Page, a 23-year-old PhD student with unruly hair and an obsessive personality, was the culprit. He'd built something he called "BackRub" — a search engine that was methodically crawling and downloading the entire World Wide Web. Not just indexing URLs like other search engines. Actually downloading pages, analyzing every link, calculating relationships between millions of websites.
The project was eating bandwidth like nothing Stanford had ever seen. Page's crude cluster of repurposed desktop computers, connected by loose cables snaking across his dorm room floor, was consuming so much of Stanford's internet capacity that legitimate research was grinding to a halt.
The IT department sent him an ultimatum: shut it down, or we'll shut it down for you.
Page refused. He was onto something revolutionary, and he knew it.
The Crazy Idea That Wouldn't Die
Larry Page had arrived at Stanford in 1995 with a peculiar obsession: understanding the structure of the World Wide Web. While other computer science students were chasing hot topics like artificial intelligence or virtual reality, Page was fixated on a seemingly mundane question: how do you determine which web pages are actually important?
The existing search engines — AltaVista, Excite, Yahoo — were primitive. They ranked pages by counting keywords. Search for "university" and you'd get thousands of worthless results. Some clever spammers had figured out they could game the system by repeating keywords hundreds of times in white text on white backgrounds. The web was growing exponentially, and search was broken.
Page's insight came from an unlikely place: academic citations. In academia, the importance of a research paper isn't determined by how many times it mentions "cancer research" — it's determined by how many other papers cite it. And not all citations are equal. A citation from a prestigious journal carries more weight than one from an obscure publication.
What if the web worked the same way? What if links were like citations — and you could determine a page's importance by analyzing who linked to it, and how important those linking pages were?
It was elegant. It was recursive. It was mathematically beautiful.
It was also computationally insane.
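For readers who want to see the math, the recursive idea can be sketched in a few lines. This is an illustrative toy version of a PageRank-style iteration, not BackRub's actual code; the link graph below is invented for the example.

```python
# A minimal sketch of the PageRank idea on a toy link graph.
# Real PageRank ran over millions of pages; these urls are illustrative.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank everywhere
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A page linked to by many others, or by important ones, ranks highest.
graph = {
    "stanford.edu": ["cs.stanford.edu"],
    "cs.stanford.edu": ["stanford.edu"],
    "homepage.com": ["stanford.edu"],
    "blog.net": ["stanford.edu", "homepage.com"],
}
ranks = pagerank(graph)
```

Run enough iterations and the scores converge: the page with the most (and most important) inbound links ends up on top, which is exactly the citation intuition. The recursion is also why it was computationally brutal at web scale: every page's score depends on every other page's score.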
Building the Impossible Machine
To test his theory, Page needed to download the web. All of it. Then he needed to analyze the relationship between every page and every other page — billions of potential connections.
He recruited Sergey Brin, a fellow Stanford PhD student known for his mathematical prowess and his inline skating through campus hallways. Brin was initially skeptical. "This is crazy," he told Page. "You can't download the entire web."
Page's response was simple: "Watch me."
They started building in Page's dorm room in March 1996. Page wrote a web crawler — software that would start at a single URL, download the page, extract every link, then follow those links recursively. Brin developed the mathematical algorithms to analyze the link structure and assign importance scores.
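The crawler's core loop is simple to sketch. The version below walks a tiny in-memory "web" instead of fetching real pages over HTTP (the urls and link graph are invented for illustration), but the shape is the same: start at a seed, grab the page, extract the links, queue anything new.

```python
from collections import deque

# A minimal sketch of a breadth-first crawler. The web is stubbed out as a
# dict (url -> list of linked urls) so the loop itself is visible; a real
# crawler would fetch each page over HTTP and parse the links out of HTML.

def crawl(seed, fetch_links, max_pages=1000):
    """Start at `seed`, follow every discovered link, return visited urls."""
    seen = {seed}
    frontier = deque([seed])
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        visited.append(url)          # BackRub also stored the page body here
        for link in fetch_links(url):
            if link not in seen:     # never queue the same page twice
                seen.add(link)
                frontier.append(link)
    return visited

# Toy "web" standing in for real HTTP fetches (illustrative urls)
toy_web = {
    "a.edu": ["b.edu", "c.edu"],
    "b.edu": ["c.edu"],
    "c.edu": ["a.edu"],  # cycles are why the `seen` set is essential
}
pages = crawl("a.edu", lambda url: toy_web.get(url, []))
```

The `seen` set matters more than it looks: the real web is full of link cycles, and without it a crawler loops forever, burning exactly the kind of bandwidth Stanford was about to complain about.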
They needed computing power, but PhD students don't have budgets. So they got creative. They borrowed unused computers from around the department. They bought cheap hard drives — the biggest they could find — maxing out their credit cards. They used LEGO bricks to build a custom storage case for the drives because actual server racks were too expensive.
The setup was absurd. Multicolored LEGO bricks holding up tens of gigabytes of data — an enormous amount for 1996. Cables everywhere. The humming of hard drives filled Page's dorm room. His roommate moved out.
By summer, BackRub was crawling. It was downloading thousands of pages per hour, storing them on the LEGO-rack drives, analyzing link patterns. The algorithm — which they'd eventually call PageRank, a play on Larry's last name — was working. When you searched for something, the results were good. Scary good. Better than anything else on the market.
But there was a problem. The project was consuming so much bandwidth that Stanford's network was buckling.
The Ultimatum
The network administrators weren't impressed by innovation. They were impressed by angry emails from professors who couldn't download research papers. They gave Page and Brin a choice: drastically reduce the bandwidth usage, or shut down the project entirely.
For most students, this would have been the end. But Page saw it differently. If BackRub was consuming that much bandwidth, it meant they were actually crawling the web at scale. It meant the project was working. The problem wasn't the concept — it was the execution.
They needed to make it faster. More efficient. Smarter about what to crawl and when.
Page and Brin spent weeks optimizing. They rewrote the crawler to be more selective, focusing on important pages first. They implemented caching to avoid downloading the same page multiple times. They figured out how to compress data more efficiently.
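Those fixes translate into two simple mechanisms: a priority queue so the most important pages get fetched first, and a cache so nothing is downloaded twice. The sketch below shows the idea; the scores and urls are illustrative assumptions, not anything from BackRub itself.

```python
import heapq

# A sketch of selective crawling: visit high-importance pages first
# (a max-heap keyed on an importance score) and skip anything already
# cached, so no bandwidth is spent re-fetching a page.

def crawl_prioritized(seeds, fetch_links, importance, budget=100):
    """Visit at most `budget` pages, most important first."""
    cache = set()                       # pages already downloaded
    heap = [(-importance(u), u) for u in seeds]
    heapq.heapify(heap)
    order = []
    while heap and len(order) < budget:
        _, url = heapq.heappop(heap)    # negated score -> max-heap behavior
        if url in cache:
            continue                    # cached: nothing to download
        cache.add(url)
        order.append(url)
        for link in fetch_links(url):
            if link not in cache:
                heapq.heappush(heap, (-importance(link), link))
    return order

# Illustrative scores and link graph
toy_web = {"hub.edu": ["minor.net", "major.org"],
           "major.org": [], "minor.net": []}
scores = {"hub.edu": 0.9, "major.org": 0.6, "minor.net": 0.1}
order = crawl_prioritized(["hub.edu"], lambda u: toy_web.get(u, []),
                          scores.get)
```

With a fixed bandwidth budget, the ordering is the whole game: the crawler spends its limited fetches on the pages most likely to matter and lets the long tail wait.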
Slowly, the bandwidth usage dropped. Stanford's network stabilized. The IT department backed off.
BackRub survived.
The Search That Changed Everything
By fall 1996, BackRub was live on Stanford's internal network. Students and faculty could search it from their dorm rooms and offices. The URL was awkward — backrub.stanford.edu — and the interface was bare-bones. Just a search box and a button.
But the results were magical.
Search for "Stanford" and you didn't get thousands of random pages that happened to mention the word. You got the university's homepage at the top, followed by the most important department pages, then relevant research papers. The ranking made sense in a way no other search engine did.
Word spread through campus. Then to other universities. Researchers at MIT and Berkeley started using it. Tech companies in Silicon Valley heard rumors about these Stanford grad students with a search engine that actually worked.
One early user, a Stanford professor, sent Page an email: "I don't know how you did this, but this is how search should work. This changes everything."
Page and Brin realized they had a problem. BackRub was handling 10,000 searches per day on Stanford's network. But the web was doubling in size every few months. To keep up, they'd need more servers, more storage, more bandwidth — more money than two PhD students could possibly scrape together.
They tried to sell it. They approached Yahoo, Excite, AltaVista — all the major search companies. They wanted $1 million for the technology.
Every single company said no. Yahoo's chief scientist, Udi Manber, looked at the demo and said, "It's cute, but we're a portal, not a search company." Excite's CEO loved it but thought $1 million was too expensive for a technology that didn't directly generate revenue.
Page and Brin were running out of options. They couldn't afford to keep BackRub running on Stanford's infrastructure forever. They couldn't sell it. They were contemplating shutting it down and finishing their PhDs.
Then one of their professors, David Cheriton, introduced them to an investor named Andy Bechtolsheim, a cofounder of Sun Microsystems. They met him at 8 AM on the porch of Cheriton's house in Palo Alto. Page and Brin, sleep-deprived and disheveled, gave a quick demo on a laptop.
Bechtolsheim watched for maybe two minutes. Then he said, "This is the single best idea I've seen in years. I don't have time for details right now, but I'm in."
He pulled out his checkbook and wrote a check for $100,000.
The payee line read: "Google Inc."
There was just one problem: Google Inc. didn't exist yet. They hadn't incorporated. They couldn't cash the check.
The Legacy
That check sat in Page's dorm room drawer for weeks while they scrambled to file incorporation papers. When they finally deposited it in September 1998, they used the money to rent a garage in Menlo Park, buy proper servers, and officially launch Google.com.
The name came from "googol" — the mathematical term for 10^100, representing the massive scale of data they were organizing. A fellow grad student, Sean Anderson, misspelled it as "Google" when checking whether the domain was available. They liked it. It stuck.
Twenty-five years later, Google processes over 8.5 billion searches per day. The algorithm that Larry Page built in a dorm room — the one that nearly got shut down for crashing Stanford's network — became one of the most valuable pieces of software ever written.
The LEGO storage case? It's on display at Stanford now.
BackRub, the search engine that almost never was, became the gateway to human knowledge. All because a stubborn grad student refused to shut down his bandwidth-hogging project, even when the university's IT department demanded it.
Sometimes the most revolutionary ideas are the ones that break the network.