
"Difficult tasks" are becoming increasingly valuable! In a recent dialogue at Stanford, Sergey Brin, co-founder of Google, discussed hard-core technologies in the AI era and highlighted an emerging field that is currently undervalued…

Smart Investor ·  Dec 24 23:46

Source: Smart Investors

Following the 'AI red alert' triggered by ChatGPT in November 2022, Alphabet (GOOGL/GOOG) co-founders Sergey Brin and Larry Page, who had retired only two years earlier, were 'called back to help' and began appearing frequently in the office.

With Brin's increased public presence in 2025 and Gemini's rapid advances, the founders' full return has not only reignited Google's internal entrepreneurial spirit but also caused a stir in the investment community.

Even Berkshire Hathaway made significant purchases of Google shares in the third quarter.

Recall Elon Musk's recent remark that talent is the first of the three key factors in AI competition, and 'Silicon Valley Godfather' Steve Hoffman's comments at an in-person exchange in Shanghai criticizing Meta for lacking a home-grown engineering culture. Against that backdrop, the return of Google's two technically hardcore founders is a reminder of just how deep, and how hard to replicate, the company's moat of technology and talent really is.

An investor commented on Todd Combs’ recent departure from Berkshire Hathaway to join JPMorgan by saying, 'Top talents still prefer to be around the best people.'

This statement seems quite fitting when applied to Google today.

What exactly does the return of the two founders signify? Recently, I’ve watched numerous interviews with Sergey Brin, while those featuring Larry Page remain on my list.

Let’s start with Brin’s recent conversation at Stanford.

During the closing event of Stanford University’s School of Engineering centennial celebration, Brin took the stage alongside President Jonathan Levin and Dean Jennifer Widom, sharing his extensive thoughts as an engineer, entrepreneur, and AI practitioner.

It was a very interesting exchange, and all three speakers had plenty worth hearing.

During his Ph.D. studies at Stanford, Sergey Brin did something 'somewhat illegal.' To reach the lab computer that programmed the building's electronic keys, he climbed scaffolding, picked the lock on a balcony door, copied the software, and made himself a 'master key' that later granted him access to nearly the entire building.

It was during this period of freedom and mischief that he and Larry Page co-authored the original PageRank paper and jointly founded Google. Over the next two decades, Google grew from a search engine into one of the most influential technology companies globally, while Brin himself became one of Silicon Valley's most legendary engineer-entrepreneurs.

After retiring, Brin originally planned to study physics in coffee shops... but for various reasons returned to the office in 2023, gradually becoming deeply involved again, from participating in model training to strongly advancing the Gemini project.

Brin believes that over the past decade, especially in the AI era, 'difficult things' have become increasingly valuable. 'I think it’s somewhat of a historical pendulum swing: at this stage today, hardcore technology has suddenly become so important.'

He admitted that commercializing Google Glass too early was a lesson in 'thinking I was Steve Jobs.' He also advised young entrepreneurs to make sure an idea is mature before hitting the external acceleration button, because once growth begins, external expectations, expenses, and timelines create pressure that distorts decisions.

Notably, when asked on the spot about an 'underrated emerging field,' Brin’s response seemed somewhat reluctant. He said, 'If you really want me to name a direction, perhaps it’s the application of AI—and potentially quantum computing in the future—in materials science.'

Levin and Jennifer both agreed with this conclusion; Jennifer also added that changes in the field of synthetic biology are somewhat underestimated.

There are far more takeaways than we can summarize here, so we recommend watching the full session carefully. Smart Investors (ID: Capital-nature) has translated and organized the conversation below to share with everyone.

1. Stanford Anecdotes

Levin: Sergey came to Stanford for his Ph.D. in 1993. At that time, Jennifer had just started as a professor, and I was still an undergraduate at Stanford. It was the fall of 1993, and I was a senior.

Let’s go back to that time. You were a graduate student at Stanford. Can you talk about what it was like studying in the engineering school? And how did that experience shape you and create opportunities that eventually led to founding Google?

Brin: In retrospect, I may not have fully realized it at the time, but it was indeed a highly creative and liberating period.

I began my Ph.D. studies in Margaret Jacks Hall, which is located on the main quad. It was an older building with small creaky rooms and narrow doors.

That’s where I learned how to pick locks—thanks to a guy from MIT who taught me how to jimmy them open.

Looking back now, I’m genuinely surprised by how much freedom everyone gave me. Because, you know, I could completely manage my own time as I saw fit.

At first, I was working on 'reverse engineering shredded documents,' meaning scanning your shredded files into the computer and trying to piece them back together.

I never quite got that system to work perfectly, but strangely enough, no one ever told me to stop working on it.

I went through several advisors during those years: Hector—he passed away, unfortunately, but he was such a kind and gentle person—and later Jeff Ullman. They occasionally asked me what I was working on, but overall, they didn’t impose many restrictions.

We later moved to the new SAIL building, the Gates Building for computer science. Back then, the atmosphere felt a bit more playful than it does today.

At that time, I already knew my 'lock-picking career' was coming to an end because the building used electronic keys with small infrared devices on them.

Back then, these electronic locks weren't connected to the internet, so they wouldn't verify permissions themselves but simply read the information stored in the key. If the key said you were allowed in, it let you in. I guess it hasn't changed much now. Anyway, being electronic, they were troublesome for me, unlike the mechanical locks, which I could pick open in a couple of tries.
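To make that trust model concrete, here is a minimal, purely illustrative sketch; the class and room names are invented and this is not the actual Stanford system, only a toy model of an offline lock that believes whatever permissions are stored on the key, which is why a copied 'master key' works.

```python
# Toy model (invented names, not the actual Stanford system) of the trust
# scheme Brin describes: an offline lock reads permissions stored on the key
# and simply believes them, so a copied "master key" is indistinguishable
# from a legitimate one.
from dataclasses import dataclass, field

@dataclass
class Key:
    owner: str
    allowed_rooms: set = field(default_factory=set)  # permissions travel with the key

@dataclass
class Lock:
    room: str

    def admits(self, key: Key) -> bool:
        # No network, no central check: trust whatever the key claims.
        return self.room in key.allowed_rooms

master = Key(owner="anyone", allowed_rooms={"gates-101", "gates-102", "keying-room"})
print(Lock(room="gates-101").admits(master))  # True
```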

When we first moved into the building, there were still scaffolds outside, and some areas inside were still under final construction. But all doors and offices were already equipped with electronic locks, which I couldn’t possibly open.

The only lock I could pick was the one on the balcony—and coincidentally, that room housed the computer used to 'program' the electronic keys. (Audience laughter)

So... I think the statute of limitations for legal prosecution should have passed by now. I hope so. I should be able to tell this story.

In any case, I climbed out from my office onto the scaffolding, knowing they would remove it the following week.

That scaffolding was real scaffolding, with various horizontal and vertical bars—it looked pretty intimidating. Looking back now, I don’t know where I got the courage at the time. All I can say is, I was just a kid then, and my judgment wasn’t the best. (Laughter)

I climbed along the scaffolding to the balcony, picked the physical lock, got inside the room, and saw the computer.

I remember copying all the software from that computer, essentially creating a 'master key' for myself, and then deleting the copies. For a while after that, this master key could open almost anything.

2. Starting Point of Entrepreneurship

Levin: The kind of 'education' you received at the engineering school could really take your life in many directions... like becoming a master locksmith or a CIA agent (laughs). But obviously, you didn't end up on that path.

So, tell us, how did you actually end up on the path you eventually took? How did it all come together?

Brin: We started intermittently working on what later became Google around 1995, and continued for several years.

Larry Page deserves credit for being highly focused on the link structure of the Web. Back then, the internet itself was something entirely new, and coming up with a novel idea was incredibly easy.

My first money-making idea was to create an online pizza ordering service. You have to understand, at that time, the concept of 'ordering takeout online' sounded completely absurd, whereas now it’s become second nature.

As a joke, I placed a Coca-Cola ad at the top of the page. I thought it was hilarious — internet advertising, can you imagine? As it turned out, it wasn’t funny at all. (Audience laughs)

In any case, the project failed spectacularly.

The process was as follows: customers would fill out an order form on the website, but most pizzerias weren’t online back then. So I figured they at least had fax machines, right? I decided to automatically send them faxes with the orders. Then I realized they barely ever checked their faxes… and that’s when everything started to fall apart.

Of course, the idea ultimately didn’t work out, but it reflected the atmosphere of the time: those of us from computer science backgrounds generally understood how the internet operated and how to set up web servers. If you wanted to build a small website, you could do it in no time.

Back then, everyone was putting things up online and experimenting everywhere; it was an incredibly creative period.

Back to the topic. Larry was focused on the link structure of the Web, and I was doing data mining, so we combined the two ideas. Soon, we realized that this approach was indeed very useful for search.
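For readers who want the idea behind that combination made concrete, here is a minimal, hedged sketch of the PageRank intuition: importance flows along links, so a page is important if important pages link to it. The toy graph and the 0.85 damping factor below are illustrative assumptions, not values taken from the talk or the original paper.

```python
# Minimal power-iteration sketch of the PageRank intuition. The toy graph and
# the damping factor are illustrative assumptions only.
def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}          # start uniform
    for _ in range(iters):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                              # dangling page: spread evenly
                for q in pages:
                    new_rank[q] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:                   # pass rank along each link
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # "c", the most linked-to page, ends up ranked highest
```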

But we experimented with it at Stanford for a long time, and as we were testing it, we kept wondering: should we treat this as purely an academic project or take it further?

We also tried licensing the technology to some Internet companies.

Once, we pitched it to Excite. Vinod Khosla had a good eye for it; his view at the time was, 'This is great, you should buy it.' But Excite itself wasn't very interested.

However, Vinod and I continued communicating via email. We sent an email saying: okay, we are willing to license the technology to you for $1.6 million. About 15 minutes later, we received a reply saying: 'Wow, that’s a lot of money, but okay.'

At the time, we were thrilled. For a graduate student, $1.6 million was an astronomical figure.

Then our friend Scott came to us. At the time, four of us were working on the project—Scott and Alan were the other two. Later, they went off to start their own venture. That day, Scott burst in laughing uncontrollably.

It turned out that the 'reply from Vinod' was forged by him. Back then, you could pretty much pretend to be anyone sending emails... (audience laughter)

So, naturally, the deal didn’t happen.

But later, Larry and I still felt that if we wanted to scale this thing up, we really needed substantial funding. So we decided to look for investment. In the end, we did find angel investors, and at the time, finding angel investment was relatively easy.

My advisor, Jeff Ullman, was particularly lenient with me at the time. For me, leaving the doctoral program was a significant decision. My parents were certainly disappointed, but Jeff's attitude was: why don't you give it a try, and if it doesn’t work out, you can always come back.

He made it sound so effortless, as if it were a reversible choice. So, strictly speaking, I am technically still on leave, and perhaps one day I really will return to complete my degree. We’ll see how things develop.

Levin: Looking back today, this story seems quite interesting. Entrepreneurship was actually the 'last resort' at the time—something you only turned to after trying licensing and various other paths that didn't work out. In a sense, your journey also reshaped people's perceptions of entrepreneurial pathways.

Brin: It's hard for me to say. Our experience was our own unique path. Scott and Alan both eventually left their programs to start their own companies, which was part of the reason they moved on. Perhaps they grew impatient with our constant hesitation and attempts to 'push the technology forward.'

But honestly, there were a lot of people starting businesses back then; it was quite common.

As I mentioned earlier, Alan was already involved with a weather-related company, which I believe later became Weather Underground. I remember it was acquired in recent years.

Scott also had a company specializing in email list archiving called eGroups, which was later acquired by Yahoo.

In short, such cases were very common back then. So, I don’t know...

We just took longer than many others to make up our minds: let’s do it ourselves.

3. "Difficult things" are becoming increasingly valuable.

Levin: Looking back at that starting point, it's certainly astonishing. Google is now worth around $4 trillion, processes 10 million searches per minute, and has developed countless products and businesses. But when you and Larry decided to start the company, none of this was obvious to most people, which is why issues like 'the licensing deal falling through' happened.

Looking back from where you started to where you are today, it’s clear that you made many key decisions correctly. In hindsight, were there things you got right from the very beginning that you now feel were particularly important?

Brin: I think from the start, Larry has always been extremely ambitious, and he still is. Basically, if you propose any plan to him, he’ll say, 'Not big enough.' You can’t just think about the solar system; you need to think about the galaxy (laughs).

That’s probably part of his passion and drive.

So, we did have a very grand mission statement early on, like 'organizing the world’s information.' I think laying the foundation for the company with such an idea was quite good.

Another aspect is that our company had a strong academic atmosphere from the very beginning.

Both of us came out of PhD programs, while many startups at the time were founded by people right after their undergraduate studies. I think this gives you a slightly different perspective on how you approach problems.

Of course, I should clarify that many companies founded by undergraduates are also exceptionally excellent. However, along our journey, we have consistently placed greater emphasis on fundamental R&D investment, including longer-term and more foundational R&D, which I believe became part of our culture quite early on.

Levin: You also hired many PhDs, so it wasn't just the two of you.

Brin: Yes, very clearly so. For instance, Urs Hölzle, who was one of our earliest core members. I met him because I was involved in a faculty recruitment review process at Stanford at the time, which included interviewing him.

He didn’t end up getting the job at Stanford, and I’m not entirely sure why. These things tend to be quite complex, and the candidates were all very strong.

But as soon as he messaged me later, I immediately said: Can you start working tomorrow? Because I already knew him well and was very clear about his capabilities.

Levin: If we were to rank the most innovative companies globally over the past 25 years, I believe Google would undoubtedly be at the top. You've had innovations at the product level, such as YouTube, DoubleClick, and Waymo; and also continuous breakthroughs at the technology level, from foundational systems in the early days to chips and computing power today.

What intrigues me is this: it’s extremely difficult for a large company to maintain high levels of innovation over the long term. Almost every company hits a bottleneck at some point, but you seem to have achieved it. Many people also think you’ve played a significant role in this. How do you understand “innovation culture”? And how do you view your own role in it?

Brin: Thank you, here comes the praise again (laughs). Let me first say this: we’ve had plenty of failures and a long list of unsuccessful projects. I won’t go into details now, but there have been quite a few failures.

So, in a sense, part of innovation is that you have to keep trying.

I think it’s probably because our “academic roots” run deep, so we’re more willing to take on difficult challenges and bet on technically complex areas. Especially in the last decade or so, “difficult things” have become increasingly valuable.

Take AI, for example. It’s clearly a huge trend, but if you look closely at the computational power required behind it, the depth of mathematics involved—these are technically profound and highly challenging problems.

I think this is somewhat of a cyclical shift: at this stage today, “hardcore technology” has suddenly become so important.

There was a period, you might remember pets.com, when you could slap a .com on anything and it seemed to work. The technical threshold wasn't high; anyone with basic Web knowledge could create a whatever.com. At the time, we were working on search, which fortunately required deeper technical expertise. But since then, technological complexity has only continued to increase.

To be honest, the people we are hiring now are far more capable than I was, and even much better than my peers back in the day. At that time, I considered myself a 'math-inclined computer science student,' since I studied both mathematics and computer science during my undergraduate years, which was relatively rare among students of my cohort.

But now, when we recruit from Stanford and other top-tier programs, these young people have an exceptionally strong foundation in both mathematics and computer science. Many of them also come from physics backgrounds, because physics training inherently demands rigorous mathematical skills. Moreover, much of their work has to run under tight computational constraints, which requires solid computational skills as well.

So I think the reality has evolved to this point: foundational R&D capabilities are becoming increasingly crucial. In a way, we were fortunate to have bet on this path early on.

4. Hardcore Technology Belief

Levin: This observation is quite intriguing: technological challenges have once again taken center stage, reemerging as a competitive advantage for businesses. Let's talk about AI. Everyone is thinking about it right now.

You’ve also returned to Google to engage in related work, where you are at the forefront in many aspects, and the competition is extremely intense. The scale of capital invested in AI infrastructure has reached hundreds of billions of dollars, even at the level of individual companies. It’s truly astonishing. How do you view the overall landscape of AI today?

Brin: Well, I need to think about how to answer this without sounding like I’m boasting (laughs).

Indeed, the scale of investment is enormous. But I must admit, from certain perspectives, we did make mistakes: our investment was insufficient, and we didn’t take it as seriously as we should have.

For example, about eight years ago, when we published the Transformer paper, we didn’t actually treat it as a direction that required full commitment, nor did we promptly scale up our computing power investments.

Another reason is that we were somewhat hesitant to directly roll it out to the public because chatbots can say foolish things, as you know. As a result, OpenAI took the lead on this, and we have to admire them for it—it was a very smart decision. Moreover, some of our own people, like Ilya, went over there to push these initiatives forward.

But I do believe that we have benefited from Google's long-term technological accumulation. Our research and development in neural networks began as early as the era of Google Brain.

In a sense, there was an element of luck, but it wasn't entirely due to luck. For example, hiring Jeff Dean was no accident. Of course, it was fortunate that we were able to recruit him, but more importantly, at the time, our mindset was: hardcore technology matters.

We also hired many people from DEC, because it had top-tier research labs at the time.

Jeff is deeply fascinated by neural networks, which may be related to his early experiences. I'm not sure exactly how he developed this passion, but when he was around his teens, he was already doing some crazy things—studying neural networks while thinking about curing diseases in the third world.

That’s just the kind of person he is, really extraordinary. Later, he built up the entire team.

At the time, he was part of the Google X organization, which fell under my responsibility, but my attitude towards him was basically: "Jeff, do whatever you want."

He once told me, “We can make machines distinguish between cats and dogs.” I said, “Oh, cool.” (Laughter from the audience)

But you need to trust your technical people. Not long after, they actually developed a series of neural network algorithms and started applying them in some of our search processes.

Later came the Transformer, and we were able to do more and more.

So on one hand, we have deep foundations, but on the other hand, for a few years, we didn’t quite focus enough on scaling it as a strategic priority. Meanwhile, we did get some other things right, such as hardware.

The TPU line started about twelve years ago. Earlier, we used GPUs, and we were probably one of the first companies to adopt GPUs on a large scale. Later, we moved to FPGAs, and then we began developing our own chips, iterating through countless generations.

We have unwavering faith in hardcore technology, which drives us to extract more computing power, design chips, and develop algorithms. At the same time, we have long invested heavily in computational infrastructure.

Our data centers reached a very large scale quite early and have been operational for a long time. Of course, Amazon’s AWS also operates massive data centers, but there are not many companies that can achieve a scale like ours.

There are not many companies that possess proprietary semiconductors, deep learning algorithms, and key components across the entire stack, allowing them to remain at the forefront of modern AI.

Levin: Technology is becoming more powerful every year. There are many different visions about what AI will evolve into. Will it eventually be able to do everything a human can do while sitting in front of a computer, or even more? Do you have any thoughts on where the technology might head?

Brin: Honestly, the pace of innovation is astonishing. The competition is incredibly fierce, and everyone can see it: the top companies in the U.S., the leading companies in China—everyone is striving hard.

If you don’t keep up with AI news for just one month, you’re already significantly behind.

Where will it go in the future? I don’t know; perhaps we truly do not know.

I think beyond the question you just raised—whether it can do everything humans can do—there is an even harder question: Can it do things that humans cannot? This is essentially the issue of 'superintelligence.'

There is no answer to this yet: To what extent can something become intelligent?

Human evolution took hundreds of thousands of years, and the evolution of primates took millions of years. This pace is extremely slow compared to the current development rhythm of AI.

Levin: Are we ready to face this speed?

Brin: For now, people are indeed deriving significant benefits from technology. Although there are occasional 'apocalyptic' predictions, overall, everyone's capabilities have been significantly enhanced.

And to be honest, AI is often still ridiculously unintelligent—you always need to keep an eye on it, supervise it. But occasionally, it can be very smart and give you an excellent idea. Especially when you're not an expert in a particular field, it can be quite useful.

For example, if I want to understand how to design a new AI chip, of course, I could ask the top chip design experts at our company. But even without them, I could at least pick up my phone and chat with an AI. It will most likely provide you with an 80% to 90% reliable overall framework to help you grasp the general picture.

The same applies to health-related issues.

I think it empowers individuals because typically, you don’t have experts in fields X, Y, and Z constantly around you. The empowerment it brings will unlock a lot of potential, whether in career development, entrepreneurship, health management, or improving one’s quality of life.

Of course, I don’t have all the answers, but I do believe it has enormous potential to enhance personal capabilities.

5. We must make use of AI

Levin: This is indeed a more positive vision; it will become an amplifier of human capabilities. I’m glad you see it from this perspective.

But now, every Stanford student, even every college student, is thinking about one thing: How will AI impact my career and opportunities? What advice do you have for your peers? What should they study? How should they approach the future job market?

Brin: I think it’s extremely difficult to predict.

Looking back, from the advent of the internet to the widespread adoption of mobile phones, these technologies have profoundly transformed society, as well as the types of jobs, career paths, and what people study. AI will absolutely do the same—100% it will bring change.

But during this current phase of rapid change, it’s hard to say exactly how things will evolve.

Moreover, today’s AI is already completely different from AI five years ago; and in five years, it will be something else entirely.

It’s also difficult for me to make definitive predictions, but one thing I am certain of: You must leverage AI to serve yourself.

It can help you with so many things. Personally, I now consult AI for almost everything I do: For example, choosing gifts for friends and family, brainstorming new ideas for a product, or organizing inspiration for artistic projects. I use it almost constantly.

It doesn't do the work for me. I usually say, 'Give me five ideas…' Then I find that three of them are somewhat useless, and I can spot that immediately. But among the remaining two, there's often a small 'spark,' or it helps me shift my perspective and place things into a new framework.

Then I refine and rethink, turning it into something of my own.

Jennifer: Stanford gives undergraduates significant freedom, so many students haven't yet declared a major. A few years ago, we could predict that a large number would choose computer science. Do you still recommend that students pursue computer science? They’re all listening very carefully now (laughs).

Brin: I chose computer science back then because I truly loved it, so for me, it was almost a no-brainer. You could say I was lucky because I happened to enter a highly transformative field.

I wouldn’t advise you against choosing computer science just because 'AI is now pretty good at writing code.' AI does a lot of things well these days. It just happens that writing code has high market value, so many people are drawn to it.

More importantly, better programming skills lead to better AI. Therefore, including our company, all companies genuinely working on AI place great emphasis on coding ability.

We ourselves extensively use AI to assist in writing code and even to help deduce algorithmic logic. But precisely because it is so important, we cannot treat it as a 'skill that can be abandoned.'

So what I mean is: don’t switch to comparative literature just because you think AI can write code — because frankly, AI might be even better at comparative literature.

I don’t mean to offend students majoring in comparative literature, but consider this: the code written by AI sometimes contains serious errors and may not run at all. In contrast, if you make a mistake in a sentence or your expression isn’t precise enough in comparative literature, the consequences won’t be as immediate.

In a sense, AI is actually more suited to creative text-based tasks.

6. Live Q&A Session

Student 1: Google grew largely out of your academic research on PageRank. But today, innovation is increasingly driven by industry. Do you still believe that the flow of talent and results from 'academia to industry' remains crucial? If so, how would you strengthen this channel?

Brin: Wow, that’s an excellent question.

How crucial is the pathway from academia to industry? I can only offer an uncertain answer. During my graduate studies, it often took years, even decades, for a new idea to evolve into something of commercial value.

In an academic setting, you have greater freedom to refine a direction over the long term: securing funding, conducting research, and perhaps spending ten or twenty years gradually developing it, allowing it to slowly mature before it permeates into the industry, where it might be adopted by large corporations or used by startups to create products.

However, if this cycle is significantly shortened, it becomes questionable whether this model remains valid.

Of course, some aspects still hold true. Even in the AI field, we regularly follow research from Stanford and other universities, occasionally collaborating or recruiting talent.

But many ideas do not need to be 'incubated' within academia for a prolonged period. For instance, a new attention mechanism might be tested in a university setting for two years before being adopted in some form by the industry. Ultimately, the industry itself is also conducting significant explorations of similar nature.

From this perspective, the notion of 'relying on a long academic pipeline' may not necessarily hold. Perhaps what truly requires an academic environment are more radical, foundational, and yet-to-be-formed new architectures and pathways.

Quantum computing may serve as a typical example: the concept was proposed around the 1980s, and now companies are actively pursuing it while university laboratories explore different approaches, placing it in a delicate 'boundary area.'

If you truly have a completely different implementation approach—one that is neither our superconducting qubits nor the ion traps pursued by many ventures—then it may indeed need to be nurtured within a university setting for a few years to allow it to mature.

Such directions are exceedingly difficult, and short-term commercial pressures could potentially stifle them.

However, once you determine that it is sufficiently solid and promising, it will most likely move toward commercialization: either you establish your own company, or it gets integrated into the industrial ecosystem.

Therefore, I do not wish to provide a one-size-fits-all answer. Leading companies are now increasingly engaging in fundamental research, and as AI begins to yield tangible returns, the value of these long-term investments is becoming evident.

What is more likely to change in the future is this: The academic and industrial sectors will reassess and rebalance their respective roles in conducting research.

However, I do believe that certain endeavors still require approximately a decade of relatively purer research cycles. Companies may become more hesitant about such long-term commitments due to pressures like IPO timelines or product development schedules.

Student 2: AI is advancing at an unprecedented pace. What mindset should young entrepreneurs like me adopt to avoid repeating past mistakes?

Brin: Let me give you a very specific piece of advice: when you have a cool idea for a wearable device, make sure it is thoroughly refined before embarking on any flashy stunts like skydiving or airship displays. (Laughter from the audience)

I’m not joking. Google Glass is a classic example of a cautionary tale.

I still love that concept to this day, but we rushed it to market too quickly back then. The cost had not been reduced to the appropriate level, and from a consumer perspective, the product was not yet mature or polished enough, yet I hurried to launch it.

At that time, I jumped the gun a bit and even harbored an illusion: I thought I was the next Steve Jobs, capable of creating something extraordinary and then stepping onto the stage to receive applause. That was a very typical mistake.

If I were to summarize it in one sentence: Many people think they are the next Steve Jobs—I made that mistake too. But Jobs was an extremely rare figure, difficult to replicate.

So my advice is: Fully “bake” your idea and ensure the product is sufficiently mature before hitting the external acceleration button. Once you step onto that treadmill, external expectations will rise rapidly, costs will escalate, timelines will lock in place, and you’ll have no choice but to deliver results by a certain point.

The problem, however, is that in many cases you may not be able to complete all the truly necessary tasks within that timeframe. Eventually, you fall into an 'acceleration spiral': expectations continue to grow, yet you haven’t given yourself enough time to digest, iterate, and refine.

Looking back, this is the type of mistake I most want to avoid.

Student 3: We see many AI companies relying on 'expanding data and increasing computing power' to enhance large language models. So if one day we run out of data and reach the limits of computing power, where do you think the next step will go? Will it involve changing architectures, such as finding an alternative to the Transformer? Or will it be a breakthrough in learning methods, like a training paradigm better than supervised or reinforcement learning? Or could it head in a completely different direction? Thank you.

Brin: Let me answer from my perspective. The directions you just listed are, in my view, already much more important than simply 'increasing computing power and expanding data.'

It's just that the idea of 'scaling up' tends to attract more attention because it’s very tangible—building data centers, purchasing chips. Additionally, companies like OpenAI and Anthropic have published numerous papers on scaling laws, drawing significant interest.

But if you carefully review the timeline, you’ll find that over the past decade or so, progress at the algorithmic level has actually outpaced scale expansion.

I remember when I was in graduate school, I saw a diagram illustrating the N-body problem—for instance, gravitational systems with multiple celestial bodies interacting and moving around. You would notice that from the 1950s to the 1990s, computational power increased dramatically due to Moore's Law, but the advancements in algorithms for solving the N-body problem far exceeded those gains in computational power.
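As a rough illustration of that claim (the numbers below are assumptions for the sake of arithmetic, not figures from the talk): roughly 30 years of Moore's-law doubling every 18 months gives about a millionfold hardware speedup, while moving from naive O(N²) pairwise force evaluation to an O(N log N) tree method such as Barnes-Hut yields an algorithmic speedup that grows with N and overtakes the hardware gain at large problem sizes.

```python
# Back-of-envelope comparison (assumed numbers, constant factors ignored):
# hardware gain from ~30 years of doubling every 18 months vs. the algorithmic
# gain of replacing O(N^2) pairwise interactions with an O(N log N) tree method.
import math

hardware_speedup = 2 ** (30 / 1.5)        # ~1,048,576x

for n in (10_000, 1_000_000, 100_000_000):
    naive = n * n                          # pairwise force evaluations
    tree = n * math.log2(n)                # tree-code evaluations
    print(f"N={n:>11,}  algorithmic ~{naive / tree:,.0f}x  vs  hardware ~{hardware_speedup:,.0f}x")
```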

You'll realize that companies like ours will never give up staying at the forefront of computing power. But computing power is more like the icing on the cake; the real meat and potatoes is the algorithmic work, where you nail down your methods, structures, efficiency, and learning mechanisms.

Jennifer: Let me add something on whether 'computing power will run out.' Universities already know very well what it feels like not to have enough computing power, since we cannot come close to matching the scale of computing resources that large companies have.

But this limitation forces us to innovate in many ways: When you have less computing power, how can you make models more efficient? How can you achieve 'more with less'? These are questions we’ve been actively researching here, and we’ve done a lot of work in this area.

Student 4: I would like to ask, which emerging technology do you believe has been significantly underestimated in terms of its long-term impact?

Brin: Oh, that's a tough question. Which emerging technology has been significantly underestimated? I obviously can't say AI because everyone is focused on it, so it’s hard to claim it’s underestimated.

Levin: It might also be underestimated.

Brin: It indeed might be underestimated as well, but it may not count as 'emerging' anymore, so I can’t use it as an answer (laughs).

Many people mention quantum computing and what it could bring in the future. But I don’t want to put all my bets on it either. While I do support our investments in quantum computing, there are too many variables involved.

Moreover, on the theory side, we still don't even know whether P equals NP. There are many unresolved questions at the frontier of computational theory. Currently, quantum algorithms mainly apply to problems with very specific structure.

So while I am a supporter in that area, it’s hard to definitively say 'that’s the one.'

If you really insist on me naming one direction, perhaps it would be the application of AI (and possibly quantum computing in the future) in materials science.

Just imagine if we could use entirely new materials to solve a series of performance boundaries — the potential is almost limitless.

Levin: I was actually thinking about materials science too. Because the concept of being 'underestimated' is subtle: nowadays, people pay significant attention to technological innovation opportunities. Take nuclear fusion and quantum technologies for instance — though they're not yet mature, it's not as if they're completely overlooked. And as for AI, there's no need to even mention it.

But materials, in my view, may indeed be an area whose importance has not yet been sufficiently highlighted.

The same applies to the fields of biology and health, especially with significant opportunities arising within molecular science. Compared to AI, it currently receives less attention, but molecular science is also undergoing a profound revolution.

Jennifer: Yes, I wanted to say something similar. I’ve observed that the spotlight keeps shifting; right now, it’s shining strongly on AI, but it used to focus on biology, and it shouldn’t stop there.

Many incredibly exciting developments are happening in synthetic biology. I believe we need to broaden the scope of that spotlight a bit.

Student 5: As we grow up, many of us develop some notion of 'self-limitation.' I’d like to ask you: During the process of building Google, were there any beliefs you once held firmly that you later had to adjust? How did such changes influence your decision-making? Thank you.

Brin: Hmm, self-limitation… I think there have been several moments in life where I was compelled to expand my horizons and re-understand the world.

I was born in Moscow, in the Soviet Union, where the environment was vastly different and very poor—almost everyone was poor. My parents, grandmother, and I shared a small apartment of about 400 square feet, and we had to climb five flights of stairs every day.

At that time, it was hard to even think about the outside world.

Later, we were fortunate. My father caught a glimpse of what lay beyond after attending a conference in Poland, where he learned about the Western world. He then decided to emigrate with our family. It was a hotly debated decision at home, but eventually, we made it to the United States.

When we arrived in the U.S., we were still very poor and essentially started from scratch. I had to learn a new language, make new friends, and rebuild my entire social network. That process was difficult, but it was also like a process of awakening.

Later, when I came to Stanford as a graduate student, it was somewhat similar. The professors gave me significant freedom and trusted me to manage my own time, coupled with the more open and daring atmosphere of California. To be honest, this tradition may not be as strong in California now as it used to be, but I won’t delve into complaints. All in all, it made me feel freer and more daring in my thinking.

So I may have answered your question in a roundabout way: you asked about 'self-limiting thoughts,' and I talked about experiences. But what I really want to convey is this: I’ve been through several painful transitions, and each one forced me to broaden my perspective on the world.

Looking back, these changes were often worth it. And because of these experiences, I may have come to believe earlier that those uncomfortable changes often end up creating more space and opportunities.

Student 6: Your achievements are things most people could hardly experience in a lifetime. Looking back now, how do you define a 'good life'? Beyond these accomplishments, what does a 'good life' mean to you?

Brin: What is a good life… I think first of all, it’s being able to truly enjoy your life and also enjoy what you’ve created.

I deeply value my family. Today, I have a child here with me, and my girlfriend is also present. I’m very grateful for the quality time I get to spend with them.

I'm also grateful that at this stage of my life, I'm still being challenged by intellectually stimulating problems. Actually, I had 'retired' about a month before the pandemic, which might have been the worst decision I've ever made (laughs).

My idea at the time was: I’m going to sit in coffee shops and study physics because during that period, physics was what fascinated me the most. But then the pandemic hit, and all the coffee shops closed. (Audience laughter)

I ended up cooped up at home all the time, going a bit stale, and I could feel myself slipping; my mind wasn't as sharp. Eventually, I thought: No, I need to get back to work.

At that point, the office hadn’t fully reopened yet, but after a few months, some people started going in occasionally, so I began going in too. Gradually, I became more and more engaged, eventually working on what we later called the Gemini project, and it was incredibly exciting.

For me, having an outlet for continuous output in terms of technology and creativity is very important and also highly rewarding.

If I were to remain retired permanently, I think that would be a pretty big mistake.

Student 7: If you had to share something 'very human, very relatable' to make everyone feel a bit more at ease, what would you share?

Brin: I sometimes find myself hesitant to ask questions for fear of appearing uninformed. So let me start with one: What exactly is Management Science and Engineering? (Laughter from the audience)

Is it one of those Dilbert-esque 'I want to be a manager' things? What exactly do you study?

Jennifer: It's a major, a department.

Brin: A department? So what do you study? (Audience laughter) What specific courses do you take?

Jennifer: Let me explain. This year marks the 25th anniversary of the department. It was formed by merging three disciplines: Industrial Engineering, Operations Research, and Engineering Economic Systems. You can think of it as a triangle encompassing these fields.

Many schools might call it Industrial Engineering or Operations Research separately, but at Stanford, we package them together as the Department of Management Science and Engineering. This department is also the organizer of the ETL course, and today’s class is supported by them.

Brin: Alright, I get it now. So the 'very human' part about me is: I didn't even know this (laughter). But I'm glad I asked.

Jennifer: Let me add one more thing: the reason I come across as so 'relatable' is that I get to explain these things to Sergey Brin, and he actually listens carefully. (Applause)

Student 8: How do you manage your time to keep up with changes? You mentioned earlier that you focus on staying sharp and keeping up with advancements in AI. So, what books do you read? What podcasts do you listen to while driving?

Brin: Well, I’ll try not to make this sound like an advertisement. (Laughter) One of the things I do most often now is chat with the voice version of Gemini while driving.

I use it as a conversation partner on the road, asking random questions whenever they come to mind.

But don’t rush to use the public version just yet. The public version isn’t the better internal version I’m using. Give us a few more weeks, and we’ll release the version I currently have access to. The model behind the current public version feels a bit outdated, which is kind of embarrassing to admit. (Laughter)

The questions I ask are quite specific, such as: if I were to build a data center, how many hundreds of megawatts of electricity would I need? How should I estimate the costs of different power-supply options? I discuss and work through these calculations with it while driving.
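For a sense of what such a back-of-envelope estimate looks like, here is a minimal sketch; every input (accelerator count, power draw per accelerator, PUE, electricity price) is an assumed placeholder rather than a figure Brin mentioned.

```python
# Rough data-center power and electricity-cost estimate. All inputs are
# assumed placeholders, not figures from the talk.
accelerators = 200_000           # assumed AI accelerators at the site
watts_each = 1_000               # assumed draw per accelerator incl. host overhead (W)
pue = 1.2                        # assumed power usage effectiveness (cooling, losses)

it_power_mw = accelerators * watts_each / 1e6        # 200 MW of IT load
total_power_mw = it_power_mw * pue                   # ~240 MW at the meter
print(f"Total facility power: ~{total_power_mw:.0f} MW")

price_per_mwh = 60               # assumed electricity price (USD/MWh)
annual_cost = total_power_mw * 24 * 365 * price_per_mwh
print(f"Annual electricity cost: ~${annual_cost / 1e6:.0f}M")  # ~$126M/yr
```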

Well, I suppose this does still sound a bit like an advertisement for Gemini.

I also listen to some podcasts. I really enjoy 'All-In'; the hosts are great. Recently, we met Ben Shapiro in Florida—he’s also a podcaster. It’s actually quite interesting meeting podcasters in person.

But ultimately, I prefer interactive thinking while on the road, which is why I lean towards conversing with AI. Though saying that feels a bit odd.

Levin: This is truly a glimpse into the future. It’s a fitting conclusion. I think it won’t be long before all of us might be doing the same.


Editor/jayden


