5 History Lessons For Nvidia From The 80-Year History Of AI

Jensen Huang, Nvidia’s chief executive officer and co-founder, has a perfect crystal ball, and it tells him that his company is creating a new industry, the AI industry. On Nvidia’s Wall Street-rousing earnings call, he predicted trillions of dollars in new investments, doubling the number of data centers in the world over the next five years.

Predictions are difficult, especially about the future. We have lots of data at our disposal, however, that may help us make educated guesses about the future or, at the very least, highlight where and how and why past predictions failed to materialize as expected. It’s called history.

For Nvidia today, it’s the 80-year history of artificial intelligence or AI, punctuated by funding peaks and valleys, marked by rival approaches to research and development, and expressed in public fascination, anxiety, and excitement.

AI history started in December 1943, when neurophysiologist Warren S. McCulloch and logician Walter Pitts published a paper in the (relatively new) tradition of mathematical logic. In “A Logical Calculus of the Ideas Immanent in Nervous Activity,” they speculated about networks of idealized and simplified neurons and how they could perform simple logical operations by transmitting or failing to transmit impulses.
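To make the idea concrete, here is a minimal sketch, in Python, of a threshold unit in the spirit of McCulloch and Pitts: it “fires” (outputs 1) only when the weighted sum of its binary inputs reaches a threshold, which is enough to reproduce simple logical operations such as AND and OR. The weights and thresholds below are illustrative choices for this example, not values taken from the 1943 paper.

```python
# A McCulloch-Pitts-style threshold unit (illustrative, not the 1943 formalism).
# Inputs are 0/1; the unit outputs 1 when the weighted input sum meets a threshold.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold, else 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Logical AND: both inputs must fire.
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# Logical OR: either input suffices.
assert mp_neuron([0, 1], [1, 1], threshold=1) == 1
assert mp_neuron([0, 0], [1, 1], threshold=1) == 0
```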

Ralph Lillie, who was establishing the field of histochemistry at the time, described the work of McCulloch and Pitts as “the attribution of ‘reality’ to logical and mathematical models” in the absence of “experimental facts.” Later, when the paper’s assumptions failed to pass empirical tests, MIT’s Jerome Lettvin observed that while the fields of neurology and neurobiology ignored the paper, it inspired “those who were destined to become the aficionados of a new venture, now called artificial intelligence.”

Indeed, the McCulloch and Pitts paper was the inspiration for “connectionism,” the specific variant of artificial intelligence dominant today, now called “deep learning.” Regardless of its non-existent connection to how the brain actually works (of which we still don’t know a whole lot), the method of statistical analysis—”artificial neural networks”—underpinning this AI variant is usually described by AI practitioners and commentators as “mimicking the brain.” No less an authority than Demis Hassabis, a leading AI practitioner, declared in 2017 that McCulloch and Pitts’ fictionalized account of how the brain works and similar work “continue to provide the foundation for contemporary research on deep learning.”

Lesson #1: Beware of confusing engineering with science, confusing science with speculation, and confusing science with papers studded with mathematical symbols and formulas. Most important, resist the temptation to fall for the “we are as gods” hallucination. It’s a persistent and widespread arrogance that has served as the catalyst for tech bubbles and the periodic rise of irrational exuberance about artificial intelligence over the last 80 years.

Which leads us to artificial general intelligence or AGI, the notion that soon, very soon, we are going to have machines with human-like intelligence or even super-intelligence.

In 1957, AI pioneer Herbert Simon announced that “there are now in the world machines that think, that learn and that create,” and predicted that within ten years a computer would be a chess champion. In 1970, another AI pioneer, Marvin Minsky, said confidently that “in from three to eight years we will have a machine with the general intelligence of an average human being… Once the computers got control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.”

Anticipating the imminent arrival of AGI can move mountains, even government spending and policies. In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project, with the goal of developing thinking machines that would reason like humans. In response, after a lengthy “AI Winter,” DARPA began funding AI research again in 1983 through the Strategic Computing Initiative, with the goal of developing machines that would “see, hear, speak, and think like a human.”

It took about a decade and a few billion dollars for enlightened governments everywhere to get enlightened not just about AGI but also about the limitations of good old-fashioned AI. But in 2012, connectionism finally triumphed over other AI variants, and a new flood of predictions about imminent AGI washed over the world. Moving beyond run-of-the-mill AGI, OpenAI announced in 2023 that superintelligent AI, “the most impactful technology humanity has ever invented,” could arrive this decade and “could lead to the disempowerment of humanity or even human extinction.”

Lesson #2: Beware of the shiny new new thing and examine it carefully, thoughtfully, intelligently. It may not necessarily be different from the previous rounds of speculation about how close we are to endowing machines with human intelligence. Just ask one of the “godfathers” of deep learning, Yann LeCun: “We're missing something big to get machines to learn efficiently, like humans and animals do. We don't know what it is yet.”

AGI has been “just around the corner” so many times before, over so many years, because of the “fallacy of the first step.” Yehoshua Bar-Hillel, a pioneer of machine translation and one of the first to talk about the limitations of machine intelligence, pointed out that many people think that if someone demonstrates a computer doing something that until very recently no one thought it could perform, even if it’s doing it badly, it is only a matter of some further technological development before it will perform flawlessly. You only need to be patient, so goes the widespread assertion, and eventually you will get there. But reality proves otherwise, time and time again, cautioned Bar-Hillel, already in the mid-1950s (!).

Lesson #3: The distance from the inability to do something to doing it badly is usually much shorter than the distance from doing something badly to doing it correctly.

In the 1950s and 1960s, many people were trapped in the fallacy of the first step because of the rapid advancement in the processing speed of the semiconductors powering computers. As hardware progressed each year along the reliable upward trajectory of “Moore’s Law,” it was generally assumed that machine intelligence would advance in lockstep with the hardware.

In addition to ever-increasing hardware performance, however, a new phase in the evolution of AI introduced two new elements: software and data collection. Starting in the mid-1960s, expert systems brought a new focus on capturing and programming real-world knowledge, specifically the knowledge of specialized domain experts and, in particular, their rules of thumb (heuristics). Expert systems grew in popularity, and by the 1980s it was estimated that two-thirds of Fortune 500 companies applied the technology in daily business activities.
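As a rough illustration of that rule-based style, here is a toy sketch in Python. The rules, facts, and the diagnose helper are invented for this example and do not correspond to any particular historical expert system; the point is simply that knowledge lives in explicit if-then heuristics that humans must write and maintain.

```python
# Toy rule-based "expert system": each rule pairs a set of required facts
# with a recommendation. All rules here are invented for illustration.

rules = [
    ({"engine_cranks": False, "battery_voltage_low": True}, "replace the battery"),
    ({"engine_cranks": True, "fuel_level_empty": True}, "refuel the car"),
]

def diagnose(facts):
    """Return the recommendation of every rule whose conditions all hold."""
    return [action for conditions, action in rules
            if all(facts.get(key) == value for key, value in conditions.items())]

print(diagnose({"engine_cranks": False, "battery_voltage_low": True}))
# ['replace the battery']
```

Every new symptom or exception means another hand-written rule, which is exactly the knowledge-acquisition bottleneck described next.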

By the early 1990s, however, this AI bubble had completely deflated. Numerous AI startups went out of business, and corporations froze or cancelled their AI projects. As early as 1983, expert systems pioneer Ed Feigenbaum had identified the “key bottleneck” that led to their demise: scaling up the knowledge acquisition process, “a very tedious, time-consuming, and expensive procedure.”

Expert systems also suffered from the challenge of knowledge accumulation. The constant need to add and update rules made them difficult and costly to maintain. They also displayed the ever-present deficiencies of thinking machines compared to human intelligence. They were "brittle," making grotesque mistakes when given unusual inputs, could not transfer their expertise to new domains, and lacked understanding of the world around them. At the most fundamental level, they could not learn—from examples, from experience, from the environment—the way humans learn.

Lesson #4: Initial success, i.e., widespread adoption by corporations and government agencies and vast public and private investment, even over a period of ten or 15 years, does not necessarily lead to the creation of an enduring “new industry.” Bubbles tend to burst.

Throughout the ups and downs, the hype and the setbacks, two completely different approaches to AI development have competed for the attention of academics, public and private investors, and the media. For more than four decades, the rule-based Symbolic AI approach dominated. But the examples-based, statistical analysis-powered connectionism, the other major approach to AI, had its share of AI hype and glory for brief periods in the late 1950s and the late 1980s.

Until the rebirth of connectionism in 2012, AI research and development had been driven mostly by academics. As is typical in academia, where dogmas prevail (see so-called “normal science”), the choice between Symbolic AI and connectionism was treated as binary. Geoffrey Hinton devoted most of his 2019 Turing Lecture to the trials and tribulations that he and the handful of deep learning devotees endured at the hands of mainstream AI and machine learning academics. In typical academic fashion (my dogma is the true gospel), Hinton also made sure to dismiss reinforcement learning and the work of his colleagues at DeepMind.

Just a few years later, in 2023, DeepMind took over AI at Google (and Hinton left his position there), mostly in response to the success of OpenAI, which also uses reinforcement learning as a component of its AI development. There is no indication, however, that DeepMind, OpenAI, or any of the numerous AI “unicorns” working towards AGI is focused on anything other than today’s dogma, large language models. After 2012, AI development by and large shifted from academia to the private sector, but betting on just one approach is still the major preoccupation of the field.

Lesson #5: Consider not putting all your AI eggs in the same basket.

There is no question that Huang is an exceptional chief executive and that Nvidia is an exceptional company, one that moved rapidly to capitalize on the AI opportunity that suddenly presented itself over a decade ago, when the parallel processing of its chips (originally designed for the efficient rendering of video games) proved well suited to deep learning calculations. Ever vigilant, Huang tells his troops that “our company is thirty days from going out of business.”

In addition to staying paranoid (remember Andy Grove’s Intel?), the lessons from the 80-year history of AI may also help guide Nvidia through the ups and downs of the next thirty days or thirty years.
