The Minds, Brains, and Machines Initiative: Driving Innovation at the Intersection of Natural and Artificial Intelligence

NYU Center for Data Science
Jul 21, 2023
Minds, Brains, & Machines Initiative

The studies of biological and artificial intelligence, which have spent the occasional decade pretending not to have much to say to each other, are entering, thanks to the current wave of AI progress, a new symbiosis. Novel AI techniques are shedding new light on how the mind and brain really work, and there's renewed excitement that we'll be able to build machines that learn and think like people do. The Minds, Brains, & Machines (MBM) initiative at NYU, sponsored and supported by the Center for Data Science, was created to serve as a world-class nexus for this kind of research, helping to facilitate what may be, in an era of explosive AI progress, the world's most important cross-disciplinary conversation.

In a recent conversation with Assistant Professor of Psychology and Data Science Brenden Lake and Professor of Psychology Todd Gureckis, co-directors of MBM, we delved into their vision for the initiative, some of the research being done by the affiliated faculty, and the potential impacts on the data science community. The discussion brought to light their excitement about how MBM could reshape our understanding of both natural and artificial intelligence.

The MBM initiative aims to establish deep connections between neuroscience, cognitive science, and AI, focusing on the junction of these three domains. By facilitating interdisciplinary interactions, MBM seeks to foster breakthroughs that could revolutionize our understanding of the mind, the brain, and AI.

MBM, founded in October 2021, is a nexus in another way, too. In addition to bringing together biological and artificial intelligence research, MBM integrates both understanding intelligence and engineering it.

“Those two components unify a number of different disciplines,” said Lake. Joining these two components facilitates “the back and forth between the mechanistic and technical understanding of intelligence,” which is central to the purpose of MBM.

On NYU's campus, in fact, you can actually see the divide: the "understanding" faction, which includes neuroscience, psychology, philosophy, and linguistics, is clustered at the same intersection on Washington Place, while the "engineering" disciplines, such as computer science and CDS, occupy a different building a ten-minute walk away.

“All of these different disciplines are engaged in either ‘understanding’ or ‘engineering’ intelligence, but there was not a unified initiative to bring together researchers to work together on this common enterprise.”

In its two years of existence, MBM’s mission has only become more important and relevant, in part because of the explosive recent growth of AI.

Both understanding and engineering intelligence are now more relevant than ever, and this is true in ways that work both within and outside the LLM (large language model) framework, upon which so many recent headline-grabbing developments are based.

Within the LLM framework, one of the most important outstanding problems is the ‘black box problem’, which refers to the fact that the inner workings of LLMs are not transparent in the way regular software is. This makes interpretability, alignment, safety, and responsible usage highly important areas of research.

Lake and Gureckis emphasized the need to align AI technologies with human intelligence, thus ensuring that these systems can more “fruitfully interact with us as people.” One key task of this project is understanding AI systems’ behavior. This is the focus of incoming Assistant Professor/Faculty Fellow Umang Bhatt, who specializes in human-machine collaboration.

“The popularity of large language models makes understanding how such AI systems affect our decision-making abilities more important than ever,” said Bhatt. “And an interdisciplinary collaboration like MBM that is tackling these issues is thus necessary and timely.”

Lake and Gureckis made clear, however, that, in addition to working within the LLM framework, there is equally important work to be done outside it. The public splash LLMs have made in the past year has overshadowed deep unresolved issues in the study of intelligence that get less airtime but may be integral to moving forward in both engineering and understanding intelligence.

According to Lake and Gureckis, the capabilities of systems like ChatGPT can obscure how far intelligence remains from being truly "solved." It's in the interest of companies like OpenAI and Google, which have products to promote, to imply that reaching artificial general intelligence is only a matter of time, or a matter of scaling. But this, according to Lake and Gureckis, is more salesmanship than science: there are deep and fundamental aspects of intelligence that remain neither understood nor engineered.

As Yann LeCun likes to point out, there are things a dog can do that state-of-the-art language models can't, and with orders of magnitude less training data. It therefore seems unlikely that simply scaling language models will get us where we want to go if we're trying to truly understand, and build, intelligence.

Getting there is likely to require totally new discoveries coming from unexpected directions — which is exactly what MBM is designed to make happen.

That said, the drama of recent AI breakthroughs from industry does, Gureckis and Lake admit, make it an exciting time to be working in the field. But they point out that the deeper reason for the field's explosion is the deep learning paradigm of the past decade, which has allowed researchers from disparate specialties to talk to each other.

“There’s a fusion happening,” said Lake, “where the dividing lines that have traditionally been held up pretty strongly between disciplines don’t really hold anymore.”

Gureckis agreed. As recently as ten years ago, researchers in robotics, linguistics, and vision wouldn't have had a lot to say to each other, "because linguists were talking about Chomskian generative grammar, and roboticists were talking about dynamic control theory, and there wasn't a lot that they could share."

Or, said Lake, consider what used to be the process of becoming a vision researcher. “You’d need to learn — especially on the engineering side — all of these vision pipeline tools, and special-purpose features you’d need to compute on images in order to process them and use them.” Similar overhead was required to do research in natural language processing. Requirements like these created a very strong pressure to specialize.

Deep learning changed all that. “There’s been a lot of progress on the underlying technology that can allow you to train extremely sophisticated models that can handle all sorts of different data,” said Lake. “Then you can compare those models to human abilities, and figure out what you’ve learned and what’s missing.”

Gureckis added that deep learning as a lingua franca has spread even across neuroscience. “It’s the same thing,” he said. “Deep neural networks are used all over the place to explain and interpret the brain. There’s a common language that’s emerging.”

Gureckis and Lake mused that the real frontiers of research might lie beyond these approaches. Yes, deep learning has shaped the past decade of developments, but the inclination within MBM discussions, they said, is, if anything, to look beyond the current toolkit, to explore a range of computational methods and innovative ideas that could define the next decade of AI.

“This requires this kind of cross-disciplinary communication and novel computational methods that may not be the focuses of, you know, Google and OpenAI right now,” said Gureckis. “Because we’re not dealing with the scaling problem the way that they are — we’re dealing with the actual intelligence problem.”

Intelligence is not going to be understood “just by stumbling upon the right prompt for your large language model, like ‘think step by step’ or ‘pretend you’re a world expert’,” Lake said with a laugh, “which a surprising amount of research focus is on.”

“So, looking at the big picture, it’s not that we don’t know the right prompt,” said Lake, “it’s that we’re looking for the key ingredients of intelligence that are missing.”

At the forefront of this search are MBM-affiliated researchers using precisely the interdisciplinary approaches the initiative was created to foster. Both Grace Lindsay, Assistant Professor of Psychology and Data Science, and SueYeon Chung, Assistant Professor of Neural Science, build artificial neural networks inspired by biologically plausible constraints. Lindsay uses these artificial models to generate hypotheses about, and develop new tools to study, biological brains. Chung, meanwhile, draws on principles in neural network theory, theoretical physics, and high-dimensional geometry to develop theories and analysis methods to characterize the structure of information in populations of neurons.

The project of MBM “is timely more than ever,” said Chung, “due to the concurrent advancements in computational and experimental neuroscience, alongside the rapid development of AI.”

All this is possible because of the unique environment of the Center for Data Science, founded by Yann LeCun. Capitalizing on the longstanding computational bent of NYU's neuroscience and psychology departments, CDS has over the past decade brought together some of the most notable names in the world across an impressive array of intelligence-related fields, from David Chalmers in philosophy to Gary Marcus and Sam Bowman in AI, to create a research environment unlike any other.

“The glue of computation is really the thing that brings people together to speak the same language together from disparate fields,” said Gureckis. “And when you combine that with all these engineering advances, it just makes it a really great time for science.”

NYU, in fact, is “becoming a place that rivals somewhere like MIT that’s always had a longstanding strength in these types of areas,” said Gureckis. “We have the same depth of outstanding faculty that work on these topics, such that a new PhD student would have lots of people to work with, collaborate with, and learn from, making it a really good place to build a career.”

The initiative isn't only about driving research breakthroughs, though. It also places emphasis on community engagement, providing opportunities for students to engage with these exciting developments. The directors invited interested individuals to join the community, attend the events, and speak with the researchers.

One such opportunity is an upcoming NYU-only Summer Poster Conference on July 21. Hosted by CDS, the event will offer attendees a chance to interact with leading researchers in the field.

“This will bring together students and faculty working on these topics from multiple departments, have them present their work, learn about each other’s expertise, and maybe find collaborators,” said Lake.

MBM also recognizes the work of undergraduate students through the Glushko Prize for Outstanding Undergraduate Honors Thesis in Minds, Brains, and Machines, awarded to an NYU student "who has conducted an Honors thesis in computational cognitive science or otherwise at the intersection of human and machine intelligence." Last year, Alexa Tartaglini, who worked on the differences between human and machine vision, shared the prize with Siyue (Brenda) Qui and her thesis on visual crowding.

The Minds, Brains, & Machines initiative at CDS offers a transformative vision for the future of AI, neuroscience, and cognitive science. By fostering interdisciplinary communication and pioneering novel computational methods, it stands to redefine our understanding of intelligence and push the frontiers of data science research.

Gureckis noted that the explosion of the field is remarkable, and has even bled into non-specialists. “Having been in this field for almost several decades, the public interest in this topic has remarkably grown in the last several years,” he said. “You know, for many years in my life, if I got in a cab, and the cab driver asked me what I did, I had an awkward conversation that nobody understood. But now everyone wants to talk about it with me in ways that are more sophisticated than I ever would have imagined.”

By Stephen Thomas
