Researchers at the Massachusetts Institute of Technology (MIT) have discovered striking parallels in how humans and advanced machine-learning (ML) models solve tasks that demand complex reasoning. The MIT neuroscientists, led by Associate Professor Evelina Fedorenko, found that the problems that require the most time and effort from human volunteers are the same ones that demand the most “tokens” (a measure of internal computation) from the new reasoning models. This convergence suggests that, in at least one important way, these modern AI models take a human-like approach to thinking and problem-solving.
This finding is particularly significant because the reasoning models were not explicitly designed to mimic human cognition; their builders prioritized robust performance and accuracy across varied conditions. That such a similarity emerged on its own points to a potentially fundamental efficiency in step-by-step reasoning shared by biological and artificial neural networks. In the study, humans and reasoning models received the same diverse set of problems, including numerical arithmetic and intuitive reasoning tasks, and the effort each needed to reach a solution was tracked systematically.

This research bridges machine learning and human cognitive science. By showing that imposing computational constraints on ML models leads them to adopt human-like shortcuts (such as hierarchical and counterfactual reasoning), the work suggests that ML is becoming more “cognitive” and less purely statistical. That shift matters for AI safety and ethics, because it lays a foundation for building machines whose thinking is more transparent and understandable to their creators.
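To illustrate the kind of effort comparison the study describes, the sketch below correlates human solve times with a model's token counts across a shared problem set. This is a minimal sketch, not the authors' actual analysis pipeline: the numbers are invented placeholders, and a Spearman rank correlation is used only as one plausible way to test whether problems that are harder for humans also cost the model more tokens.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: mean human solve time (seconds) and mean number of
# tokens a reasoning model emitted for the same eight problems. These
# values are illustrative placeholders, not figures from the MIT study.
human_time_s = np.array([4.2, 7.9, 12.5, 3.1, 21.0, 9.4, 15.8, 6.3])
model_tokens = np.array([180, 410, 760, 150, 1320, 500, 905, 330])

# A rank correlation asks whether problems that take humans longer also
# tend to cost the model more tokens, without assuming a linear relation.
rho, p_value = spearmanr(human_time_s, model_tokens)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A strong positive rank correlation in data like these would be the signature of convergence described above: human effort and model computation rising and falling together across the same problems.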





