Can Students Make STEM Progress With Large Language Models (LLMs)? An Empirical Study of LLM Integration Within Middle School Science and Engineering Practice
Qing Guo, Junwen Zhen, Fenglin Wu, Yanting He, Cuilan Qiao

The rapid development of large language models (LLMs) presents opportunities for the transformation of science and STEM education. However, research on LLMs in this area remains exploratory, characterized by discussions and observations rather than empirical investigation. This study presents a framework for incorporating LLMs into Science and Engineering Practice (SEP), illustrated by a case study on submarine construction and validated through a four-week quasi-experiment. The research employed conditional cluster sampling, selecting two homogeneous natural classes from a middle school in China to serve as the experimental and control groups. The key experimental variable was the inclusion of LLMs in the SEP project. A set of validated and self-developed assessment instruments was used to measure students' STEM learning outcomes. Statistical analyses, including pre- and post-test paired comparisons within classes and ANCOVA for between-class differences, were performed to evaluate the effects of LLM integration. The results showed that students participating in LLM-integrated SEP significantly improved their mastery of scientific knowledge, attitudes towards science, perceived usefulness of technology, understanding of engineering, computational thinking skills, and problem-solving abilities. In contrast, students participating in traditional SEP exhibited weaker knowledge acquisition, differences in their understanding of engineering concepts, and a lack of development in computational thinking and problem-solving skills. This study represents a pioneering effort to integrate LLMs into science education and provides a framework and case reference for deeper application of LLMs in the future.