Unit Test Generation Using Large Language Models: A Systematic Literature Review
Articles
Dovydas Marius Zapkus
Vilniaus universitetas
Asta Slotkienė
Vilniaus universitetas
Published 2024-05-13
https://doi.org/10.15388/LMITT.2024.20

How to cite

Zapkus, D.M. and Slotkienė, A. (2024) “Unit Test Generation Using Large Language Models: A Systematic Literature Review”, Vilnius University Open Series, pp. 136–144. doi:10.15388/LMITT.2024.20.

Abstract

Unit testing is a fundamental aspect of software development, ensuring the correctness and robustness of code implementations. Traditionally, unit tests are manually crafted by developers based on their understanding of the code and its requirements. However, this process can be time-consuming, error-prone, and may overlook certain edge cases. In recent years, there has been growing interest in leveraging large language models (LLMs) to automate the generation of unit tests. LLMs such as GPT (Generative Pre-trained Transformer), CodeT5, StarCoder, and LLaMA have demonstrated remarkable capabilities in natural language understanding and code generation tasks. By using LLMs, researchers aim to develop techniques that automatically generate unit tests from code snippets or specifications, thus optimizing the software testing process. This paper presents a literature review of articles that use LLMs for unit test generation. It also discusses the history of the most commonly used large language models and their parameters, including when each was first applied to code generation tasks. The results of this study present the large language models used for code and unit test generation and their increasing popularity in the code generation domain, indicating great promise for the future of unit test generation using LLMs.
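As an illustration of the prompt-based approach the surveyed papers share, the following is a minimal Python sketch, not taken from the paper: the function under test is embedded in a natural-language prompt and a model is asked to emit pytest tests. The names call_llm and generate_unit_tests, the clamp example, and the prompt wording are hypothetical placeholders for whichever model and API (GPT, CodeT5, StarCoder, LLaMA, and so on) a given study uses.

# A minimal sketch (illustrative, not from the paper) of prompt-based
# unit test generation with an LLM: the function under test is embedded
# in a natural-language prompt and the model is asked to emit pytest
# tests. call_llm is a hypothetical stand-in for a real model API;
# here it returns a canned response so the script runs offline.

FUNCTION_UNDER_TEST = '''\
def clamp(value: int, low: int, high: int) -> int:
    """Restrict value to the inclusive range [low, high]."""
    return max(low, min(value, high))
'''

PROMPT_TEMPLATE = (
    "Write pytest unit tests for the following Python function. "
    "Cover typical inputs and edge cases such as values at and beyond "
    "the boundaries. Return only the test code.\n\n{code}"
)


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real chat-completion client."""
    # Canned response standing in for model output.
    return (
        "from clamp_module import clamp\n\n"
        "def test_within_range():\n"
        "    assert clamp(5, 0, 10) == 5\n\n"
        "def test_below_lower_bound():\n"
        "    assert clamp(-3, 0, 10) == 0\n\n"
        "def test_above_upper_bound():\n"
        "    assert clamp(42, 0, 10) == 10\n"
    )


def generate_unit_tests(code: str) -> str:
    """Build the prompt and return the model's generated test file."""
    return call_llm(PROMPT_TEMPLATE.format(code=code))


if __name__ == "__main__":
    tests = generate_unit_tests(FUNCTION_UNDER_TEST)
    # In pipelines of this kind, generated tests are typically written
    # to a file and executed (e.g. with pytest) to filter out tests
    # that fail to compile or pass.
    print(tests)

In practice the canned response would be replaced by a call to an actual model, and the generated file would be validated by running it, since LLM output may not compile or may assert incorrect behavior.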

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.
