Exploring LLMs with RAG to simplify access to institutional norms at the Universidade do Estado de Santa Catarina
DOI: https://doi.org/10.5902/2318133895063

Keywords: Large Language Models, Retrieval-Augmented Generation, Artificial Intelligence, Academic Regulations, UDESC

Abstract
The dispersion of academic regulations hinders access to essential information in universities. This study aims to investigate the use of large language models – LLM – integrated with the retrieval-augmented generation – RAG – technique to simplify access to normative documents at the Universidade do Estado de Santa Catarina. Information retrieved from regulatory documents was used as RAG context for questions submitted to two LLMs: Meta Llama 3.1 and Google Gemini 2.5. The evaluation, based on the Ragas framework, indicated that the quality of the retrieved context is essential for accurate and faithful responses. Gemini demonstrated greater robustness in noisy contexts, achieving higher fidelity and correctness scores, highlighting the potential of this approach to support academic processes.
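The retrieve-then-generate flow described above can be sketched in a few lines. This is a minimal illustration, not the study's implementation: a toy keyword-overlap retriever stands in for the vector-index search (the study's references list FAISS for this role), and the regulation snippets, question, and prompt wording are hypothetical placeholders.

```python
# Minimal sketch of a RAG pipeline: retrieve relevant document chunks,
# then assemble an augmented prompt for the LLM.

def score(question: str, chunk: str) -> int:
    """Toy relevance score: number of lowercase words shared by question and chunk."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks with the highest overlap score (stands in for FAISS search)."""
    return sorted(chunks, key=lambda ch: score(question, ch), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Assemble the context-augmented prompt that would be sent to the LLM."""
    joined = "\n".join(f"- {c}" for c in context)
    return (f"Answer using only the context below.\n"
            f"Context:\n{joined}\n"
            f"Question: {question}")

# Hypothetical regulation snippets, standing in for chunks of normative documents.
chunks = [
    "Resolution 010/2020 sets the minimum course load for undergraduate programs.",
    "The library regulation defines loan periods for students and staff.",
    "Resolution 031/2019 governs academic mobility between campuses.",
]
question = "What resolution sets the minimum course load?"
prompt = build_prompt(question, retrieve(question, chunks))
print(prompt)
```

In the study's setting, the retriever would return embedded chunks of institutional norms and the prompt would go to Llama 3.1 or Gemini 2.5; the Ragas metrics then judge the answer against the retrieved context and a reference answer.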
References
COMANICI, Gheorghe et al. Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities. arXiv preprint, New York, 2025, arXiv:2507.06261.
DOUZE, Matthijs et al. The FAISS Library. IEEE Transactions on Big Data, Piscataway, 2025, p. 1-17.
ES, Shahul; JAMES, Jithin; ESPINOSA-ANKE, Luis; SCHOCKAERT, Steven. RAGAs: Automated Evaluation of Retrieval Augmented Generation. CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 18, 2024. Conference proceedings … St. Julians: Association for Computational Linguistics, 2024.
GRATTAFIORI, Aaron et al. The Llama 3 Herd of Models. arXiv preprint, New York, 2024, arXiv:2407.21783.
LANGCHAIN TEAM. LangChain: The platform for reliable agents. 2025. Available at: https://www.langchain.com/langchain. Accessed: 5 Jan. 2026.
LEWIS, Patrick et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS, 34, 2020. Proceedings … Vancouver: Neural Information Processing Systems Foundation, 2020.
LIMA, Saint-Clair da Cunha. Assistente de busca: uma abordagem RAG para busca semântica em documentos textuais da Assembleia Legislativa do Rio Grande do Norte. Natal: UFRN, 2025. 169f. Dissertation (Master's in Information Technology), Universidade Federal do Rio Grande do Norte.
LOPES, Álvaro José et al. Chat diário oficial. 2025. Grupo Raia. Available at: https://grupo-raia.org/projetos/projeto/chat-diario-oficial. Accessed: 5 Jan. 2026.
MIKOLOV, Tomas; CHEN, Kai; CORRADO, Greg; DEAN, Jeffrey. Efficient estimation of word representations in vector space. arXiv preprint, New York, 2013, arXiv:1301.3781.
MINAEE, Shervin et al. Large language models: a survey. arXiv preprint, New York, 2024, arXiv:2402.06196.
NETO, Milton Gama. Construindo aplicações personalizadas com LLM através de RAG (Retrieve Augmented Generation). Medium, [S.l.], 2024. Available at: https://medium.com/data-hackers/construindo-aplicações-personalizadas-com-llm-através-de-rag-retrieve-augmented-generation-6f3a3df7b6de. Accessed: 9 Jan. 2026.
OLIVEIRA, Luana Ferreira. Chatbot como ferramenta de apoio ao acesso às informações acadêmicas da UFSM. Santa Maria: UFSM, 2024. 47f. Undergraduate thesis (Information Systems), Universidade Federal de Santa Maria.
PESSANHA, Gabriel Rodrigo Gomes; VIEIRA, Alessandro Garcia; BRANDÃO, Wladmir Cardoso. Large language models (LLMs): a systematic study in administration and business. Revista de Administração Mackenzie, São Paulo, v. 25, n. 6, 2024, eRAMD240059.
SAHA, Binita; SAHA, Utsha. Enhancing international graduate student experience through AI-driven support systems: a LLM and RAG-based approach. INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ITS APPLICATION, 6, 2024. Conference proceedings … Pristina: ICONDATA, 2024.
SILVA, Vinícios Facin. Assistente virtual para coordenador de curso: aplicação de LLMs à gestão de perguntas frequentes em um curso de graduação. Blumenau: UFSC, 2025. 71f. Undergraduate thesis (Control and Automation Engineering), Universidade Federal de Santa Catarina.
SINGER-VINE, Jeremy. Pdfplumber. 2023. Available at: https://github.com/jsvine/pdfplumber. Accessed: 10 Nov. 2025.
VASWANI, Ashish et al. Attention is all you need. Advances in Neural Information Processing Systems, Long Beach, v. 30, 2017, p. 5998-6008.
VIBRANTLABS. Ragas: Supercharge your LLM application evaluations. 2024. Available at: https://github.com/vibrantlabsai/ragas. Accessed: 5 Jan. 2026.
VOGEL, Daniela; RAMOS, Alexandre Moraes; FRANZONI, Ana Maria Bencciveni. Transformando a educação com Large Language Models (LLMs): benefícios, limitações e perspectivas. Caderno Pedagógico, Curitiba, v. 22, n. 4, 2025, e13846.
WEI, Jason et al. Emergent abilities of large language models. arXiv preprint, New York, 2022, arXiv:2206.07682.
License
Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under the Creative Commons Attribution 4.0 International license (non-commercial, no derivative works), which allows sharing the work with recognition of authorship and of its initial publication in this journal.
Authors are authorized to enter into separate, additional contracts for non-exclusive distribution of the version of the work published in this journal (for example, publishing it in an institutional repository or as a book chapter), with recognition of authorship and initial publication in this journal.
Authors are permitted and encouraged to publish and distribute their work online (for example, in institutional repositories or on their personal web pages) at any point before or during the editorial process, as this can lead to productive exchanges and increase the impact and citation of the published work.

