An application of item response theory in ENADE’s assessment of business administration
DOI: https://doi.org/10.5902/2179460X40196
Keywords: Item response theory, ENADE, Educational assessment
Abstract
Using raw or standardized scores as the result of an individual's assessment or selection is traditional practice. However, such results depend on the particular items or questions that compose the evaluation instrument. Among the models that provide better interpretability of evaluation instruments, Item Response Theory (IRT) makes it possible to measure individuals' latent traits, that is, characteristics that cannot be observed directly. The National Assessment of Student Achievement (ENADE) aims to assess the performance of undergraduate students with respect to course syllabi and their skills and competences. Its results provide important data for the educational field, building references that support actions aimed at improving the quality of undergraduate courses. This article presents an IRT analysis of the 2009 ENADE test, which was answered by 231,531 incoming and graduating students of Business Administration programs at institutions across the country. The analysis confirmed the feasibility of using IRT to measure ENADE items, and revealed a latent-trait gain between incoming and graduating students: graduates at the end of the academic period had a higher average latent trait than newcomers, indicating that academic skills were built up over the course.
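To make the idea of a latent trait concrete, the following is a minimal sketch of the two-parameter logistic (2PL) model, one of the standard IRT models discussed in the literature cited by the article; the specific model, parameter values, and function name here are illustrative assumptions, not the article's fitted model.

```python
import math

def irt_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic IRT model: probability that an examinee
    with latent trait `theta` answers correctly an item with
    discrimination `a` and difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee of average ability (theta = 0) facing an item of average
# difficulty (b = 0) has a 50% chance of a correct answer, regardless
# of the item's discrimination.
print(round(irt_2pl(theta=0.0, a=1.2, b=0.0), 2))  # → 0.5
```

The model illustrates the core property the abstract relies on: the probability of a correct response depends jointly on the examinee's latent trait and the item's parameters, so two groups (such as incoming and graduating students) can be compared on the same latent-trait scale even though the score is not observed directly.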
Ethical Guidelines for Journal Publication
The Ciência e Natura journal is committed to ensuring ethics in publication and quality of articles.
Conformance to standards of ethical behavior is therefore expected of all parties involved: Authors, Editors, Reviewers, and the Publisher.
In particular,
Authors: Authors should present an objective discussion of the significance of their research, as well as sufficient detail and references to permit others to replicate the experiments. Fraudulent or knowingly inaccurate statements constitute unethical behavior and are unacceptable. Review articles should likewise be objective, comprehensive, and accurate accounts of the state of the art. Authors should ensure that their work is entirely original and that, where the work and/or words of others have been used, this has been appropriately acknowledged. Plagiarism in all its forms constitutes unethical publishing behavior and is unacceptable. Submitting the same manuscript, or articles describing essentially the same research, to more than one journal concurrently likewise constitutes unethical publishing behavior and is unacceptable. The corresponding Author should ensure that all Co-authors consent to the final version of the paper and approve its submission for publication.
Editors: Editors should evaluate manuscripts exclusively on the basis of their academic merit. An Editor must not use unpublished information in the editor's own research without the express written consent of the Author. Editors should take reasonable responsive measures when ethical complaints have been presented concerning a submitted manuscript or published paper.
Reviewers: Any manuscript received for review must be treated as a confidential document. Privileged information or ideas obtained through peer review must be kept confidential and not used for personal advantage. Reviews should be conducted objectively, and observations should be formulated clearly with supporting arguments, so that Authors can use them to improve the paper. Any selected Reviewer who feels unqualified to review the research reported in a manuscript, or who knows that its prompt review will be impossible, should notify the Editor and recuse themselves from the review process. Reviewers should not consider manuscripts in which they have conflicts of interest resulting from competitive, collaborative, or other relationships or connections with any of the authors, companies, or institutions connected to the papers.