An Environmental Review of the Generative Artificial Intelligence Policies and Guidelines of South African Higher Education Institutions: A Content Analysis

Chaka Chaka, Thembeka Shange, Tlatso Nkhobo, Vivienne Hlatshwayo

Abstract


Accompanying the inroads generative artificial intelligence (GenAI) models such as ChatGPT have made into the higher education sector, an urgent need has arisen to investigate the types of GenAI policies South African higher education institutions (HEIs) have developed in response to GenAI. To date, no study has explored this aspect of South African higher education. With this gap in mind, this paper reports on an online rapid environmental review of the GenAI policies of 26 South African HEIs that were freely available on the websites of these HEIs or elsewhere online. The main purpose of the paper is to establish whether these HEIs had institution-wide GenAI policies, what types of policies they were and what constituted their contents. The study employed a critical-ethics-based framing comprising six dimensions: the Siyavuma, semi-Siyavuma, critical, semi-critical, uBuntu and semi-uBuntu dimensions. It analyzed data through content and thematic analyses. Several of its findings are worth mentioning. Firstly, it discovered that five of the 26 South African HEIs had their institution-wide GenAI policy documents freely available on their websites or online; one HEI had four such policy documents. The retrieved GenAI policy documents were mainly guides or guidelines. Secondly, academic staff and students were the main target audiences of the GenAI policy documents. Thirdly, ChatGPT was the most mentioned and most cited GenAI tool in the reviewed policy documents. Fourthly, the responsible use of AI tools, GenAI and academic integrity, and the risks and concerns of using GenAI tools featured among the main points of convergence of the GenAI policy documents that spelled out their aims and main foci. Lastly, six of the GenAI policy documents manifested elements of the critical dimension, whereas one GenAI policy document had features of the uBuntu dimension. The paper also makes relevant recommendations.


https://doi.org/10.26803/ijlter.23.12.25


Keywords


uBuntu dimension; critical dimension; critical-ethics-based approach; GenAI policies; Siyavuma dimension


References


Aderibigbe, A. O., Ohenhen, P. E., Nwaobia, N. K., Gidiagba, J. O., & Ani, E. M. (2023). Artificial intelligence in developing countries: Bridging the gap between potential and implementation. Computer Science & IT Research Journal, 4(3), 185–199. https://doi.org/10.51594/csitrj.v4i3.629

Aschenbrenner, L. (2024). Situational analysis: The decade ahead. https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf

Attride-Stirling, J. (2001). Thematic networks: An analytic tool for qualitative research. Qualitative Research, 1(3), 385–405. https://doi.org/10.1177/146879410100100307

Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., Pham, P., Chong, S. W., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(4), 1–41. https://doi.org/10.1186/s41239-023-00436-z

Booth, H., & Pillay, T. (2024, June 7). A timeline of all the recent accusations leveled at OpenAI and Sam Altman. Time. https://time.com/6986711/openai-sam-altman-accusations-controversies-timeline/

BusinessTech. (2022, August 11). All 26 universities in South Africa listed in new global ranking. https://businesstech.co.za/news/trending/615901/all-26-universities-in-south-africa-listed-in-new-global-ranking/

Ceres, P. (2023, January 26). ChatGPT is coming for classrooms. Don't panic. Wired. https://www.wired.com/story/chatgpt-is-coming-for-classrooms-dont-panic/#intcid=_wired-bottom-recirc_9c0d2ac5-941b-45c7-b9ac-7ac221fc2e33_wired-content-attribution-evergreen

Chaka, C. (2022). Digital marginalization, data marginalization, and algorithmic exclusions: A critical southern decolonial approach to datafication, algorithms, and digital citizenship from the Souths. Journal of e-Learning and Knowledge Society, 18(3), 83–95. https://doi.org/10.20368/1971-8829/1135678

Chaka, C. (2023a). Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools. Journal of Applied Learning & Teaching, 6(2), 94–104. https://doi.org/10.37074/jalt.2023.6.2.12

Chaka, C. (2023b). Generative AI chatbots - ChatGPT versus YouChat versus Chatsonic: Use cases of selected areas of applied English language studies. International Journal of Learning, Teaching and Educational Research, 22(6), 1–19. https://doi.org/10.26803/ijlter.22.6.1

Chaka, C. (2024a). Currently available GenAI-powered large language models and low-resource languages: Any offerings? Wait until you see. International Journal of Learning, Teaching and Educational Research, 23(12), 148–173. https://doi.org/10.26803/ijlter.23.12.9

Chaka, C. (2024b). Reviewing the performance of AI detection tools in differentiating between AI-generated and human-written texts: A literature and integrative hybrid review. Journal of Applied Learning & Teaching, 7(1), 1–12. https://doi.org/10.37074/jalt.2024.7.1.14

Chaka, C., Nkhobo, T., & Lephalala, M. (2020). Leveraging MoyaMA, WhatsApp and online discussion forum to support students at an open and distance e-learning university. Electronic Journal of e-Learning, 18(6), 494–515. https://doi.org/10.34190/JEL.18.6.003

Chaka, C., Shange, T., Ndlangamandla, S. C., & Shandu-Phetla, T. (2024). Editorial. International Journal of Language Studies, 18(1), 1–6. https://doi.org/10.5281/zenodo.10468102

Chan, C. K. Y., & Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. Routledge.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104

Conference on Fairness, Accountability, and Transparency. (2023). Statement on AI harms and policy. https://facctconference.org/2023/harm-policy

Department of Higher Education and Training. (n.d.). Universities in South Africa. https://www.dhet.gov.za/SiteAssets/New%20site%20Documents/Universities%20in%20South%20Africa1.pdf

El Khoury, E. (2024). Mapping the response to AI and its impact on assessment redesign through document analysis. The Assessment Review, 5(1). https://assessatcuny.commons.gc.cuny.edu/2024/03/mapping-the-response-to-ai-and-its-impact-on-assessment-redesign-through-document-analysis/

Enago Academy. (2024). Thematic analysis vs. content analysis for data interpretation. https://www.enago.com/academy/content-analysis-vs-thematic-analysis/

Environmental Emergency Center. (2019). Rapid environmental assessment tool (REA). https://eecentre.org/2019/05/17/rapid-enviornmental-assessment-tool-rea/#:~:text=The%20Rapid%20Environmental%20Assessment%20Tool,a%20particular%20crisis%20or%20disaster.

Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International Journal of Qualitative Methods, 5(1), 80–92.

Furze, L., Perkins, M., Roe, J., & MacVaugh, J. (2024). The AI assessment scale (AIAs) in action: A pilot implementation of GenAI supported assessment. Australasian Journal of Educational Technology, 40(4), 38–55. https://doi.org/10.14742/ajet.9434

Gallent-Torres, C., Zapata-González, A., & Ortego-Hernando, J. L. (2023). El impacto de la inteligencia artificial generativa en educación superior: Una mirada desde la ética y la integridad académica [The impact of generative artificial intelligence in higher education: A focus on ethics and academic integrity]. Relieve, 29(2), Article M5. http://doi.org/10.30827/relieve.v29i2.29134

Goode, E. J., Thomas, E., Landeg, O., Duarte-Davidson, R., Hall, L., Roelofs, J., Schulpen, S., De Bruin, A., Wigenstam, E., Liljedahl, B., Waleij, A., Simonsson, L., & Göransson Nyberg, A. (2021). Development of a rapid risk and impact assessment tool to enhance response to environmental emergencies in the early stages of a disaster: A tool developed by the European Multiple Environmental Threats Emergency NETwork (EMETNET) project. International Journal of Disaster Risk Science, 12, 528–539. https://doi.org/10.1007/s13753-021-00352-8

Gray, A. (2024). ChatGPT “contamination”: Estimating the prevalence of LLMs in the scholarly literature. https://doi.org/10.48550/arXiv.2403.16887

Hacker, P., Borgesius, F. Z., Mittelstadt, B., & Wachter, S. (2024). Generative discrimination: What happens when generative AI exhibits bias, and what can be done about it. https://doi.org/10.48550/arXiv.2407.10329

Holmquist, L. E. (2024, October 4). Artificial intelligence is past its “wonderment phase”. Techcentral. https://techcentral.co.za/artificial-intelligence-wonderment-phase/252879/

Imbrie, A., Daniels, O. J., & Toner, H. (2023). Decoding intentions: Artificial intelligence and costly signals. https://cset.georgetown.edu/wp-content/uploads/CSET-Decoding-Intentions.pdf

Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2024). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. https://doi.org/10.48550/arXiv.2405.11800

Kahn, J. (2024, May 21). OpenAI promised 20% of its computing power to combat the most dangerous kind of AI—but never delivered, sources say. Fortune. https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/

Kalai, A. T., & Vempala, S. S. (2024). Calibrated language models must hallucinate. Revised version. https://doi.org/10.48550/arXiv.2311.14648

Kobak, D., González-Márquez, R., Horvát, E.-Á., & Lause, J. (2024). Delving into ChatGPT usage in academic writing through excess vocabulary. https://doi.org/10.48550/arXiv.2406.07016

Leffer, L. (2024, April 5). AI chatbots will never stop hallucinating. Scientific American. https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/

Liang, W., Zhang, Y., Wu, Z., Lepp, H., Ji, W., Zhao, X., Cao, H., Liu, S., He, S., Huang, Z., Yang, D., Potts, C., Manning, C. D., & Zou, J. Y. (2024). Mapping the increasing use of LLMs in scientific papers. https://doi.org/10.48550/arXiv.2404.01268

Luo, J. (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49(5), 651–664. https://doi.org/10.1080/02602938.2024.2309963

McDonald, N., Johri, A., Ali, A., & Hingle, A. (2024). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. https://doi.org/10.48550/arXiv.2402.01659

McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276–282.

Moorhouse, B. L., Yeo, M. A., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of the world’s top-ranking universities. Computers and Education Open, 5, Article 100151. https://doi.org/10.1016/j.caeo.2023.100151

Murphy, H., & Kinder, T. (2024, June 7). Silicon Valley in uproar over Californian AI safety bill. Financial Times. https://www.ft.com/content/eee08381-962f-4bdf-b000-eeff42234ee0

Niraula, S. (2024). The impact of ChatGPT on academia: A comprehensive analysis of AI policies across UT system academic institutions. Advances in Mobile Learning Educational Research, 4(1), 973–982. https://doi.org/10.25082/AMLER.2024.01.009

Nolan, B., & Mann, J. (2024, May 20). More OpenAI chaos puts Sam Altman on the back foot. BusinessInsider. https://www.businessinsider.com/openai-ai-sam-altman-ilya-sutskever-crisis-resignations-chatgpt-2024-5

Nyaaba, M., Wright, A., & Choi, G. L. (2024). Generative AI and digital neocolonialism in global education: Towards an equitable framework. https://doi.org/10.48550/arXiv.2406.02966

PYMNTS. (2024, June 7). Silicon Valley on edge as new AI regulation bill advances in California. https://www.pymnts.com/artificial-intelligence-2/2024/silicon-valley-on-edge-as-new-ai-regulation-bill-advances-in-california/#:~:text=A%20new%20California%20legislative%20proposal,specific%20size%20and%20cost%20thresholds

Retraction Watch. (2024). “All the red flags”: Scientific Reports retracts paper sleuths called out in open letter. https://retractionwatch.com/2024/11/11/all-the-red-flags-scientific-reports-retracts-paper-sleuths-called-out-in-open-letter/

Right to Warn About Advanced Artificial Intelligence. (2024). https://righttowarn.ai/

Sample, I. (2023, January 26). Science journals ban listing of ChatGPT as co-author on papers. The Guardian. https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers

Saunders, C. H., Sierpe, A., Von Plessen, C., Kennedy, A. M., Leviton, L. C., Bernstein, S. L., Goldwag, J., King, J. R., Marx, C. M., Pogue, J. A., Saunders, R. K., Van Citters, A., Yen, R. W., Elwyn, G., & Leyenaar, J. K. (2023). Practical thematic analysis: A guide for multidisciplinary health services research teams engaging in qualitative analysis. BMJ, 381, Article e074256. https://doi.org/10.1136/bmj-2022-074256

Shange, T. (2023). Foregrounding care in online student engagement in a South African e-learning university. Open Praxis, 15(4), 288–302. https://doi.org/10.55982/openpraxis.15.4.576

University World News. (2023, March 4). Oxford and Cambridge ban ChatGPT over plagiarism fears. https://www.universityworldnews.com/post.php?story=20230304105854982

Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences, 15(3), 398–405. https://doi.org/10.1111/nhs.12048

Vaismoradi, M. J., Turunen, H., & Snelgrove, S. (2016). Theme development in qualitative content analysis and thematic analysis. Journal of Nursing Education and Practice, 6(5), 100–110. https://doi.org/10.5430/jnep.v6n5p100

Wang, H., Dang, A., Wu, Z., & Mac, S. (2024). Generative AI in higher education: Seeing ChatGPT through universities’ policies, resources, and guidelines. Computers and Education: Artificial Intelligence, 7, Article 100326. https://doi.org/10.1016/j.caeai.2024.100326

Xiao, P., Chen, Y., & Bao, W. (2023). Waiting, banning, and embracing: An empirical analysis of adapting policies for generative AI in higher education. http://dx.doi.org/10.48550/arXiv.2305.18617




e-ISSN: 1694-2116

p-ISSN: 1694-2493