Research Article | Peer-Reviewed

Prompt-oriented Output of Culture-Specific Items in Translated African Poetry by Large Language Models: An Initial Multi-layered Tabular Review

Received: 17 March 2025     Accepted: 15 May 2025     Published: 18 June 2025
Abstract

This paper examines the output of culture-specific items (CSIs) generated by ChatGPT 3.5 and ChatGPT Pro in response to three prompts to translate three anthologies of African poetry. The first prompt was broad, the second focused on poetic structure, and the third emphasized cultural specificity. To support this analysis, six comparative tables were created. The first and second tables present the CSIs produced by ChatGPT 3.5 and ChatGPT Pro, respectively, after the three prompts; the third table categorizes the unchanged CSIs based on Aixelá’s framework of “Proper nouns and Common expressions”; the fourth summarizes the CSIs generated by the human translators, a custom-built translation engine (CTE), and the two versions of a Large Language Model (LLM). The fifth table shows how the seven CSIs that were not repeated were rendered in French after the three prompts. The sixth table shows the strategies employed by ChatGPT 3.5 and ChatGPT Pro, after the culture-specific prompt, on the CSIs that were repeated rather than translated. Compared to the outputs of CSIs from the reference human translation (HT) and the CTE in prior studies, the findings indicate that the culture-oriented prompts used with ChatGPT Pro did not yield significant enhancements in the CSIs during the translation of the three African poetry anthologies from English to French. On evaluation, however, ChatGPT Pro scored better on BLEURT than ChatGPT 3.5. A combined total of 20 CSIs were generated by the two LLM versions, of which 13 were repeated as the source word. The repeated CSIs were inconsistent with the output of the HT and the CTE; some translations of the remaining seven unrepeated CSIs were also inaccurate compared to the reference HT and CTE.
While the corpus of this investigation is small, the results suggest that the data used to build LLMs has been neither French-centric nor poetry-specific, and that LLMs could therefore achieve better performance when tailored to other languages and specific domains.

Published in American Journal of Computer Science and Technology (Volume 8, Issue 2)
DOI 10.11648/j.ajcst.20250802.14
Page(s) 85-101
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

LLMs, ChatGPT 3.5, ChatGPT Pro, Structured Prompts, African Poetry, Translation

References
[1] Aixelá, J. F. (1996). Culture-Specific Items in Translation. In R. Alvarez and M. Vidal (Eds.), Translation, Power, Subversion (pp. 52-78). Multilingual Matters.
[2] Akpaca, S. M. (2024). A Syntactic, Semantic, and Pragmatic Evaluation of the Translation of an Ethnographic Text by ChatGPT. London Journal of Research in Humanities & Social Science, 24(12). Retrieved from
[3] Khoshafah, F. (2023). ChatGPT for Arabic-English Translation: Evaluating the Accuracy. Research Square. Retrieved from
[4] Kuzman, T. et al. (2019). Neural Machine Translation of Literary Texts from English to Slovene. [Conference presentation]. Machine Translation Summit, Dublin, August 2019. ACL Anthology: W19-73.
[5] Liebman, S. (2018). How to Say Mickey Mouse in 27 Different Languages. Disney Diary. Retrieved from
[6] Elghazali, M. (2020, October). Getting Started with Custom Translator. [Video]. YouTube.
[7] Ogundare, O. & Araya, G. Q. (2023). Comparative Analysis of ChatGPT and the Evolution of Language Models. arXiv. Retrieved from
[8] Opaluwah, A. (2024). Cultural Discourse in African Poetry: Output by Human Translators and Machine Translation Systems. International Journal of Language, Literacy and Translation.
[9] Opaluwah, A. (2025). The Expertise of the European Translators of African Poetry. Multidisciplinary Journal of Media and Intercultural Communication. Retrieved from
[10] Peng, K. et al. (2023). Towards Making the Most of ChatGPT for Machine Translation. Retrieved from
[11] Ray, P. P. (2023). ChatGPT: A comprehensive review of background, applications, key challenges, bias, ethics, limitations and future scope. ScienceDirect, 3, 121-154. Retrieved from
[12] Rodrigue, J. (2012, January 17). Muhammad Ali at 70: How the 'Louisville Lip' became 'The Greatest'. The Guardian. Retrieved from
[13] Scheub, H. (2000). A Dictionary of African Mythology: The Mythmaker as Storyteller. New York, USA: Oxford University Press. Kindle Edition.
[14] Si, S. et al. (2023). Evaluating the Capability of ChatGPT on Ancient Chinese. arXiv. Retrieved from
[15] Snover, M. et al. (2009). Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric. ACL Anthology. Retrieved from
[16] Soyinka, W. (1982). Idanre: poème. (A. Bordeaux, Trans.). Dakar: Nouvelles éditions Africaines. (Original work published 1967).
[17] Wang, S. et al. (2024). What Is the Best Way for ChatGPT to Translate Poetry? arXiv. Retrieved from
[18] Wei, Y. (2023). A Comparative Study Between Manual and ChatGPT Translations on Literary Texts Taking Kung I-chi as an Example. Asia-Pacific Journal of Humanities and Social Sciences, 3(4), 57-66. Retrieved from
[19] Zan, C. et al. (2024). Building Accurate Translation-Tailored LLMs with Language Aware Instruction Tuning. Retrieved from
Cite This Article
  • APA Style

    Opaluwah, A. (2025). Prompt-oriented Output of Culture-Specific Items in Translated African Poetry by Large Language Models: An Initial Multi-layered Tabular Review. American Journal of Computer Science and Technology, 8(2), 85-101. https://doi.org/10.11648/j.ajcst.20250802.14


  • ACS Style

    Opaluwah, A. Prompt-oriented Output of Culture-Specific Items in Translated African Poetry by Large Language Models: An Initial Multi-layered Tabular Review. Am. J. Comput. Sci. Technol. 2025, 8(2), 85-101. doi: 10.11648/j.ajcst.20250802.14


  • AMA Style

    Opaluwah A. Prompt-oriented Output of Culture-Specific Items in Translated African Poetry by Large Language Models: An Initial Multi-layered Tabular Review. Am J Comput Sci Technol. 2025;8(2):85-101. doi: 10.11648/j.ajcst.20250802.14


  • @article{10.11648/j.ajcst.20250802.14,
      author = {Adeyola Opaluwah},
  title = {Prompt-oriented Output of Culture-Specific Items in Translated African Poetry by Large Language Models: An Initial Multi-layered Tabular Review},
      journal = {American Journal of Computer Science and Technology},
      volume = {8},
      number = {2},
      pages = {85-101},
      doi = {10.11648/j.ajcst.20250802.14},
      url = {https://doi.org/10.11648/j.ajcst.20250802.14},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajcst.20250802.14},
  abstract = {This paper examines the output of culture-specific items (CSIs) generated by ChatGPT 3.5 and ChatGPT Pro in response to three prompts to translate three anthologies of African poetry. The first prompt was broad, the second focused on poetic structure, and the third emphasized cultural specificity. To support this analysis, six comparative tables were created. The first and second tables present the CSIs produced by ChatGPT 3.5 and ChatGPT Pro, respectively, after the three prompts; the third table categorizes the unchanged CSIs based on Aixelá’s framework of “Proper nouns and Common expressions”; the fourth summarizes the CSIs generated by the human translators, a custom-built translation engine (CTE), and the two versions of a Large Language Model (LLM). The fifth table shows how the seven CSIs that were not repeated were rendered in French after the three prompts. The sixth table shows the strategies employed by ChatGPT 3.5 and ChatGPT Pro, after the culture-specific prompt, on the CSIs that were repeated rather than translated. Compared to the outputs of CSIs from the reference human translation (HT) and the CTE in prior studies, the findings indicate that the culture-oriented prompts used with ChatGPT Pro did not yield significant enhancements in the CSIs during the translation of the three African poetry anthologies from English to French. On evaluation, however, ChatGPT Pro scored better on BLEURT than ChatGPT 3.5. A combined total of 20 CSIs were generated by the two LLM versions, of which 13 were repeated as the source word. The repeated CSIs were inconsistent with the output of the HT and the CTE; some translations of the remaining seven unrepeated CSIs were also inaccurate compared to the reference HT and CTE. While the corpus of this investigation is small, the results suggest that the data used to build LLMs has been neither French-centric nor poetry-specific, and that LLMs could therefore achieve better performance when tailored to other languages and specific domains.
    },
     year = {2025}
    }
    


  • TY  - JOUR
    T1  - Prompt-oriented Output of Culture-Specific Items in Translated African Poetry by Large Language Models: An Initial Multi-layered Tabular Review
    
    AU  - Adeyola Opaluwah
    Y1  - 2025/06/18
    PY  - 2025
    N1  - https://doi.org/10.11648/j.ajcst.20250802.14
    DO  - 10.11648/j.ajcst.20250802.14
    T2  - American Journal of Computer Science and Technology
    JF  - American Journal of Computer Science and Technology
    JO  - American Journal of Computer Science and Technology
    SP  - 85
    EP  - 101
    PB  - Science Publishing Group
    SN  - 2640-012X
    UR  - https://doi.org/10.11648/j.ajcst.20250802.14
    AB  - This paper examines the output of culture-specific items (CSIs) generated by ChatGPT 3.5 and ChatGPT Pro in response to three prompts to translate three anthologies of African poetry. The first prompt was broad, the second focused on poetic structure, and the third emphasized cultural specificity. To support this analysis, six comparative tables were created. The first and second tables present the CSIs produced by ChatGPT 3.5 and ChatGPT Pro, respectively, after the three prompts; the third table categorizes the unchanged CSIs based on Aixelá’s framework of “Proper nouns and Common expressions”; the fourth summarizes the CSIs generated by the human translators, a custom-built translation engine (CTE), and the two versions of a Large Language Model (LLM). The fifth table shows how the seven CSIs that were not repeated were rendered in French after the three prompts. The sixth table shows the strategies employed by ChatGPT 3.5 and ChatGPT Pro, after the culture-specific prompt, on the CSIs that were repeated rather than translated. Compared to the outputs of CSIs from the reference human translation (HT) and the CTE in prior studies, the findings indicate that the culture-oriented prompts used with ChatGPT Pro did not yield significant enhancements in the CSIs during the translation of the three African poetry anthologies from English to French. On evaluation, however, ChatGPT Pro scored better on BLEURT than ChatGPT 3.5. A combined total of 20 CSIs were generated by the two LLM versions, of which 13 were repeated as the source word. The repeated CSIs were inconsistent with the output of the HT and the CTE; some translations of the remaining seven unrepeated CSIs were also inaccurate compared to the reference HT and CTE. While the corpus of this investigation is small, the results suggest that the data used to build LLMs has been neither French-centric nor poetry-specific, and that LLMs could therefore achieve better performance when tailored to other languages and specific domains.
    
    VL  - 8
    IS  - 2
    ER  - 

