IMPLEMENTATION AND OBSTRUCTION ON THE SCALES OF NEW REALITIES: KEY FACTORS INFLUENCING AI TECHNOLOGIES' IMPACT ON CONTEMPORARY UK POPULAR CULTURE

Andrii Pilkevych,

Ph.D. (History), Associate Professor,

Taras Shevchenko National University of Kyiv, Kyiv, Ukraine

ORCID: 0000-0002-4154-064X

 

DOI: https://doi.org/10.17721/2524-048X.2026.33.10

 

Abstract. This article is dedicated to a comprehensive study and critical reflection on the processes of implementing artificial intelligence (AI) technologies into the socio-cultural fabric of the United Kingdom between 2024 and 2026. During this period, the UK has found itself at the epicenter of a global technological transformation, striving to balance its ambitions for innovation leadership with the necessity of preserving its unique cultural identity and the economic viability of its creative sector. The paper offers a systematic perspective on how algorithmic systems are reshaping not only the methods of cultural content production but also the very nature of authorship, authenticity, and the public reception of art.

The aim of the article is to provide a systematic analysis and conceptualization of generative AI’s impact on the UK’s popular culture ecosystem within the context of the national technological development strategy. The work seeks to reveal the dialectical contradiction between the government’s pursuit of a «technological breakthrough» and the need to protect intellectual property and ethical standards within the creative industries. The research focuses on identifying key tension points in the music, broadcasting, and visual arts sectors, where algorithmic intervention is driving the most radical changes.

Research Methodology. The study is grounded in a multidisciplinary methodological matrix that integrates approaches from various humanities and social sciences. Contemporary communication theories are utilized to analyze structural shifts in resource distribution between tech giants and independent creators. Critical Discourse Analysis (CDA) is applied to government reports (DCMS, IPO) and policy documents from industry associations to identify underlying ideologies of «technological determinism». Case studies of high-profile projects, specifically synthetic media experiments by ITV and Channel 4, provide empirical validity to the theoretical findings.

Scientific Novelty of the research lies in the theoretical conceptualization of the «algorithmic culture» phenomenon as a new stage in the evolution of the British media landscape during the 2024–2026 period. The British AI regulation model is analyzed not merely as a legal framework but as a cultural-anthropological phenomenon. The existence of a specific British model of AI integration is substantiated; unlike American «market libertarianism» or European «stringent regulatory protectionism», it is based on the synergy between the institutional ethos of public broadcasting and the state’s ambitions to serve as a global regulatory hub. A new theoretical approach to analyzing authorship is proposed, where the creative process is viewed not as an act of individual genius but as a diffusion between human intention and algorithmic generation. This allows for moving beyond an anthropocentric understanding of art, which is critical for updating UK intellectual property law. The study identifies and describes the phenomenon of the loss of «indexicality» in video imagery within the British media space. It establishes that the satirical use of deepfakes on national television creates a precedent for «legitimized simulation», necessitating the development of a new axiology of journalistic ethics. The transformation of British cultural influence from the export of «unique talent» to the export of «algorithmic standards of creativity» is modeled, allowing for a fresh assessment of the competitiveness of the British creative economy under conditions of global digital competition.

Conclusions. The classical concept of intellectual property is undergoing a crisis as the boundary between human creative contribution and algorithmic generation becomes increasingly blurred, requiring the introduction of new legal categories such as «synthetic law». Algorithmic curation and the focus on predictive analytics lead to a narrowing of aesthetic diversity in popular culture, pushing experimental and subcultural forms to the periphery. The widespread use of digital twins in satire and advertising undermines the audience’s ability to verify reality, necessitating an immediate update of media literacy standards. The automation of cognitive processes in the media industry threatens to devalue the professional skills of journalists, screenwriters, and artists, calling for new social guarantees in the sphere of creative labor. The British experience is unique due to its combination of a powerful creative sector and the ambition to become an «AI superpower». Britain is attempting to strike a balance between rigid regulation and a libertarian approach. Since the English language dominates AI training, British legal and cultural standards are automatically exported to other democracies.

Keywords: United Kingdom, popular culture, information technology, artificial intelligence, media studies.

 

Submitted 11.10.2025




References:

  1. AI and creative technology scaleups: Less talk, more action (2025, February 3). Communications and Digital Committee. House of Lords. Retrieved from: https://publications.parliament.uk/pa/ld5901/ldselect/ldcomm/71/71.pdf. [In English].
  2. AI Opportunities Action Plan. (2025). Department for Science, Innovation & Technology of United Kingdom. Retrieved from: https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan. [In English].
  3. Analysis of over 168 million job postings suggests that Generative AI has not reduced demand for creativity skills. The Creative Industries Policy and Evidence Centre by Newcastle University. Retrieved from: https://pec.ac.uk/news_entries/genai-and-creativity-demand/. [In English].
  4. Bansal, G. (2024). Reprogramming the Software of the Mind: A New Framework for Cultural Homogenization. Journal of Global Information Technology Management, 27(1), 1–7. DOI: https://doi.org/10.1080/1097198X.2023.2298021 [In English].
  5. Baudrillard, J. (1981). Simulacres et simulation. Paris: Galilée. [In French].
  6. Caled, D., & Silva, M.J. (2022). Digital media and misinformation: An outlook on multidisciplinary strategies against manipulation. Journal of Computational Social Science, 5(1), 123–159. [In English].
  7. Caswell, D. (2024). «Audiences, automation, and AI: From structured news to language models.» AI Magazine, 45, 174–186. DOI: https://doi.org/10.1002/aaai.12168 [In English].
  8. Connock, A. (2024). British TV and AI: explore and exploit. European Journal of Cultural Management and Policy, 14. DOI: https://doi.org/10.3389/ejcmp.2024.13225 [In English].
  9. Copyright and artificial intelligence statement of progress under Section 137 Data (Use and Access) Act. (2025). Department for Science, Innovation & Technology of United Kingdom. Retrieved from: https://www.gov.uk/government/publications/copyright-and-artificial-intelligence-progress-report/copyright-and-artificial-intelligence-statement-of-progress-under-section-137-data-use-and-access-act [In English].
  10. Duggins, A. (2025, January 31). Channel 4 may have violated Sexual Offences Act with deepfake video of Scarlett Johansson. The Guardian. Retrieved from: https://www.theguardian.com/tv-and-radio/2025/jan/31/channel-4-may-have-violated-sexual-offences-act-with-deepfake-video-of-scarlett-johansson [In English].
  11. Heritage, S. (2023, January 9). Behind the scenes of TV’s first deep fake comedy: None of it is illegal. Everything is silly. The Guardian. Retrieved from: https://www.theguardian.com/tv-and-radio/2023/jan/09/deep-fake-neighbour-wars-interview-itvx-comedy [In English].
  12. House of Lords Communications and Digital Select Committee inquiry: Large language models. (2026). UK Parliament. Retrieved from: https://committees.parliament.uk/writtenevidence/124416/html/ [In English].
  13. Jones, B., & Jones, R. (2025). Action research at the BBC: Interrogating artificial intelligence with journalists to generate actionable insights for the newsroom. Journalism, 26(8), 1708–1725. DOI: https://doi.org/10.1177/14648849251317150 [In English].
  14. Moon, Y.E., & Lewis, S.C. (2024). Social Media as Commodifier or Homogenizer? Journalists’ Social Media Use in Individualistic and Collectivist Cultures and Its Implications for Epistemologies of News Production. Digital Journalism, 1–20. DOI: https://doi.org/10.1080/21670811.2024.2303988 [In English].
  15. Nechushtai, E., Zamith, R., & Lewis, S.C. (2023). More of the Same? Homogenization in News Recommendations When Users Search on Google, YouTube, Facebook, and Twitter. Mass Communication and Society, 1–27. DOI: https://doi.org/10.1080/15205436.2023.2173609 [In English].
  16. Nguyen, C.T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141–161. [In English].
  17. Pettis, B.T. (2022). Know Your Meme and the homogenization of web history. Internet Histories, 6(3), 263–279. DOI: https://doi.org/10.1080/24701475.2021.1968657 [In English].
  18. Regehr, K., Shaughnessy, C., & Zhao, M. (2025). Normalizing toxicity: the role of recommender algorithms for young people’s mental health and social wellbeing. Frontiers in Psychology, 16. DOI: https://doi.org/10.3389/fpsyg.2025.1523649 [In English].
  19. Report «Action needed to protect our creative future in the age of Generative AI». Queen Mary University of London. Retrieved from: https://www.qmul.ac.uk/media/news/2025/humanities-and-social-sciences/hss/action-needed-to-protect-our-creative-future-in-the-age-of-generative-ai.html [In English].
  20. Sapountzi, A., & Psannis, K.E. (2018). Social networking data analysis tools & challenges. Future Generation Computer Systems, 86, 893–913. [In English].
  21. Shin, D., Hameleers, M., & Park, Y.J. (2022). Countering algorithmic bias and disinformation and effectively harnessing the power of AI in media. Journalism & Mass Communication Quarterly, 99(4), 887–907. [In English].
  22. Williams, B.A., Brooks, C.F., & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8, 78–115. [In English].