Image generated by Meta AI
BY CHIEMEZIE UGOCHUKWU
Innovations in AI are helping users create realistic images and videos that look authentic and serve diverse purposes. AI-generated pictures are produced by machine learning models such as generative adversarial networks (GANs) and other emerging architectures. Developers train these models on large image datasets, enabling them to produce realistic pictures. Broadly, AI image generation proceeds in two distinct phases.
In the first, the model is fed many pictures so that it learns the components that make up an image, such as colour, texture and shape. In the second, once the model has been trained on these datasets, it can generate a new image whenever a user prompts it with a description. AI-generated video takes this further: realistic footage is produced without a camera or any recording device. GANs deployed in AI systems also let users create videos of themselves and others; developers train the models on many video clips so they learn facial expressions, mouth movements and other features of audio-visual content.
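For technically inclined readers, the train-then-generate idea behind a GAN can be sketched in a few lines of Python. This is a deliberately toy illustration, not a real image model: the "pictures" are just numbers drawn from a distribution, and the generator and discriminator are single-parameter functions with hand-written gradient updates. All names and values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real data": samples from a normal distribution with mean 4.
# Generator g(z) = a*z + b tries to mimic it; discriminator
# d(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for _ in range(3000):
    # Phase 1 (learning): the discriminator sees real and generated samples.
    x_real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    s_r = sigmoid(w * x_real + c)   # belief that real samples are real
    s_f = sigmoid(w * x_fake + c)   # belief that fakes are real
    # Gradient ascent on log s_r + log(1 - s_f)
    w += lr * np.mean((1 - s_r) * x_real - s_f * x_fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Phase 2 (generation): the generator adjusts its parameters so its
    # outputs fool the discriminator (gradient ascent on log d(g(z))).
    s_f = sigmoid(w * (a * z + b) + c)
    grad = (1 - s_f) * w
    a += lr * np.mean(grad * z)
    b += lr * np.mean(grad)

# After the adversarial back-and-forth, the generator's samples should
# cluster near the real data's mean, without ever copying a real sample.
samples = a * rng.normal(size=1000) + b
```

Real image generators work on millions of parameters rather than four, but the principle is the same: one network learns what "real" looks like, and the other learns to produce outputs that pass that test.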
The Good Sides of AI Deepfakes
The benefits of AI deepfakes cut across diverse sectors, from health and the arts to film production and advertising. Researchers Kate Glazko and Yiwei Zheng found in their study that deepfake pictures can assist people with aphantasia, a condition that leaves a person unable to form vivid mental images. Furthermore, an Australian study reveals that deepfakes, in the form of pictures and videos, help Alzheimer's patients form strong emotional bonds.
On the social side, AI deepfakes are enhancing product advertisements and the promotion of services, especially in Nigeria. In an interaction with a young Nigerian user of AI deepfakes, he explained that AI-generated content is used for adverts, especially animated pictures paired with the voices of well-known personalities. According to him, this style of advertisement reaches a wider audience because of the humour such content generates.
AI-generated pictures are also used as professional images on LinkedIn and several other social media platforms, in the belief that they strengthen a résumé and present a polished profile. A female user at a Nigerian university argues that a deepfake signals what a person is into: to her, such AI-generated content indicates the type of business a person engages in, such as Web3 or cryptography, fields associated with coding and related work.
Cybercrimes Thriving on AI Deepfakes
There have been growing concerns about the use of AI-generated pictures and videos for cybercrime. Massive improvements in these AI models are enabling the production of convincing pictures, images and even live video, which means more people will be susceptible to internet scams, especially in Nigeria. Deepfakes are creating easier paths for cybercriminals to engage in identity theft: scammers can use pictures and videos of known personalities to steal money from people.

It was revealed to this researcher how AI deepfakes have changed the narrative for internet fraud, especially identity theft. A young man (name withheld) explained that a cloned voice or AI-generated picture of a person can be used to defeat two-factor authentication (2FA) on a digital device or system, and that a poorly secured computer is prone to being accessed by cybercriminals using AI-generated pictures or cloned audio. He added that the era of writing long Python scripts to create or alter a voice is over; social media platforms like TikTok now let users edit videos and pictures directly. This negative use of AI-generated content portends a serious cyber threat to the country's growing digital economy.
The cryptocurrency sector is also facing increased deepfake scams. Cryptocurrency is an evolving asset class that many young Nigerians are trading and getting involved in. Internet scammers now use AI-generated pictures, videos and audio of known personalities to market a coin, aiming to attract 'investors' who hope the price will double so they can sell at a gain. Cybercriminals use familiar celebrities to promote a fake coin and prompt people to invest in it; sadly, after using AI-generated content to boost the coin, the scammers disappear with the investors' money.
Poor Sensitisation of Generative AI/Deepfake Capabilities: The Nigerian Situation
There is no doubt that AI technologies can produce convincing images and real-time video. This capability points to the fast-paced nature of AI and the urgent need for people to be aware of these innovative changes. However, Nigeria and its residents remain largely unaware of these emerging technologies, particularly generative AI.
In recent times, government agencies like the National Orientation Agency (NOA) and government-owned media outlets have not risen to the challenge of developing educational content, especially on social media platforms, to stem this ugly tide of AI scams. Social media is widely used by Nigerians, and sensitising people on these platforms will yield results, especially in helping them understand how the emerging technologies operate. Importantly, it will reduce public ignorance of when AI deepfakes are being used for cybercrime in Nigeria.
Observation shows that awareness campaigns on emerging digital technologies are mostly held in cities like Abuja, Lagos and Port Harcourt, among others, neglecting rural and remote areas where literacy may be low. Cybercriminals often exploit this gap to launch their 'attacks' in villages, posing as government officials or reputable organisations and using cloned voices and pictures to deceive residents. The fraud is coordinated: even if a victim tries to confirm the veracity of the information, AI voice clones, videos, fake websites and pictures are planted to answer such enquiries. To minimise this, public awareness programmes on AI-driven cybercrime should be organised in collaboration with town unions, church women's wings and age-grade gatherings, among others. This would localise the problem while instructing rural residents on the necessary steps to take.
Urgent Need for Inclusive Actions
Given what AI technologies can now do, everyone is at risk of falling prey to internet scammers. This calls for collaborative and inclusive action that captures every stratum of society: the young, the elderly, the physically challenged and others. For youths, relevant government agencies and organisations should design educational campaigns on social media, using influencers and animated content to sustain young people's interest while delivering valuable lessons and pointing out the dangers of using the technology for scams. The elderly must also be considered, as many of them lack knowledge of these emerging AI technologies; this can be addressed by using simple, unambiguous language to teach them step-by-step strategies for avoiding these growing scams. Every responsible society makes provision for individuals with physical challenges: thoughtfully designed devices that aid understanding of messages about emerging cyber threats, and adequate provision of hearing aids, can accommodate them. These actions, if properly designed and implemented, will widen the frontiers of inclusive cyber education in Nigeria.
Ugochukwu is a doctoral student at Bangor University, Wales, United Kingdom.
Views expressed by contributors are strictly personal and not of TheCable.