The Nigerian Communications Commission (NCC) says organisations must handle citizens’ personal data responsibly when using artificial intelligence (AI).
Aminu Maida, executive vice-chairman of NCC, spoke in Abuja on Friday at an event to commemorate the 2024 World Consumer Rights Day.
The theme of this year’s celebration is “Fair and responsible AI for consumers”.
Maida, who was represented by Abraham Oshadami, executive commissioner (technical services) designate, said AI has already made significant strides, from “voice assistants to recommendation algorithms that suggest what we should watch, read, or buy”.
He said AI is also driving innovations in healthcare, finance, transportation, and countless other fields.
The executive vice-chairman said that despite these innovations, using AI responsibly is crucial to guaranteeing consumers’ trust and avoiding possible problems.
“As we celebrate the advancements in AI, we must also grapple with ethical questions,” he said.
“How do we ensure that AI systems are fair and unbiased? How do we protect privacy in an age of data-driven AI? These are complex issues that require careful consideration.
“Responsible AI means using it in an ethical way throughout its development, deployment, and usage.
“This includes considering issues like bias, privacy, transparency, and accountability.
“According to reports, responsible AI aims to empower consumers, build trust, and minimise negative effects.
“To this effect, AI developers need to be transparent about the data, algorithms, and models used in AI systems.
“This ensures that decisions made by AI can be explained and that mistakes can be fixed, so everyone is treated fairly regardless of their background.
“This helps prevent biased decisions or discrimination, thereby promoting inclusivity and equality.
“Protecting citizens’ privacy is extremely important when using AI. Organisations should handle personal data responsibly, following strict privacy regulations. Respecting privacy builds trust in AI systems.”
Maida said responsible AI requires mechanisms for holding systems accountable and explaining their decisions.
He said developing regulations and policies to govern AI deployment can be complex.
“Although most legislative and governing bodies are looking to regulate this technology, there has been a continuous struggle to strike the right balance between mitigating risk and stifling innovation, while promoting innovation and ensuring security and trust,” he said.
“In this era that has seen the rise of AI and IoT cybersecurity, it is important to break silos and foster collaboration under the quadruple helix innovation model, comprising academia, industry, government and society, to share ideas. AI developers and regulators have to ensure AI system algorithms consider ethics and inclusivity.”