Artificial Intelligence: Who’s in Charge?
After more than 50 years of intensive global research and development, artificial intelligence (AI) has become one of the fastest-growing technologies. AI brings huge transformative power that promises significant socioeconomic and environmental benefits, sustainability, productivity gains, growth and prosperity for society – if it is implemented wisely.
AI is everywhere: The Good, the Bad and the Ugly
By definition, AI is the programming of high-performance computers, combined with advanced technologies, to think, learn, and behave in ways similar to humans through complex systems of algorithms. AI is designed to help people make better predictions and informed decisions, and to perform routine and complex tasks better than humans can. Potentially, this could bring huge benefits to a wide range of sectors, such as healthcare, mobility, agriculture, education, manufacturing and communication, to name just a few.
However, alongside its potential, AI is fueling anxieties, including legal and ethical concerns about the trustworthiness of AI systems. The risks it poses to society include shifts in cultural values, loss of privacy, fraud, unemployment and socially biased judgements.
AI-generated errors can have devastating consequences, especially in critical and sensitive fields such as healthcare, traffic safety and investment decisions, and can contribute to cyberbullying, economic disruption, job losses and more.
Who’s in charge?
No single actor can provide answers to the concerns about the risks and challenges posed by AI, especially given its uncontrolled evolution.
AI is a shared responsibility among all stakeholders. International and national initiatives are already underway to establish the technological and strategic frameworks and guidelines needed to ensure that AI is an ethically, socially, legally and ecologically responsible technology.
The objective is to govern AI's evolution with key principles that ensure transparency, accountability, lawfulness, privacy, safety, human dignity, well-being, and environmental sustainability – and to ensure that AI prioritizes the public good (AI for Good) over commercial and geopolitical gains.
Because of the huge complexity of AI systems, meeting those objectives requires collaboration across three key actors: technology, regulation and academia.
Technology: AI development platforms and their underlying technologies across industry sectors should offer a transparent, robust, trustworthy, secure, and safe AI infrastructure with the necessary mechanisms and tools to address data privacy, right to access, accountability and data protection. AI development infrastructure should enable developers and data scientists of different skill levels to rapidly build, train, test and deploy models, including data preparation, model development, optimization, and deployment.
Regulation: Global initiatives are critical for generating AI frameworks, strategies, and guidelines that ensure responsible AI objectives are met throughout the AI lifecycle, built on three key requirements: i) Lawfulness: complying with applicable laws and regulations; ii) Ethics: ensuring adherence to principles and values; and iii) Robustness: both from a technical and a social perspective.
Ideally, these three requirements should work in harmony with each other, without over-regulation, since the technology is still evolving and frameworks will need to be embraced and adjusted in the years to come.
Academia: Education and research play an essential role in steering AI in the right direction, developing AI's educational path, and preparing the next generation of developers and researchers to be more ethically and value-driven. Creating an educational environment with the latest AI technology infrastructure and innovation hubs enables researchers and developers to work more closely with industry and communities through integrated, knowledge-based AI-ecosystem platforms. This will help ensure that the design and development of AI systems is meaningful, adaptable, responsible, and robust enough to be trusted by the outside world.
It will also help educational institutions to redesign their curricula, generate adaptable policies and guidelines for using AI, and continually re-validate these policies as technologies evolve, maintaining academic standards and integrity.
AI evolution needs to be human-centric
Keeping humanity in the AI loop must be thoroughly considered. For example, converging the power of algorithms with human expertise, knowledge, creativity, and intuition can bring humans' unique ideas, wisdom, and insights to bear on – and amplify – AI's outcomes.
AI systems need to be human-centric, grounded in a commitment to use AI in the service of humanity and the common good, supporting global cooperation and the achievement of the UN Sustainable Development Goals (SDGs). The aim is to improve humanity's quality of life while leveraging human knowledge, wisdom, values, morals, and sensibilities.
As AI has emerged as a top priority for all nations, international collaboration on guidelines and regulations is more important than ever to address the challenges AI presents in terms of ethics, lawfulness, trustworthiness, and broader philosophical issues.
Ensuring harmony between technology, industry, academia and regulation – establishing best practices and generating universally accepted standards – will steer AI towards its positive potential and mitigate its risks. Stakeholders working hand in hand can ensure socioeconomic prosperity, environmental protection and the sustainable development of our world.
Medhat Mahmoud – Chief Digital Transformation, Huawei Northern Africa OpenLab
Medhat is a senior ICT and IoT global industry expert with an international profile. He has led successful projects and digital transformation initiatives with key international ICT industry leaders and held various senior positions, managing global assignments in North America, MEA, and APAC.
A strategic and visionary leader with an entrepreneurial mindset, known for his creativity and fresh thinking, Medhat focuses on transforming concepts into innovative products, from ideation to market.
A regular article contributor, he is a strong advocate of adopting ICT technology to improve collaboration, innovation and education in transforming people’s well-being.
Prior to OpenLab, he led Huawei’s IoT Competence Center in Silicon Valley, California, USA.