You don’t have to go far or think hard to come up with scenarios — real or imagined — in which new technology can go wrong. Artificial intelligence (AI) has played a starring role in many such stories. But it’s important to remember why humans created AI in the first place.
When used correctly and ethically, AI can be an amazing tool for good.
AI allows for more efficient use of resources, increased productivity and better customer experiences. It can contribute to improved healthcare, and it even promises to extend life expectancy. These are just a few examples of how AI can enhance our lives, businesses and the world.
Clearly, AI is a powerful technology, and, as we know, with great power comes great responsibility.
Some people, myself included, refer to that responsibility as ethical AI. It is one of the principles that drives my company, an AI and intelligent automation provider. I believe the widespread realization of ethical AI requires participation from both businesses and governments to ensure AI accountability. It calls for technology providers to limit the carbon footprints of their AI solutions. And it relies on AI solutions that enable auditability and traceability.
Businesses and governments need to address AI accountability.
Accountability means ensuring people use AI engines as intended and not for fraudulent or other malicious pursuits.
I believe that technology vendors should be held to the highest standard of responsibility and accountability for ensuring an ethical approach to the design and application of AI products and services. Governments need to play an important role in AI accountability for ethical AI, too.
One way AI solution providers address accountability is by having very strong deal qualification criteria before starting work with an organization. For example, my company declined the business of an organization that planned to use AI bots to amass Broadway show tickets within seconds of their availability and then sell them to the public at elevated prices.
Technology providers also must educate their teams and partners on appropriate and inappropriate uses of AI. Some people are unaware of the power of AI engines and their potential for misuse. With the right education, such individuals can learn about the risks and how to avoid them. Education is a horizontal component that cuts across every stage of an AI journey, and it will go a long way toward ethical AI.
Legislators and regulators have a key role to play in the area of AI accountability. That involves specifying the applications for which AI can and cannot be used.
To be clear, I don’t think that AI as a technology should be regulated, but there should be governmental regulations put into place to help standardize how AI technology can be used. For example, regulations should indicate that applying AI is appropriate for particular purposes in specific verticals, while other laws or rules should make clear what applications of AI are not allowed.
Solution providers and their customers must work to limit AI’s carbon footprint.
AI also stands to make a large impact on the environment. And, with ethical AI in place, that can be a good thing. As National Geographic reports, AI can allow for better climate predictions, show the effects of extreme weather and identify the source of carbon emissions. PwC notes that AI can enable a sustainable future.
But AI engines that require giant datasets — and thus, huge amounts of computing infrastructure that consume enormous amounts of power — can have significant carbon footprints of their own. The scale of this issue is illustrated by the fact that data centers consume at least 3% of all the electricity generated on the planet. Companies will need to take this into account as they develop practices for monitoring their AI footprint.
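To make that footprint concrete, here is a minimal back-of-envelope sketch of how a company might estimate the emissions of a single training run. All figures are illustrative assumptions (hardware power draw, run duration, data-center efficiency and grid carbon intensity vary widely), not measured values.

```python
# Back-of-envelope estimate of the carbon footprint of an AI training run.
# Every figure used below is an illustrative assumption, not a measured value.

def training_emissions_kg(power_draw_kw: float,
                          hours: float,
                          pue: float,
                          grid_intensity_kg_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions for one training run.

    power_draw_kw:             average IT power of the training hardware (kW)
    hours:                     wall-clock duration of the run
    pue:                       data-center Power Usage Effectiveness (>= 1.0)
    grid_intensity_kg_per_kwh: kg CO2e emitted per kWh on the local grid
    """
    energy_kwh = power_draw_kw * hours * pue
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical example: a 100 kW cluster running for 240 hours in a
# facility with PUE 1.5, on a grid emitting 0.4 kg CO2e per kWh.
print(training_emissions_kg(100, 240, 1.5, 0.4))  # 14400.0 kg CO2e
```

Even with these rough inputs, the exercise shows why monitoring matters: the same run on a low-carbon grid, or in a more efficient facility, can emit a fraction of the CO2e.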
Ethical AI also can be advanced in small ways. For example, AI solution providers can opt to use darker user interfaces, which can consume less power than lighter ones, particularly on OLED displays (they’re easier on the eyes, too). This is a small tip, but if more people follow it, I believe it could have a meaningful cumulative impact on the environment.
Some organizations are already asking suppliers about their carbon footprints. So, clearly, environmentally friendly solutions are becoming a buyer criterion.
For ethical AI to exist, platforms need to leave footprints.
For ethical AI to work, we also need AI solutions that allow for auditability and traceability. This transparency will help companies establish better business and tech practices, while also providing visibility into their efforts in ensuring AI is used for good.
These traceability capabilities exist on your laptop today. If you gave your computer to forensic investigators, they could dig deep and see every website you’ve ever visited. They could identify what information you downloaded, what data you copied and everything else you did on the machine. It’s similar to footprints on a beach, which indicate where you have been, except without the ocean to wash these markers away.
However, AI algorithms and engines don’t always work that way. Such technologies can immediately sweep up these identifying footprints. In the process, they erase the record of what occurred and the ability to assess what happened and when. That makes it difficult to learn from mistakes, police for possible infractions and identify those who don’t follow organizational or regulatory rules.
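One way to keep those footprints from washing away is an append-only audit trail in which each record is cryptographically chained to the one before it. The sketch below is an illustrative design of that idea, not a production system or any particular vendor's implementation: tampering with any earlier entry breaks verification of everything after it.

```python
import hashlib
import json
import time

class AuditTrail:
    """Minimal append-only audit log for AI decisions (illustrative sketch).

    Each record stores the hash of the previous record, forming a chain:
    altering any past entry invalidates every hash that follows it.
    """

    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # placeholder hash for the first record

    def log(self, actor: str, action: str, details: dict) -> dict:
        """Append a record of who did what, chained to the prior record."""
        record = {
            "timestamp": time.time(),
            "actor": actor,
            "action": action,
            "details": details,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.records.append(record)
        self._last_hash = record["hash"]
        return record

    def verify(self) -> bool:
        """Recompute the whole chain; returns False if any record was altered."""
        prev = "0" * 64
        for record in self.records:
            if record["prev_hash"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True

trail = AuditTrail()
trail.log("model-v2", "prediction", {"input_id": 123, "output": "approve"})
trail.log("auditor", "review", {"input_id": 123})
print(trail.verify())  # True — the chain is intact
```

If a later user quietly edits the first record’s details, `verify()` returns False, which is exactly the property regulators and compliance teams need when reconstructing what an AI system did and when.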
That is why AI solution providers should build auditability and traceability into their systems. Governments and organizations using AI should consider the importance of auditability and traceability to their enforcement and compliance efforts.