Artificial Intelligence (AI) has demonstrated its ability to enhance our daily lives by augmenting human capacity and capabilities. However, AI systems have also shown that, when they are not designed and managed responsibly, they can be biased, insecure, and, often inadvertently, violate human rights.

Simply put, AI has the potential to do great things, but, as we have seen with other powerful technologies such as social media, rules are required to give those building these systems the guardrails necessary to protect individual and public interests.

In recent years, a growing group of voices from research institutes, policy makers, journalists, human rights advocates, and companies has shared unique perspectives and insights on how AI systems should be built and managed. In a wide array of research reports, whitepapers, policies, guidelines, and articles, many efforts have been made to identify the key challenges and to address ways to mitigate the harms posed by the increased adoption of AI.

Additionally, given the scope, scale, and complexity of these tools, establishing comprehensive and measurable rules that work in all contexts (domain, type of technology, and region) is extremely challenging. As such, we are often left with high-level, non-binding principles or frameworks that leave a lot of room for interpretation. Having built one of these policies, I know firsthand that this is not enough.

It is imperative that we take the next step: that we start working together to determine a way to establish those guardrails and, most importantly, that we find a way to take these concepts out of theory and put them into action. In their paper, “A Unified Framework of Five Principles for AI and Society,” Luciano Floridi and Josh Cowls conduct a comparative analysis of “six high-profile initiatives established in the interest of socially beneficial AI.”

Prior to having the opportunity to meet with Floridi and become more familiar with his work on creating such an important ontology, we had started a similar exercise, even referencing four of the same initiatives. Our purpose was to give practitioners an easy way to make sense of this increasingly complex landscape and to use these best practices and frameworks to build an easy-to-use assessment: the Responsible AI Trust Index.

The dimensions, our version of a unified framework, include: …
