Future Decoded: Microsoft imbues ethics and responsibility into AI as a core part of Azure AI technology.

Designing AI to be trustworthy requires creating solutions that reflect ethical principles deeply rooted in important and universal values. Microsoft are making this a core principle and practice, and are doing a great deal of research in this area.

Why is this important? Information technology is ever-growing and becoming more sophisticated by the day. We live in an era defined by rapidly evolving technology and by the way that technology is becoming part of every person’s everyday life, regardless of their abilities or experience. Because of this, the organisations behind the technology must take responsibility for the enormous impact it has. The future is likely to be shaped even more strongly by the technology evolving today, and if companies neglect to take a practical, thoughtful and responsible approach to developing and implementing this software, they run the risk of never catching up with the consequences.

Technology can be both empowering and threatening. Issues such as breaches of privacy and data protection, cyber-attacks, and irresponsible programming are present threats that risk becoming even more serious problems. The rise of AI poses another type of threat that is different but equally important to recognise: the developers and programmers behind the technology risk embedding societal prejudices and their own judgements into the algorithms that determine the AI’s actions. Moral and ethical conundrums arise as technology becomes more sophisticated and ingrained in our daily lives.

Human decision makers are susceptible to many forms of prejudice and bias, such as those rooted in gender and racial stereotypes. Evidence from research, as well as publicized news stories, has shown that machine learning systems can inadvertently discriminate against minorities, historically disadvantaged populations and other groups. One would hope that machine learning would overcome this bias, but, unfortunately, it learns from decisions that are made in the world.

Microsoft are devising interesting and innovative solutions to tackle bias. For example, Microsoft Research have created a “fairness enforcer”: a machine learning process that yields a classification rule that is fair according to a chosen fairness definition, while minimizing the error involved.

[Figure: machine learning for fair decisions, simplified overview]
Credit: Microsoft (https://www.microsoft.com/en-us/research/blog/machine-learning-for-fair-decisions/)
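To make the idea concrete, here is a minimal sketch of training a classifier that minimizes error subject to a fairness constraint, using the open-source Fairlearn library (which implements a reductions approach of this kind) alongside scikit-learn. The dataset, column names and sensitive attribute below are purely illustrative, and this is not presented as the exact method from the research post.

```python
# Minimal sketch: fit a classifier that minimizes error subject to a
# demographic-parity constraint on a sensitive attribute.
# Assumes the open-source fairlearn and scikit-learn packages are installed;
# the data and column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

X = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 8, 4, 6, 2],
    "test_score":       [60, 70, 85, 55, 90, 75, 80, 65],
})
y = [0, 0, 1, 0, 1, 1, 1, 0]                          # e.g. hired / not hired
sensitive = ["F", "M", "M", "F", "M", "F", "F", "M"]  # attribute the rule must not discriminate on

# Wrap an ordinary classifier so that training minimizes error
# while (approximately) satisfying demographic parity across groups.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=sensitive)

print(mitigator.predict(X))  # predictions from the fairness-constrained rule
```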

Microsoft are currently attempting to help companies by embedding a practical and considerate approach into their software. Last year, Microsoft released a statement titled Responsible Bots: 10 guidelines for developers of conversational AI (2018), in which they outlined their ethos for responsibly creating and maintaining technology that people are likely to use day-to-day for real-world, consequential tasks and problems. They also published six principles for the responsible development of AI in their 2018 book, The Future Computed. Microsoft are working systematically and intentionally to prioritise social responsibility and ethical decision-making as integral factors in their transformation as a company, and to make their choices and actions public and transparent.

While Microsoft are currently leading the way in implementing practical strategies, it is increasingly important that any company developing AI is aware of the risks and knows how to prevent and handle them.

A strong example of how conversational AI can be used irresponsibly and become harmful is Microsoft’s ‘Tay’ experiment. Tay was a Microsoft AI chatbot that was launched on Twitter in 2016. After less than 24 hours, Tay was shut down completely, as it had begun to generate a huge number of inappropriate tweets filled with racist, sexist and anti-Semitic language. This happened because Tay learned from the interactions it had with the public. The lack of responsibility on the developers’ part was a failing in itself, but Microsoft have learned from this lesson and done the right thing in acknowledging it. Miller et al. state in their article ‘Why We Should Have Seen That Coming: Comments on Microsoft’s ‘Tay’ Experiment, and Wider Implications’:

We contend that these incidents are symptoms of a deeper problem with the very nature of learning software (LS-a term that we will use to describe any software that changes its program in response to its interactions) that interacts directly with the public, the developer’s relationship with it, and the responsibility associated with it. We make the case that when LS interacts directly with people or indirectly via social media, the developer has additional ethical responsibilities beyond those of standard software. There is an additional burden of care. (Miller, Wolf and Grodzinsky, 2017)

Cases like this make it clear that developers need to re-think their approach to conversational AI, and begin to prioritise a more proactive ethical stance. Microsoft have suggested a number of ways for developers and companies to begin their journey to more responsible, practical AI practices.

It is important to have a ‘break out in case of emergency’ option present when conversational AI is interacting with the public. This allows conversations to be interrupted or shut down if they become inappropriate. For companies and products using AI virtual helplines, it also allows a conversation to be flagged to a human moderator if the customer expresses dissatisfaction or has a problem that goes beyond the AI’s capabilities.
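As a rough illustration of what such an escape hatch might look like in code, the sketch below checks each user message for signs of dissatisfaction and hands the conversation to a human when they appear. The trigger phrases and function names are illustrative assumptions, not any particular bot framework’s API.

```python
# Minimal sketch of a "hand off to a human" check inside a bot's message loop.
# Trigger phrases, messages and function names are illustrative only.
ESCALATION_TRIGGERS = {
    "speak to a human", "talk to a person", "this is wrong",
    "complaint", "not helpful", "cancel my account",
}

def needs_escalation(user_message: str) -> bool:
    """Return True when the user signals dissatisfaction or asks for a person."""
    text = user_message.lower()
    return any(trigger in text for trigger in ESCALATION_TRIGGERS)

def generate_bot_reply(user_message: str) -> str:
    # Placeholder for the bot's normal response logic.
    return "Thanks, let me look into that for you."

def handle_turn(user_message: str) -> str:
    if needs_escalation(user_message):
        # Flag the conversation to a human moderator and stop the bot responding.
        return "I'm handing this conversation over to one of our human colleagues now."
    return generate_bot_reply(user_message)

print(handle_turn("This is wrong, I want to speak to a human"))
```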

One of the most interesting, but also most potentially harmful, features of conversational AI is that it can take on a human-like persona, or appear to have a personality. This gives extra weight to the things it says in conversation; it needs to interact appropriately with users, respect cultural norms, and be designed to avoid violating those norms wherever possible. For example, a chatbot designed to help users navigate an online banking site should not have opinions on religion or politics, and should not engage at all with questions that are not relevant to its purpose. It is also useful to have a ‘code of conduct’ for users where possible, explicitly prohibiting hate speech or any form of harassment. Techniques such as keyword filtering are useful when designing a bot to pick up on inflammatory language and inappropriate subjects.
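A very small sketch of that kind of keyword filter is shown below; the word lists are illustrative placeholders, and a production bot would use maintained deny-lists and more sophisticated content moderation alongside them.

```python
# Minimal sketch of keyword filtering for inflammatory or off-topic input.
# The word lists are illustrative placeholders only.
import re

BLOCKED_TERMS = {"<slur>", "<abusive-term>"}            # maintained deny-list of abusive language
OFF_TOPIC_TERMS = {"religion", "politics", "election"}  # subjects a banking bot should not discuss

def classify_message(message: str) -> str:
    words = set(re.findall(r"[a-z'<>-]+", message.lower()))
    if words & BLOCKED_TERMS:
        return "blocked"    # refuse, warn, or end the conversation
    if words & OFF_TOPIC_TERMS:
        return "off_topic"  # politely decline: not relevant to the bot's purpose
    return "ok"

print(classify_message("What do you think about politics?"))  # -> off_topic
```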

Data Relish have recently released a series on how to tackle prejudice in machine learning, and that thinking applies here too. It is essential that AI is programmed to treat every user fairly and equally. To help ensure this, the development team should be diverse and home to a wide range of perspectives and experiences. Furthermore, data should be consistently assessed by humans; while bias detection tools can be helpful, they should not be relied on entirely.
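As one simple example of the kind of check a human reviewer might run alongside automated bias detection tools, the sketch below compares a model’s positive-outcome rate across groups; the data and column names are illustrative assumptions.

```python
# Minimal sketch of a manual fairness spot-check: compare the rate of positive
# predictions across groups before release. Data and column names are illustrative.
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "predicted": [1,   0,   1,   0,   0,   1],
})

rates = results.groupby("group")["predicted"].mean()
print(rates)
# A large gap in selection rates is a prompt for human review, not an automatic verdict.
print("Selection-rate gap:", rates.max() - rates.min())
```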

Overall, it is essential that the development of AI does not leave us feeling that the work is now the machines’ job alone and therefore out of our hands. Human intervention is essential throughout the entire process. For AI in more sensitive fields, such as health or law enforcement, experts from the relevant fields should be brought in and encouraged to give their input. Customer service AI should be programmed to ask users for feedback, which should be passed back to colleagues at the business. To ensure AI is accessible to all, disabled people should be invited to test bots before they are released to the public. Ideally, AI should learn from diversity and effectively recognise and shut down issues.

At Data Relish, we are currently reading Brad Smith’s new book on ethics and AI, Tools and Weapons. We will summarize it in a future post, and, in the meantime, we are glad to see that Microsoft are taking these issues seriously.


Miller, K. W., Wolf, M. J. and Grodzinsky, F. S. (2017). ‘Why We Should Have Seen That Coming: Comments on Microsoft’s Tay Experiment, and Wider Implications’. ORBIT Journal, 1(2).
