AI definitely needs QA monitoring

In 2017, there were reports that the world’s leading robotics and artificial intelligence pioneers had called on the United Nations to ban the development and use of killer robots and weapons such as drones, tanks, and automated machine guns. A group of 116 specialists from across 26 countries, led by Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman, called for a ban on autonomous weapons. This shows that even the big guns in automation are worried about robots running amok on the battlefield!

Almost a year back, Facebook, too, abandoned an experiment after two artificially intelligent programs started interacting with each other in a strange language that only they understood. A stringent surveillance and monitoring plan is therefore one of the key focus areas for any Artificial Intelligence (AI) related activity or application. These inventions closely impact our daily routines and peaceful coexistence; if anything goes wrong, it can endanger lives or our well-being in some way. Cybersecurity is another lens through which to view the ongoing boom around AI. What you need is a robust Quality Assurance and Testing plan!

AI can be implemented across various operational areas. AI and facial recognition technology can help the police identify suspects, while in healthcare, radiographers can leverage AI to examine radiographs effectively and within time limits. It can be, and has been, implemented for conservation activities such as tracking endangered species in the wild, monitoring remote glaciers, and much more. But all of this can happen only if the device and application are well tested and configured for any unforeseen occasion.

What can go wrong with AI?
John-David Lovelock, research vice president at Gartner, states, “AI promises to be the most disruptive class of technologies during the next 10 years due to advances in computational power, volume, velocity and variety of data, as well as advances in deep neural networks (DNNs).” He further mentions, “In the early years of AI, customer experience (CX) is the primary source of derived business value, as organizations see value in using AI techniques to improve every customer interaction, with the goal of increasing customer growth and retention. CX is followed closely by cost reduction, as organizations look for ways to use AI to increase process efficiency to improve decision making and automate more tasks.”

AI is indisputably loaded with opportunities and potential. Whether it delivers as expected remains to be seen. So let’s take the reverse route: establish what could go wrong with AI, and then estimate how any such situation can be salvaged.

Data: the rider and roller for AI
Any new technology works as per the data provided to it. Whether it’s a virtual assistant or a smart home device, it functions on the basis of the information it sources from its virtual environment or from external sources. Any leak or flaw in the data can result in disruption or a breach within the system. Hence, it is critical to ensure the quality of the data and to test the application along with its data sources. Erroneous data can degrade your application’s performance, particularly its accuracy.
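As a rough illustration, a data-quality gate can sit in front of the training pipeline. The Python sketch below is one possible version of such a gate; the file name, the “age” column, and the thresholds are hypothetical placeholders, not a prescribed standard:

```python
# A minimal sketch of a pre-training data-quality gate using pandas.
# The file name, column name ("age"), and thresholds are hypothetical.
import pandas as pd

def validate_training_data(df: pd.DataFrame) -> list:
    """Return a list of data-quality findings; an empty list means all checks passed."""
    findings = []
    # Completeness: flag columns with excessive missing values.
    for col in df.columns:
        missing = df[col].isna().mean()
        if missing > 0.05:
            findings.append(f"{col}: {missing:.1%} missing values")
    # Validity: flag implausible values in a known numeric field.
    if "age" in df.columns and not df["age"].dropna().between(0, 120).all():
        findings.append("age: values outside the plausible 0-120 range")
    # Uniqueness: flag duplicate records that could skew training.
    dupes = int(df.duplicated().sum())
    if dupes:
        findings.append(f"{dupes} duplicate rows")
    return findings

df = pd.read_csv("training_data.csv")  # hypothetical training set
issues = validate_training_data(df)
assert not issues, f"Data-quality gate failed: {issues}"
```

Running a gate like this in the build pipeline turns “trust the data” into an explicit, repeatable test rather than a one-time manual review.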

According to a survey conducted by Forrester, only 17 percent of respondents said that their biggest challenge was that they didn’t “have a well-curated collection of data to train an AI system.”

For instance, with AI for facial recognition, the accuracy of the application depends on the data it is fed and the way it is trained. Skewed training data can even result in bias, where the system recognizes men reliably but misidentifies women, or performs worse for certain racial groups. Hence, it is important to test and confirm the data that is being used to train the applications and devices for their respective operations.
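One practical way to surface such bias is a disaggregated evaluation: measure accuracy separately for each demographic group and flag large gaps. A minimal sketch follows, using made-up labels, predictions, and an illustrative 5% tolerance:

```python
# A minimal sketch of a disaggregated accuracy check for a recognition model.
# The labels, predictions, group tags, and 5% tolerance are illustrative
# assumptions, not real benchmark data.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each demographic group."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation set: true identity, predicted identity, group tag.
y_true = ["ann", "ben", "cas", "dee", "eli", "fay"]
y_pred = ["ann", "ben", "cas", "dee", "xxx", "fay"]
groups = ["male", "male", "male", "female", "female", "female"]

scores = accuracy_by_group(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())
if gap > 0.05:  # the tolerance is an illustrative choice
    print(f"Bias alert: accuracy gap of {gap:.0%} across groups {scores}")
```

The point is not the specific threshold but that per-group metrics become a standing test, so a bias regression fails the build instead of surfacing in production.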

When AI programs and applications are built, the data and the algorithms must be analyzed and tested in line with the principles, goals, and values of the organization. Effective testing of AI might even need internal and external audits to look at the device or system objectively and deliver a verdict. David Schubmehl, director of IDC’s cognitive and artificial intelligence systems research, says, “Organizations have already started to audit their machine learning models and to look at the data that goes into those models. But like anything else, it’s an emerging area. Organizations are still trying to figure out what the best practices are.”

Compliance and Security determine stable behaviour
Can you trust an AI device or application with critical national-level activities such as presidential elections? In the current scenario, definitely not! There are still doubts about the technology’s capability to perform an activity flawlessly, especially without any surveillance. Compliance with the set protocols, data points, system configurations, and information sources is needed to ensure that the AI application delivers consistent results. Compliance can be achieved with rigorous testing and constant validation.
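One building block for such constant validation is a regression test that pins model behaviour to a reviewed baseline, so identical inputs keep producing identical results across releases and environments. A minimal sketch, assuming a hypothetical model object with a predict() method and a hypothetical golden file:

```python
# A minimal sketch of a consistency (regression) test: identical inputs
# should produce identical predictions across releases and environments.
# The model object, its predict() method, and the golden file are
# hypothetical placeholders.
import json

def test_model_consistency(model, golden_path="golden_predictions.json"):
    """Compare current predictions against a stored, reviewed baseline."""
    with open(golden_path) as f:
        baseline = json.load(f)  # {"input text": "expected output", ...}
    for text, expected in baseline.items():
        actual = model.predict(text)
        assert actual == expected, (
            f"Inconsistent result for {text!r}: got {actual!r}, "
            f"expected {expected!r}")
```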

Compliance has to be ensured across varying conditions, as the environment in which the application performs and delivers will never stay constant. Anand Rao, partner and global AI leader at PricewaterhouseCoopers, says, “Even perfectly accurate data could be problematically biased. If, say, an insurance company based in the Midwest used its historical data to train its AI systems, then expanded to Florida, the system would not be useful for predicting the risk of hurricanes.”
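A concrete way to catch the kind of mismatch Rao describes is a drift check that compares the distribution of production inputs against the training data. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic numbers; the “storm damage” feature and the 0.05 p-value threshold are illustrative assumptions:

```python
# A minimal sketch of a distribution-drift check for the scenario Rao
# describes. The synthetic "storm damage" feature and the 0.05 p-value
# threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Hypothetical per-policy weather-risk feature, by region.
midwest_training = rng.normal(loc=1.0, scale=0.5, size=1_000)
florida_production = rng.normal(loc=3.0, scale=1.5, size=1_000)

stat, p_value = ks_2samp(midwest_training, florida_production)
if p_value < 0.05:
    print(f"Drift detected (KS statistic {stat:.2f}): re-validate the model "
          "before trusting its predictions in the new region.")
```

Run periodically against live inputs, a check like this turns “the environment changed” from a postmortem finding into a monitoring alert.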

The application must align with environmental changes and comply with localized protocols, which ensures accuracy and efficiency of the service. This also evades the risk of working with fake or erroneous data, as the system develops the capability to align and self-learn.

Additionally, Security Testing is critical to ensure that the data remains untampered with and that the system can combat any attempts by hackers. Safeguarding data is one of the growing concerns for almost all organizations that intend to adopt AI or have already dived in.

Whether AI can thrash out the manual element altogether, or will need continued monitoring, is a question yet to be answered. Quality Assurance and Testing efforts can help streamline the work that organizations and experts put into developing AI systems and applications.

Cigniti’s Digital QA service helps organizations in their Digital initiatives. Our methodologies, techniques, and specialists ensure that apps are thoroughly validated for User Experience (UX), covering responsive web design patterns, screen resolutions, accessibility, usability, content, navigation, and more.

Dial in to get some interesting insights from our experts on your Digital initiatives and developments.

Author

  • Cigniti Technologies

    Cigniti is the world’s leading AI & IP-led Digital Assurance and Digital Engineering services company with offices in India, the USA, Canada, the UK, the UAE, Australia, South Africa, the Czech Republic, and Singapore. We help companies accelerate their digital transformation journey across various stages of digital adoption and help them achieve market leadership.
