Trust issue with AI needs to be addressed: Vishwam Sankaran

NEW DELHI: From diagnosing diseases to categorising huskies, Artificial Intelligence (AI) has countless uses, but mistrust in the technology and its solutions will persist until people, the "end users", can fully understand all its processes, says a US-based scientist.

Overcoming the "lack of transparency" in the way AI processes information, popularly called the "black box problem", is crucial for people to develop trust in the technology, said Sambit Bhattacharya, who teaches Computer Science at Fayetteville State University.

"Trust is a major issue with Artificial Intelligence because people are the end-users, and they can never have full trust in it if they do not know how AI processes information," Bhattacharya told PTI.

The computer scientist, whose work includes using machine learning (ML) and AI to process images, was a keynote speaker at the recent 4th International and 19th National Conference on Machines and Mechanisms (iNaCoMM 2019) at the Indian Institute of Technology in Mandi.

To buttress his point that users don’t always trust solutions provided by AI, Bhattacharya cited the instance of researchers at Mount Sinai Hospital in the US who applied ML to a large database of patient records containing information such as test results and doctor visits.

The 'Deep Patient' software they used had exceptional accuracy in predicting disease, discovering patterns hidden in the hospital data that indicated when patients were heading towards various ailments, including cancer, according to a 2016 study published in the journal Nature.

However, Deep Patient was itself a black box, the researchers said.

It could anticipate the onset of psychiatric disorders like schizophrenia, which the researchers said is difficult for physicians to predict. But the new tool offered no clue as to how it did so.

The researchers said the AI tool needed a level of transparency that explained the process behind its predictions, reassured doctors, and justified any changes it recommended in prescription drugs.

"Many machine learning tools are still black boxes that render verdicts without any accompanying justification," physicians wrote in a study published in the journal BMJ Clinical Research in May.

According to Bhattacharya, even facial recognition systems based on AI may come with black boxes.

"Face recognition is controversial because of the black box problem. It still fails for people with dark skin, and makes mistakes when matching with a database of faces. There are good examples including problems with use cases in legal systems," he explained.

Not all algorithms are trustworthy.

Bhattacharya mentioned a project at the University of California, Irvine, where a student created an algorithm to categorise photos of huskies and wolves.

The UCI student's algorithm could classify the two canines almost perfectly. However, on later cross-analysis, his professor, Sameer Singh, found the algorithm was identifying wolves based only on the snow in the image background, not on its analysis of the animals' features.

Citing another example, Bhattacharya said, "If you show an image classification algorithm a cat image, the cat comes with a background. So the algorithm could be saying it is a cat based on what it sees in the background that it relates to a cat."

In such cases, "the problem is that the algorithm does not decouple the background from the foreground totally right", he explained.
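To see how easily that can happen, here is a minimal, hypothetical sketch in Python (the synthetic data, feature names and scikit-learn model are illustrative assumptions, not anything described in the article) of a classifier learning the background instead of the animal:

# Hypothetical toy example: the label is spuriously correlated with background
# brightness (snow), so the model learns the background, not the animal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_wolf = rng.integers(0, 2, n).astype(bool)

# A genuine but noisy "animal shape" feature, and a background-brightness
# feature that is almost perfectly aligned with the label in this toy data set.
animal_shape = np.where(is_wolf, 0.3, -0.3) + rng.normal(0.0, 1.0, n)
background_brightness = np.where(is_wolf, 0.9, 0.2) + rng.normal(0.0, 0.05, n)

X = np.column_stack([animal_shape, background_brightness])
clf = LogisticRegression().fit(X, is_wolf)

print("coefficients [animal, background]:", clf.coef_[0])
# The background coefficient dominates, so a husky photographed in snow
# (husky-like shape, bright background) is still labelled a wolf.
print("husky in snow classified as wolf?", clf.predict([[-0.5, 0.9]])[0])

Because the background feature separates the two synthetic classes almost perfectly, the model leans on it rather than on the animal's shape, which is the same kind of failure the wolf classifier exhibited.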

There is a whole new field dealing with 'AI explainability' that is trying to explain how algorithms make decisions.

For instance, in London, researchers from DeepMind, a subsidiary of Google parent company Alphabet, used deep learning on patients' eye scans to decide which cases should be prioritised for treatment.

Their study, published in the journal Nature, noted that the system takes in three-dimensional eye scans, analyses them, and picks cases that need urgent referral.

According to DeepMind researchers, the model gives several possible explanations for each diagnosis, rates each of them, and also shows how it has labelled the parts of the patient's eye.

"Google has invested a lot of effort in developing trustworthy algorithms, or having better algorithms to scrutinize what a deep learning algorithm is doing," Bhattacharya said.

He added that the Local Interpretable Model-Agnostic Explanations (LIME) algorithm is a promising solution for overcoming AI black boxes. LIME lets researchers analyse the "input values" the AI system has used to arrive at its conclusion, Bhattacharya said.

He said such algorithms provide transparency by probing the kinds of features the AI uses to draw its conclusions in the first place.

"Generally we are interested in knowing which features were most influential in the decision from the ML algorithm. For example, if it says that a picture is that of a dog, we may find that pointy or floppy ear features and typical rounded dog-nose features are the most important, whereas body hair is not very important," he explained.
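As an illustration of that kind of read-out, the following is a from-scratch Python sketch of the core idea behind LIME-style explanations; it is not the LIME library itself, and the "dog" model and the three feature names are made-up placeholders. The sketch perturbs the input, queries the black-box model, and fits a locally weighted linear model whose coefficients show which features mattered most.

import numpy as np
from sklearn.linear_model import Ridge

def lime_style_explanation(predict_proba, x, num_samples=2000, scale=0.5):
    """Perturb x, query the black-box model on the perturbations, and fit a
    locally weighted linear model; its coefficients approximate how influential
    each feature was for this particular prediction."""
    rng = np.random.default_rng(0)
    perturbed = x + rng.normal(0.0, scale, size=(num_samples, x.shape[0]))
    preds = predict_proba(perturbed)[:, 1]                  # probability of the class of interest
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))  # nearby samples count more
    return Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights).coef_

# Placeholder "dog vs. not-dog" model over three made-up features:
# [ear pointiness, nose roundness, body-hair length]; hair barely matters.
def toy_predict_proba(X):
    score = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.05 * X[:, 2]
    p = 1.0 / (1.0 + np.exp(-score))
    return np.column_stack([1.0 - p, p])

x = np.array([0.8, 0.6, 0.3])
print(lime_style_explanation(toy_predict_proba, x))
# Prints large coefficients for the ear and nose features and a small one for
# body hair, matching the kind of read-out Bhattacharya describes.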

Despite emerging solutions to the black box problem, human intervention will still be needed to interpret AI’s decisions.

"I believe that in the near future things will be far from perfect. We cannot expect the automatic explanations of AI decisions to be very good. We will need a significant amount of human oversight of the explanation itself, i.e. there will be issues in trusting the explanations themselves," Bhattacharya said. PTI VIS

