Interpretable Machine Learning

Predictive models are pervasive, with applications in search engines, recommender systems, and the health, legal, and financial domains. Yet for the most part they are used as black boxes that output a prediction, score, or ranking without revealing, even partially, how different features influence the model's output. When an algorithm prioritizes information to predict, classify, or rank, algorithmic transparency becomes an important property for curbing discrimination and building explanation-based trust in the system.

Consequently, we often end up with accurate yet non-interpretable models. We have been working on models that are interpretable by design, as well as on post-hoc approaches that explain the rationale behind the predictions of an already trained model. Specifically, we have proposed several interpretability approaches to audit ranking models in the context of Web search.

We have been studying interpretability for text-based ranking models by trying to unearth the query intent as understood by complex retrieval models. In [1], we proposed a model-agnostic approach that locally approximates a complex ranker with a simple ranking model in the term space. In [3], we ask what makes a good reference input distribution for neural rankers. We also built EXS [4], a simple research prototype for explaining neural rankers.
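To make the local-approximation idea concrete, here is a minimal, hypothetical sketch in the spirit of LIME-style surrogates: it perturbs a document's terms, queries a black-box scoring function, and fits a weighted linear model whose coefficients act as term-level explanations. The function name black_box_score, the perturbation scheme, and the sample weighting are illustrative assumptions, not the actual method of [1].

```python
# Hypothetical sketch of a local surrogate for a black-box ranker.
# Not the exact method of [1]; all names and design choices are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

def local_term_explanation(query, doc_terms, black_box_score, n_samples=500, seed=0):
    """Fit a linear model over term presence/absence that locally approximates
    the black-box ranker's score for (query, doc)."""
    rng = np.random.default_rng(seed)
    # Binary masks: each row randomly keeps or drops terms of the document.
    masks = rng.integers(0, 2, size=(n_samples, len(doc_terms)))
    masks[0, :] = 1  # include the unperturbed document
    scores = []
    for mask in masks:
        perturbed_doc = [t for t, keep in zip(doc_terms, mask) if keep]
        scores.append(black_box_score(query, perturbed_doc))
    # Weigh samples by similarity to the original document (fraction of terms kept).
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, np.asarray(scores), sample_weight=weights)
    # Coefficients indicate each term's local contribution to the ranker's score.
    return dict(zip(doc_terms, surrogate.coef_))
```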

Recently, we investigated the difference between human and machine understanding of images using post-hoc interpretability approaches [2]. In particular, we seek to answer the following questions: Which (well-performing) complex ML models are closer to humans in their use of features to make accurate predictions? How does task difficulty affect the feature-selection capability of machines compared to humans? Are humans consistently better at selecting features that make image recognition more accurate?
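One simple way such human-machine agreement could be quantified (an illustrative assumption on our part, not the study protocol of [2]) is to compare human-annotated image regions against the most salient pixels of a post-hoc attribution map:

```python
# Hypothetical comparison of human vs. machine feature selection on an image.
# Not the protocol of [2]; the top-fraction threshold and IoU metric are illustrative.
import numpy as np

def feature_agreement(human_mask, saliency_map, top_fraction=0.2):
    """Intersection-over-union between a binary human annotation mask and the
    top-`top_fraction` most salient pixels of a machine attribution map."""
    saliency = np.abs(saliency_map).ravel()
    k = max(1, int(top_fraction * saliency.size))
    threshold = np.partition(saliency, -k)[-k]   # k-th largest saliency value
    machine_mask = np.abs(saliency_map) >= threshold
    human_mask = human_mask.astype(bool)
    intersection = np.logical_and(human_mask, machine_mask).sum()
    union = np.logical_or(human_mask, machine_mask).sum()
    return intersection / union if union else 0.0
```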


Projects

This thread of my research is supported by the German Research Foundation (DFG), Amazon Research Awards, and Schufa GmbH.


Publications

[1] Model Agnostic Interpretability of Rankers via Intent Modelling. Jaspreet Singh and Avishek Anand. In Conference on Fairness, Accountability, and Transparency (FAT), 2020.

[2] Dissonance Between Human and Machine Understanding. Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, and Avishek Anand. In CSCW 2019.

[3] A Study on the Interpretability of Neural Retrieval Models using DeepSHAP. Zeon Trevor Fernando, Jaspreet Singh, and Avishek Anand. In SIGIR 2019.

[4] EXS: Explainable Search Using Local Model Agnostic Interpretability. Jaspreet Singh and Avishek Anand. In WSDM 2019.

[5] Posthoc Interpretability of Learning to Rank Models using Secondary Training Data. Jaspreet Singh and Avishek Anand. In Workshop on ExplainAble Recommendation and Search (co-located with SIGIR 2018).

[6] Finding Interpretable Concept Spaces in Node Embeddings using Knowledge Bases. Maximilian Idahl, Megha Khosla, and Avishek Anand. In Workshop on Advances in Interpretable Machine Learning and Artificial Intelligence & eXplainable Knowledge Discovery in Data (co-located with ECML-PKDD 2019).

Avishek Anand
Associate Professor

My research interests lie in the application of machine learning to information retrieval and Web tasks.