As an official AI partner of the Central TB Division (CTD), we are developing multiple interventions across the TB care cascade and helping India’s National TB Elimination Programme (NTEP) become AI-ready.

Automated interpretation of LPA test results

We use AI to interpret results of the LPA test, which determines drug resistance in TB. Each LPA strip encodes the patient's drug resistance pattern via a series of activated (dark) and inactivated (light) bands corresponding to different regions of the genome of the TB bacterium, Mycobacterium tuberculosis. The AI problem consists of identifying this band pattern and then applying a set of rules to determine the specific type of drug resistance.

The challenging aspects of this problem lie in visually isolating each strip on a (possibly crumpled) piece of paper and then identifying the bands and their sequence. This is currently carried out using classical computer vision techniques that detect activated bands via edge detection and match the band pattern against reference templates. Most importantly, the models need to work in a resource-constrained environment without impacting clinical outcomes. We are working to develop end-to-end supervised deep learning techniques that address this multi-task problem, with a human-in-the-loop as an integral part of the AI ecosystem, along with novel data augmentation techniques.
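To make the two-stage idea concrete, the sketch below shows the rule-application half of the pipeline on a synthetic strip: a 1-D intensity profile (as might be extracted after strip isolation) is split into band windows, dark bands are flagged by thresholding, and simple resistance rules are applied. The band count, window width, threshold, and rule indices are all illustrative assumptions, not the actual assay layout or our production logic.

```python
import numpy as np

def detect_bands(profile, n_bands=27, dark_threshold=0.5):
    """Split a normalized 1-D intensity profile into equal-width band
    windows; a band is 'activated' when its mean intensity is dark
    (below the threshold). Real strips would first need alignment
    against control bands -- omitted here for brevity."""
    windows = np.array_split(np.asarray(profile, dtype=float), n_bands)
    return [w.mean() < dark_threshold for w in windows]

def match_pattern(bands, rules):
    """Toy rule engine: a drug is flagged resistant when any of its
    wild-type bands is absent. The index sets are hypothetical."""
    return {drug: any(not bands[i] for i in wt_idx)
            for drug, wt_idx in rules.items()}

# Synthetic strip: 27 bands of 10 pixels each, all dark except band 4.
profile = np.ones(270)
for i in range(27):
    if i != 4:
        profile[i * 10:(i + 1) * 10] = 0.1  # dark (activated) band

bands = detect_bands(profile)
rules = {"rifampicin": [2, 3, 4], "isoniazid": [10, 11]}
print(match_pattern(bands, rules))  # -> {'rifampicin': True, 'isoniazid': False}
```

The deep learning approach described above would replace the hand-tuned thresholding with learned band localization, while keeping a rule layer (and a human reviewer) on top.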

Prediction of risk for Loss to Follow Up (LFU)

Our solution is a risk prediction project. Using a set of patient indicators (age, gender, location, time interval between diagnosis and treatment initiation, etc.) collected from TB patients at treatment initiation, we predict in advance whether or not a patient will eventually complete treatment. The AI employs an ensemble of models trained on Nikshay data covering treatment outcomes for roughly half a million TB patients across the country. The data is feature-engineered: a large number of categorical indicators are encoded in different ways, and new features, for example ones that capture patient migratory behavior, are created through specific data transformations. The models were then rigorously evaluated on a dataset of roughly half a million patients as part of a blind, prospective evaluation process.
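A minimal sketch of the kind of feature engineering described above: categorical indicators are one-hot encoded, continuous indicators pass through, and a derived feature summarizes migratory behavior. The field names and the district-count proxy are illustrative assumptions, not the actual Nikshay schema or our production transformations.

```python
def one_hot(value, categories):
    """Encode a categorical indicator as a one-hot vector."""
    return [1 if value == c else 0 for c in categories]

def engineer_features(record):
    """Turn a raw patient record into a numeric feature vector.
    All field names here are hypothetical."""
    feats = []
    feats += one_hot(record["gender"], ["M", "F", "O"])
    feats += one_hot(record["sector"], ["public", "private"])
    # Continuous indicator used directly.
    feats.append(record["diagnosis_to_treatment_days"])
    # Derived migration feature: count of distinct districts in the
    # patient's registration history, a crude proxy for mobility.
    feats.append(len(set(record["district_history"])))
    return feats

record = {"gender": "F", "sector": "public",
          "diagnosis_to_treatment_days": 6,
          "district_history": ["Pune", "Mumbai", "Pune"]}
print(engineer_features(record))  # -> [0, 1, 0, 1, 0, 6, 2]
```

Vectors of this shape would then be fed to each member of the model ensemble.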

The principal AI challenge here lies in early prediction of treatment dropout, also referred to as loss to follow-up (LFU). Dropout is governed by a number of factors, many of them dynamic in nature, so early prediction is expected to be difficult. Our AI solution, however, yields accuracies more than twice as high as those of the best rule-based decision-making systems, and is fair across a number of important cohorts such as gender, public vs. private treatment facilities, and month of treatment initiation. We are also working on interpretability methods to ensure that predictions are explainable in terms of the underlying indicators.
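One simple way to audit fairness across cohorts, as mentioned above, is to break model accuracy out by group and compare. The helper below is a generic sketch with made-up data; it is not our actual evaluation harness, and accuracy is only one of several metrics one would check per cohort.

```python
def cohort_accuracy(y_true, y_pred, cohorts):
    """Accuracy broken out by cohort label (e.g. gender, sector),
    used to check that performance is comparable across groups."""
    out = {}
    for g in set(cohorts):
        idx = [i for i, c in enumerate(cohorts) if c == g]
        out[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return out

# Toy labels: 1 = predicted/actual LFU, 0 = treatment completed.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
cohorts = ["M", "M", "M", "F", "F", "F"]
print(cohort_accuracy(y_true, y_pred, cohorts))
```

Large gaps between cohort accuracies would flag a fairness problem even when overall accuracy looks good.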

TB Ultrasound (USG)

Our USG solution aims to demonstrate that abnormal features found in chest ultrasound scans can serve as signals for diagnosing TB with AI. Building on prior work, we first show that the chest ultrasound scans of TB patients indeed contain distinctive features discernible to the radiologist's eye. We then formulate the AI task as detecting these abnormal features automatically and predicting the likelihood of the patient being TB-positive. Our current AI proof-of-concept uses deep learning to identify abnormal features within individual frames of a USG video scan, followed by frame-level aggregation to make a video- or patient-level prediction for TB.
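The frame-to-video aggregation step can be sketched as follows. Here a top-k mean over per-frame abnormality probabilities produces the video-level score, so that a few clearly abnormal frames dominate an otherwise normal scan; top-k mean is one plausible aggregation scheme assumed for illustration, not necessarily the one used in our system.

```python
import numpy as np

def video_score(frame_probs, top_k=5):
    """Aggregate per-frame abnormality probabilities into one
    video-level TB score by averaging the top-k frames."""
    probs = np.sort(np.asarray(frame_probs, dtype=float))[::-1]
    return float(probs[:top_k].mean())

# Eight frames from a hypothetical scan; three frames look abnormal.
frames = [0.05, 0.1, 0.92, 0.88, 0.07, 0.9, 0.1, 0.04]
print(round(video_score(frames, top_k=3), 3))  # -> 0.9
```

A simple mean over all frames would instead dilute the abnormal frames, which is why frame-level aggregation choices matter for patient-level sensitivity.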

A challenging aspect of this solution is ensuring that data collection for building the AI model happens in an unbiased way, i.e., USG scans are collected under the same protocol regardless of whether the patient has TB. We accomplish this by recommending 'lawn-mower' style complete-chest scans for all patients, as well as localized scans carried out in equal proportion for TB and non-TB subjects.