Publications

The NTEP (National TB Elimination Program), together with USAID, aims to aid innovators and leverage emerging technologies such as artificial intelligence (AI) to accelerate tuberculosis elimination efforts. The NTEP has identified areas and articulated problems within the TB cascade of care that are amenable to AI. However, the NTEP is aware that this list is non-exhaustive and is open to exploring new problem areas and solutions.

The purpose of this document is to motivate and invite individuals and organizations to develop AI-based solutions for TB. The conclusion section of the document describes the process for engaging with the NTEP and USAID on such solutions.

We take an information-theoretic approach to identifying nonlinear feature redundancies in unsupervised learning. We define a subset of features as sufficiently informative when the joint entropy of all the input features equals that of the chosen subset. We argue that the remaining features are redundant, since all the accessible information about the data can be captured from the sufficiently informative features. Next, instead of directly estimating the entropy, we propose a Fourier-based characterization. To that end, we develop a novel Fourier expansion on the Boolean cube that incorporates correlated random variables, generalizing the standard Fourier analysis beyond product probability spaces. Based on this Fourier framework, we propose a measure of redundancy for features in the unsupervised setting. We then consider a variant of this measure together with a search algorithm that reduces its computational complexity to as low as O(nd), with n being the number of samples and d the number of features. Beyond the theoretical justifications, we test our method on various real-world and synthetic datasets. Our numerical results demonstrate that the proposed method outperforms state-of-the-art feature selection techniques.
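
To make the notion of a sufficiently-informative subset concrete, here is a minimal sketch that greedily grows a feature subset until its plug-in joint entropy matches that of all features. It illustrates only the entropy criterion on binary data, not the paper's Fourier-based measure or its O(nd) search; the function names and the XOR toy data are illustrative.

```python
# A minimal sketch of the "sufficiently informative subset" idea on
# binary data, using a plug-in entropy estimate and greedy forward
# selection. Illustration only, not the paper's Fourier-based algorithm.
import numpy as np
from collections import Counter

def joint_entropy(X):
    """Plug-in joint entropy (bits) of the rows of a binary matrix X."""
    counts = Counter(map(tuple, X))
    p = np.array(list(counts.values()), dtype=float) / len(X)
    return -np.sum(p * np.log2(p))

def sufficient_subset(X, tol=1e-9):
    """Greedily grow a subset S of columns until H(X_S) ~= H(X)."""
    n, d = X.shape
    target = joint_entropy(X)
    selected, remaining = [], list(range(d))
    while remaining:
        # Add the feature whose inclusion increases joint entropy most.
        gains = [joint_entropy(X[:, selected + [j]]) for j in remaining]
        best = int(np.argmax(gains))
        selected.append(remaining.pop(best))
        if gains[best] >= target - tol:
            break
    return selected

# Toy example: the third column is the XOR of the first two, hence
# redundant; any two of the three columns are sufficiently informative.
rng = np.random.default_rng(0)
x0, x1 = rng.integers(0, 2, (2, 1000))
X = np.column_stack([x0, x1, x0 ^ x1])
print(sufficient_subset(X))  # a two-element subset, e.g. [0, 1]
```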

A fundamental obstacle in learning information from data is the presence of nonlinear redundancies and dependencies in it. To address this, we propose a Fourier-based approach to extract relevant information in the supervised setting. We first develop a novel Fourier expansion for functions of correlated binary random variables, generalizing the standard Fourier expansion on the Boolean cube beyond product probability spaces. We further extend our Fourier analysis to stochastic mappings. As an important application of this analysis, we investigate learning with feature subset selection. We reformulate this problem in the Fourier domain and introduce a computationally efficient measure for selecting features. By bridging the Bayes error rate with the Fourier coefficients, we demonstrate that the Fourier expansion provides a powerful tool to characterize nonlinear dependencies in the feature-label relation. Via theoretical analysis, we show that our proposed measure finds provably asymptotically optimal feature subsets. Lastly, we present an algorithm based on our measure and verify our findings via numerical experiments on various datasets.
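
As background for the feature-scoring idea, the sketch below estimates coefficients of the classical Fourier (parity) expansion on the Boolean cube, which is orthonormal only under the uniform product measure, and scores each feature by the energy of coefficients of subsets containing it. The paper's contribution is the generalization to correlated features and stochastic mappings; this snippet covers only the standard special case, and all names are illustrative.

```python
# Classical Boolean-cube Fourier expansion: f(x) = sum_S fhat(S) chi_S(x),
# with chi_S(x) = prod_{i in S} x_i and fhat(S) = E[f(x) chi_S(x)] under
# the uniform measure on {-1, +1}^d. Features are scored by coefficient
# energy; this is the i.i.d. special case, not the paper's generalization.
import itertools
import numpy as np

def fourier_scores(X, y, max_order=2):
    """Score feature i by sum of fhat(S)^2 over subsets S containing i.

    X: n x d matrix with entries in {-1, +1}; y: length-n labels in {-1, +1}.
    """
    n, d = X.shape
    scores = np.zeros(d)
    for k in range(1, max_order + 1):
        for S in itertools.combinations(range(d), k):
            chi = np.prod(X[:, S], axis=1)   # parity function on subset S
            fhat = float(np.mean(y * chi))   # empirical Fourier coefficient
            for i in S:
                scores[i] += fhat ** 2
    return scores

# Toy example: the label is a parity of features 0 and 1; feature 2 is noise.
rng = np.random.default_rng(1)
X = rng.choice([-1, 1], size=(5000, 3))
y = X[:, 0] * X[:, 1]
print(fourier_scores(X, y))  # energy concentrates on features 0 and 1
```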

Rapidly scaling screening, testing, and quarantine has been shown to be an effective strategy to combat the COVID-19 pandemic. We consider the application of deep learning techniques to distinguish individuals with COVID-19 from those without, using data acquirable from a phone. Using cough and context (symptoms and metadata) represents one such promising approach. Several independent works in this direction have shown promising results. However, none of them report performance across clinically relevant data splits, specifically splits in which the development and test sets are separated in time (retrospective validation) and across sites (broad validation). Although there is meaningful generalization across these splits, performance varies significantly (by up to 0.1 AUC). In addition, we study the performance of symptomatic and asymptomatic individuals across these three splits. Finally, we show that our model focuses on meaningful features of the input: cough bouts for cough, and relevant symptoms for context.
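
As one hypothetical way to combine the two input modalities described above, the sketch below fuses a small CNN over a cough spectrogram with an MLP over symptom/metadata context into a single risk logit. The architecture, input shapes, and sizes are assumptions for illustration, not the model evaluated in the paper.

```python
# A hypothetical two-branch network fusing cough audio with context.
import torch
import torch.nn as nn

class CoughContextNet(nn.Module):
    def __init__(self, n_mels=64, n_context=16):
        super().__init__()
        # Audio branch: small CNN over a log-mel spectrogram (1 x mels x time).
        self.audio = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features
        )
        # Context branch: MLP over symptom / metadata vector.
        self.context = nn.Sequential(nn.Linear(n_context, 32), nn.ReLU())
        # Fusion head: concatenate both branches, predict one risk logit.
        self.head = nn.Linear(32 + 32, 1)

    def forward(self, spec, ctx):
        return self.head(torch.cat([self.audio(spec), self.context(ctx)], dim=1))

model = CoughContextNet()
spec = torch.randn(8, 1, 64, 128)   # batch of 8 spectrograms
ctx = torch.randn(8, 16)            # batch of 8 context vectors
print(model(spec, ctx).shape)       # torch.Size([8, 1])
```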

Interpretability of epidemiological models is a key consideration, especially when these models are used in a public health setting. Interpretability is strongly linked to the identifiability of the underlying model parameters, i.e., the ability to estimate parameter values with high confidence given observations. In this paper, we define three separate notions of identifiability that explore the different roles played by the model definition, the loss function, the fitting methodology, and the quality and quantity of data. We then define an epidemiological compartmental model framework in which we highlight these non-identifiability issues and how they can be mitigated.
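
One common sensitivity-based diagnostic illustrates the link between identifiability and confidence in estimates: if the Fisher information (sensitivity) matrix of the observed trajectory is near-singular, no fitting procedure can pin down the parameters. The sketch below applies this check to a basic SIR model with assumed values; the paper's three notions of identifiability are more fine-grained than this single diagnostic.

```python
# Sensitivity-based identifiability check for a basic SIR model:
# a large condition number of J^T J (J = trajectory sensitivities)
# signals that parameters are poorly identifiable from the data.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def infections(theta, t):
    """Observed signal: infectious fraction over time."""
    beta, gamma = theta
    return odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma))[:, 1]

t = np.linspace(0, 50, 100)
theta = np.array([0.3, 0.1])   # assumed (beta, gamma)
eps = 1e-6

# Finite-difference sensitivities d(observation)/d(parameter).
J = np.column_stack([
    (infections(theta + eps * e, t) - infections(theta, t)) / eps
    for e in np.eye(2)
])
fim = J.T @ J
print(np.linalg.cond(fim))  # large condition number => poorly identifiable
```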

In temporal ordered clustering, given a single snapshot of a dynamic network in which nodes arrive at distinct time instants, we aim to partition its nodes into K ordered clusters C_1 ≺ ⋯ ≺ C_K such that, for i < j, nodes in cluster C_i arrived before nodes in cluster C_j, with K being a data-driven parameter not known upfront. Such a problem is of considerable significance in many applications, ranging from tracking the expansion of fake news to mapping the spread of information. We first formulate our problem for a general dynamic graph and propose an integer programming framework that finds the optimal clustering, represented as a strict partial order, achieving the best precision (i.e., the fraction of successfully ordered node pairs) for a fixed density (i.e., the fraction of comparable node pairs). We then develop a sequential importance sampling procedure and design unsupervised and semi-supervised algorithms to find temporal ordered clusters that efficiently approximate the optimal solution. To illustrate the techniques, we apply our methods to the vertex copying (duplication-divergence) model, which exhibits some edge-case challenges in inferring the clusters compared to other network models. Finally, we validate the performance of the proposed algorithms on synthetic and real-world networks.
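
The two quantities traded off in the formulation can be stated concretely: density is the fraction of node pairs made comparable by the ordered clustering, and precision is the fraction of comparable pairs whose cluster order matches the true arrival order. A minimal sketch, with hypothetical inputs:

```python
# Precision and density of an ordered clustering, evaluated against
# known arrival times; inputs are illustrative.
from itertools import combinations

def precision_and_density(cluster_of, arrival):
    """cluster_of[v]: cluster index of node v (clusters C_1 < C_2 < ...);
    arrival[v]: true arrival time of node v."""
    nodes = list(cluster_of)
    comparable = ordered_ok = total = 0
    for u, v in combinations(nodes, 2):
        total += 1
        if cluster_of[u] == cluster_of[v]:
            continue  # same cluster: the pair is left unordered
        comparable += 1
        # The pair is correct if cluster order matches arrival order.
        if (cluster_of[u] < cluster_of[v]) == (arrival[u] < arrival[v]):
            ordered_ok += 1
    density = comparable / total
    precision = ordered_ok / comparable if comparable else 1.0
    return precision, density

arrival = {"a": 1, "b": 2, "c": 3, "d": 4}
clusters = {"a": 0, "b": 0, "c": 1, "d": 1}  # C_1 = {a, b}, C_2 = {c, d}
print(precision_and_density(clusters, arrival))  # (1.0, ~0.667)
```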

Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. In 2020, the COVID-19 Forecast Hub collected, disseminated, and synthesized hundreds of thousands of specific predictions from more than 50 different academic, industry, and independent research groups. This manuscript systematically evaluates 23 models that regularly submitted forecasts of reported weekly incident COVID-19 mortality counts in the US at the state and national level. One of these models was a multi-model ensemble that combined all available forecasts each week. The performance of individual models showed high variability across time, geospatial units, and forecast horizons. Half of the models evaluated showed better accuracy than a naïve baseline model. In combining the forecasts from all teams, the ensemble showed the best overall probabilistic accuracy of any model. Forecast accuracy degraded as models made predictions farther into the future, with probabilistic accuracy at a 20-week horizon more than 5 times worse than when predicting at a 1-week horizon. This project underscores the role that collaboration and active coordination between governmental public health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
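
A natural companion to this summary is the scoring rule behind the phrase "probabilistic accuracy": the weighted interval score (WIS), commonly used to evaluate the Hub's quantile forecasts (lower is better). The sketch below follows the usual convention; the specific intervals and values are illustrative, not taken from the evaluation.

```python
# Weighted interval score (WIS) for a quantile forecast: a weighted sum
# of the absolute error of the median and interval scores of central
# prediction intervals, following the standard convention.
def interval_score(lower, upper, y, alpha):
    """Score one central (1 - alpha) prediction interval at observation y."""
    return ((upper - lower)
            + (2 / alpha) * max(lower - y, 0)   # penalty: y below interval
            + (2 / alpha) * max(y - upper, 0))  # penalty: y above interval

def wis(median, intervals, y):
    """intervals: list of (alpha, lower, upper) central intervals."""
    k = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, lo, hi in intervals:
        total += (alpha / 2) * interval_score(lo, hi, y, alpha)
    return total / (k + 0.5)

# Example: a forecast with median 100 and two central intervals,
# scored against an observation of 130.
print(wis(100, [(0.5, 80, 120), (0.1, 50, 160)], y=130))  # 16.2
```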

Accurate forecasts of infections for localized regions are valuable for policy making and medical capacity planning. Existing compartmental and agent-based models for epidemiological forecasting employ static parameter choices and cannot be readily contextualized, while adaptive solutions focus primarily on the reproduction number. The current work proposes a novel model-agnostic Bayesian optimization approach for learning model parameters from observed data that generalizes to multiple application-specific fidelity criteria. Empirical results point to the efficacy of the proposed method with SEIR-like models on COVID-19 case forecasting tasks. A city-level forecasting system based on this method is being used for COVID-19 response in a few impacted Indian cities.
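
A minimal sketch of the model-agnostic idea, assuming scikit-optimize for the Bayesian optimization loop: the black-box objective runs an SEIR simulation and scores it against observed case counts. The RMSE fidelity criterion, parameter bounds, and synthetic data below are illustrative stand-ins for the paper's application-specific choices.

```python
# Bayesian optimization of SEIR parameters against observed data,
# assuming scikit-optimize (skopt) is installed.
import numpy as np
from scipy.integrate import odeint
from skopt import gp_minimize

def seir(y, t, beta, sigma, gamma):
    s, e, i, r = y
    return [-beta * s * i, beta * s * i - sigma * e,
            sigma * e - gamma * i, gamma * i]

t = np.linspace(0, 60, 61)
y0 = [0.99, 0.005, 0.005, 0.0]

def simulate(params):
    return odeint(seir, y0, t, args=tuple(params))[:, 2]  # infectious curve

observed = simulate([0.5, 0.2, 0.1])  # synthetic "ground truth" data

def loss(params):
    # Fidelity criterion: RMSE between simulated and observed infections.
    return float(np.sqrt(np.mean((simulate(params) - observed) ** 2)))

result = gp_minimize(loss,                                   # black-box objective
                     [(0.1, 1.0), (0.05, 0.5), (0.05, 0.5)], # parameter bounds
                     n_calls=40, random_state=0)
print(result.x, result.fun)  # fitted (beta, sigma, gamma) and final loss
```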

During an epidemic, accurate long-term forecasts are crucial for decision-makers to adopt appropriate policies and to prevent medical resources from being overwhelmed. This came to the forefront during the COVID-19 pandemic, during which there were numerous efforts to predict the number of new infections. Various classes of models were employed for forecasting, including compartmental models and curve-fitting approaches. Curve-fitting models often produce accurate short-term forecasts, but their parameters can be difficult to associate with actual disease dynamics. Compartmental models take these dynamics into account, allowing for more flexible and interpretable models that facilitate qualitative comparison of scenarios. This paper proposes a method of strengthening the forecasts from compartmental models by using short-term predictions from a curve-fitting approach as synthetic data. We discuss how to fit this hybrid model in a generalized manner, without reliance on region-specific data, making the approach easy to adapt. The model is compared to a standard approach, and differences in performance are analyzed for a diverse set of COVID-19 case counts.
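
The hybrid pipeline can be sketched in three steps: fit a curve-fitting model to recent observations, extrapolate it a short horizon ahead as synthetic data, and hand the augmented series to the compartmental fit. The logistic form, one-week horizon, and all values below are assumptions for illustration, not the paper's exact setup.

```python
# Hybrid idea: short-term curve-fit extrapolations become synthetic
# observations for the compartmental model fit.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, r, t0):
    return k / (1.0 + np.exp(-r * (t - t0)))

t_obs = np.arange(30)
cases = logistic(t_obs, 10000, 0.25, 20) + np.random.default_rng(2).normal(0, 50, 30)

# Step 1: short-term curve fit on observed cumulative case counts.
popt, _ = curve_fit(logistic, t_obs, cases, p0=[cases[-1] * 2, 0.1, t_obs[-1]])

# Step 2: extrapolate one week ahead to create synthetic data points.
t_syn = np.arange(30, 37)
synthetic = logistic(t_syn, *popt)

# Step 3: fit the compartmental model on observed + synthetic data,
# e.g. with a least-squares loss evaluated over the augmented series.
t_aug = np.concatenate([t_obs, t_syn])
y_aug = np.concatenate([cases, synthetic])
print(y_aug.shape)  # 37 points now available for the compartmental fit
```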

As the COVID-19 outbreak continues to pose a serious worldwide threat, numerous governments have chosen to establish lockdowns in order to reduce disease transmission. However, imposing the strictest possible lockdown at all times has dire economic consequences, especially in areas with widespread poverty. In fact, many countries and regions have started charting paths to ease lockdown measures. Thus, planning efficient ways to tighten and relax lockdowns is a crucial and urgent problem. The absence of a vaccine or a cure for COVID-19 to date implies that the infected population cannot be reduced through pharmaceutical interventions; however, non-pharmaceutical interventions (lockdowns) can slow disease spread and keep it manageable. This work therefore focuses on how to manage disease spread without severe economic consequences. We develop a reinforcement learning based approach that is (1) robust to a range of parameter settings, and (2) optimizes multiple objectives related to different aspects of public health and the economy, such as staying within hospital capacity and delaying the spread of the disease.
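
As a toy illustration of the control problem (not the paper's method or reward design), the sketch below runs tabular Q-learning over weekly lockdown levels that scale transmission in a crude SIR simulator, trading an economic cost against a hospital-capacity penalty. Every constant here is an assumption.

```python
# Tabular Q-learning over lockdown levels on a coarse SIR simulator;
# an illustrative toy, not the paper's approach.
import numpy as np

rng = np.random.default_rng(3)
LEVELS = [1.0, 0.6, 0.3]        # transmission multipliers per lockdown level
CAPACITY, GAMMA_DISC = 0.1, 0.95

def step(state, action):
    s, i = state
    beta = 0.4 * LEVELS[action]
    new_inf = beta * s * i
    s, i = s - new_inf, i + new_inf - 0.1 * i
    # Cost: economic burden of lockdown + penalty above hospital capacity.
    cost = 0.05 * action + 10.0 * max(i - CAPACITY, 0)
    return (s, i), -cost

def discretize(state):
    return min(int(state[1] * 20), 19)   # bucket by infectious fraction

Q = np.zeros((20, len(LEVELS)))
for episode in range(2000):
    state = (0.99, 0.01)
    for week in range(52):
        d = discretize(state)
        # Epsilon-greedy action selection over lockdown levels.
        a = rng.integers(len(LEVELS)) if rng.random() < 0.1 else int(np.argmax(Q[d]))
        state, reward = step(state, a)
        d2 = discretize(state)
        Q[d, a] += 0.1 * (reward + GAMMA_DISC * np.max(Q[d2]) - Q[d, a])

print(np.argmax(Q, axis=1))  # learned lockdown level per infection bucket
```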

Testing capacity for COVID-19 remains a challenge globally due to the lack of adequate supplies, trained personnel, and sample-processing equipment. These problems are even more acute in rural and underdeveloped regions. We demonstrate that solicited cough sounds collected over a phone, when analysed by our AI model, carry a statistically significant signal indicative of COVID-19 status (AUC 0.72, t-test, p < 0.01, 95% CI 0.61-0.83). This holds true for asymptomatic patients as well. To this end, we collect the largest known (to date) dataset of microbiologically confirmed COVID-19 cough sounds, from 3,621 individuals. When used as a triaging step within an overall testing protocol, enabling risk stratification of individuals before confirmatory tests, our tool can increase the testing capacity of a healthcare system by 43% at a disease prevalence of 5%, without additional supplies, trained personnel, or physical infrastructure.
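
The capacity arithmetic behind such a triaging step can be sketched as follows: if only individuals flagged by the triage tool receive a confirmatory test, a fixed test budget screens 1/f as many people, where f is the flagged fraction. The operating point below (sensitivity and specificity) is an assumption chosen to roughly reproduce the reported figure, not an operating point taken from the paper.

```python
# Back-of-the-envelope capacity gain from risk-stratified triage.
prevalence = 0.05
sensitivity = 0.90   # assumed triage operating point
specificity = 0.31   # assumed triage operating point

# Fraction of screened individuals the triage tool sends to testing.
flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
gain = 1 / flagged - 1
print(f"flagged: {flagged:.2%}, capacity increase: {gain:.0%}")  # ~43%
```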
