Going offline and other changes

We did a summer pilot in Hubli and it had some very interesting results.

For a product manager, launching a new version of an app should not be hard. Updating the UX/UI is exciting. But we operate in a different world: everything we do needs to be measured. We have to ask, “Is this too much change? Will it alienate our users?” My colleagues went into this in a previous post, which is worth checking out, and it is the question we asked again during our Kharif pilot. The summer pilot in Hubli produced some very interesting results; we have asked an auditor and a university to validate them, and when we hear from them, we will announce the findings.

But for now, let’s talk about the Kharif pilot and how it differed from the summer pilot. We expanded out of Hubli into four other districts across the country; the pilot is currently running in Wardha (Maharashtra), Kutch (Gujarat), and Ranga Reddy and Adilabad (Telangana). In the summer, some of these farmers had access to irrigation; this time, the majority do not. The type of farmers, the locations, and even the soil are different. Last time around, we worked with around 120 farmers; this time, 700 lead farmers and 17,000 cascade farmers are using our app. Cascade farmers are those who get their information from lead farmers. We have partnered with the Welspun Foundation and the Deshpande Foundation for this pilot.

A few updates

Coming back to the question: how much is too much? In this pilot, we made our model available offline. One of the biggest user pain points was that our app struggled with spotty internet connections. To recap the process: farmers set sticky traps across their fields and take pictures of the trapped insects. In the summer, each image was uploaded to the cloud, processed there, and then an alert was generated. We took this offline. The image is no longer uploaded to the cloud; instead, it is processed by the model on the phone, and the alert is generated instantly. Our research team wrote a very interesting post on this earlier. The roll-out of this version is staged: currently, only extension workers and a limited set of lead farmers have it. Once the bugs are ironed out, we will roll it out to all our farmer partners.
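The offline flow described above can be sketched in a few lines. This is a minimal illustration, not the real app: the pest-counting function stands in for the on-device model, and the alert threshold and field names are assumptions made up for the example.

```python
from dataclasses import dataclass

# Hypothetical threshold; the real model and alert rules are not public.
ALERT_THRESHOLD = 10

@dataclass
class TrapPhoto:
    farmer_id: str
    image_path: str

def count_pests(photo: TrapPhoto) -> int:
    """Stand-in for the on-device model: in the real app a compressed
    neural network runs on the phone and returns a pest count."""
    return 12  # illustrative fixed value

def process_offline(photo: TrapPhoto) -> dict:
    """Run inference locally and generate the alert immediately,
    instead of uploading the image to the cloud first."""
    count = count_pests(photo)
    alert = "RED" if count >= ALERT_THRESHOLD else "GREEN"
    # The image and result can be queued for sync once a connection appears.
    return {"farmer_id": photo.farmer_id, "pest_count": count, "alert": alert}

result = process_offline(TrapPhoto("F001", "trap.jpg"))
print(result)  # {'farmer_id': 'F001', 'pest_count': 12, 'alert': 'RED'}
```

The key design point is that the alert no longer depends on connectivity: inference happens on the phone, and uploads become a background sync concern rather than a blocker.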

In the summer, we did extensive research on the challenges farmers faced while using the app, and we revamped the UI accordingly, focusing on their needs at a granular level. This time, Covid-19 prevents us from being on location to observe the farmers, so we had to make informed guesses. We plan to use usage analytics and interviews to verify whether those needs have been met.

Another important update in the Kharif pilot has been the launch of the dashboard. The idea behind it is that programme partners can monitor updates in real time. They can analyse which farmers get red alerts and which have not uploaded images in the last week. It also lets a partner drill down to the images farmers have uploaded and the pest counts. The dashboard can warn partners if there is a widespread infestation at the district level. It is a critical part of our solution: it encourages focussed monitoring and makes us proactive rather than reactive. We believe the app is just one part of the answer. What we currently have is a high-touch model; from a long-term sustainability perspective, we need to automate some parts of the process. This dashboard will be critical in optimizing the allocation of resources as we scale.
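The district-level warning logic behind such a dashboard can be sketched as a simple aggregation. The sample rows, the farmer IDs, and the "more than half of reporting farmers are red" rule are all assumptions invented for illustration, not our actual data or thresholds.

```python
from collections import defaultdict

# Illustrative rows as the dashboard might receive them:
# (farmer_id, district, alert). All values are made up.
uploads = [
    ("F001", "Wardha", "RED"),
    ("F002", "Wardha", "RED"),
    ("F003", "Wardha", "GREEN"),
    ("F004", "Kutch", "GREEN"),
]

# Hypothetical rule: warn a partner when more than half of a district's
# reporting farmers have red alerts.
DISTRICT_RED_FRACTION = 0.5

def district_warnings(rows):
    """Aggregate per-farmer alerts up to the district level."""
    totals, reds = defaultdict(int), defaultdict(int)
    for _, district, alert in rows:
        totals[district] += 1
        if alert == "RED":
            reds[district] += 1
    return [d for d in totals if reds[d] / totals[d] > DISTRICT_RED_FRACTION]

print(district_warnings(uploads))  # ['Wardha']
```

The same aggregation supports the drill-down view: the per-farmer rows feed both the district summary and the list of farmers who have not uploaded recently.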

Lessons learned and new challenges in front of us

There was one complaint in our summer pilot, which we have corrected: recommendations. Our recommendations advised cotton farmers to use one pesticide from a list of many, but gave no option to organic farmers. We course-corrected there. We were also told that not all pesticides are available in every part of the country; that was a valuable lesson, and our recommendations now account for it as well. Another key piece of information we incorporated was the time of sowing, which dictates how much the farmer needs to spray.
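The shape of the corrected recommendation logic can be illustrated as a filter over a catalogue. The pesticide names, regions, and dosage rule below are entirely made up for the example; they are not our actual recommendations.

```python
# Illustrative catalogue; names, regions, and rules are assumptions
# chosen to show the shape of the logic, not real advice.
PESTICIDES = [
    {"name": "chem-A", "organic": False, "regions": {"Wardha", "Kutch"}},
    {"name": "chem-B", "organic": False, "regions": {"Adilabad"}},
    {"name": "neem-oil", "organic": True, "regions": {"Wardha", "Adilabad"}},
]

def recommend(region: str, organic_only: bool, weeks_since_sowing: int):
    """Filter by regional availability and farming practice, then scale
    the dosage by crop stage (derived from the time of sowing)."""
    options = [p["name"] for p in PESTICIDES
               if region in p["regions"] and (not organic_only or p["organic"])]
    # Hypothetical dosage rule purely for illustration.
    dose = "full" if weeks_since_sowing >= 6 else "half"
    return options, dose

print(recommend("Wardha", organic_only=True, weeks_since_sowing=4))
# (['neem-oil'], 'half')
```

The two complaints map directly to the two filter conditions: regional availability and the organic flag, while the sowing date feeds the dosage.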

We have also come across challenges brought on by nature. Unseasonal and erratic rainfall ruined many fields, which meant a small subset of farmers abandoned their crop and stopped taking pictures. While unfortunate, this is an important data point for us.

Taken together, both pilots have taught us a lot about how farmers react to technology and how closely they follow the app’s instructions. While our real validation will come soon, this Kharif pilot has given us a lot of knowledge that will help us build for the vulnerable in India. We will chalk this up as another win.

Wadhwani AI

We are an independent, nonprofit institute developing multiple AI-based solutions in healthcare and agriculture, to bring about sustainable social impact at scale through the use of artificial intelligence.

ML Engineer


An ML Engineer at Wadhwani AI will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists and in collaboration with dedicated software engineers. To our partners, a Wadhwani AI solution is generally a decision-making tool that requires some piece of data to operate. It will be your responsibility to ensure that the information derived from that data is sound. This requires not only robust learned models, but also pipelines over which those models can be built, tweaked, tested, and monitored. The following subsections provide details from the perspective of solution design:

Early stage of proof of concept (PoC)

  • Set up and structure codebases that support an interactive ML experimentation process, as well as quick initial deployments
  • Develop and maintain toolsets and processes for ensuring the reproducibility of results
  • Review code with other technical team members at various stages of the PoC
  • Develop, extend, and adopt a reliable, Colab-like environment for ML

Late PoC

This is the early to mid-stage of AI product development.

  • Develop ETL pipelines; these can also be shared with and/or owned by data engineers
  • Set up and maintain feature stores, databases, and data catalogs, ensuring data veracity and lineage for on-demand pulls
  • Develop and support model health metrics

Post PoC

Responsibilities during production deployment

  • Develop and support A/B testing. Set up continuous integration and deployment (CI/CD) processes and pipelines for models
  • Develop and support continuous model monitoring
  • Define and publish service-level agreements (SLAs) for model serving. Such agreements include model latency, throughput, and reliability
  • L1/L2/L3 support for model debugging
  • Develop and support model serving environments
  • Model compression and distillation

We realize this list is broad and extensive. While the ideal candidate will have some exposure to each of these topics, we also envision great candidates who are experts in some subset. If either of those descriptions fits you, please apply.


Master’s degree or above in a STEM field, with several years of experience getting their hands dirty applying their craft.


  • Expert-level Python programmer
  • Hands-on experience with Python libraries
    • Popular neural network libraries
    • Popular data science libraries (pandas, NumPy)
  • Knowledge of systems-level programming; under-the-hood knowledge of C or C++
  • Experience and knowledge of various tools that fit into the model-building pipeline. There are several – you should be able to speak to the pluses and minuses of a variety of tools given some challenge within the ML development pipeline
  • Database concepts; SQL
  • Experience with cloud platforms is a plus

ML Scientist


As an ML Scientist at Wadhwani AI, you will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists. You will participate in translating a problem in the social sector to a well-defined AI problem, in the development and execution of algorithms and solutions to the problem, in the successful and scaled deployment of the AI solution, and in defining appropriate metrics to evaluate the effectiveness of the deployed solution.

In order to apply machine learning for social good, you will need to understand user challenges and their context, curate and transform data, train and validate models, run simulations, and broadly derive insights from data. In doing so, you will work in cross-functional teams spanning ML modeling, engineering, product, and domain experts. You will also interface with social sector organizations as appropriate.  


Associate ML scientists will have a strong academic background in a quantitative field (see below) at the Bachelor’s or Master’s level, with project experience in applied machine learning. They will possess demonstrable skills in coding, data mining and analysis, and building and implementing ML or statistical models. Where needed, they will have to learn and adapt to the requirements imposed by real-life, scaled deployments. 

Candidates should have excellent communication skills and a willingness to adapt to the challenges of doing applied work for social good. 


  • B.Tech./B.E./B.S./M.Tech./M.E./M.S./M.Sc. or equivalent in Computer Science, Electrical Engineering, Statistics, Applied Mathematics, Physics, Economics, or a relevant quantitative field. Work experience beyond the terminal degree will determine the appropriate seniority level.
  • Solid software engineering skills across one or multiple languages including Python, C++, Java.
  • Interest in applying software engineering practices to ML projects.
  • Track record of project work in applied machine learning. Experience in applying AI models to concrete real-world problems is a plus.
  • Strong verbal and written communication skills in English.