Wadhwani AI at Google I/O

We attended Google I/O as one of the winners of the Google AI Impact Challenge, and had a chance to demo our solution and take part in a week-long accelerator program. Here are some highlights from our experience.
Rajesh Jain (Senior Director – Programs), Dhruvin Vora (Product Manager), and Aman Dalmia (Research Fellow) at the Google Accelerator Program

By Aman Dalmia, Research Fellow

Earlier this year, at Google I/O, Jeff Dean, the head of Google AI, announced the winners of the Google AI Impact Challenge, which provides its winners with coaching from Google’s AI experts, a grant from a $25 million pool, and credits and consulting from Google Cloud. Out of 2,600+ applications, 20 projects were selected, and the integrated pest management project that we have been working on at Wadhwani AI was one of them, receiving a grant of $2 million. We were the only team selected from India, and our booth at I/O gave us a chance to demo our solution. In addition, each grantee could send a three-member team to a week-long accelerator program that kicks off a six-month mentorship period.

The entire group in attendance: folks from Google.org, Google Launchpad, Google AI and all the grantees

In June, I attended the program in San Francisco with two of my colleagues, Dhruvin Vora and Rajesh Jain. Our hosts were the passionate and energetic folks from Google Launchpad, Google.org, and Google AI. The week was packed with 1:1 mentorship sessions; keynotes on various aspects of building an AI solution; an introduction to the design sprint process and other best practices from Google; and a whole lot of networking activities.

The highlights of the bootcamp, for me, were the following.

Keynotes
On each of the first three days, there was a keynote session delivered by someone from Google AI. One was by Jason Mayes, a Senior Creative Engineer at Google (and, incidentally, our AI coach), who talked about creative applications of AI, such as Google’s Thing Translator, which translates the text in an image and rewrites the translated text back onto the image.

The complete slide deck can be found here.

Our CEO, Dr. P Anandan, and VP of Product & Programs, Raghu Dharmaraju, in conversation with Jeff Dean, Senior Fellow & SVP, Google AI (Research and Health), at Google I/O

Another keynote was by Peter Norvig, a legend at Google and the author of Artificial Intelligence: A Modern Approach, one of the most famous books written on AI. His central point was that most people obsess over the AI part of building an AI product, when it is actually only about 10% of the entire pipeline. The rest of the pipeline consists of data collection, preparation, annotation, cleaning, analysis, engineering, and so on. He made an interesting observation: backpropagation lets us tackle errors in the AI element, but there is no such algorithm to handle errors in the other components of the pipeline, and hence we should really focus on getting those right.

Finally, Andrew Zaldivar gave a talk on ethics in building AI solutions. Specifically, he referred to a large body of work initiated at Google by the People + AI Research (PAIR) team. I’d like to mention two specific tools that he suggested:

1. The People + AI Guidebook, which provides a framework for designing human-centered AI products.

2. The What-If Tool, which can be used to inspect the behaviour of any trained machine learning model, with features such as slicing performance by dataset features, editing individual datapoints, and exploring counterfactuals. All you need to use it is Jupyter.

Mentorship Sessions

This was one of the most helpful components of the accelerator program. For two full days, each grantee had several hour-long sessions with various Google mentors, tailored to each grantee’s needs.

For example, our first day was full of sessions with technical mentors, which helped us clear up various doubts regarding the algorithmic pipeline, the overall infrastructure, and best practices for building a scalable solution. We also had product-related questions, and thanks to the diversity of the mentors, our second day was structured to address those needs: product strategy, fundraising, long-term sustainability, user-centric design, and so on.

The Design Sprint

A Design Sprint provides a framework to test and validate ideas in a five-day cycle by bringing all team members together in a room to focus on just one thing.

The coordinators wanted us to feel like we were participating in an actual design sprint, but since we had just three hours, only a few components were taken on. We were divided into teams of five, with each team working to solve a common problem.

The entire activity involved sketching, brainstorming, conducting interviews, writing down ideas on Post-Its and finally prioritising the ones that received the majority vote. One key rule of thumb that they stressed was to get everyone on the team—designers, engineers, product managers, researchers, even accountants—together because each person provides a unique perspective. This also turned the exercise into a networking session because members from different teams were asked to come together and collaborate.

Objectives & Key Results (OKR)

This is the framework that Google uses internally to track project-level, and sometimes individual-level, progress over a period of time. OKRs were a key focus of the accelerator program, and all the grantees were expected to set their respective OKRs for the six-month mentorship period. We were first given a talk by Zachary Ross on what OKRs are and how to set them correctly, along with examples of both good and bad ones. Both our AI coach and our GSM were closely involved in the process of setting our OKRs, and we will keep checking in with them on our progress going forward.

Here are the key lessons that I take back from my experience and would like to share with everyone:

Punctuality: Being extremely mindful during meetings and ending them within the allotted time made sure that our discussions stayed focused and didn’t wander off across a million topics.

Regular acknowledgement: More often than not, we tend to focus on the “things that need to be done” and forget to celebrate the “things that have been done”. An almost daily acknowledgement of the time and effort put in by various people was part of the culture from day one, and it meant that people were genuinely happy doing their jobs.

Regular feedback, constructive criticism and lifelong learning: A major event at the end of each day was a call for feedback on the entire day. To put things into perspective, Google Launchpad has coached 500+ startups to date, and the fact that they still ask for feedback is a good reminder that you can always improve.

Being a part of the Wadhwani AI team at the accelerator program was one of the most fruitful experiences of my life so far. It also happened to be the first time I had travelled outside India, which allowed me to witness different cultures and meet people of various backgrounds. From being dropped into an unfamiliar environment to making new friends and exploring new places, it was a week that I will always remember.

  • Wadhwani AI

    We are an independent nonprofit institute developing AI-based solutions in healthcare and agriculture to bring about sustainable social impact at scale.


ML Engineer


An ML Engineer at Wadhwani AI will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists and in collaboration with dedicated software engineers. To our partners, a Wadhwani AI solution is generally a decision-making tool that engages with some piece of data. It will be your responsibility to ensure that the information derived from that data is sound. This requires not only robust learned models, but also pipelines over which those models can be built, tweaked, tested, and monitored. The following subsections provide details from the perspective of solution design:

Early stage of proof of concept (PoC)

  • Set up and structure codebases that support an interactive ML experimentation process, as well as quick initial deployments
  • Develop and maintain toolsets and processes for ensuring the reproducibility of results
  • Review code with other technical team members at various stages of the PoC
  • Develop, extend, or adopt a reliable, Colab-like environment for ML
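
To make the reproducibility point concrete, here is a hypothetical sketch (the helper name is illustrative, not from an actual Wadhwani AI codebase) of the kind of utility that pins every common source of randomness before an experiment runs:

```python
import os
import random

import numpy as np


def seed_everything(seed: int = 42) -> None:
    """Pin common sources of randomness so experiment runs are repeatable."""
    random.seed(seed)                          # Python's built-in RNG
    np.random.seed(seed)                       # NumPy's global RNG
    os.environ["PYTHONHASHSEED"] = str(seed)   # hash randomization (child processes)
    # In a real pipeline you would also seed your ML framework here,
    # e.g. torch.manual_seed(seed) or tf.random.set_seed(seed).


seed_everything(123)
first_draw = np.random.rand(3)
seed_everything(123)
second_draw = np.random.rand(3)
assert np.allclose(first_draw, second_draw)  # identical seeds, identical results
```

In practice a helper like this is called once at the top of every training script, with the seed recorded alongside the experiment's other configuration.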

Late PoC

This is the early-to-mid stage of AI product development

  • Develop ETL pipelines; these can also be shared with and/or owned by data engineers
  • Set up and maintain feature stores, databases, and data catalogs, ensuring data veracity and lineage for on-demand pulls
  • Develop and support model health metrics
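
As a minimal, hypothetical illustration of what an ETL step can look like here, the sketch below uses pandas; the column names and cleaning rules are invented for the example and are not from a real Wadhwani AI pipeline:

```python
import pandas as pd


def extract(raw_records):
    """Extract: load raw records (e.g. from an API dump) into a DataFrame."""
    return pd.DataFrame(raw_records)


def transform(df):
    """Transform: drop incomplete rows and normalise columns."""
    df = df.dropna(subset=["pest_count"]).copy()
    df["pest_count"] = df["pest_count"].astype(int)
    df["trap_id"] = df["trap_id"].str.upper()
    return df


def load(df, path):
    """Load: persist the cleaned table; a real pipeline might target a feature store."""
    df.to_csv(path, index=False)


records = [
    {"trap_id": "a1", "pest_count": 3.0},
    {"trap_id": "b2", "pest_count": None},  # incomplete row, dropped in transform
]
clean = transform(extract(records))
print(len(clean))  # 1
```

A production version of the same shape would add schema validation and logging, and be scheduled by an orchestrator rather than run by hand.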

Post PoC

Responsibilities during production deployment

  • Develop and support A/B testing. Set up continuous integration and deployment (CI/CD) processes and pipelines for models
  • Develop and support continuous model monitoring
  • Define and publish service-level agreements (SLAs) for model serving. Such agreements include model latency, throughput, and reliability
  • L1/L2/L3 support for model debugging
  • Develop and support model serving environments
  • Model compression and distillation
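
As one hypothetical example of what continuous model monitoring can mean in practice, the snippet below computes the Population Stability Index (PSI) between a training-time feature sample and a live serving sample; the 0.1/0.2 thresholds are a commonly used rule of thumb, not a Wadhwani AI standard:

```python
import numpy as np


def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between a reference (training) sample and a live (serving) sample.

    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 investigate.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to fractions, with a small epsilon to avoid log(0).
    eps = 1e-6
    exp_frac = exp_counts / max(exp_counts.sum(), 1) + eps
    act_frac = act_counts / max(act_counts.sum(), 1) + eps
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))


rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=5_000)
stable_live = rng.normal(0.0, 1.0, size=5_000)   # same distribution as training
shifted_live = rng.normal(1.5, 1.0, size=5_000)  # drifted distribution

assert population_stability_index(train_feature, stable_live) < 0.1
assert population_stability_index(train_feature, shifted_live) > 0.2
```

A monitoring job would run a check like this per feature on a schedule and page the team when a threshold is crossed.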

We realize this list is broad and extensive. While the ideal candidate has some exposure to each of these topics, we also envision great candidates being experts at some subset. If either of those cases happens to be you, please apply.


Master’s degree or above in a STEM field. Several years of experience getting your hands dirty applying your craft.


  • Expert-level Python programmer
  • Hands-on experience with Python libraries
    • Popular neural network libraries
    • Popular data science libraries (pandas, NumPy)
  • Knowledge of systems-level programming; under-the-hood knowledge of C or C++
  • Experience and knowledge of various tools that fit into the model-building pipeline. There are several; you should be able to speak to the pluses and minuses of a variety of tools given some challenge within the ML development pipeline
  • Database concepts; SQL
  • Experience with cloud platforms is a plus

ML Scientist


As an ML Scientist at Wadhwani AI, you will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists. You will participate in translating a problem in the social sector to a well-defined AI problem, in the development and execution of algorithms and solutions to the problem, in the successful and scaled deployment of the AI solution, and in defining appropriate metrics to evaluate the effectiveness of the deployed solution.

In order to apply machine learning for social good, you will need to understand user challenges and their context, curate and transform data, train and validate models, run simulations, and broadly derive insights from data. In doing so, you will work in cross-functional teams spanning ML modeling, engineering, product, and domain experts. You will also interface with social sector organizations as appropriate.  


Associate ML scientists will have a strong academic background in a quantitative field (see below) at the Bachelor’s or Master’s level, with project experience in applied machine learning. They will possess demonstrable skills in coding, data mining and analysis, and building and implementing ML or statistical models. Where needed, they will have to learn and adapt to the requirements imposed by real-life, scaled deployments. 

Candidates should have excellent communication skills and a willingness to adapt to the challenges of doing applied work for social good. 


  • B.Tech./B.E./B.S./M.Tech./M.E./M.S./M.Sc. or equivalent in Computer Science, Electrical Engineering, Statistics, Applied Mathematics, Physics, Economics, or a relevant quantitative field. Work experience beyond the terminal degree will determine the appropriate seniority level.
  • Solid software engineering skills across one or multiple languages, including Python, C++, and Java.
  • Interest in applying software engineering practices to ML projects.
  • Track record of project work in applied machine learning. Experience in applying AI models to concrete real-world problems is a plus.
  • Strong verbal and written communication skills in English.