Field Lessons: Why Tech Solutions Should Focus on Low-Income Users

As new tech solutions are developed, designers have a responsibility to their users to ensure that the technology solves the problems it intends to solve, without creating new problems in their place.
How can an organization driving tech solutions be sure that its users have enough information to actually use them?
Looking for Ground Truths

In November 2021, I was in a village in Rajasthan, conducting an on-field design research study aimed at improving our pest management solution. An important part of this exercise was talking to the cotton farmers in the village about their farming practices and everyday struggles. The interviews began with me asking about their daily routine during the cotton season, and slowly moved towards more technical aspects.

I recorded several of these conversations with the farmers in the village, and listened to them many times even after I returned to the office. They usually went like this:

“Smartphone hai na aap ke paas? Koi app chalaate ho kya aap?”

[Do you have a smartphone? Do you use any apps on your phone?]

“Madam, dekho— WhatsApp, YouTube aur Facebook to main chalaata hun. Ye jo aap ki app hai na, CottonAce, ye mujhe bhaisahab ne sikhayi, par kuch samay baad inko bhi samajh nai aa raha tha ke kya problem hai isme? Aap pehle inki training karo ache se, aur fir humaari.”

[Madam, see, I use WhatsApp, YouTube and Facebook apps on my phone. The field facilitator here helped me to learn how to use the CottonAce app that your team has developed. However, after some time, I had some difficulty with using the app, and the field facilitator also got confused. You must first train these field facilitators well and then also train us farmers.]

I also interviewed the field facilitators in the village: experts who work with these farmers on the ground, helping them learn better farming practices, and who also help the farmers install and use apps like CottonAce. In these interviews, the facilitators described their own struggles with using the app, owing to their limited exposure to such solutions. They also spoke openly about their lack of confidence in using such technology, and about their need for better training materials to study from, both for themselves and for the farmers in their charge.

Following this field visit, I worked with my colleagues on the pest management solution to gather the training materials we had used in the past. Reviewing this material against the insights gathered from the field, it dawned on me that much of it would be unrelatable to farmers: for it to be effective, the language and the communication approach would have to be very different, keeping in mind their specific needs and limited exposure to technology.

The Need for a Design-Led Process

I started looking for other agencies and organizations that may have been conducting training sessions for similar low-income users. Most of the training manuals I found for app-based solutions were simple instruction manuals, delivered in digital formats, that often assumed prior knowledge and clearly did not meet the needs of low-income users. To my surprise, I did not find a single training manual that was the result of a clear, design-driven process.

Why not? Was it not considered necessary? Was it not a priority? There are many promising tech innovations out there that are made for this group of users, but how are those solutions being taken to them? Who is training these users? How can organizations driving tech solutions be sure that their users have enough information to actually use them?

All of these questions felt overwhelming at first, but working through them helped me to align my thoughts. It also made me question the need for training once again, so I conducted further, more in-depth interviews with other field facilitators and farmers to understand the issues that typically arise with in-app onboarding. The findings validated my hunch: once again, the lack of effective training stood out as one of the key reasons for the low adoption and the trust issues that frequently occur when app-based solutions are used in the field.

Why Focus on Low-Income Users?

Artificial intelligence, machine learning, IoT, and similar technologies are integral to the tech-enabled world we now live in, and they inform almost every aspect of our lives. These technologies also advance fast, often more rapidly than we can imagine.

Adapting to these changes is easy for educated, tech-savvy users living in urban communities. However, accepting these changes and adapting to them is much more difficult for low-income, less digitally savvy users such as farmers and frontline public health workers.

It is important to remember that users in low-income communities are primarily occupied with ensuring that they have enough income to survive. They have obvious limitations around capacity, bandwidth, and even inclination when it comes to learning new technologies and any associated processes.

Innovations for users like the aforementioned cotton farmers are mostly focused on solving their pain points first; the user experience is usually treated as a secondary objective. When I spoke to friends and fellow designers about the need to design better training manuals for farmers, I often heard statements like: ‘Everything is moving quickly to the digital medium and you are thinking about training manuals. Who needs a training manual to use Facebook?’

Many of us may not need a manual to use apps, because we have been familiar with them for a long time and are well acquainted with the design and content patterns that most apps follow. Users in urban communities are adapting to the digital world very fast; users like farmers are not.

Low-income users have to make more of an effort and prefer to learn at their own pace. They are willing to learn, but they also have many questions and can feel stuck when things aren’t immediately clear. Given the chance, they do take the time to question, experiment, and then decide whether something makes sense to them or not. Too often, though, these users are presented with innovations without the necessary context, or opportunities to learn more, learn better, and do better.

As a design researcher, I strongly believe that designers have a responsibility to their users to ensure that any tech-based solution solves the problems it intends to solve, without creating new problems in their place. I believe that designers—and the teams that they work in and with—should focus on making tech products simple enough, and their important features obvious enough, for the masses to figure out on their own. 


Coming Up Next: How has Wadhwani AI used these learnings to enhance its own solutions and training materials, making them easier to use and more relevant to low-income users?

ML Engineer

ROLES AND RESPONSIBILITIES

An ML Engineer at Wadhwani AI will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists and in collaboration with dedicated software engineers. To our partners, a Wadhwani AI solution is generally a decision-making tool that requires some piece of data to engage with. It will be your responsibility to ensure that the information provided using that data is sound. This requires not only robust learned models, but also pipelines over which those models can be built, tweaked, tested, and monitored. The following subsections provide details from the perspective of solution design:

Early stage of proof of concept (PoC)

  • Set up and structure code bases that support an interactive ML experimentation process, as well as quick initial deployments
  • Develop and maintain toolsets and processes for ensuring the reproducibility of results
  • Conduct code reviews with other technical team members at various stages of the PoC
  • Develop, extend, or adopt a reliable, Colab-like environment for ML

Late PoC

This is the early to mid stage of AI product development

  • Develop ETL pipelines. These can also be shared and/or owned by data engineers
  • Set up and maintain feature stores, databases, and data catalogs. Ensure data veracity and lineage of on-demand pulls
  • Develop and support model health metrics

Post PoC

Responsibilities during production deployment

  • Develop and support A/B testing. Set up continuous integration and deployment (CI/CD) processes and pipelines for models
  • Develop and support continuous model monitoring
  • Define and publish service-level agreements (SLAs) for model serving. Such agreements include model latency, throughput, and reliability
  • Provide L1/L2/L3 support for model debugging
  • Develop and support model serving environments
  • Perform model compression and distillation

We realize this list is broad and extensive. While the ideal candidate will have some exposure to each of these topics, we also envision great candidates who are experts in some subset of them. If either of those cases happens to be you, please apply.

DESIRED QUALIFICATIONS

Master’s degree or above in a STEM field, with several years of experience getting your hands dirty applying your craft.

Programming

  • Expert-level Python programmer
  • Hands-on experience with Python libraries
    • Popular neural network libraries
    • Popular data science libraries (pandas, NumPy)
  • Knowledge of systems-level programming. Under-the-hood knowledge of C or C++
  • Experience and knowledge of the various tools that fit into the model-building pipeline. There are many; you should be able to speak to the pros and cons of a variety of tools for a given challenge within the ML development pipeline
  • Database concepts; SQL
  • Experience with cloud platforms is a plus

ML Scientist

ROLES AND RESPONSIBILITIES

As an ML Scientist at Wadhwani AI, you will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists. You will participate in translating a problem in the social sector into a well-defined AI problem, in developing and executing algorithms and solutions to that problem, in the successful and scaled deployment of the AI solution, and in defining appropriate metrics to evaluate the effectiveness of the deployed solution.

In order to apply machine learning for social good, you will need to understand user challenges and their context, curate and transform data, train and validate models, run simulations, and broadly derive insights from data. In doing so, you will work in cross-functional teams spanning ML modeling, engineering, product, and domain experts. You will also interface with social sector organizations as appropriate.  

REQUIREMENTS

Associate ML scientists will have a strong academic background in a quantitative field (see below) at the Bachelor’s or Master’s level, with project experience in applied machine learning. They will possess demonstrable skills in coding, data mining and analysis, and building and implementing ML or statistical models. Where needed, they will have to learn and adapt to the requirements imposed by real-life, scaled deployments. 

Candidates should have excellent communication skills and a willingness to adapt to the challenges of doing applied work for social good. 

DESIRED QUALIFICATIONS

  • B.Tech./B.E./B.S./M.Tech./M.E./M.S./M.Sc. or equivalent in Computer Science, Electrical Engineering, Statistics, Applied Mathematics, Physics, Economics, or a relevant quantitative field. Work experience beyond the terminal degree will determine the appropriate seniority level.
  • Solid software engineering skills across one or more languages, including Python, C++, and Java.
  • Interest in applying software engineering practices to ML projects.
  • Track record of project work in applied machine learning. Experience in applying AI models to concrete real-world problems is a plus.
  • Strong verbal and written communication skills in English.