Field Lessons: Human-Centred Design for More Inclusive Tech Solutions

An HCD-driven approach to designing better training material for farmers and field facilitators.

In my earlier article about why tech solutions should focus on low-income users, I argued that although not everyone may need training manuals to use tech-based solutions, they can go a long way towards supporting rural users such as farmers who are not entirely familiar with technology and its uses in agriculture. 

Let me recount our team’s efforts on this front so far. In addition to creating learning materials to use in the field, we also conducted a survey (referred to internally as an ‘adoption study’) to understand what was needed to make these materials effective.

What have we tried already?

Remote training using slide decks: The COVID-19 pandemic forced us to learn to communicate from a distance, but Zoom calls have helped us retain some emotional and professional connection. We carried out a training session for key personnel from partner organisations, including team and PU managers, field facilitators, and lead/cascade farmers. This was an hour-long Zoom call in which we gave an overview of the problem, the solution, and its impact.

Human-centred design approach: I reviewed the deck and listened to the recordings of the calls, and realised that this was not the best way to train these users. It might have worked as an introduction, but they needed more in-person training. Users confirmed this observation during another adoption study.

In-person demonstration: A session to demonstrate the app’s functionality was also organised, in which farmers and partners learned, step by step, how to use the app in the field: installing traps, taking photos in the app, understanding alerts, and using advisories.

Human-centred design approach: When I interviewed the users later, this method was remembered as a success; everybody was happy and had gained enough confidence to try using the app themselves!

Printed training material (work in progress): Training material had been prepared in the past but never tested in the field. We decided to review it from an HCD perspective and found several opportunities to improve it.

Human-centred design approach: Our design review surfaced a few clear needs for the training material:

  1. User-specific material, tailored to the two different types of learners in this context: farmers and field facilitators.
  2. Printed material that farmers can easily refer to while using the app.
  3. A training manual kit for the whole solution, not just the app. Our solution is app-based but not limited to the app: the decisions users make lead to many different outcomes and effects, and the training manuals need to address these.

What could be improved

One-time training: Our team had partial success with training through the on-field demo, but it happened only once. Farmers and field facilitators repeatedly told us that this format created space for learning and for asking questions that would not be possible otherwise.

The same type of training for everyone: Remote training did not have a larger impact because, although the participants included different types of users, the presentation was created with partner organisations in mind. We must recognise the varied needs and interests within our audience in order to act inclusively. This seems like an easy problem to solve, but it is not: keeping everyone engaged and focused on the same objective is a tough task. We are looking to implement a more participative method of training.

Training manuals that are only accessible on phones: While part of the training audience was technologically savvy and already used apps in their day-to-day lives, farmers rarely had the time to explore their apps, and mostly used non-farming apps when they could: watching videos and getting weather updates. This little bit of familiarity allowed them to take some interest in the presentation, but it was not enough to keep them engaged for long.

This made us realise that we need to leave something tangible with them after training, which they can refer to or learn from. It could be a tool or some reading material, something they can touch and hold. One objective of such a tool would be to kickstart deeper engagement and more regular feedback, which will help us build trust with those who may benefit from our work.

No room for dialogue: The kind of training we conducted remotely did not leave room for dialogue, which is why the on-field demo was so much more successful. 

It is important for us to provide consistent support to low-income users so that they can ask questions, give us feedback, and participate in designing newer and better features. 

Reflection: While we have identified specific methods that worked or didn’t, we also need to adopt an approach driven by reflecting on gaps in our systems and processes, asking questions, and understanding users.


How are we moving forward?

Training for different user groups: We are now preparing training material that can engage different types of users, such as program partners, field facilitators, and farmers. We are also working on equipping our trainers better so that they can teach users more engagingly and effectively.

Clear training goals for user groups:

  1. Partners: Our goal is to introduce the solution to them and build long-lasting relationships throughout the deployment and beyond. 
  2. Field facilitators: Our goal is to give them enough confidence to handle situations on the ground and allow for more learning and sharing of feedback.
  3. Farmers: Our goal is to make the solution simple enough that they are comfortable implementing it in their own way, and to develop a relationship of trust, in which they feel welcome to ask questions and share their feedback. 

Regular and frequent training: We need to create enough training material to make space for regular training sessions, based on various user needs and the feedback shared by our users. We are working on this.

YouTube knowledge centre: We learned from an adoption study that farmers who use apps on their phones consume a lot of content on YouTube when they have some free time. Some users shared that they love to watch agriculture experts talking about new seeds, new pesticides, and new practices, which farmers are adopting across the country. Some of them also shared that they have subscribed to these YouTube channels and follow them regularly. 

Is this surprising? Not really. Visual content is more engaging and can aid long-lasting learning. We found many organisations that are engaging farmers on YouTube. We are currently building a knowledge centre in the form of a YouTube channel and will find ways to engage cotton farmers.

On-field use cases and recommendations: Our user manuals cover information about the solution, monitoring guidelines, and so on, but we learned that our farmers have been experimenting with the app and their traps because they are curious and want to test the technology before they trust it. It sounds fair, doesn’t it? But it has also placed an extra burden on the field facilitators, who must answer unexpected questions they were not trained for. We have included such field experiences and user scenarios, along with recommendations on how to navigate them. This is primarily for the benefit of the field facilitators, so they can answer difficult questions with confidence. As an organisation, we will keep evolving as we learn more.

Farmers’ stories: The farmers’ trust in the solution is paramount. Why would they trust something they have never seen or tried before? We noticed how farmers share stories with each other about bad experiences with pest attacks. They also talk about their visits to agricultural fairs, and to doctors who are themselves farmers, to learn about new practices. This inspired us to add a section titled “Farmers’ stories” to the farmers’ version of the training manual.

As we continue to reflect on our work, designing more inclusive processes that will allow our users to reflect with us will only make our vision clearer. Learning about specific challenges farmers face, not just in using app-based solutions, but even in accessing them, has helped us to better understand how to address their needs.

ML Engineer

ROLES AND RESPONSIBILITIES

An ML Engineer at Wadhwani AI will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists and in collaboration with dedicated software engineers. To our partners, a Wadhwani AI solution is generally a decision-making tool that they engage with by providing some piece of data. It will be your responsibility to ensure that the information provided from that data is sound. This requires not only robust learned models, but also pipelines over which those models can be built, tweaked, tested, and monitored. The following subsections provide details from the perspective of solution design:

Early stage of proof of concept (PoC)

  • Set up and structure code bases that support an interactive ML experimentation process, as well as quick initial deployments
  • Develop and maintain toolsets and processes for ensuring the reproducibility of results
  • Conduct code reviews with other technical team members at various stages of the PoC
  • Develop, extend, and adopt a reliable, Colab-like environment for ML

Late PoC

This is the early-to-mid stage of AI product development.

  • Develop ETL pipelines. These can also be shared and/or owned by data engineers
  • Set up and maintain feature stores, databases, and data catalogs, ensuring data veracity and lineage for on-demand pulls
  • Develop and support model health metrics

Post PoC

Responsibilities during production deployment

  • Develop and support A/B testing. Set up continuous integration and deployment (CI/CD) processes and pipelines for models
  • Develop and support continuous model monitoring
  • Define and publish service-level agreements (SLAs) for model serving. Such agreements include model latency, throughput, and reliability
  • Provide L1/L2/L3 support for model debugging
  • Develop and support model serving environments
  • Perform model compression and distillation

We realize this list is broad and extensive. While the ideal candidate has some exposure to each of these topics, we also envision great candidates being experts at some subset. If either of those cases happens to be you, please apply.

DESIRED QUALIFICATIONS

Master’s degree or above in a STEM field. Several years of experience getting your hands dirty applying your craft.

Programming

  • Expert-level Python programmer
  • Hands-on experience with Python libraries
    • Popular neural network libraries
    • Popular data science libraries (pandas, NumPy)
  • Knowledge of systems-level programming; under-the-hood knowledge of C or C++
  • Experience and knowledge of various tools that fit into the model-building pipeline. There are several; you should be able to speak to the pluses and minuses of a variety of tools given some challenge within the ML development pipeline
  • Database concepts; SQL
  • Experience with cloud platforms is a plus

ML Scientist

ROLES AND RESPONSIBILITIES

As an ML Scientist at Wadhwani AI, you will be responsible for building robust machine learning solutions to problems of societal importance, usually under the guidance of senior ML scientists. You will participate in translating a problem in the social sector to a well-defined AI problem, in the development and execution of algorithms and solutions to the problem, in the successful and scaled deployment of the AI solution, and in defining appropriate metrics to evaluate the effectiveness of the deployed solution.

In order to apply machine learning for social good, you will need to understand user challenges and their context, curate and transform data, train and validate models, run simulations, and broadly derive insights from data. In doing so, you will work in cross-functional teams spanning ML modeling, engineering, product, and domain experts. You will also interface with social sector organizations as appropriate.  

REQUIREMENTS

Associate ML scientists will have a strong academic background in a quantitative field (see below) at the Bachelor’s or Master’s level, with project experience in applied machine learning. They will possess demonstrable skills in coding, data mining and analysis, and building and implementing ML or statistical models. Where needed, they will have to learn and adapt to the requirements imposed by real-life, scaled deployments. 

Candidates should have excellent communication skills and a willingness to adapt to the challenges of doing applied work for social good. 

DESIRED QUALIFICATIONS

  • B.Tech./B.E./B.S./M.Tech./M.E./M.S./M.Sc. or equivalent in Computer Science, Electrical Engineering, Statistics, Applied Mathematics, Physics, Economics, or a relevant quantitative field. Work experience beyond the terminal degree will determine the appropriate seniority level.
  • Solid software engineering skills in one or more languages, including Python, C++, and Java.
  • Interest in applying software engineering practices to ML projects.
  • Track record of project work in applied machine learning. Experience in applying AI models to concrete real-world problems is a plus.
  • Strong verbal and written communication skills in English.