Responsible, Ethical and Effective AI

An essential requirement of any AI project is that it works correctly in the real world. Because AI is new and rapidly evolving, it suffers from some inherent problems. AI systems are vulnerable to a class of issues in which they appear to be working well at first but are, in reality, failing with dire consequences.

Consider Uber’s self-driving project: the failure to account for jaywalkers led to the death of an Arizona woman at night. The machine-learning model had only been trained to classify pedestrians near crosswalks, so it could not make sense of a situation it had never encountered. Granted, the system was built by some of the brightest minds, yet leaving out a scenario as common as this led to catastrophic failure. This is an unfortunate truth of building complex technical products: at best they reflect partial considerations, and unless the model is trained on every possible scenario, which is practically impossible, some “edge” cases will still be left out.

Values that underpin the product development process

Products cannot be divorced from the context in which they will be used. Depending on that context, products also need to be safe to use, last a long time, keep manufacturing and operating costs low, and avoid creating harmful waste throughout production, use, and end of life. Whenever a tradeoff is being considered, values such as sustainability, democracy, privacy, safety, and equality should be weighed. People’s values differ across nationalities and cultures, so a common set of values must guide the development process for the entire team.

Reliance on Data

For edge AI, the biggest part of training the machine learning models is the data. Data acts as both the brick and the thermometer: the brick because it is the material the model is built from, and the thermometer because it is also how the model’s performance is measured. Any over-representation or under-representation in the training data results in inaccurate predictions, and the common metrics used to evaluate ML models assume sufficient sample representation. Similarly, the data used to evaluate the model needs to be accurate and broad enough to cover the various scenarios the model will face; the benchmark data should allow us to judge whether the model is actually performing well.
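To make the representation point concrete, here is a minimal, hypothetical sketch (plain Python, with invented labels and counts) of how a headline accuracy number can hide failure on an under-represented class, and why per-class metrics are worth checking:

```python
from collections import Counter

def per_class_recall(y_true, y_pred):
    """Recall per class: of all true samples of a class, how many were predicted correctly?"""
    correct = Counter()
    totals = Counter(y_true)
    for truth, pred in zip(y_true, y_pred):
        if truth == pred:
            correct[truth] += 1
    return {label: correct[label] / totals[label] for label in totals}

# Invented benchmark: "person" is heavily under-represented.
y_true = ["background"] * 95 + ["person"] * 5
y_pred = ["background"] * 95 + ["background"] * 4 + ["person"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(f"overall accuracy: {accuracy:.2f}")   # 0.96 -- looks healthy
print(per_class_recall(y_true, y_pred))      # {'background': 1.0, 'person': 0.2} -- it is not
```

A model that misses four out of five people would be unacceptable in the Uber scenario above, yet its overall accuracy still looks impressive.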

Further, data has some inherent problems. It is usually a snapshot in time; it is almost always historical and does not account for a changing world. It is specific to the location where it was captured and the entity it was captured for. It is always a sample and never complete. In many cases, these inherent flaws lead to discrimination and prejudice in AI systems.

Responsible Design

For machine learning fairness, the following concepts need to be considered:

  • Bias – A system is biased if it produces output that favors some individuals or groups while prejudicing others.
  • Discrimination – The outcome of a decision-making process in which individuals or groups are treated differently based on PII (Personally Identifiable Information) or other protected or sensitive attributes.
  • Fairness – Hard to define precisely, but usually considered in conjunction with values like equity, equality, and inclusiveness.

To mitigate these issues, you need to understand how, and in what context, your design will be used.
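As a hedged illustration (not a complete fairness audit, and using invented group labels and decisions), one simple check along these lines is to compare the rate of favorable outcomes across groups defined by a sensitive attribute, sometimes called a demographic parity check:

```python
from collections import defaultdict

def positive_rate_by_group(groups, decisions):
    """Share of favorable (1) decisions for each group of a sensitive attribute."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in zip(groups, decisions):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decisions keyed by an illustrative group label A/B.
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]  # 1 = favorable outcome

print(positive_rate_by_group(groups, decisions))
# {'A': 0.75, 'B': 0.25} -- a gap this large is a signal to investigate, not proof of unfairness
```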

It is easy to get started with responsible design. Know the limits of your data. Know the limits of your models. Talk to the people who will use your product. If there’s only one takeaway for you from this section, it should be this: KUDOs (Know Ur Data, Obviously) to those that develop responsible edge AI.

  • Wiebke (Toussaint) Hutiri, Delft University of Technology

Black Boxes and Bias on Edge Devices

Edge devices are often invisible by design. They blend into the environments in which they are deployed, and if environmental conditions change, it is hard to assess why a device is behaving the way it is. Furthermore, they are literal black boxes: their contents are hidden and often protected by layers of security to prevent inspection. This is dangerous, because whoever buys an edge device has no idea what algorithm is inside it; they simply trust the device to make the right decisions.

As we saw in our last post about Elephant Edge, if a tracker captures the elephant’s movement only a quarter of the time, then the data the researchers receive is skewed, and so are their reports. With server-side AI we have a rich trail of raw data we can validate, but edge devices do not retain all the raw data: they send back only limited information and discard much of what could be used to validate their correctness.
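One practical mitigation, sketched below under assumed names (a hypothetical InferenceMonitor class and invented activity labels), is to have the device accumulate lightweight summary statistics about its own predictions and send those upstream, so drift or misbehavior can at least be spotted without shipping the raw data:

```python
import json
from collections import Counter

class InferenceMonitor:
    """Accumulates lightweight statistics about on-device predictions
    so they can be reported upstream without retaining raw data."""

    def __init__(self):
        self.class_counts = Counter()
        self.low_confidence = 0
        self.total = 0

    def record(self, label, confidence, threshold=0.5):
        self.total += 1
        self.class_counts[label] += 1
        if confidence < threshold:
            self.low_confidence += 1

    def summary(self):
        # Small enough to send over a constrained uplink, yet enough to notice
        # when the distribution of outputs drifts from expectations.
        return json.dumps({
            "total": self.total,
            "class_counts": dict(self.class_counts),
            "low_confidence": self.low_confidence,
        })

monitor = InferenceMonitor()
monitor.record("walking", 0.91)
monitor.record("resting", 0.34)
print(monitor.summary())
```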

To complicate matters, edge AI suffers from various kinds of bias:

  • Human Bias – every human thinks differently and carries their own biases
  • Data Bias – datasets reflect the data collection process and may not be a representative sample of the real world
  • Algorithmic Bias – the selection and tuning of algorithms can be skewed by the two biases above
  • Testing Bias – real-world testing is costly and time-consuming, so testing is done only on sample scenarios, which can leave out edge cases

These biases cannot be eliminated, but they can be reduced: collect the dataset carefully, choose an algorithm that fits the task, budget enough for real-world testing, and bring in domain experts so that human bias is minimized.
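As a rough illustration of “careful collection of the dataset” (a sketch with made-up subgroup labels and reference proportions, not a substitute for a proper data audit), one can compare how subgroups are represented in the collected data against how often they are expected to occur in deployment:

```python
from collections import Counter

def representation_gaps(samples, expected_share):
    """Compare each subgroup's share of the collected dataset
    with the share expected at deployment time."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: {"collected": counts.get(group, 0) / total, "expected": share}
        for group, share in expected_share.items()
    }

# Hypothetical keyword-spotting dataset, grouped by speaker accent.
samples = ["accent_a"] * 800 + ["accent_b"] * 150 + ["accent_c"] * 50
expected = {"accent_a": 0.5, "accent_b": 0.3, "accent_c": 0.2}  # assumed deployment mix

for group, shares in representation_gaps(samples, expected).items():
    print(group, shares)  # accent_c is collected at 5% but expected at 20%
```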

Edge AI in the wrong hands

Like any innovation, edge AI in the wrong hands can be dangerous. A knife can be used for cooking, but also for murder. AI-enabled cameras can create a dystopian society in which authorities track the movements of everyone, or of people of a particular race.

AI cameras built to observe wildlife behavior can also be used by poachers to locate endangered and exotic animals.

As ethical engineers, we must consider the possible off-label uses to which our technology might be put.

We should also consider the scenarios in which, through our negligence, our systems could cause harm: a medical system could misdiagnose patients with troubling results, a law-enforcement system could be used to target a particular race, a safety device could fail and injure someone, unsecured devices could be exploited by criminals, and pervasive AI could erode privacy.

Best Practice

The best practice is to value diversity and allocate budget for it. Assemble a product team with diverse perspectives in both technical expertise and lived experience: human biases amplify technical biases, and a diverse team is less likely to have blind spots in its collective worldview.

The team should have a collective value system for building the product and consider the biases that may affect the results.

On the legal side, products can be released under a Responsible AI License to restrict misuse or use in off-label scenarios.

Use the following resources to learn about ethical and responsible AI

Summary

In this post, we looked at the various fault lines in building AI for edge devices. We saw how black boxes and biases can lead to detrimental outcomes, and we covered the considerations teams should keep in mind to avoid them, along with best practices for creating responsible AI products, especially at the edge.
