
Prejudice in the Machine Can Jeopardize Your Enterprise

AI algorithms must be well understood if enterprises are to shield their analytics results from bias.

Bias in data collection and analysis has a long pedigree, and it is now poised to get significantly worse. The proliferation of unstructured social media news sources creates new data accuracy issues beyond the scope of traditional data quality concerns.

Although sources of bias in structured data can be checked through rigorous examination of collection strategies, combined with statistical analysis and criteria reevaluation, machine learning (ML) data analysis lacks the transparency of earlier techniques, and its data is subject to less rigorous appraisal.

Bias and ML

Bias is a fundamental part of analysis; it is built into the structure of thought. In analytics, the formulation of questions can introduce bias by mirroring the thought process of the questioner. Definitions can be skewed, producing inaccurate reflections of respondents' beliefs.

Bias may be introduced through ethnic, local, or national characteristics. In one survey, for example, we found that Chinese responses on the adoption of new technologies were 80 percent more positive, across the board, than those of any other group. Was this a reflection of general Chinese enthusiasm, or did it reflect differences in how these technologies are used within Chinese corporations?

Unfortunately, as the use of ML and artificial intelligence (AI) increases, the models used are not necessarily well understood. The machine often builds the data model on its own. It does not understand correlation, and it does not understand the other factors that might be instrumental in a decision. The greatest danger lies in biases that yield illegal discrimination or treatment based on spurious classifications.
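To make this concrete, here is a minimal sketch in Python (synthetic data; every name and number is illustrative, not drawn from this article) of a model that spreads weight across a merely correlated feature exactly as it would across a causal one:

```python
# Minimal sketch: a model exploits a correlated proxy it has no reason
# to question. All data and names here are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

income = rng.normal(50, 15, n)                   # the causal factor
zip_wealth = 0.8 * income + rng.normal(0, 5, n)  # a mere correlate

# Approval actually depends only on income.
approved = (income + rng.normal(0, 10, n) > 55).astype(int)

X = np.column_stack([income, zip_wealth])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The fitted model spreads weight across both features; it cannot tell
# the causal driver from its proxy.
print(dict(zip(["income", "zip_wealth"], model.coef_[0].round(2))))
```

Nothing in the fitting procedure distinguishes the driver from its proxy; that judgment has to come from the humans reviewing the model.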

Solving for Mathwashing

Although human bias is a familiar problem, machine bias is invisible. People tend to trust the impartiality of the machine, at least at this early stage of algorithmic understanding. However, the biases of those who create the initial models and conduct the training, combined with inherent biases in the training data, can produce results that do not correspond to the facts.

This tendency to trust mathematical modeling has been referred to as "mathwashing," and it has serious consequences. Consider that a machine might pick up on characteristics such as the school attended listed on a credit application and discern the applicant's racial group, which may result in illegally denying credit based on a protected class. Similar problems occur in other types of data.
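One way to surface such hidden proxies, sketched below under assumed file and column names, is to test whether the "neutral" fields alone can predict the protected attribute; if they can, any model trained on them can reconstruct it:

```python
# Minimal sketch of a proxy check: if the supposedly neutral application
# fields can predict the protected attribute, a credit model trained on
# them can discriminate implicitly. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

applications = pd.read_csv("applications.csv")           # hypothetical data
neutral = applications[["school_attended", "zip_code", "employer"]]
protected = applications["racial_group"]

X = pd.get_dummies(neutral)                               # one-hot encode
baseline = protected.value_counts(normalize=True).max()  # majority-class rate
accuracy = cross_val_score(
    GradientBoostingClassifier(), X, protected, cv=5
).mean()

# Accuracy well above the baseline means the protected class leaks
# through the proxies and should trigger a closer review.
print(f"baseline {baseline:.2f} vs proxy accuracy {accuracy:.2f}")
```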

Companies incorporating new algorithms need to be very clear about the possibility of bias in their data. One lesson learned from such algorithm use is that every model needs to be carefully evaluated and tested for bias on sample data before it is released. Just because an ML model operates well in the lab does not mean it will provide a safe and reliable answer in the field.
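A minimal version of such a pre-release test, assuming a held-out sample already scored by the candidate model, might compare approval rates across groups and block release when they diverge too far:

```python
# Minimal sketch of a pre-release bias gate on held-out sample data.
# The 0.8 threshold is the "four-fifths" rule from US employment
# guidelines, borrowed here purely as an illustrative cutoff.
import pandas as pd

def disparate_impact(approved: pd.Series, groups: pd.Series) -> float:
    """Ratio of the lowest group approval rate to the highest."""
    rates = approved.groupby(groups).mean()
    return rates.min() / rates.max()

# Hypothetical held-out sample already scored by the candidate model;
# expected columns: approved (0/1) and group.
holdout = pd.read_csv("holdout_sample.csv")
ratio = disparate_impact(holdout["approved"], holdout["group"])

print(f"disparate impact ratio: {ratio:.2f}")
assert ratio >= 0.8, "bias gate failed: do not release this model"
```

A single ratio is, of course, only one of many possible checks; the point is that the gate runs before release, not after complaints arrive.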

The issue of bias is particularly important in finance, but any area that offers opportunities for disinformation, or in which language is subject to interpretation, creates the possibility of bias, with potentially expensive results.

Politics, for example, is highly subject to every type of bias in information, from how data is gathered to questionnaire structure to how the data is assembled. Who is surveyed can introduce bias, as can the terms used in survey questions. Healthcare demands an appreciation of psychology and sociology in addition to health and financial details. Denying healthcare or health insurance on the basis of implicit ethnic codes could be extremely expensive.

Moving Forward

As we move into this age of AI, we are forced to confront our own understanding of human intelligence. We need to look at the sources of bias and the interaction between bias and the delivery of accurate results. In the post-truth era, we have a problem. Bias eats at the edge of every question; if sources are no longer to be trusted, then we may need to rely upon AI. Yet AI itself is likely to be biased by the information it is given and on which it is trained.

We cannot escape the intricacies of human thought. We must take responsibility for ensuring that our AI creations perform according to human needs.

About the Author

Brian J. Dooley is an author, analyst, and journalist with more than 30 years' experience in analyzing and writing about trends in IT. He has written six books, numerous user manuals, hundreds of reports, and more than 1,000 magazine features. You can contact the author at bjdooley.query@yahoo.com.
