TDWI Articles

Physician, Heal Thyself: Machine Learning and the Ingestion of Data

Data ingestion is a process that needs to benefit from emerging analytics and AI techniques.

Up to 80 percent of a data scientist's time may be spent performing "data janitor" tasks: collecting, cleaning, and organizing data sets. This problem is getting worse as demand for larger and more complex data sets continues to grow.

Although the need for analysis is becoming more acute, data analyst skills are in short supply. More companies are demanding more analytics, and data preprocessing is rapidly becoming a bottleneck. Data wrangling came to the rescue in the early stages of big data adoption, using visualization and scripting to help solve difficult standardization problems in a hurry.

As analytics and AI become increasingly important, data wrangling itself is moving toward greater automation and more sophisticated routines. The old adage "physician, heal thyself" is now relevant to machine learning (ML), where an increasing focus on data preparation is helping solve a problem that ML created in the first place.

Scoping the Problem

All the factors that make analytics more complicated are also creating issues in data preparation. The relatively simple ETL strategies of the past have given way to a range of preprocessing, scrubbing, and wrangling tasks developed to normalize data, ensure its accuracy and comparability, and enter it into warehouses or repositories where analysis might be applied.

Preparation of structured and semistructured data is complicated by huge data sets and by differences in data entry and collection (illustrated in the recent NationBuilder case). These huge data volumes, moreover, may include real-time and near-real-time data, and the need for instant analysis can make manual preprocessing impossible. Added to this is the growing demand for managing unstructured data and preparing it for AI or machine learning techniques.

One problem is the lack of a completely automated data integration solution. Some things, such as handling format variation, are simple to automate, but others may involve nuances of language or interpretation that are beyond the scope of current AI.
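Format variation is a good example of what is "simple to automate." A sketch of the idea, using hypothetical date formats as the variation to be normalized (the format list and function names are illustrative, not from any particular product):

```python
from datetime import datetime

# Hypothetical list of formats the pipeline has seen before.
KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y", "%B %d, %Y"]

def normalize_date(raw: str):
    """Try each known format; return an ISO 8601 date string on success,
    or None when the value cannot be parsed automatically."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # beyond rule-based automation -- needs interpretation

print(normalize_date("03/14/2021"))    # 2021-03-14
print(normalize_date("14 Mar 2021"))   # 2021-03-14
print(normalize_date("next Tuesday"))  # None
```

A value such as "next Tuesday" illustrates the second half of the sentence: resolving it requires context and interpretation that a fixed rule set does not have.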

In handling all the ways data of different types might be coded and entered, the best solution is to automate the obvious, analyze for exceptions, and submit those exceptions to a human expert capable of applying real-world knowledge to find a solution. This is similar to many problems involving AI: from 70 to 90 percent of a task might be handled autonomously, but it is also important to flag the exceptions for further analysis, generally by a human analyst.
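The "automate the obvious, flag the exceptions" pattern can be sketched in a few lines. Everything here is illustrative, not a real product's API: a rule-based cleaner handles the values it recognizes, and anything it cannot resolve goes to a review queue for a human analyst.

```python
# Hypothetical canonicalization table for one messy field.
CANONICAL_COUNTRY = {
    "us": "United States", "usa": "United States",
    "u.s.": "United States", "uk": "United Kingdom",
}

def clean_country(raw: str):
    """Return a canonical country name, or None when the rules don't apply."""
    return CANONICAL_COUNTRY.get(raw.strip().lower())

def ingest(records):
    """Split a batch into auto-cleaned records and exceptions for review."""
    accepted, review_queue = [], []
    for rec in records:
        fixed = clean_country(rec["country"])
        if fixed is not None:
            accepted.append({**rec, "country": fixed})
        else:
            review_queue.append(rec)  # exception: route to a human analyst
    return accepted, review_queue

batch = [{"id": 1, "country": "USA"}, {"id": 2, "country": "Untied Sates"}]
accepted, review = ingest(batch)
# accepted -> record 1, normalized; review -> record 2, flagged
```

In practice the split between the two queues tracks the 70-to-90-percent figure cited above: the bulk flows through automatically, and only the misspelled or ambiguous remainder needs human attention.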

Firms in the Field

Several vendors are engaged in ML preprocessing. For example, Trifacta has been involved with data wrangling since the early days and is now incorporating ML in its solutions. Other vendors include Paxata and Wealthport. Firms such as Amazon and Google are applying a variety of AI and analytics techniques to the problem.

IBM recently brought the power of its Watson cognitive computing platform to bear on this problem with Watson Discovery Service. Still in beta, the service combines Watson AI with special tools and APIs to aid in data upload, the enrichment and indexing of large data sets, and linking with public data.

Incorporation of ML and AI into data preprocessing is likely to continue on a gradual and largely unheralded basis as this stage of analysis becomes increasingly integrated with other analytics tasks.

Conclusion

The relatively simple linear data ingestion and processing models of the past are being replaced by more complex strategies, and data ingestion itself needs to benefit from emerging analytics and AI techniques. It is another example of how today's cognitive systems are being built as a flow of multiple processing levels in hybrid man-machine solutions. As we move further into an era of big data analytics and AI, the ability to create usable data sets becomes increasingly important and will require significant AI and analytics resources.

About the Author

Brian J. Dooley is an author, analyst, and journalist with more than 30 years' experience in analyzing and writing about trends in IT. He has written six books, numerous user manuals, hundreds of reports, and more than 1,000 magazine features. You can contact the author at bjdooley.query@yahoo.com.