For years, data teams worked with simple data pipelines. These generally consisted of a few applications or data feeds that converged into a standard extract, transform, and load (ETL) tool, which fed data into a centralized data warehouse. From that warehouse, data flowed to a limited number of destinations, such as a reporting tool or spreadsheets. As a result, data protection was relatively straightforward: there simply was not as much data to protect, and it lived in only a few known locations.