Migrate Data Without Holding Business Back
Data virtualization can simplify migrating to the cloud.
- By Ravi Shankar
- June 15, 2017
Today's organizations are struggling to harness data both at rest and in motion to enable next-generation data architectures that power modern analytics. What does it take to adopt new technologies such as big data implementations and cloud-based analytics?
For many organizations, adopting any new infrastructure means upending deeply rooted on-premises systems that have been in place for decades. How does one go about doing that without bringing day-to-day operations to a grinding halt? It would be like trying to change the tires on a moving car.
Large insurance companies are often in this bind. Many of them depend on decades-old mainframes that are the lifeblood of their business. For example, mainframes often support the production and dissemination of numerous reports that must be delivered on a regular schedule, including sales and marketing performance, sales forecasting, and compliance. No insurance company could afford a prolonged interruption that puts these reports at risk.
To reduce costs, large insurance companies do whatever they can to streamline business processes, but they stop short of touching the mainframes, even when they see a way to improve the business, because interrupting the mainframes can have real consequences.
Many companies yearn to move to cloud-based infrastructure, inspired by the example of Amazon, which first made its mark as an online bookseller only to reinvent itself as the largest cloud services provider in the world. In the process, it added new revenue streams to its business and created a huge cloud-first market worth billions of dollars. Unfortunately, many companies that began in the brick-and-mortar era are still hesitant to move to the cloud because it involves too much risk to the original business.
Enter Data Virtualization
Data virtualization is one solution that can enable migrations and other profound transformations (such as database modernization) with no impact on daily operations. Most migration or modernization solutions first copy all the data from the current system to the new system to ensure that the two are in sync and then point users to the new system. With that approach, however, reverting to the old system if problems arise is neither easy nor advisable. Data virtualization uses a completely different strategy.
Rather than performing a hard switch from one system to the other, data virtualization establishes an abstraction layer that shields data consumers from the underlying source systems, so they never have to worry about where the data resides. Data consumers, whether individuals or applications, access the data through the data virtualization layer. That layer draws its view of the data from the different sources, such as the current and new systems of a migration or modernization, and combines them into a single virtual repository in real time.
With a data virtualization layer in place, companies do not have to switch to the new system quickly to minimize downtime. Instead, they can gradually move the data from one system to the other without impacting users in any way.
In fact, users will often not even be aware that a change has taken place, even if the change is a massive transformation from an on-premises infrastructure to one that is cloud based. If sources are ever moved or changed, the data virtualization layer takes care of all the "rewiring" behind the scenes; again, users are insulated from the underlying complexity.
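To make the routing idea concrete, here is a minimal Python sketch of the pattern. Every name in it (the VirtualLayer class, its register_source, point, copy_table, and query methods, and the sample sources and tables) is a hypothetical illustration, not Denodo's API or any product's actual interface.

```python
# Hypothetical sketch: consumers query logical tables; the layer decides
# which physical source serves them. All names are illustrative.

class VirtualLayer:
    """Routes logical table names to whichever physical source
    currently holds the data, so consumers never touch a source directly."""

    def __init__(self):
        self._sources = {}   # source name -> {table name -> rows}
        self._routing = {}   # logical table name -> source name

    def register_source(self, name, tables):
        self._sources[name] = tables

    def point(self, table, source):
        # The behind-the-scenes "rewiring": consumers are unaffected.
        self._routing[table] = source

    def copy_table(self, table, to_source):
        # The migration step: copy rows to the new system while the
        # old system keeps serving queries.
        self._sources[to_source][table] = list(self.query(table))

    def query(self, table):
        # Consumers always call query(); they never know which
        # system actually returns the rows.
        return self._sources[self._routing[table]][table]


layer = VirtualLayer()
layer.register_source("mainframe", {"claims": [("C1", 1200), ("C2", 450)]})
layer.register_source("cloud_dw", {})

layer.point("claims", "mainframe")
print(layer.query("claims"))   # served by the mainframe

# Migrate one table at a time: copy, verify, then repoint.
layer.copy_table("claims", "cloud_dw")
layer.point("claims", "cloud_dw")
print(layer.query("claims"))   # same call, now served from the cloud
```

A real product does this with federated queries across live databases rather than in-memory dictionaries, but the consumer-facing contract is the same: the query never changes, only the routing behind it.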
Data Virtualization and Big Data
Cloud migration isn't the only trend; many organizations are also adopting big data implementations. This is because data volumes are growing so quickly that storing them in traditional systems is no longer cost-effective. Also, unstructured data, which is now the dominant data type, cannot be stored in structured systems such as data warehouses.
From a cost and ease-of-use perspective, big data implementations provide a measure of relief. However, many organizations with big data deployments also keep some of their data in traditional systems, so the data ends up fragmented across both types of systems.
Here again, data virtualization establishes a layer above the two types of systems, so that as far as data consumers are concerned, all the data sits in "one" holistic system. Companies can move data between on-premises and big data systems at their discretion without impacting the consumption side of the equation.
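As a toy illustration of that combined view, the sketch below federates a relational store and a semi-structured store behind a single function. Here sqlite3 stands in for the traditional warehouse, a list of dictionaries stands in for the big data system, and unified_policies() is an invented name, not any product's API.

```python
# Hypothetical federation sketch: one "view" over a relational store
# and a semi-structured store. All names are illustrative.
import sqlite3

# Stand-in for the traditional warehouse: structured, relational.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE policies (policy_id TEXT, premium REAL)")
warehouse.executemany(
    "INSERT INTO policies VALUES (?, ?)",
    [("P-100", 980.0), ("P-101", 1340.0)],
)

# Stand-in for the big data system: newer, semi-structured records
# that never landed in the warehouse.
lake = [
    {"policy_id": "P-102", "premium": 755.0, "doc": "intake-7.json"},
    {"policy_id": "P-103", "premium": 2100.0, "doc": "intake-9.json"},
]

def unified_policies():
    """The holistic view: callers get one result set and never know
    which rows came from which system."""
    rows = [
        {"policy_id": pid, "premium": prem}
        for pid, prem in warehouse.execute(
            "SELECT policy_id, premium FROM policies"
        )
    ]
    rows += [{"policy_id": r["policy_id"], "premium": r["premium"]} for r in lake]
    return rows

for row in unified_policies():
    print(row)
```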
Make Your Move
As organizations look at these two megatrends -- migrating to the cloud and adopting big data systems -- they should see that both have a lot to offer. The benefits are poised to far outweigh the risks, especially because data virtualization can eliminate many of the potential challenges, including downtime. Companies should seriously consider evolving their IT landscape to embrace modern data architectures that include data virtualization as a core technology.
About the Author
Ravi Shankar is senior vice president and chief marketing officer at Denodo, a provider of data virtualization software. He is responsible for Denodo’s global marketing efforts, including product marketing, demand generation, communications, and partner marketing. Ravi brings to his role more than 25 years of marketing leadership from enterprise software leaders such as Oracle and Informatica. Ravi holds an MBA from the Haas School of Business at the University of California, Berkeley. You can contact the author at rshankar@denodo.com.