
Bringing real-time replication to the legacy mainframe environment

Large organisations running legacy databases may find it increasingly difficult to replicate data and to run analysis and what-if scenarios unless they adopt more modern replication methods.

This is according to Sam Selmer-Olsen, CEO of Bateleur Software, the South African distributor of Treehouse Software and tcVISION.

Selmer-Olsen says many large organisations such as government, healthcare and financial institutions in South Africa run legacy mainframe databases. "A significant portion of our work is helping them extract data from these databases," he says.

He notes that company data is needed daily in all areas of the business, for communication and analysis. However, it is typically stored on different systems and platforms, including legacy mainframes, in a variety of formats or decades-old storage methods.

With growing volumes of data and an increased need to replicate it across a variety of cloud and other databases, integrating data for analysis can prove challenging, and older replication methods are proving ineffective, he says.

Some data replication tools tie the customer to a particular vendor, but today's tools need to be brand agnostic to support flexibility. Organisations using traditional batch extraction from legacy mainframe environments are finding it inefficient, because reading an entire database and making a copy of it for the cloud environment means the data is constantly out of date. "It is not always realistic in environments where there are billions of rows of data," Selmer-Olsen says.

Another approach is to replicate only the changes made during the day to the data warehouse, rather than copying the entire production database. BI enquiries and analysis are then carried out there, instead of against the production database. This approach also has its limitations in environments with massive volumes of data and transactions.
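To make the change-based batch pattern concrete, the following Python sketch copies only rows modified since the last run into a warehouse table and returns the new high-water mark. The accounts table, its updated_at column and the use of SQLite as a stand-in for both source and warehouse are assumptions for illustration, not any particular vendor's implementation.

    import sqlite3

    def replicate_changes(source: sqlite3.Connection,
                          warehouse: sqlite3.Connection,
                          since: str) -> str:
        """Copy rows changed after `since` into the warehouse (upsert),
        then return the new high-water mark for the next run."""
        rows = source.execute(
            "SELECT id, payload, updated_at FROM accounts "
            "WHERE updated_at > ? ORDER BY updated_at",
            (since,),
        ).fetchall()
        # The upsert assumes `id` is the primary key of the warehouse table.
        warehouse.executemany(
            "INSERT INTO accounts (id, payload, updated_at) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload, "
            "updated_at = excluded.updated_at",
            rows,
        )
        warehouse.commit()
        return rows[-1][2] if rows else since

Even with this optimisation, applying a day's changes in one batch still leaves the warehouse a full cycle behind the production system between runs, which is the limitation noted above.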

However, Selmer-Olsen says there is a third way to approach this: "The most modern way of doing it is to do it not in batch processes, but to replicate instantly and in real-time."

tcVISION, developed by Boss Software and distributed worldwide by Treehouse Software, is a cross-system solution for real-time unidirectional or bidirectional data synchronisation and replication based on changed data.

Selmer-Olsen explains: "Whenever an update happens in the production database, tcVISION propagates it to the cloud, data warehouse or wherever you want. It supports real-time data propagation between production databases and wherever you want it to go. We encourage customers using older techniques for replicating data to data warehouses to move to a real-time environment where the breadth of target databases is so much broader. With tcVISION, there is a comprehensive set of options you can write to. It is also unique in that it offers a bidirectional option."

tcVISION allows legacy mainframe environments to continue operating while data is replicated in real-time to a variety of cloud and open systems platforms.

Data can be replicated between IBM Db2, Adabas, VSAM, IMS/DB, CA IDMS, CA DATACOM, or sequential files, and many cloud and open systems targets, including AWS, Google Cloud, Microsoft Azure, Kafka, PostgreSQL and more.
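As a rough sketch of what the target side of such real-time replication can look like with two of the named targets, the example below consumes change events from a Kafka topic and applies them to a PostgreSQL table, using the kafka-python and psycopg2 libraries. The topic name, message shape and accounts table are illustrative assumptions; they do not describe tcVISION's actual event format or interfaces.

    import json

    import psycopg2
    from kafka import KafkaConsumer

    # Subscribe to a hypothetical topic carrying mainframe change events.
    consumer = KafkaConsumer(
        "mainframe.changes",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    conn = psycopg2.connect("dbname=warehouse")

    for message in consumer:
        # Assumed event shape: {"op": "I" | "U" | "D", "id": ..., "payload": ...}
        change = message.value
        with conn.cursor() as cur:
            if change["op"] in ("I", "U"):
                # Inserts and updates both become an upsert on the target.
                cur.execute(
                    "INSERT INTO accounts (id, payload) VALUES (%s, %s) "
                    "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload",
                    (change["id"], change["payload"]),
                )
            else:  # "D": the row was deleted at the source
                cur.execute("DELETE FROM accounts WHERE id = %s",
                            (change["id"],))
        conn.commit()

Because each change is applied as it arrives, the target stays within seconds of the source rather than a batch cycle behind.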

tcVISION's flexibility allows for unidirectional or bidirectional data transfers in real-time, or in a time-controlled or event-based manner. It supports mass data loads from one source to one or more targets, as well as a continuous data exchange process.

tcVISION is an ideal cross-system solution for a number of applications, including data coexistence in heterogeneous environments with mainframe and distributed systems; supporting the gradual migration of data and applications or the modernisation of existing infrastructures with streaming and cloud platforms; and enabling the digitisation of real-time mainframe data in data hubs and lakes for analytics and big data.
