Data Integration

All in one place: unified data from different sources, including public sources and databases, while complying with Data Governance, Data Security, and GDPR.

Let's talk

The benefits of data integration

We integrate data from different sources for unification, linking, mediation, and reporting. During integrations, we pay attention to Data Governance and to the anonymization of data covered by, for example, GDPR or the organization's internal policies.


Advanced data analytics

Data integrations enable the linking of relevant data, its transformation and better insight into the data as a whole.

They open up broad possibilities for advanced data analytics.


Data quality

Data integrations bring standardization, cleansing, enrichment and validation.

We improve data quality and deliver reliable and insightful reports for end users.



Data Governance

As part of data integrations, we also standardize metadata, centralizing it in one place.

We provide comprehensive Data Governance.


Data anonymization

Thanks to advanced tools, we can mask integrated data.

We tailor anonymization to the needs of the organization or department. Data can be anonymized during the transfer of integrated data as well as in any layer of the target storage.
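As an illustration, the column-level masking described above can be sketched in a few lines of Python. The salted-hash approach and field names here are illustrative assumptions, not our actual tooling:

```python
import hashlib

def mask_value(value: str, salt: str = "org-secret") -> str:
    """Pseudonymize a value with a salted SHA-256 hash (irreversible)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def anonymize_record(record: dict, sensitive_fields: set) -> dict:
    """Return a copy of the record with sensitive fields masked."""
    return {
        key: mask_value(str(val)) if key in sensitive_fields else val
        for key, val in record.items()
    }

# Example: mask personal data before it lands in the target storage
client = {"id": 42, "name": "Jana Novakova", "balance": 1200.50}
masked = anonymize_record(client, sensitive_fields={"name"})
```

In practice the same masking step can run either in the transfer pipeline or inside a chosen layer of the target storage, as described above.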



Data security

We take care of security both when storing data (to prevent unauthorized access) and when mediating it within the platform.

We ensure security across all environments and layers and for all user roles, in line with corporate Data Governance.


Data classification

For each data source, we deal with data classification starting from the lowest level, which in practice means that different datasets from the same data source may have different classifications.

We create different environments that can only be accessed by authorized users.
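A minimal sketch of per-dataset classification, assuming a simple ordered list of levels and hypothetical source and dataset names:

```python
# Classification levels ordered from least to most restrictive.
CLASSIFICATION_LEVELS = ["public", "internal", "confidential", "secret"]

# Each dataset carries its own classification, even within one source.
dataset_classification = {
    ("crm", "marketing_campaigns"): "internal",
    ("crm", "client_contacts"): "confidential",
    ("crm", "consent_audit_log"): "secret",
}

def can_access(user_clearance: str, source: str, dataset: str) -> bool:
    """A user may read a dataset only if their clearance is at least as high."""
    required = dataset_classification[(source, dataset)]
    return (CLASSIFICATION_LEVELS.index(user_clearance)
            >= CLASSIFICATION_LEVELS.index(required))
```

Note that two datasets from the same `crm` source end up with different classifications, which is exactly the per-dataset granularity described above.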

Technologies for data integration

These technologies ensure that data is properly transferred, transformed, and shared between different platforms. They enable organizations to achieve better collaboration, more effective decision-making, and increased competitiveness.

We typically handle data acquisition and processing as follows

  • Data acquisition: we transfer data from the source to the target storage, e.g. via Adoki, Spark, or Kafka, and we can acquire data using batch, micro-batch, or real-time approaches.
  • Data processing: data processing is handled by so-called integration workflows, which are automated using orchestration tools, e.g. Airflow.
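The orchestration idea behind these integration workflows can be sketched as a toy dependency-ordered runner, which is what tools like Airflow do at scale; the task names are hypothetical:

```python
# A toy orchestration of an integration workflow: each task runs only
# after all of its upstream dependencies have finished.
def run_workflow(tasks: dict, deps: dict) -> list:
    """tasks: name -> callable; deps: name -> list of upstream task names."""
    done, order = set(), []
    while len(done) < len(tasks):
        for name in tasks:
            if name not in done and all(d in done for d in deps.get(name, [])):
                tasks[name]()       # execute the task
                done.add(name)
                order.append(name)
    return order

log = []
order = run_workflow(
    tasks={
        "acquire": lambda: log.append("acquired from source"),
        "transform": lambda: log.append("mapped to target types"),
        "publish": lambda: log.append("loaded to data mart"),
    },
    deps={"transform": ["acquire"], "publish": ["transform"]},
)
```

In a real deployment each task would be an Airflow operator and the `deps` mapping a DAG definition; the scheduling, retries, and monitoring come from the orchestration tool.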

General description of the data

  • Data classification
    • internal, public, confidential, secret
  • Owner and technical contacts
    • integration developer or data analyst

Data set description

  • table/Excel sheet and column names and comments, data types, and other technical dependencies, such as:
    • data source names
    • their links to the target repository, etc.
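One possible shape for the metadata described in the two lists above, sketched as Python dataclasses; the field and dataset names are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ColumnMeta:
    """Column-level metadata: name, data type, and an optional comment."""
    name: str
    data_type: str
    comment: str = ""

@dataclass
class DatasetMeta:
    """Dataset-level metadata: source, target, classification, and ownership."""
    source_name: str
    target_table: str
    classification: str            # public / internal / confidential / secret
    owner: str                     # integration developer or data analyst
    columns: list = field(default_factory=list)

clients = DatasetMeta(
    source_name="source_crm.clients",
    target_table="dap.landing.clients",
    classification="confidential",
    owner="data.analyst@example.com",
    columns=[ColumnMeta("client_id", "bigint", "primary key"),
             ColumnMeta("birth_date", "date", "masked in the data mart")],
)
```

Centralizing records of this shape in one catalog is what makes the Data Governance described earlier workable: classification, ownership, and source-to-target links live next to each dataset.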

Follow-up CI/CD pipelines

    • automated data workflows: once a workflow is created, the entire process is automated and data acquisition takes place either
      • automatically at the defined time, or
      • when new data is indicated
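The two trigger conditions above can be sketched as a simple check; the function and parameter names are hypothetical:

```python
from datetime import datetime, time

def should_run(now: datetime, scheduled: time, new_data_flag: bool) -> bool:
    """Trigger the workflow either at the defined time or when new data is signalled."""
    return new_data_flag or (now.hour, now.minute) == (scheduled.hour, scheduled.minute)

# Scheduled run at 02:00, plus an event-driven run when new data arrives.
scheduled_hit = should_run(datetime(2024, 1, 1, 2, 0), time(2, 0), False)
event_hit = should_run(datetime(2024, 1, 1, 9, 30), time(2, 0), True)
```

In practice this decision is made by the orchestrator (a cron-style schedule plus a data-availability sensor), not by hand-rolled code.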

Data integration workflow

As part of data integration, we analyze the source data and the desired output. At the same time, we process and add metadata, which is then used to perform the data integration itself. Integrated data typically flows through several layers, and within each layer it may serve a different purpose and take a different form.

1st layer: Landing

      • the data is stored here in the same format in which it arrived
      • this layer is used to check whether the data was corrupted during transmission
      • permissions: administrators and integration developers only

2nd layer: Optimized

        • initial transformations of data and their mapping to target data types according to metadata are performed here
        • permissions: in addition to administrators and integration developers, selected end users can verify the correctness of the data at this layer

Final layer: Data mart

        • data is prepared and cleaned for data analysis, reporting and machine learning
        • allows linking multiple data sets into one data mart according to user needs
        • if the task requires it, one or more additional layers can precede this one, where, for example, data aggregation and deduplication take place
        • permissions: extended to users who extract and report on the final data
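The flow through the layers above can be sketched end to end; the checksum check, the parsing, and the derived flag are simplified assumptions standing in for real transformations:

```python
import hashlib
import json

def land(raw_bytes: bytes, expected_checksum: str) -> bytes:
    """Landing layer: keep data as received and verify it was not corrupted."""
    if hashlib.md5(raw_bytes).hexdigest() != expected_checksum:
        raise ValueError("data corrupted during transmission")
    return raw_bytes

def optimize(raw_bytes: bytes) -> dict:
    """Optimized layer: parse and map fields to the target data types."""
    record = json.loads(raw_bytes)
    return {"client_id": int(record["client_id"]),
            "balance": float(record["balance"])}

def to_data_mart(record: dict) -> dict:
    """Data mart: cleaned, analysis-ready form (here, a derived flag is added)."""
    return {**record, "is_overdrawn": record["balance"] < 0}

payload = b'{"client_id": "42", "balance": "-10.5"}'
checksum = hashlib.md5(payload).hexdigest()
mart_row = to_data_mart(optimize(land(payload, checksum)))
```

Each function corresponds to one layer, and the permission boundaries described above would sit between them: only administrators and integration developers see the raw `land` output, while end users work with `mart_row`.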

Special environment – Group & Personal workspace

        • a testing/personal environment for an individual or for a specific team/project
        • completely separated from the standard environments in terms of permissions and development support

Case studies

Equa Bank clients were fully migrated to Raiffeisenbank in 12 hours

When Equa Bank was being merged into Raiffeisenbank in November 2022, we handled the migration of Equa Bank’s client data into Raiffeisenbank’s CRM system. We also ensured the client master data was propagated to the bank’s core systems.

12 hours instead of 3 weeks – a shorter live migration thanks to 10 months of testing and agile development

subjects to migrate

people, each with 200 attributes, added to Raiffeisenbank’s client base after the acquisition of Equa Bank

Read more

ŠKODA AUTO: data transfers on analytics platform are comprehensively managed and monitored by Adastra’s Adoki

In 2018-19, Adastra built an on-premise Data Analytics Platform (DAP) at ŠKODA AUTO. Its purpose? To visualize data and use advanced analytics and artificial...

Read more

Automatic categorization for 98.5% of card transactions

With millions of clients conducting millions of operations every day, the bank needed to automatically assign a unique category to every banking transaction (card...

Read more

Just-in-time loan offers: a 10x higher conversion rate

Together with the bank, we used several years’ worth of transaction descriptions and a number of transactions of a specific type to identify a...

Read more

Get inspired on our blog

Unlocking Data Governance: 3 Proven Strategies from Top Czech Managers

Managing data governance within organizations and among stakeholders can be an overwhelming task for CIOs, CDOs, or data strategists. The implementation and sustained adherence...

Read more

Functional Data Governance: Imperative for Success, Yet Over a Fifth of Major Czech Corporations Lack a Starting Point

Up to 72% of major Czech companies are currently in the initial stages of implementing data governance. Approximately one-fifth of these companies recognize its...

Read more

Observability platform vs. observability tools 

Complex information systems fail in unexpected ways. That’s why IT teams need both observability tools and an observability platform. To understand the distinction between...

Read more

Contact us

Ľubomír Maslík
Division director