Adoki

Become a data-driven company where tens of thousands of data transfers per day are handled by a single person – reliably, safely and automatically, thanks to Adoki.

Free demo version

Adoki features that make data replication as simple as possible

Adastra has been a Select Partner of Cloudera since 2014 and has also achieved technology certification for the Adoki solution.

How Adoki works

Replicate, synchronize, distribute, consolidate, migrate, snapshot and ingest data from any systems or databases.

Automatically generate target schemas based on source metadata.

Scale up to hundreds of sources and targets.

Safeguard the consistency and integrity of all data structures.

Minimize impact: Adoki does not overload source or target systems.

Set up transfers quickly and easily in an intuitive graphical user interface (GUI), without any manual programming.

Configure data transformations easily, even during data transfers.

Transfer data in legacy enterprise and hybrid environments – in real-time or batch mode.

Centralize data transfer monitoring and management. Thanks to detailed logs, all transfers can be audited.

Customize the modular solution to fit your organization’s processes.

Why replicate data with Adoki

The same data on-premise and in the cloud? With Adoki, the automatic data replication tool, it’s a simple matter.

Adoki enables companies to replicate data between systems efficiently, generating and scaling data transfers as needed. It centrally manages and monitors data transfers to and from any data platform (on-premise or cloud) based on metadata.

It uses native solutions and connectors to link to any data repository. It creates and automatically checks data schemas and ensures that data remains consistent at all times.

It transfers data quickly, efficiently and securely, and provides statistics used to automatically optimize data transfers.

  • Your business will work efficiently with replicated data
  • It supports fast, performance-efficient data replicas
  • It handles data transfers of any volume, format and width
  • It puts minimal strain on your infrastructure and systems, keeping your transfer solution flexible

10 000+

data transfers managed by a single person

tens of TB

Adoki easily replicates tens of TB of data per day

Adoki works with different data platforms and technologies

Try the free demo version

Are you struggling with how to efficiently get data to all the places where your organization needs it?

To different applications, systems, databases, in the formats you require, to on-premise and cloud environments?

And how do you get it there securely, in consistent structures, at the time you want it, and have full control over its flow?

Meet Adoki!

Test our free demo

10 reasons for getting Adoki

  1. you need to share data from systems of different departments within your organization
  2. you are building a data platform and need to secure data on it
  3. you have dozens to hundreds of systems and need to connect them
  4. you need to have data in multiple places at the same time, on-premise and in the Cloud
  5. you need to add more data to your existing data
  6. you are looking for a simple and reliable solution for moving data
  7. you need to systematically archive and back up your data
  8. you repeatedly deal with data transfer problems: transfers fail, break or duplicate
  9. you need to provide anonymized data to analysts because of GDPR
  10. you are looking for a tool that ensures that input data is error-free

  1. you want automation and no longer want to create new data transfers manually
  2. you want easy maintenance and monitoring
  3. you need to keep an eye on changing data structures
  4. you need to speed up setting up new data transfers and modifying existing ones
  5. you need to reduce the load on your IT infrastructure during peak traffic times
  6. you want to manage, audit and control all data transfers from a single location
  7. you need to automatically catch flaws in data sources
  8. you want a metadata-driven tool that feeds metadata back into the organization
  9. you are looking for a tool whose functionality you can extend yourself with new modules
  10. you need to free up developer capacity to solve business cases

Adoki use case scenarios

SYNCHRONISATION

Businesses need to move data from one platform to another (or other systems that are on-premise, hybrid or cloud-based) and they need to keep data in sync at all times.

The goal is to keep the data consistent (i.e., have an updated structure if it changes in the source systems) and also to be able to delete some data after a set period of time.

Why Adoki?

Comprehensive approach

  • It defines scenarios and applies them to a large number of objects.
  • Scenarios allow you to generate metadata, convert datatypes, predefine columns and operations, and load multiple tables at once.
  • Data can be grouped into jobs, ensuring that it is transferred at once and maintaining consistency.

Metadata storage

  • All scenarios are stored and verified in a metadata repository.

Evaluating schemas

  • Structures generated based on the source metadata can be automatically deployed to the target platforms and modified if changes occur.
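The idea of generating target structures from source metadata can be illustrated with a minimal sketch. Adoki's actual mapping logic is internal to the product; the datatype mapping, table name and column metadata below are purely hypothetical examples for demonstration.

```python
# Illustrative sketch only: the source-to-target type mapping and the
# metadata shape are assumptions, not Adoki's real implementation.
TYPE_MAP = {  # hypothetical mapping from a relational source to a big-data target
    "NUMBER": "DECIMAL(38,10)",
    "VARCHAR2": "STRING",
    "DATE": "TIMESTAMP",
}

def generate_target_ddl(table, columns):
    """Build a CREATE TABLE statement from (name, source_type) metadata pairs."""
    cols = ",\n  ".join(
        f"{name} {TYPE_MAP.get(src_type, 'STRING')}" for name, src_type in columns
    )
    return f"CREATE TABLE {table} (\n  {cols}\n)"

# Example: metadata read from a hypothetical source system catalog.
ddl = generate_target_ddl(
    "mirror.customers",
    [("id", "NUMBER"), ("name", "VARCHAR2"), ("created", "DATE")],
)
print(ddl)
```

When the source structure changes, regenerating and comparing such DDL is one simple way to detect and deploy schema modifications automatically.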

ANONYMISED DATA FOR ANALYSIS

Companies need to perform data analysis and Data Science tasks over data from multiple systems.

Data needs to be transformed and anonymized, available as soon as possible and GDPR compliant. The results are sent back to the source systems.

Why Adoki?

Complex scenarios

  • The inquiry process is simple.
  • They are ready to be checked in by end users.

Priority transmissions

  • Each data transfer is assigned a priority; critical jobs must be handled as soon as possible.

Easy transformations

  • Data can be transformed and anonymized as part of the replication process.
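Transforming data in flight can be pictured as a per-row function applied while rows stream from source to target. The column names, salt, and hashing choice below are illustrative assumptions, not Adoki's actual anonymization method.

```python
import hashlib

# Sketch: anonymize selected PII columns as rows pass through a
# replication pipe. PII_COLUMNS and SALT are hypothetical examples.
PII_COLUMNS = {"name", "email"}
SALT = b"per-deployment-secret"  # assumption: a deployment-specific secret

def anonymize(row):
    """Replace PII values with a salted SHA-256 digest; keep other fields."""
    return {
        col: hashlib.sha256(SALT + str(val).encode()).hexdigest()
        if col in PII_COLUMNS else val
        for col, val in row.items()
    }

rows = [{"id": 1, "name": "Alice", "email": "a@example.com", "balance": 100}]
anon = [anonymize(r) for r in rows]
```

Because the digest is deterministic, the same person still maps to the same token across tables, so analysts can join anonymized data without seeing the original values.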

Schedule

  • Individual jobs can be timed.

REST API

  • Users receive a notification when the data is ready for transfer.

TRANSFER TO CLOUD

Companies need to transfer data from on-premise systems to the Cloud, but often don’t have the necessary knowledge or infrastructure.

Data needs to be replicated to the Cloud at minimal cost.

Why Adoki?

Custom mapping type

  • Cloud object schemas are automatically generated based on object schemas in the on-premise environment.
  • For selected cases, schema generation can be customised.

Incremental upload

  • Only the required subsets of data are replicated and increments are processed automatically.
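A common way to process increments automatically is a watermark: remember the highest change timestamp transferred so far and replicate only rows modified after it. The field names below are illustrative assumptions, not Adoki's internal schema.

```python
# Sketch of watermark-based increment selection; "modified_at" is a
# hypothetical change-tracking column on the source rows.
def select_increment(rows, last_watermark):
    """Return rows changed after the stored watermark, plus the new watermark."""
    batch = [r for r in rows if r["modified_at"] > last_watermark]
    new_watermark = max((r["modified_at"] for r in batch), default=last_watermark)
    return batch, new_watermark

source = [
    {"id": 1, "modified_at": 10},
    {"id": 2, "modified_at": 25},
    {"id": 3, "modified_at": 30},
]
batch, wm = select_increment(source, last_watermark=20)
# Only rows 2 and 3 are replicated; the watermark advances to 30.
```

Persisting the watermark after each successful transfer keeps cloud replication costs proportional to the change volume rather than the table size.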

Simple transformations

  • Only a subset of the data is worked with and the data is transformed/anonymised before it is stored in the Cloud.
  • Data storage is efficient.

Resource management

  • Adoki monitors how many resources the Cloud actually uses.

OPTIMISATION OF IT SYSTEMS WORKLOAD

Companies need to optimize system utilization during replication to save system resources and monitor ongoing activities.

Typically, they use traditional ETL tools, but the number of concurrent data transfers is increasing and systems have capacity issues with the number of jobs running.

Why Adoki?

Resource management and monitoring

  • Resource capacities and time locks are defined for each system.
  • Resource capacities are taken into account and workloads are optimized.
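Capacity-aware scheduling of this kind can be sketched as an admission loop: start the highest-priority pending transfers first, but only while the system's capacity (here, free connections) allows. The job names, priorities and capacity numbers are hypothetical examples, not Adoki's scheduler.

```python
import heapq

# Toy admission scheduler: transfers are (name, priority, required connections);
# higher priority runs first, and no transfer starts once capacity is exhausted.
def admit(transfers, capacity):
    """Return the names of transfers started, highest priority first."""
    heap = [(-priority, name, conns) for name, priority, conns in transfers]
    heapq.heapify(heap)  # min-heap on negated priority = max-priority queue
    started, free = [], capacity
    while heap:
        _, name, conns = heapq.heappop(heap)
        if conns <= free:
            started.append(name)
            free -= conns
    return started

# Example: 6 free connections; the low-priority archive job must wait.
started = admit(
    [("mirror_load", 5, 4), ("archive", 1, 4), ("adhoc", 3, 2)],
    capacity=6,
)
```

A real scheduler would also honor the per-system time locks mentioned above, but the core trade-off (priority versus remaining capacity) is the same.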

Runs directly on the Big Data platform

  • Adoki can be deployed into existing platforms as a module, reducing hardware requirements.

Traffic statistics

  • Adoki provides detailed resource utilization statistics, based on which workloads can be optimized.

Metadata repository

  • Metadata from Adoki can be exported via REST APIs and can be used by other tools.
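A downstream tool consuming such an export might look like the following sketch. The endpoint path implied here (e.g. something like GET /api/v1/transfers) and the payload shape are entirely hypothetical, shown only to illustrate working with an exported metadata document.

```python
import json

# Hypothetical metadata export payload; the field names ("transfers",
# "status", ...) are assumptions for illustration, not Adoki's real API.
payload = json.loads("""
{
  "transfers": [
    {"id": "t1", "source": "oracle.crm", "target": "hive.mirror", "status": "OK"},
    {"id": "t2", "source": "mssql.erp",  "target": "s3.lake",     "status": "FAILED"}
  ]
}
""")

# A monitoring tool could, for example, extract failed transfers for alerting.
failed = [t["id"] for t in payload["transfers"] if t["status"] == "FAILED"]
```

Because the export is plain JSON over REST, any monitoring, lineage or governance tool can reuse the metadata without touching Adoki directly.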

Technology examples using Adoki

AUTOMATIC FILLING OF THE DATA LAKE

Adoki automatically replicates any files, tables and databases to the data lake. It uses predefined scenarios that take into account the requirements of the source and target.

The most commonly used scenarios include the mirror scenario, which replicates data from the source systems to the stage layer, where the data is unified and checked. It is then published to the mirror layer, which provides a copy of the data for quick reading and use.

The mirror stage also allows the creation of an archive layer that acts as an economical backup of the data for regulatory and technical purposes.

FROM ON-PREMISE SYSTEMS TO THE CLOUD AND BACK

Adoki replicates data from various on-premise systems to cloud storage and back.

Adoki can transform, anonymize and filter data from on-premise systems before replicating it to various cloud technologies. It can be deployed both on-premise and in the cloud.

Adoki knows how much the connected systems load the infrastructure and what performance each transfer needs. This allows it to manage and scale replications (up and down).

Start using Adoki - easily and intuitively

Graphical user interface

Developers and operators work with Adoki through an intuitive graphical user interface.

REST interface

Adoki offers a REST interface for systems and automated frameworks.

Elastic/Kibana Dashboards

Adoki provides dashboards in Elastic/Kibana, an open-source tool for visualizing processed logs.

Adoki's user-friendly interface

Users access Adoki through a web interface that allows them to:

Prepare data transfers

Set up what to transfer and how, plan when and how often the transfer should occur, etc.

Control and configure data transfers

Set limits on individual systems, for example, on the number of connections, memory size, etc.

Manage data transfers

Turn them on or off, change the priority of a transfer, track its progress, set up notifications about the progress, etc.

Monitor data transfers

Browse the transfer history, track statistics, monitor platform load, etc.

Case studies

ŠKODA AUTO: data transfers on analytics platform are comprehensively managed and monitored by Adastra’s Adoki

In 2018-19, Adastra built an on-premise Data Analytics Platform (DAP) at ŠKODA AUTO. Its purpose? To visualize data and use advanced analytics and artificial intelligence to perform sophisticated tasks with large volumes of data.

Read more

Integrating 8 IoT databases and 1 metadatabase to reduce load and save space in the source system

A large automotive company works efficiently with (IoT) sensor data from manufacturing. We have lightened the system load and introduced data retention in the...

Read more

Banking – data in one place, we transfer 4 TB of data per day

At the bank, we have created a Big Data platform that provides business users with streamed and batch data from various banking applications. To...

Read more

Integrating JIRA data to identify risk

By integrating data directly from the JIRA source system, we are able to prepare a detailed overview of the status of multiple projects, including...

Read more

Do not miss our blog

Adoki: automates ETL data transfers and breaks down corporate data silos

One large Czech bank handles tens of thousands of data transfers every day. How many people does that take? Just one, who manages all...

Read more

8 tips for building mature Big Data platforms

Companies that base their business on data generate more revenue than those who rely on intuition or other approaches in their decision-making. Don’t risk...

Read more

When to use data virtualization and when replication? Three key criteria based on our best practice to consider when making a decision

It is increasingly necessary for companies to have the same data in various systems. In this context, data virtualization has become rather a hot...

Read more

Contact us