
News

06. 05. 2021

The four most typical use cases for data offloading with Adoki

Reading time: 5 minutes

With the transition to data-driven operations in large companies, you often need the same piece of data in multiple locations so that as many different users and services as possible can work with it – for example, providing data on client behaviour on the web to the marketing, sales, risk, and operations data platforms.

Adoki, Adastra's own tool, allows you to do this automatically. It specialises in managing two-way data transfers between primary applications, the data warehouse, the big data platform and the cloud.

Not only can you use it to “pour” data to wherever you need it, correctly and at the right time, but it also keeps the metadata and schemas on the source and destination synchronised with the data. You can then work with them on both sides under uniform governance rules.

1. Data synchronisation - to work with identical data

Adoki's main mission is to synchronise data, i.e. to transfer one piece of data from a specific source to several other systems. Adoki replicates data automatically, and the transfers it controls remain stable over long periods. It therefore ensures that data transfer takes place consistently even when the data or metadata on the source changes – e.g. the file format, compression type, or separator changes, a column is added to a table, or columns are swapped or renamed. Adoki recognises changes in data structures in the source system and automatically copies the new form of the data to the other systems and platforms.
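For illustration only, here is a minimal sketch of the general idea of schema-drift detection: compare the source table's current columns against a snapshot from the previous run and flag the differences before replicating. The technique and every name in it are assumptions made for this article, not Adoki's internal mechanism.

```python
# Illustrative sketch only - Adoki's internal mechanism is not public.
# Detect schema drift by diffing the current source schema against the
# snapshot taken during the previous transfer run.

def diff_schema(previous: dict[str, str], current: dict[str, str]) -> dict:
    """Compare two {column_name: data_type} mappings."""
    added = {c: t for c, t in current.items() if c not in previous}
    removed = {c: t for c, t in previous.items() if c not in current}
    retyped = {
        c: (previous[c], current[c])
        for c in previous.keys() & current.keys()
        if previous[c] != current[c]
    }
    return {"added": added, "removed": removed, "retyped": retyped}

# Example: a column was added and another changed type on the source.
previous = {"id": "bigint", "name": "varchar", "created": "date"}
current = {"id": "bigint", "name": "varchar",
           "created": "timestamp", "email": "varchar"}

drift = diff_schema(previous, current)
if any(drift.values()):
    # A replication tool would now alter the destination schema to match
    # before copying the data, keeping both sides consistent.
    print(drift)
```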

Adoki provides transmissions according to your needs (see the sketch after this list):

  • continuously
  • several times a day
  • one-off, e.g. after the end of a shift or working hours
  • on selected days of the week or on weekends
  • or at specific time intervals
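Adoki's own scheduler configuration is not reproduced here; as an illustration, the options above map naturally onto standard cron expressions:

```python
# Hypothetical mapping of the scheduling options above to standard cron
# expressions; Adoki's real scheduler syntax may differ.
SCHEDULES = {
    "several_times_a_day": "0 */4 * * *",   # every 4 hours
    "end_of_shift":        "0 22 * * 1-5",  # 22:00 on weekdays
    "weekends_only":       "0 3 * * 6,0",   # 03:00 on Saturday and Sunday
    "fixed_interval":      "*/15 * * * *",  # every 15 minutes
}
```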

Whether your systems run on-premises, in the cloud, or in a hybrid environment, you can easily set up, manage, and control all data transfers. Adoki has a user interface in which you choose the most suitable scenario and set its parameters; everything else runs in the background. Adoki can also be automated and integrated into existing ETL pipelines via its REST interface, so all transmissions can be created, generated, and controlled automatically.
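As a sketch of that kind of integration, creating a transfer over REST might look like the following. The host, endpoint path, payload fields, and authentication header are all illustrative assumptions, not Adoki's documented API.

```python
# Hypothetical example - the URL, endpoint, payload fields, and auth header
# are illustrative assumptions, not Adoki's documented REST API.
import json
import urllib.request

ADOKI_URL = "https://adoki.example.com/api/transfers"  # placeholder host

transfer = {
    "source": "oracle://dwh/CLIENT_EVENTS",       # hypothetical identifiers
    "destination": "hdfs:///data/marketing/client_events",
    "mode": "incremental",
    "schedule": "0 2 * * *",                      # nightly at 02:00
}

request = urllib.request.Request(
    ADOKI_URL,
    data=json.dumps(transfer).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <token>"},  # placeholder credential
    method="POST",
)

# An ETL pipeline could fire this call as one of its steps, so transfers
# are created and versioned alongside the rest of the pipeline code.
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```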

In addition, instead of a team of data specialists, only a few trained employees, including business users, need to work with Adoki. You save twice over: on both hardware and human resources. And your data specialists can devote themselves to more complex data tasks.

2. Anonymisation of data - to meet the requirements of GDPR

Want to get the most out of your business data, but hesitant because of GDPR? You no longer need to be. Using Adoki, you can easily transfer data to various analytical platforms, anonymise it during the transfer, and find previously hidden connections within it. Enable your data scientists to realise the full potential of your data and generate opportunities for your business to grow.

Adoki:

  • regularly anonymises data according to basic anonymisation rules (a minimal sketch follows this list)
  • transfers data from various sources to a unified analytical environment
  • guarantees that data is consistent for analytical tasks
  • provides raw data so that analysts can derive useful attributes and variables
  • keeps data in the same format at the source and destination, so that data descriptions and the data dictionary remain uniform
  • allows the outputs of predictive and descriptive tasks to be transferred from the anonymised analytical environment back to the primary one
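As a minimal sketch of anonymisation in transit: salted hashing of direct identifiers is one common basic technique, shown here purely as an illustration. Adoki's actual anonymisation rules are configurable and are not reproduced here; the column names and salt below are hypothetical.

```python
# Illustrative sketch of anonymising records in transit. Salted hashing of
# direct identifiers is one basic technique; it is shown here as an
# assumption, not as Adoki's actual rule set.
import hashlib

PII_COLUMNS = {"email", "phone", "national_id"}  # hypothetical rule config
SALT = b"rotate-me-regularly"                    # placeholder secret

def anonymise(record: dict) -> dict:
    """Replace direct identifiers with salted SHA-256 digests."""
    out = {}
    for column, value in record.items():
        if column in PII_COLUMNS and value is not None:
            digest = hashlib.sha256(SALT + str(value).encode("utf-8"))
            out[column] = digest.hexdigest()
        else:
            out[column] = value
    return out

row = {"client_id": 42, "email": "jan@example.com", "spend_eur": 129.9}
print(anonymise(row))
# The same input always maps to the same digest, so analysts can still
# join and aggregate on anonymised keys without seeing the raw values.
```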

3. Transition to the cloud - for flexible performance, storage and costs

Are you gradually transitioning to the cloud and creating a temporary hybrid environment? You need to be sure that:

  • you are transferring data from the original on-premises systems correctly and on time
  • you are transferring only the required incremental data additions
  • the data is actually uploaded to all other systems that are continuing to work with it
  • you are reliably archiving data
  • transmissions are taking place with the optimal technical settings of cloud services

With Adoki, you can manage and administer thousands of data transfers from a single point and keep data replication costs to a minimum. During the transition to the cloud, Adoki lets you flexibly change the volume and number of sources during the day, on both the on-premises side and the cloud side. If you use the cloud as an archive, Adoki lets you safely delete data from the original on-premises source systems and free up space for new daily data additions.
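The "incremental data additions" mentioned above are commonly implemented with a watermark: remember the highest value of a monotonically increasing column from the last run and fetch only newer rows. Here is a minimal sketch under that assumption; the table, column, and helper names are hypothetical, and Adoki's actual change-capture mechanism is not shown.

```python
# Illustrative watermark-based incremental transfer - a common pattern for
# copying only new rows; Adoki's actual change-capture logic is not public.
import sqlite3

def upload_to_cloud(row: tuple) -> None:
    print("uploading", row)  # stand-in for a real cloud SDK call

def transfer_increment(source: sqlite3.Connection, last_watermark: str) -> str:
    """Fetch and ship only rows newer than the last transferred watermark."""
    rows = source.execute(
        "SELECT id, payload, created_at FROM events WHERE created_at > ? "
        "ORDER BY created_at",
        (last_watermark,),
    ).fetchall()
    for row in rows:
        upload_to_cloud(row)
    # Persist the new high-water mark so the next run continues from here.
    return rows[-1][2] if rows else last_watermark

# Demo with an in-memory table standing in for the on-premises source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, payload TEXT, created_at TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(1, "a", "2021-05-01"), (2, "b", "2021-05-02")])
print(transfer_increment(conn, "2021-05-01"))  # only row 2 is transferred
```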

4. Load optimisation of IT systems - for fast and proper replication

Are you running into the limits of your IT infrastructure or wider IT ecosystem during planned data transfers? The typical pain points:

  • a growing number of concurrent data transmissions that are all scheduled for the same point in time for no good reason
  • data replication that takes too long or runs at the wrong time
  • a heterogeneous enterprise environment in which prioritising individual technologies and tasks is difficult or impossible, because each is managed separately

Adoki can resolve all of these issues comprehensively without overloading the infrastructure. It allows you to define resource capacities and set time locks, serves all connected systems, and optimises their load. To do this, it uses detailed statistics on resource utilisation, from which it detects both weak points in the synchronisation process and failures. With Adoki, you not only transfer data but also optimise the load on your data systems.
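To make "resource capacities" and "time locks" concrete, here is an illustrative model of the admission decision a scheduler might make. It is a plausible sketch of the idea, not Adoki's configuration format, and all names in it are hypothetical.

```python
# Illustrative sketch of capacity limits and time locks - a plausible model
# of the idea, not Adoki's configuration format.
from datetime import time

MAX_CONCURRENT = {"dwh": 4, "hadoop": 10}        # hypothetical per-system caps
TIME_LOCKS = {"dwh": (time(8, 0), time(17, 0))}  # no transfers 08:00-17:00

def may_start(system: str, running: int, now: time) -> bool:
    """Admit a transfer only if capacity and time-lock rules allow it."""
    if running >= MAX_CONCURRENT.get(system, 1):
        return False  # the system is already at its concurrency cap
    lock = TIME_LOCKS.get(system)
    if lock and lock[0] <= now < lock[1]:
        return False  # business hours are reserved for primary workloads
    return True

print(may_start("dwh", running=2, now=time(12, 30)))  # False: time lock
print(may_start("dwh", running=2, now=time(22, 0)))   # True: off-hours slot
```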

Are you facing a similar situation? Contact us for a free consultation.

Jakub Augustín

Big Data Competency Lead

Marcel Vrobel

CTO, Adoki