Programming language for easy data management

Do you need to process huge volumes of data? Build applications that scale easily to any load and keep running smoothly for years to come? With Scala, you can write smart, compact code that is fully compatible with Java. With this one language, you can build anything from a Big Data ETL pipeline to an impressive website.

Why Adastra fell in love with Scala

50% fewer lines of code than Java

Faster development

Scala developers

For small applications and Big Data platforms

Higher quality
machine learning models

For 300% more successful customer behaviour prediction

3 months
for building an application on top of the Cassandra database

Unrivalled speed of deployment

Scala saves you development costs

  • Faster development: around 50% fewer lines of code than Java.

  • Cheap to scale for increased performance. 

  • Concise and readable code leads to higher productivity and faster testing. 

  • Functional programming makes code easier to debug and to bulletproof. 

  • High-level abstraction lets you focus on business logic and generating business value.

  • Java compatible: You can use Java code you already have seamlessly within a Scala app.
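The conciseness and Java interoperability claimed above can be seen in a small sketch. The example below is illustrative only (the data model is made up): a case class plus one expression replaces what would be a loop with mutable accumulators in classic Java, and the Java standard library (`java.time`) is used directly from Scala with no glue code.

```scala
import java.time.LocalDate

// Hypothetical domain type for illustration.
case class Transaction(id: String, amount: BigDecimal, date: LocalDate)

object ConcisenessDemo {
  val sample = List(
    Transaction("t1", BigDecimal(120.50), LocalDate.of(2023, 1, 10)),
    Transaction("t2", BigDecimal(-40.00), LocalDate.of(2023, 1, 11)),
    Transaction("t3", BigDecimal(310.25), LocalDate.of(2023, 2, 1))
  )

  // Group by month and sum the amounts -- one expression, no mutable state.
  def monthlyTotals(txs: List[Transaction]): Map[Int, BigDecimal] =
    txs.groupBy(_.date.getMonthValue)
       .view.mapValues(_.map(_.amount).sum)
       .toMap
}
```

Note how `java.time.LocalDate` is consumed as if it were a native Scala type; the same applies to any existing in-house Java code.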

Adastra has hands-on experience with Scala

  • Experienced team of 10+ Scala developers.
  • Practical experience from large-scale projects in industries from banking to telco.
  • Business-oriented approach – code is just a means to an end; it has to generate business value.
  • Our code is clean, maintainable, documented, and above all, tested.
  • Code coverage and performance tests are a must.
  • We are glad to help you both with initial development and with the subsequent operation of your application.
  • Initial project could be anything from a small-scale app with 2 developers to building a full SDK, big data platform, or a full ETL pipeline.

Scala technologies we work with

  • Spark
  • Big Data, HDFS, Hadoop
  • Real-time streaming
  • Kafka
  • NiFi
  • Akka
  • Parallel distributed applications
  • NoSQL databases, e.g. Cassandra, HBase
  • Data Science
  • Machine Learning

How our clients use Scala

Banking - Transaction store

  • We built a scalable, high-throughput application on top of the Cassandra database in under 3 months.
  • Extensive use of Scala Futures, modern Scala libraries, and the NoSQL database Cassandra allows for unrivalled speed, usable for anything from analytics to internet banking.
  • The app scales easily to any volume and velocity of data, simply by adding more inexpensive nodes to the cluster.
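The high-throughput pattern mentioned above can be sketched with Scala Futures. This is illustrative only, not the production code: `writeToStore` is a stand-in for an asynchronous database call (for example a Cassandra driver session, whose real API is not shown here), and the point is how independent writes fan out concurrently and are collected back.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ThroughputSketch {
  // Hypothetical async write; a real application would call the
  // database driver here instead of completing immediately.
  def writeToStore(record: String): Future[String] =
    Future { s"stored:$record" }

  // Issue all writes concurrently, then collect the acknowledgements
  // in the original order.
  def writeAll(records: Seq[String]): Seq[String] = {
    val acks = Future.traverse(records)(writeToStore)
    Await.result(acks, 10.seconds)
  }
}
```

Because the Futures run on a shared thread pool, throughput grows with available cores and with the latency tolerance of the backing store, rather than being limited by one-at-a-time blocking calls.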

Banking – ETL offloading tool

  • A Scala ingestion tool that can take any input format and store it cheaply and efficiently on the Hadoop platform.

  • It enables mirroring of current relational databases on Big Data platforms.

  • This allows for extremely fast advanced analytics queries, short training times for machine learning models, and real-time data streaming.

  • This app is highly optimized to run 24/7 and transfers over 4TB a day in both directions.

Telecommunication – Big Data platform and anonymization framework

  • We have developed both batch and streaming ETL Spark pipelines.

  • Thanks to an anonymization framework developed in Scala, we were able to use the data for machine learning algorithms.

  • We have built the whole solution from scratch, including the Big Data platform itself. Its capacity now stands at 1 PB of data storage, 1,400 threads, and 7 TB of RAM.

  • This platform and the Scala ETL framework enable advanced data analytics and machine learning models; for example, prediction of customer behaviour is 300% more successful than with previous approaches.


Manufacturing  – ETL and compaction pipeline

  • We have developed ETL pipelines for more than 20 analytical projects.
  • We used advanced Scala data compaction pipelines to boost the efficiency of the Big Data platform and its storage capacity.
  • We built the Big Data platform from scratch; it now integrates data from various relational sources and enables advanced analytics and machine learning over them.

Scala and Big Data Synergy

If you are thinking about tackling Big Data, Scala is the way to go, for several reasons:

  • Scala is the optimal language for building high-throughput, real-time ETL data pipelines in Spark. Scala also gives you all the latest features in Spark, without needing to wait while its API is modified for other languages.
  • Scala's functional approach is great for creating applications that run in parallel on each node of your Big Data cluster, optimally utilizing its resources.
  • Applications built in Scala are highly resistant to failures of individual nodes and can continue processing data even if most of your Big Data cluster is down. This way, you can be sure that the application keeps running and no data is lost.
  • You can rely on all the latest cutting edge libraries for handling Big Data in Scala, as well as freely use any already written reliable Java code.
  • Scala with Big Data can deliver anything from recommendation engines and machine learning models to highly abstracted, simple-to-use data pipelines.
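The functional style praised above can be sketched without any Spark dependency. This is a minimal, illustrative example: the same map/filter/group/reduce shape used in Spark pipelines, shown on plain Scala collections. In Spark the collection would be an RDD or Dataset and the identical operations would run distributed across the cluster.

```scala
object PipelineSketch {
  // Word-count-style aggregation: tokenize lines, drop blanks,
  // group by key, and reduce each group to a count.
  def wordTotals(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))             // tokenize
      .filter(_.nonEmpty)                   // drop empty tokens
      .groupBy(identity)                    // shuffle-like grouping
      .map { case (w, ws) => w -> ws.size } // per-key reduction
}
```

Because every step is a pure transformation with no shared mutable state, the same code is trivially safe to parallelize, which is exactly why this style maps so well onto distributed engines like Spark.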


Would you like to use all the benefits of Scala? Contact us.


Petr Hrabec

Big Data Developer

David Procházka

Big Data Developer