How Query in Place is Reshaping the ETL/ELT Mandate

Data has become essential to organizations of every size. Once it became widely available, companies realized they could use it to learn more about themselves and their markets, improve processes, make better decisions, and much more.

However, all that sounds relatively easy until you try to use raw data in an insightful and actionable manner. There are a lot of challenges along the way. That’s especially true today when large volumes of data need to be gathered, organized, and analyzed correctly.

The question now is whether companies can do this efficiently: getting valuable insights without spending enormous amounts of time and resources.

Things companies need from data


Not having to rely on data silos 

Data scattered across silos creates a variety of issues. Data professionals struggle to decide where to start, how to share their data, and how to deal with inconsistencies.

Ability to query with SQL

Most engineers already know SQL but don’t have time to learn new query technologies, and they may have trouble obtaining the resources they need. Companies also lack the budget to hire specialists to help them. Being able to use familiar SQL against any data source removes both obstacles.

Get insights faster 

One of the most significant issues with data is time to insight. Companies are often forced to wait weeks or even months for valuable reports. That kind of turnaround is unacceptable, especially in the modern dynamic business environment.

Ability to connect to a variety of data sources

Handling different data formats is always challenging. Organizations often lack the necessary connectors or have to implement complex custom solutions that require a lot of time and effort.

The industry still relies on traditional ETL and ELT

All critical business decisions, plans for future expansion, investments, scaling, and other crucial processes rely on massive data volumes and complex reporting. Historically, companies couldn’t access this data in a useful way because it was scattered across many different silos.

Tasks like analysis and data visualization required extensive preparation. In other words, it was impossible to create valuable real-time reports that would provide useful information.

These complexities quickly led to the development of ETL/ELT systems for extracting valuable insights from big data. ETL/ELT systems are tools that collect data from multiple sources, transform it, and load it into a data warehouse (in the ELT variant, the data is loaded first and transformed inside the warehouse).

Even though ETL/ELT tools do help, most data architectures today are still traditional: they are built around data warehouses, so every interaction with data requires extraction, transformation, and loading in some order. This makes the business intelligence process slower and more complex.
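To make the extract-transform-load pattern concrete, here is a minimal toy sketch using only Python’s standard library. The table and field names are invented for the illustration, with an in-memory sqlite database standing in for the warehouse:

```python
import csv
import io
import sqlite3

# Extract: raw CSV as it might arrive from a source system.
raw = io.StringIO("order_id,amount\n1,10.50\n2,3.25\n")

# Transform: parse rows and convert dollar amounts to integer cents.
rows = [(int(r["order_id"]), round(float(r["amount"]) * 100))
        for r in csv.DictReader(raw)]

# Load: write the cleaned rows into a warehouse table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (order_id INTEGER, amount_cents INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", rows)

# Only after all three steps can the data be queried.
total = db.execute("SELECT SUM(amount_cents) FROM orders").fetchone()[0]
print(total)  # → 1375
```

Every pipeline stage has to succeed before the final query can run, which is exactly the latency and fragility discussed below.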

Challenges of using traditional ETL and ELT 

Typical ETL/ELT platforms involve a great deal of manual work, such as importing metadata for the various stages, which slows the whole process down. A lot of time is wasted on this part alone.

These platforms are also prone to errors. If one stage fails, the pipeline can’t continue to the next stage until the problem is fixed, leading to downtime that companies can’t afford.

Traditional tools can copy only certain data types and datasets, even when others matter to the analysis. This weakness leaves gaps in results and reports and compromises the integrity of the data, yielding at best partial answers.

Legacy tools also rely heavily on rigid schemas that can’t handle unstructured and disparate data formats. In many cases, data analysts have to discard parts of the data just to get results out of their work.

Another major issue with ETL/ELT tools is their steep hardware requirements. It’s sometimes impossible to integrate them with existing infrastructure, adding overhead for the whole organization. These platforms also require substantial manual work to build and manage data pipelines.

New solutions for reshaping ETL/ELT


Luckily, data-related technologies, practices, and processes are constantly evolving. The industry understands that there is a need for better options that can tackle these issues more effectively. More specifically, the open-source community has recognized the problems and created new tools to deal with them. 

One of the technologies you can use for this purpose is PrestoDB, a completely open-source distributed SQL query engine designed to run analytic queries against all kinds of data sources. It offers unique advantages that make data analytics much easier.

Presto query in place/analysis 

Query in place is the feature that sets Presto apart from most other data platforms. Instead of relying on internal storage, Presto connects to most data storage types used today and reads data from them directly.

It’s important to mention that Presto can also write data, but the key feature we’re interested in here is its ability to read. It can connect to many different data stores, both SQL and NoSQL databases, and can also read unmodeled data directly from HDFS and S3.

In other words, the platform can work with raw data, which is what “query in place” means: there’s no need to pre-process data with ETL or ELT. As soon as data lands in the data lake, it can be queried.

This reduces the time needed to get insight regardless of where the data resides. On top of that, Presto can query several data sources at the same time and cross-reference data coming from different systems.
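To make the idea concrete, here is a toy sketch (plain Python, not Presto itself) of what a federated query-in-place does conceptually: read two “sources” where they sit and join them directly, with no copy or load step in between. All names and data are invented for the illustration:

```python
# Source 1: rows as they might sit in a relational store.
users = [
    {"user_id": 1, "name": "Ada"},
    {"user_id": 2, "name": "Grace"},
]

# Source 2: raw event records as they might sit in a data lake (e.g. S3).
events = [
    {"user_id": 1, "action": "login"},
    {"user_id": 1, "action": "purchase"},
    {"user_id": 2, "action": "login"},
]

# "Query in place": join the two sources directly, no ETL copy step.
by_id = {u["user_id"]: u["name"] for u in users}
joined = [(by_id[e["user_id"]], e["action"]) for e in events]
print(joined)  # → [('Ada', 'login'), ('Ada', 'purchase'), ('Grace', 'login')]
```

In Presto the same cross-source join would be a single SQL statement, with a connector per source doing the in-place reads.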

Improved query optimization 

PrestoDB is equipped with many features that make query planning and execution faster. First, Presto offers cost-based optimization: using table statistics and resource availability, it automatically chooses the most efficient execution plan for each query.
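As a rough illustration of the idea (plain Python, not Presto’s actual optimizer), a cost-based optimizer might use row-count statistics to decide which side of a hash join to build in memory. The table names and counts are hypothetical:

```python
# Hypothetical table statistics, as a cost-based optimizer might hold them.
stats = {"orders": 10_000_000, "customers": 50_000, "regions": 25}

def build_side(left: str, right: str) -> str:
    """Choose the smaller table as the hash-join build side,
    so the large table is only streamed, never held in memory."""
    return left if stats[left] <= stats[right] else right

print(build_side("orders", "regions"))  # → regions
```

Real optimizers weigh many more factors (filters, network cost, join order across many tables), but the principle is the same: statistics drive the plan, not the order the query was written in.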

That makes query processing significantly shorter. Presto also includes dynamic filtering: at runtime, it collects the join-key values from the filtered side of a join and uses them to skip non-matching rows on the other side before the join runs, making execution much faster.
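A toy sketch of the dynamic-filtering idea (plain Python, invented data, not Presto’s implementation): derive the key set from the small, already-filtered side and apply it while scanning the large side, so most rows never reach the join:

```python
# Build side: a small dimension table, already filtered (e.g. WHERE region = 'EU').
dim = [{"id": 2, "region": "EU"}, {"id": 5, "region": "EU"}]

# Probe side: a large fact table we would rather not scan in full.
facts = [{"dim_id": i, "value": i * 10} for i in range(1, 8)]

# Dynamic filtering: derive the key set from the build side at runtime...
keys = {row["id"] for row in dim}

# ...and apply it during the probe-side scan, before the join happens.
survivors = [f for f in facts if f["dim_id"] in keys]
joined = [(f["dim_id"], f["value"]) for f in survivors]
print(joined)  # → [(2, 20), (5, 50)]
```

Only two of the seven fact rows survive the scan, which is exactly the saving dynamic filtering provides on real tables with billions of rows.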

Managed Presto service 

Not only does Presto bring these unique functionalities for free, but you can also get it as a managed service from Pandio, making things even easier. 

You’ll no longer have to deal with manual and tiresome tasks. Instead, focus on your core tasks and let Presto and Pandio work for you in the background. Get your data where you want it and when you want it, quickly and easily for your BI analytics. 

The bottom line 

Presto is a perfect fit for processes where ETL/ELT tools are traditionally used. Take the time to try it out and see the magic happen for yourself. Of course, it’s not perfect, but it brings many advantages over traditional platforms.
