Data Engineering

  • 10.0 Rating

  • 33 Lectures

  • 66 hours

  • 17 Weeks

Data Engineering is the foundation on which the modern data ecosystem is built. At its core, Data Engineering focuses on designing and building systems and infrastructure that automatically collect, store, and process data. Above all, its goal is to develop methodologies and tools that ensure process efficiency, data reliability, and sustainability.

Alongside fundamental concepts and principles, students will learn to use key technologies and tools in Data Engineering, such as Apache Airflow, Apache Kafka, Apache NiFi, Apache Avro, Apache Parquet, Apache Arrow, temporary and persistent storage systems (SQL, NoSQL, PostgreSQL, Redis, ClickHouse), designing web services for data (FastAPI), and various Python libraries. They will also understand how these technologies interact with each other, and how theoretical concepts strengthen practical implementation.
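To give a small taste of how such pieces interact, here is a standard-library sketch of an ETL-style flow: extract from CSV, transform in Python, load into a relational store. The sample data and table name are hypothetical, and sqlite3 stands in for a production database such as PostgreSQL purely so the example is self-contained.

```python
import csv
import io
import sqlite3

# Hypothetical raw data, as it might arrive from a file or an API.
RAW = "name,amount\nalice,10\nbob,5\nalice,7\n"

def extract(text: str) -> list[dict]:
    # Parse CSV text into a list of row dictionaries.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple[str, int]]:
    # Aggregate amounts per name, returned in sorted order.
    totals: dict[str, int] = {}
    for row in rows:
        totals[row["name"]] = totals.get(row["name"], 0) + int(row["amount"])
    return sorted(totals.items())

def load(rows: list[tuple[str, int]]) -> sqlite3.Connection:
    # Write the transformed rows into a relational table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE totals (name TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO totals VALUES (?, ?)", rows)
    return conn

conn = load(transform(extract(RAW)))
print(conn.execute("SELECT * FROM totals ORDER BY name").fetchall())
# → [('alice', 17), ('bob', 5)]
```

Real pipelines swap each stage for a dedicated tool (e.g. NiFi for movement, Airflow for orchestration), but the extract/transform/load shape stays the same.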

Outcome

Skills Acquired: Python, FastAPI, Apache Airflow, Apache Kafka, Apache NiFi, Apache Arrow, Apache Parquet, Apache Avro, PostgreSQL, Redis, MongoDB, ClickHouse, SQL

  • Understand fundamental concepts of data and its value;

  • Use Python for data transformation and manipulation;

  • Apply Python libraries for Data Engineering and databases in practice;

  • Design high-performance and resilient system architectures;

  • Build and design high-performance web services using FastAPI;

  • Ensure data reliability and quality;

  • Differentiate and implement ETL and ELT processes in different data projects;

  • Design, orchestrate, and analyze data transformation processes;

  • Work with temporary and persistent storage systems, both columnar and row-based (PostgreSQL, ClickHouse, Redis, MongoDB, etc.);

  • Apply workflow orchestration systems with Apache Airflow;

  • Understand DAGs (Directed Acyclic Graphs) and Schedulers;

  • Design and analyze Data Workflow systems;

  • Grasp fundamental concepts of Apache Kafka;

  • Set up and use a functional Kafka cluster;

  • Apply and configure Apache NiFi for data transmission and orchestration;

  • Work with different serialization, compression, and storage formats (Apache Arrow, Parquet, Avro);

  • Work with Data Streams and Message Brokers;

  • Analyze and design data dependencies and relationships;

  • Understand and apply different data storage concepts (Data Warehousing, Data Marts).
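Several of the outcomes above revolve around DAGs (Directed Acyclic Graphs). The core idea, tasks ordered by their dependencies, can be sketched with Python's standard library alone; the task names are hypothetical and this does not use Airflow's actual API:

```python
from graphlib import TopologicalSorter

# A tiny pipeline as a DAG: each task maps to the set of tasks it
# depends on (hypothetical task names for illustration).
dag = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "load": {"aggregate"},
}

# A scheduler must run tasks in some order that respects every
# dependency; a topological sort produces such an order.
order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['extract', 'clean', 'aggregate', 'load']
```

Airflow's scheduler does essentially this, plus retries, timing, and distribution across workers; acyclicity is what guarantees such an order exists at all.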

Apr 07 | 2800₾

Tue 19:00-21:00 | Sat 12:00-14:00

Split your payment with TBC or BOG installments.

Who is this course for?

Data analysts

Data analysts who already know Python and want to advance their careers.

Developers

Developers who have already started learning Python and have an analytical, detail-oriented mindset.

Program includes

Alumni Club

After successfully completing the final exam, graduates will be automatically enrolled in the Alumni Club. This membership grants them access to exclusive events, content, and special offers from our partner companies.

Work-Based Learning

The course is practice-based, including assignments, exercises, and individual projects.

Bilingual Certification

Upon successful completion of the course, students will receive a bilingual certificate.

Graduate feedback

10.0 Rating

Syllabus

What is Data Engineering
Data Engineer's Responsibilities
Data-driven Mindset
Perfectionism
What is Parallelism
Identifying “parallelizable” processes
Multiprocessing
Threads vs processes
Multithreading
Lab session / Lecture and practical assignment
Methods to work with CSV DataFrames 50x faster
WORM files (Apache Avro, Apache Arrow, etc.)
Delimited files (TSV, CSV)
ACID and I/O
Lab session / Lecture and practical assignment
Data durability
File handling principles
File partitioning principles
File handling algorithms
Project - File systems
Receiving/processing data from API (requests)
Designing and developing APIs
Design Patterns
Middlewares and parallelism in FastAPI
Lab session / Lecture and practical assignment
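The parallelism topics in the syllabus (threads vs. processes, multiprocessing) can be illustrated with a minimal standard-library sketch; the CPU-bound workload below is a hypothetical stand-in for real data processing:

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n: int) -> int:
    # Pure CPU work: sum of squares below n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [50_000] * 4
    # Threads share one interpreter and (in CPython) one GIL, so CPU-bound
    # work gains little from them; processes run truly in parallel but pay
    # for spawning and for serializing arguments/results between them.
    with ThreadPoolExecutor(max_workers=4) as tp:
        via_threads = list(tp.map(cpu_bound, inputs))
    with ProcessPoolExecutor(max_workers=4) as pp:
        via_processes = list(pp.map(cpu_bound, inputs))
    assert via_threads == via_processes  # same results either way
    print(via_threads[0])
```

The practical rule of thumb the course builds on: threads for I/O-bound work (network, disk), processes for CPU-bound work.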


Lecturers

Guja Lomsadze

Data Engineering


Guja has 7+ years of experience in data engineering and analytics, including leadership positions in the field. He currently leads the Data Engineering team at Crocobet.com and also works as an Analytics Engineer at the German company Carnival Maritime, which provides data and technical support for the fleet of the world's largest cruise company, Carnival Corporation. For his contributions, Carnival Maritime awarded him the title of Data Champion.

Previously, Guja worked at EPAM Global as a Senior Data Engineer and served as a Data Engineering Team Lead at Merck, one of the world's leading pharmaceutical companies. He holds a Master's degree in Data Engineering from Constructor University, Germany, where he also lectured on Data Management during his studies. Guja is an internationally certified data engineer in the following areas: AWS Certified Data Analyst, AWS Certified Cloud Data Engineer, and Google Cloud Platform Professional Data Engineer.

LinkedIn

FAQs for this course

Q: Why choose Data Engineering as a career?
A: Data Engineering is increasingly vital in today's digital world. It bridges the gap between raw data and actionable insights, enabling businesses to make informed decisions. Data Engineers build and maintain the infrastructure that processes massive amounts of data, making it accessible and useful for analysis. This field offers excellent career prospects, competitive salaries, and the opportunity to work with cutting-edge technologies while solving complex real-world problems.

Q: What are the prerequisites for this course?
A: The course requires basic knowledge of Python, analytical and detail-oriented thinking, and English language proficiency at B2 level. A strong interest and motivation to work with data are essential.
