Igor

AWS, DATA, Hibernate, Java, Kafka, Spring

Contact us and we will hire Igor for your team.

About Igor

Igor is a seasoned IT specialist with over 12 years of experience. He is proficient in Java, Spring, AWS, Hibernate, DATA, and Kafka, making him a valuable asset to any team or project.

Name: Igor
Birthday: 1 April 1990
Degree: Master
Experience: 10 Years
Phone: +38 012 345 6789
Email: info@example.com
Address: 123 Street, Odesa, Ukraine
Freelance: Available

12 Years of Experience

26 Happy Clients

68 Completed Projects

Skills

AWS: 96%

DATA: 86%

Hibernate: 80%

Java: 70%

Kafka: 76%

Spring: 94%

Experience


Senior Back-End Developer

Playtika

Date: 06.2021 — Present


Project description
Playtika is the leading mobile gaming company, with more than 9 million daily and 31 million monthly active users. The company manages hundreds of business pipelines, which extend the user's gaming experience by providing comprehensive data-driven insights and solutions. These pipelines are meticulously designed to capture real-time gameplay data, user interactions, in-game transactions, social interactions, and more.

Responsibilities

● Lead projects that design and support the Studio Data Lake.

● Foster the personal growth of each team member.

● Monitor project progress, identify potential roadblocks, and implement mitigation strategies to keep projects on track.

● Oversee the design, development, and implementation of the business pipelines, ensuring they are scalable, reliable, and efficient.

Achievements:

● Investigated the new Spark Connect Server app and successfully integrated it into existing pipelines, optimizing task execution time.

● Migrated a PB-scale data lake from HDFS to Alluxio S3.

● Organized the migration of legacy pipelines to a new Camel engine.

● Organized monitoring for each element of the ETL pipeline.



Senior Software Engineer

TiltingPoint

Date: 03.2020 — 06.2021


Project description
TiltingPoint is a free-to-play (F2P) games publisher. The company provides a platform for gaming studios that includes distributed computing frameworks, scalable storage solutions, and high-performance data processing engines. The platform is designed to handle the massive influx of data generated by millions of gamers around the world, ensuring that the company can derive meaningful marketing-campaign insights and actionable intelligence in near real time.

Responsibilities

Project domain: Gaming

● Provided guidance and direction to less experienced staff in resolving highly complex technical problems.

● Communicated project plans, tracking details, and status updates with the head office.

● Collaborated with data engineers to optimize data storage, processing, and integration methods for optimal performance.

● Managed the ETL platform structure and architecture.

Achievements:

● Designed a DSL for analyst teams that streamlined data processing, improved efficiency, and enhanced collaboration across the organization.

● Organized the migration of legacy Python ETL jobs to Scala jobs with Airflow pipelines.

● Set up Grafana/ELK monitoring for crucial Spark Streaming jobs.



Senior Data Engineer

Appsflyer

Date: 09.2018 — 03.2020


Project description
AppsFlyer is the world's leading mobile attribution and marketing analytics platform, helping app marketers around the world make better decisions. The company processes billions of events and PBs of data every day.

Responsibilities

● Designed and built data processing pipelines using tools and frameworks in the Hadoop/Spark ecosystem.

● Implemented and configured big data technologies and tuned processes for performance at scale.

● Managed, mentored, and grew a team of big data engineers.

● Collaborated with other teams, leading cross-team solution integrations.

Achievements:

● Applied various optimizations so that the daily Spark batch pipeline could run hourly and fit tight SLAs while processing PB-scale data.

● Set up a Spark performance-tracking profiler to measure the impact of new code changes.



Senior Data Engineer

Health Monitoring System. U.S.-based healthcare IoT startup | DATAART

Date: 06.2015 — 09.2018


Project description
The customer provides health monitoring solutions that continuously track incoming user health telemetry. The project aimed to redesign the existing monolithic server-side system to make it scalable enough to support more clients. Project domain: HealthCare

Responsibilities

● Designed scalable, distributed, and fault-tolerant architectures capable of handling the high volume of data generated by IoT devices.

● Designed data models that accommodate the dynamic nature of IoT data and allow for easy retrieval and analysis.

● Created robust APIs and microservices enabling communication between IoT devices, applications, and the backend infrastructure.

● Engaged in production deploys and ecosystem management in AWS.

Achievements:

● Designed a highly available Java server that handled and preprocessed data coming from 1M connections per second.

● Designed a load-test framework that emulated millions of connections.

● Split the initial monolithic architecture into several microservices.



Senior Software Engineer

Management system | DATAART

Date: 01.2014 — 01.2015


Project description
The customer is the world's number one independent provider of data integration software, based in the US. The project is a complete web solution that maintains all major features of the core system and lets any group of users (data providers, supervisors, translators, publishers, and others) work with master data. Its wide range of features includes centralized product information management with data governance, data classification, multimedia management and binding, extended search capabilities, business process and task management, and data quality management. An advanced preview-editing feature transforms a customer's product XML data into popular shop formats (e.g. Amazon, eBay) via XSLT, producing an HTML page that is processed by a custom Vaadin-based framework.



Senior Back-End Developer

Real-Time Monitoring and Reporting System | DATAART

Date: 01.2012 — 01.2014


Project description
A framework for reporting systems and several applications built on it. Each application instance shows activity in a particular area across several warehouses. Besides the real-time picture, the system can show actions that happened in the past. The system uses MQ as its main data input and the Google Web Toolkit platform to generate web reports. Sub-project: Scenario Testing Tool for the Reporting System, software for testing XML-based scenarios for the system described above. The tool can manipulate huge XML documents, parse them, and execute scenarios in multiple threads (sending them as message objects).

Responsibilities

● Front-end and back-end development.

● Developed reports on the client side and business logic for report generation.

● Designed and developed internal caching strategies.