Nazarii Melnychuk
Senior Software Engineer
Major Points
- Senior Software Engineer with 9 years of experience in high-performance distributed systems and data pipelines
- Strong production experience with Python, Scala, Java, and Golang
- Expert in data processing tools (Apache Spark, Akka Streams)
- Track record of system optimization and technical leadership
- Technical interviewer and Scala bootcamp trainer
Specialized in
- Big Data solutions with Apache Spark for streaming and batch processing
- Scalable data pipelines using Scala, Python, Airflow, and AWS
- Relational DBs (PostgreSQL, MySQL) and warehouses (Amazon Redshift)
- Microservices architecture with Akka HTTP and Akka Streams
Leadership and Mentoring
- Led technical evaluations and onboarding of engineers and served as a Scala mentor within the team
- Delivered intensive Scala bootcamp lectures for recruiting and upskilling mid-to-senior engineers
- Conducted internal Scala workshops to enhance team capabilities
- Facilitated cross-team collaboration to accelerate project delivery and increase customer value
Industry Experience
- Cloud communications (WhatsApp, RCS, SMS)
- Marketing/Adtech (user engagement data, ad analytics)
- Healthcare data processing and HIPAA compliance
- IoT monitoring for food/beverage companies
- Energy sector digital transformation
Achievements
Communications Project
- Developed in-house services replacing costly external APIs, generating $100k-$250k in customer savings
- Optimized message routing and billing pipeline during Kinesis-to-Kafka migration
- Increased pipeline throughput to 830+ MPS by eliminating processing redundancies
- Implemented comprehensive cross-stream validation to ensure data consistency between source systems during this critical transition
Healthcare Project
- Transformed legacy Python batch jobs into scalable Apache Spark workflows, successfully migrating complex healthcare data processing logic
- Contributed to key system architecture decisions across multiple projects, building production-grade ETL pipelines that handle diverse healthcare data formats
Skills
Programming Languages: Scala, Python, Java, Golang
Data Processing & Streaming: Apache Spark (batch and Structured Streaming), Akka Streams, Akka HTTP, Kafka, AWS Kinesis, Airflow
Data Warehousing & Analytics: Amazon Redshift, Spark SQL, Hive
Databases: PostgreSQL, MySQL, Oracle
Cloud & Infrastructure: AWS (S3, Kinesis, Redshift)
Experience
October 2021 - Present
Messaging Project
Customer: US SaaS company
Developed core components of an omnichannel messaging platform, enabling enterprise customers to reach users across WhatsApp Business, RCS, and SMS channels through unified APIs.
Responsibilities:
- Implemented key parts of a sender registration system supporting multiple messaging channels (WhatsApp, RCS, and potentially others), handling provider-specific requirements and compliance rules.
- Optimized message delivery latency and reliability while maintaining complex business rules across different OTT providers.
- Resolved critical customer escalations through deep technical investigation and edge case analysis.
- Updated billing logic on a critical path, ensuring accurate and timely billing for customers.
- Contributed to cross-team Scala, Golang and Java projects.
March 2021 - October 2021
Marketing Project
Customer: US technology company
Developed and optimized Apache Spark pipelines processing cross-service engagement data with strict privacy preservation requirements across multiple digital entertainment and subscription platforms.
Responsibilities:
- Engineered high-performance Spark jobs processing TB-scale user engagement data.
- Built privacy-preserving data aggregation pipelines enabling anonymous cross-service analytics.
- Optimized data processing pipelines, reducing job completion times while adhering to data minimization principles.
- Documented dataset lineage and data flow for compliance and reproducibility.
May 2020 - March 2021
Healthcare Project
Customer: US healthcare technology company
Played a key role in modernizing a healthcare data processing platform, enabling efficient transformation of diverse medical records into analytics-ready formats for business intelligence.
Responsibilities:
- Transformed legacy Python batch jobs into scalable Apache Spark workflows, migrating complex healthcare data processing logic.
- Designed and implemented production-grade ETL pipelines handling diverse healthcare data formats from multiple source systems.
- Optimized large-scale data reprocessing jobs, reducing execution time while maintaining HIPAA compliance.
- Developed automated AWS S3 to Redshift data pipeline using PySpark and boto3, enabling real-time BI reporting.
- Contributed significant technical input to architecture decisions affecting multiple project initiatives.
October 2019 - May 2020
Adtech Project
Customer: Israeli adtech company
Developed a real-time ad analytics platform processing high-volume impression data (670+ MPS) to enable automated bidding decisions and campaign optimization.
Responsibilities:
- Implemented Spark Structured Streaming jobs for real-time ad performance analysis
- Optimized data processing architecture reducing operational costs while maintaining system reliability
- Modernized legacy Python ETL pipelines to improve maintainability and processing efficiency
February 2018 - October 2019
Communications Project
Customer: US SaaS company
Developed high-throughput streaming applications for SMS data processing, implementing complex rate calculation logic and generating business intelligence insights.
Responsibilities:
- Optimized messaging metadata enrichment pipeline during Kinesis-to-Kafka migration, achieving 830+ MPS through elimination of processing redundancies.
- Implemented comprehensive cross-stream validation to ensure data consistency between source systems during the transition.
- Built cost-effective internal services replacing external API dependencies, resulting in significant operational savings.
- Led Scala knowledge-sharing initiatives, including mentoring sessions and technical workshops.
June 2017 - December 2017
Digital Transformation Project
Customer: US energy company
Contributed to an enterprise-wide data modernization initiative, migrating traditional database workloads to a cloud-based big data processing platform.
Responsibilities:
- Replaced legacy Oracle and MySQL batch processes with scalable Apache Spark pipelines
- Built new data processing workflows using Spark SQL and Hive, eliminating dependency on legacy SQL jobs
November 2015 - June 2017
IoT Monitoring Platform
Customers: European food and beverage companies
Developed a real-time monitoring system processing data from IoT sensors across warehouse and retail locations, enabling predictive maintenance and inventory optimization.
Responsibilities:
- Implemented scalable data processing pipelines using Apache Spark and Akka Streams to handle real-time sensor data
- Built diagnostic tools enabling QA and hardware teams to validate device performance and data accuracy
- Developed anomaly detection system for early identification of hardware issues
Education
2016-2017
MSc in Computer Science, Lviv Polytechnic National University, Ukraine
Professional Development
2023
Natural Language Processing & AI
- Contributed Ukrainian localization and dataset improvements to the OpenAssistant/oasst1 open-source project
- Built practical applications using transformer models and HuggingFace libraries
- Explored large language models and their enterprise applications
2022
Deep Learning & Time Series Analysis
- Completed fast.ai’s Practical Deep Learning for Coders course
- Implemented time-series forecasting solutions using Facebook’s Prophet
- Applied deep learning techniques to real-world data problems
Languages
English - Advanced