Haikal Zaini
About Candidate
Data Engineer with 5+ years of experience building, maintaining, and optimizing data pipelines and data processing systems. Experienced in supporting business and campaign teams by delivering reliable, well-structured data through SQL, stored procedures, and data workflows.
Currently working as a DevOps/Data Engineer, responsible for monitoring and maintaining production data pipelines using PySpark, shell scripting, and Talend. Skilled in optimizing jobs to improve performance and reduce manual processes while ensuring high availability and data accuracy.
Hands-on experience with big data technologies, including the Hadoop ecosystem (Hive, Impala) and Google BigQuery, for data extraction, transformation, and analytics support.
Education
Studied Information Technology with a focus on software development, databases, and data processing. Gained foundational knowledge in programming, data structures, and system design, which supports my current work in data engineering and data pipelines.
Work & Experience
Maintained and monitored production data pipelines to ensure high availability and data accuracy using PySpark, shell scripting, and Talend. Optimized and automated data processing jobs to improve performance and reduce manual work. Delivered data solutions using Hive, Impala, and BigQuery to support business and analytics requirements.
Provided data solutions to support business and campaign teams by developing SQL queries and stored procedures. Ensured accurate and efficient data processing for reporting and operational needs, handling data requests, maintaining workflows, and keeping data available and reliable.
Supported data processing and system operations, including handling data requests, troubleshooting issues, and assisting with database-related tasks. Ensured smooth data flow and system reliability.
Developed and maintained big data pipelines for enterprise clients, focusing on revenue data processing systems built with PySpark (Python and Apache Spark). Built and implemented data processing jobs for large-scale data projects using Talend and Hadoop, ensuring reliable and efficient data workflows. Developed stored procedures in Google BigQuery to support data transformation and analytics requirements across multiple business use cases. Collaborated with cross-functional teams to deliver scalable, high-performance data solutions.
Developed and maintained client web applications, ensuring functionality, performance, and reliability. Worked on backend and frontend features, including database integration, data processing, and system enhancements based on client requirements. Collaborated with teams to deliver scalable, user-friendly web solutions while handling bug fixes, system improvements, and ongoing maintenance.

