Job Role: Kafka Engineer (10+ years)
Location: Irvine, CA / Remote
Long Term Contract
- Expertise in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies.
- Experience building robust and scalable data integration (ETL) pipelines using SQL, Python, Spark, or PySpark. Advanced knowledge of at least one of these programming languages is a must.
- Maintain and enhance Confluent Kafka architecture, design principles, and CI/CD deployment procedures.
- Experience building streaming applications with Kafka (Confluent Kafka preferred; open-source Apache Kafka acceptable)
- Development experience with Kafka producers, consumers, and Kafka Streams (Confluent Kafka preferred; open-source Apache Kafka acceptable)
- Efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.
- Good understanding of AWS technologies (S3, AWS Glue, CDK, ECS, EMR, Redshift, Athena)
- Sound knowledge of distributed systems and data architecture (e.g., Lambda architecture); able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP handling of high-level data structures.
- Knowledge of Engineering and Operational Excellence using standard methodologies.
- Experience with process improvement, workflow, benchmarking, and/or evaluation of business processes.
- Familiarity with CI/CD processes.
- Ability to work in a fast-paced Agile environment.
- Experience providing technical leadership and mentoring junior engineers on data engineering best practices.
Thanks & Regards,
Dushyant Som
Lead Technical Recruiter
Phone: (469) 697 2496
Email: dushyant.som@convextech.com
Website: www.convextech.com