Job Role: Sr. AWS Data Engineer (10+)
Location: Irvine, CA / Remote
Long-Term Contract
- Expertise in ETL optimization, designing, coding, and tuning big data processes using Apache Spark or similar technologies.
- Experience building robust and scalable data integration (ETL) pipelines using SQL, Python, Spark, or PySpark. Advanced knowledge of at least one of these programming languages is a must.
- Experience building data pipelines and applications that stream and process datasets at low latency.
- Experience developing real-time, scalable systems using Apache Kafka, Confluent Kafka, or Kafka Streams.
- Efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.
- Good understanding of AWS technologies (S3, AWS Glue, CDK, ECS, EMR, Redshift, Athena).
- Sound knowledge of distributed systems and data architecture (e.g., Lambda architecture): ability to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP of high-level data structures.
- Knowledge of engineering and operational excellence using standard methodologies.
- Experience with process improvement, workflow, benchmarking, and/or evaluation of business processes.
- Familiarity with CI/CD processes.
- Ability to work in a fast-paced Agile environment.
- Experience providing technical leadership and mentoring junior engineers on data engineering best practices.
- Experience building REST APIs for data transfer.
- Background in Java and the Spring framework is a plus.
- Proficiency with Atlassian tools (e.g., Jira) and Git is a must.
Thanks & Regards,
Dushyant Som
Lead Technical Recruiter
Phone: (469) 697 2496
Email: dushyant.som@convextech.com
Website: www.convextech.com