Tuesday 29 November 2022

Subject: Job Title: Data Engineer - Databricks, Spark || CTH || 2 Months Remote

Hello,

My name is Mahima Sharma and I'm a recruiter with Resource Logistics, Inc. Our records show that you are a qualified professional with the skills and experience required to fill the Data Engineer - Databricks, Spark role below. This position is based in Pittsburgh, PA, United States, and is a contract position.

THE COMPLETE JOB DESCRIPTION IS BELOW FOR YOUR REVIEW:

Job ID: 22-27567
Job Title: Data Engineer - Databricks, Spark
Job Location: Pittsburgh, PA, United States
Job Duration: 12+ Months
Type of Hire: Contract
Mode of Interview: Telephone interview followed by video interview

JOB DETAILS:

Data Engineers will be responsible for designing, building, and maintaining data pipelines, ensuring data quality, efficient processing, and timely delivery of accurate and trusted data.

The ability to design, implement, and optimize large-scale data and analytics solutions on Databricks, Spark, and the Snowflake Cloud Data Warehouse is essential.
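
For a concrete flavor of what this involves, below is a minimal PySpark sketch of such a pipeline: read raw files, apply a basic quality gate and an aggregation, and write the result to Snowflake through the Spark-Snowflake connector. All paths, table names, and connection values are hypothetical placeholders; the write options follow the connector's documented sfURL/sfUser convention, but treat this as a sketch under those assumptions, not a prescribed implementation.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_events_load").getOrCreate()

    # Read raw JSON landed in cloud storage (path is illustrative).
    raw = spark.read.json("s3://example-bucket/raw/events/2022-11-29/")

    # Basic quality gate: drop records missing required keys, then deduplicate.
    clean = (raw.dropna(subset=["event_id", "event_ts"])
                .dropDuplicates(["event_id"]))

    # Aggregate to a daily per-customer summary.
    daily = (clean.groupBy("customer_id", F.to_date("event_ts").alias("event_date"))
                  .agg(F.count("*").alias("event_count"),
                       F.sum("amount").alias("total_amount")))

    # Write to Snowflake via the Spark-Snowflake connector ("snowflake" format).
    # Connection values are placeholders; in practice they come from secrets.
    sf_options = {
        "sfURL": "myaccount.snowflakecomputing.com",
        "sfUser": "etl_user",
        "sfPassword": "***",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "LOAD_WH",
    }
    (daily.write.format("snowflake")
          .options(**sf_options)
          .option("dbtable", "DAILY_CUSTOMER_EVENTS")
          .mode("append")
          .save())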

Ensure performance, security, and availability of the data warehouse.

Establish ongoing end-to-end monitoring for the data pipelines.
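
One lightweight shape such monitoring can take (a sketch under assumed names, not a required design) is an audit table that records a row count per pipeline run, with the job failing loudly on an empty load so a scheduled check can alert on late or missing entries:

    from datetime import datetime, timezone
    from pyspark.sql import Row, SparkSession

    spark = SparkSession.builder.appName("pipeline_audit").getOrCreate()

    def record_run(pipeline: str, row_count: int) -> None:
        # Append one audit row per run; the table name is illustrative.
        audit = spark.createDataFrame([Row(
            pipeline=pipeline,
            row_count=row_count,
            run_ts=datetime.now(timezone.utc).isoformat(),
        )])
        audit.write.mode("append").saveAsTable("ops.pipeline_audit")
        # Fail loudly on an empty load so the problem surfaces immediately.
        if row_count == 0:
            raise ValueError(f"{pipeline}: loaded 0 rows")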

A strong understanding of the full CI/CD lifecycle is required.

Must Haves:

·       2+ years of recent experience with Databricks, Spark, and Snowflake, and 6+ years total in a data engineering role.

·       Designing and implementing highly performant data ingestion pipelines from multiple sources using Spark and Databricks.

·       Extensive working knowledge of Spark and Databricks

·       Demonstrable experience designing and implementing modern data warehouse/data lake solutions with an understanding of best practices.

·       Hands-on development experience with the Snowflake data platform, including Snowpipe, SnowSQL, tasks, stored procedures, streams, resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, cloning, time travel, and data sharing (see the Snowpipe sketch after this list).

·       Advanced proficiency in writing complex SQL statements and manipulating large structured and semi-structured datasets.

·       Data loading/unloading and data sharing

·       Strong hands-on experience with SnowSQL queries, script preparation, stored procedures, and performance tuning

·       Knowledge of Snowpipe implementation

·       Create Spark jobs for data transformation and aggregation

·       Produce unit tests for Spark transformations and helper methods (see the pytest sketch after this list)

·       Security design and implementation on Databricks

·       Build processes supporting data transformation, data structures, metadata, dependency and workload management.

·       A successful history of manipulating, processing and extracting value from large disconnected datasets.

·       Working knowledge of message queuing, stream processing, and scalable 'big data' data stores.
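
To illustrate the Snowpipe item above, here is a minimal sketch using the snowflake-connector-python driver. The pipe, stage, and table names are hypothetical, and RAW_EVENTS is assumed to be a single-VARIANT-column landing table (loading JSON with COPY INTO requires that, or a column-matching option):

    import snowflake.connector

    # Connection values are placeholders; in practice they come from a secrets manager.
    conn = snowflake.connector.connect(
        account="myaccount",
        user="etl_user",
        password="***",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    try:
        # A pipe wraps a COPY INTO statement; with AUTO_INGEST = TRUE, Snowflake
        # loads new files as the external stage's cloud notifications arrive.
        conn.cursor().execute("""
            CREATE PIPE IF NOT EXISTS events_pipe AUTO_INGEST = TRUE AS
            COPY INTO RAW_EVENTS
            FROM @events_stage
            FILE_FORMAT = (TYPE = 'JSON')
        """)
    finally:
        conn.close()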
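
And for the Spark-job and unit-test items, a minimal sketch of a testable transformation plus a pytest test that runs it on a local SparkSession; the function, column names, and tax-rate example are made up for illustration:

    import pytest
    from pyspark.sql import DataFrame, SparkSession, functions as F

    def add_total_with_tax(df: DataFrame, tax_rate: float = 0.07) -> DataFrame:
        # A pure function of its input DataFrame, which is what makes it testable.
        return df.withColumn("total", F.col("amount") * (1 + tax_rate))

    def test_add_total_with_tax():
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        df = spark.createDataFrame([(1, 100.0)], ["id", "amount"])
        row = add_total_with_tax(df, tax_rate=0.10).collect()[0]
        assert row["total"] == pytest.approx(110.0)

Keeping transformations as pure DataFrame-in, DataFrame-out functions, as above, is what makes Spark logic practical to unit test.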

Good to Have:

·       Valid professional certification

·       Experience in Python/PySpark/Scala/Hive programming.

·       Excellent verbal and written communication and interpersonal skills

·       Confidence and agility in challenging times

·       Ability to work collaboratively with cross-functional teams in a fast-paced, team environment


