Friday, 31 May 2024

Sure Shot Closure: Azure Synapse Analytics / Azure Data Factory Consultant, Boston, Massachusetts

Azure Synapse Analytics/ Azure Data Factory Consultant

Boston, Massachusetts
Contract

Skills matrix (Must Have / Good to Have; proficiency rated 1 to 5):
1. Azure Synapse Analytics / Azure Data Factory (ADF) - Must Have, Proficiency 4-5
- Hands-on experience with Azure Synapse Analytics, Azure Data Factory, Databricks, Azure Storage, Azure Key Vault, SQL pools, and CI/CD pipeline design, plus other Azure services such as Functions and Logic Apps
- Linked services, the various integration runtimes, datasets, pipelines, and activities
- Strong hands-on experience with activities such as control-flow logic and conditions (ForEach, If, Switch, Until), Lookup, Stored Procedure, Script, Validation, Copy Data, Data Flow, Azure Functions, notebooks, and SQL pool stored procedures
- Serverless SQL pool and dedicated SQL pool
- Strong hands-on experience deploying code across the landscape (Dev -> QA -> Prod) with GitHub and CI/CD pipelines
2. Databricks - Must Have, Proficiency 4-5
• Strong hands-on experience with PySpark and Apache Spark
• Strong hands-on experience with the medallion architecture
• Experience migrating native Spark workloads to Databricks
• Experience building data governance solutions such as Unity Catalog and Azure Purview
• Highly experienced in usability optimization (auto compaction, Z-ordering, vacuuming), cost optimization, and performance optimization
• Build a robust orchestration layer in Databricks/ADF workflows
• Build CI/CD for Databricks in Azure DevOps
• Process near-real-time data through Auto Loader and DLT pipelines
• Implement a security layer in Delta Lake
• Implement massively parallel processing layers in Spark SQL and PySpark
• Implement cost-effective infrastructure in Databricks
• Experience extracting logic and data from on-premises systems, SAP, and ADLS into PySpark/ADLS using ADF/Databricks
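The medallion architecture above is normally built with PySpark and Delta tables on a cluster; since that stack cannot run in a self-contained snippet, here is the bronze -> silver -> gold layering in miniature with plain Python (the sample rows and field names are invented for illustration):

```python
from collections import defaultdict

# Bronze: raw ingested rows, possibly malformed (invented sample data).
bronze = [
    {"user": "a", "amount": "10"},
    {"user": "b", "amount": "5"},
    {"user": "a", "amount": None},   # malformed: dropped in silver
    {"user": "b", "amount": "7"},
]

def to_silver(rows):
    """Silver: validate and cast; drop rows that fail cleansing."""
    out = []
    for r in rows:
        if r.get("amount") is None:
            continue
        out.append({"user": r["user"], "amount": int(r["amount"])})
    return out

def to_gold(rows):
    """Gold: business-level aggregate (total amount per user)."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["user"]] += r["amount"]
    return dict(totals)

gold = to_gold(to_silver(bronze))  # {'a': 10, 'b': 12}
```

In Databricks each layer would be a Delta table written by a notebook or DLT pipeline rather than an in-memory list, but the cleanse-then-aggregate flow is the same.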
3. SQL Server stored procedures - Must Have, Proficiency 4-5
- Strong hands-on experience creating SQL stored procedures
- Functions, stored procedures, calling one stored procedure from another, and processing data record by record
- Dynamic SQL
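SQL Server stored procedures cannot run in a portable snippet, but two of the patterns listed above, record-by-record processing (a cursor/WHILE loop in T-SQL) and dynamic SQL, can be sketched with stdlib sqlite3; the table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "new"), (2, "new"), (3, "shipped")])

# Record-by-record processing, as a cursor/WHILE loop would do in T-SQL.
processed = []
for order_id, status in conn.execute("SELECT id, status FROM orders"):
    if status == "new":
        processed.append(order_id)

# Dynamic SQL: identifiers cannot be bound as parameters, so validate
# them against a whitelist before interpolating; values stay parameterized.
ALLOWED_COLUMNS = {"id", "status"}

def count_by(column: str, value) -> int:
    if column not in ALLOWED_COLUMNS:
        raise ValueError(f"unexpected column: {column}")
    sql = f"SELECT COUNT(*) FROM orders WHERE {column} = ?"
    return conn.execute(sql, (value,)).fetchone()[0]
```

In T-SQL the dynamic statement would go through `sp_executesql` with the same split between validated identifiers and parameterized values.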
4. Python - Must Have, Proficiency 4-5
- Must have a strong background in Python libraries such as PySpark, pandas, NumPy, PyMySQL, and Oracle libraries
- Must have strong hands-on experience getting data through APIs
- Must be able to install libraries and help users troubleshoot issues
- Must know how to get data through stored procedures via Python
- Should be able to debug Python code
5. Spark
- Hands-on experience with Spark pools and PySpark
- Should be able to merge data/delta loads through notebooks
- Must have a strong background in Python libraries and PySpark
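Merging delta loads in a notebook is normally done with Delta Lake's `MERGE INTO` (or `DeltaTable.merge` in PySpark); the upsert-by-key semantics it implements look like this in miniature (the key name `id` and sample rows are illustrative):

```python
def merge_delta(target: list, delta: list, key: str = "id") -> list:
    """Upsert: delta rows update matching target rows by key, and
    unmatched delta rows are inserted (MERGE INTO semantics)."""
    merged = {row[key]: dict(row) for row in target}
    for row in delta:
        merged.setdefault(row[key], {}).update(row)
    return sorted(merged.values(), key=lambda r: r[key])

# Existing table plus an incremental load: id 2 updated, id 3 inserted.
result = merge_delta(
    [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}],
    [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}],
)
```

On a real Delta table the same match/update/insert logic runs distributed and transactionally, without materializing the target in memory.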


Thanks & Regards,

Ashok Huria, Account Manager (Client Management)

Phone: 732-289-3139 | Email: ashok.huria@resource-logistics.com

Resource Logistics, Inc.

