This position is accountable for designing and developing the Data Virtualization and ETL processes for the implementation of the Data Marketplace solution.
Qualifications (Education, Experience, and Technical Skills):
- Minimum education requirement: Bachelor of Engineering in Computer Science or a related field
- 8+ years of experience in data integration from key business systems, both on-premises (Hadoop, Teradata, Oracle) and on cloud (AWS, GCP)
- Well versed in Data as a Service and Data Marketplace concepts
- At least 4 years of experience implementing Data Virtualization, preferably using IBM Cloud Pak for Data
- Minimum 4 years of experience with data processing tools such as IBM DataStage, Informatica, AWS Glue, or GCP Dataflow
- At least 2 years of experience in real-time data processing using Kafka, Amazon Kinesis, and AWS Lambda
- Advanced SQL skills for analysis, standardizing queries, and building data platforms involving large-scale relational and non-relational datasets
- Basic knowledge of at least one visualization tool (Tableau, Power BI, QuickSight)
- Understanding of Java Spring Boot and UI frameworks (Angular/React) is an added advantage
- Passion for detail and data quality, the ability to identify weaknesses in data and processes, and a willingness to drive improvement
Responsibilities:
- Designing and developing the Data Integration process
- Designing and developing the Data Virtualization process
- Designing and developing the Data subscription and publication process
- Interacting with Data Analysts to understand data cleansing/transformation rules and implementing them
| Skills | Years of experience |
| --- | --- |
| Hadoop | |
| Teradata | |
| Oracle | |
| AWS/GCP | |
| Data Virtualization (preferably IBM Cloud Pak for Data) | |
| Data processing tools (IBM DataStage, Informatica, AWS Glue, GCP Dataflow) | |
| Real-time data processing (Kafka, Amazon Kinesis, AWS Lambda) | |