Tuesday 16 March 2021

C2C requirement - Data Engineer - Hartford, CT (Remote right now)

 
Title: Data Engineer
Location: Hartford, CT (Remote right now)
Pay Rate: $82/hr On C2C
 
Experience: 10+ Years of experience is required
 
Mandatory Skills: Talend, PySpark, Redshift
 
Job description:
  • The Senior Data Engineer will support and provide expertise in data ingestion, wrangling, and cleansing technologies. In this role they will work with relational and unstructured data formats to create analytics-ready datasets for analytic solutions.
  • The Senior Data Engineer will partner with the Data Analytics team to understand its data needs and build data pipelines using cutting-edge technologies.
  • They will perform hands-on development to create, enhance and maintain data solutions enabling seamless integration and flow of data across our data ecosystem.
  • These projects will include designing and developing data ingestion and processing/transformation frameworks leveraging open-source tools such as Python, Spark, and PySpark.
 
Responsibilities:
  • Translate data and technology requirements into our ETL / ELT architecture.
  • Develop real-time and batch data ingestion and stream-analytic solutions leveraging technologies such as Kafka, Apache Spark, Java, NoSQL DBs, AWS EMR.
  • Develop data driven solutions utilizing current and next generation technologies to meet evolving business needs.
  • Develop custom cloud-based data pipelines.
  • Provide support for deployed data applications and analytical models by identifying data problems and guiding issue resolution with partner data engineers and source data providers.
  • Provide subject matter expertise in the analysis, preparation of specifications and plans for the development of data processes.
 
Qualifications:
  • Strong experience with data ingestion, gathering, wrangling, and cleansing tools such as Apache NiFi, Kylo, scripting, Power BI, Tableau, and/or Qlik
  • Experience with data modeling, data architecture design and leveraging large-scale data ingest from complex data sources
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Advanced SQL knowledge, experience with relational databases and query authoring, and working familiarity with a variety of databases.
  • Strong knowledge of analysis tools such as Python, R, Spark, or SAS; shell scripting and R/Spark on Hadoop or Cassandra preferred.
  • Strong knowledge of data pipelining software, e.g., Talend or Informatica
Thanks,
_______________________________________
Govind Singh Kushwah | New York Technology Partners
120 Wood Avenue S | Suite 504 | Iselin NJ 08830
Direct: 201.604.3877 | www.nytp.com | Join me on LinkedIn
 
 Please add me on LinkedIn to stay in touch.
 
We respect your online privacy. If you would like to be removed from our mailing list please reply with "Remove" in the subject and we will comply immediately. We apologize for any inconvenience caused. Please let us know if you have more than one domain. The material in this e-mail is intended only for the use of the individual to whom it is addressed and may contain information that is confidential, privileged, and exempt from disclosure under applicable law. If you are not the intended recipient, be advised that the unauthorized use, disclosure, copying, distribution, or the taking of any action in reliance on this information is strictly prohibited.
 
