Senior Data Engineer

Job Locations: India-HR-Gurugram
ID: 2024-10321
Function: Technology

Introduction

The Economist Intelligence Unit (EIU) is a world leader in global business intelligence. We help businesses, the financial sector and governments to understand how the world is changing and how that creates opportunities to be seized and risks to be managed.

 

At our heart is a 50-year forward look: a global forecast covering the majority of the world’s economies. We seek to analyse the future and deliver that insight through multiple channels, allowing our clients to take better trading, investment and policy decisions.

 

We’re changing. We are embedding alternative data sources, such as GPS and satellite data, into our forecasting, and our products will increasingly be tailored to individual clients, driven by some of the most innovative data in the market. A highly collaborative team of Product Managers, Customer Experience and Product Engineering specialists is being created, focused on creating business and customer value through real-time analytics alongside our traditional products.

 

We are transitioning to agile product teams and adopting cloud-native engineering practices, and we need your help. As a Senior Data Engineer you will work with an amazing team of developers, designers and product managers to design and build applications that serve EIU clients.

Accountabilities

How you will contribute:

  • Build data pipelines: Architect, create and maintain data pipelines and ETL processes in AWS (see the pipeline sketch after this list)
  • Support and transition: Support and optimise our current desktop data tool set and Excel analysis pipeline while transitioning it to a highly scalable, cloud-based architecture
  • Work in an agile environment: Operate within a collaborative, cross-functional product team using Scrum and Kanban
  • Collaborate across departments: Work closely with data science teams and with business analysts (economists and data analysts) to refine their data requirements and data consumption needs for various initiatives
  • Educate and train: Train colleagues such as data scientists, analysts and stakeholders in data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases
  • Ensure compliance and governance during data use: Ensure that data users and consumers use the data provisioned to them responsibly, supported by data governance and compliance initiatives
  • Work within, and encourage, a DevOps culture and Continuous Delivery process
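
To illustrate the first accountability above, here is a minimal, hypothetical sketch of the kind of ETL step this role might build: a PySpark job that reads raw CSV data from S3, standardises a few columns and writes partitioned Parquet back out. The bucket names, columns and paths are illustrative assumptions, not part of the EIU stack.

    # Hypothetical PySpark ETL step: raw CSV in S3 -> cleaned, partitioned Parquet
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("forecast-etl-example").getOrCreate()

    # Read raw forecast extracts (example bucket/prefix)
    raw = spark.read.option("header", True).csv("s3://example-raw-bucket/forecasts/")

    # Standardise types and drop incomplete rows
    cleaned = (
        raw.withColumn("forecast_date", F.to_date("forecast_date", "yyyy-MM-dd"))
           .withColumn("value", F.col("value").cast("double"))
           .dropna(subset=["country_code", "forecast_date", "value"])
    )

    # Write curated output, partitioned for downstream consumers (example bucket)
    (cleaned.write
            .mode("overwrite")
            .partitionBy("country_code")
            .parquet("s3://example-curated-bucket/forecasts/"))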

Experience, skills and professional attributes

The ideal skills for this role include:

  • Experience programming in Python, Spark and SQL
  • Prior experience with AWS services such as AWS Lambda, Glue, Step Functions, CloudFormation and the CDK (a minimal infrastructure sketch follows this list)
  • Knowledge of building bespoke ETL solutions
  • Data modelling and T-SQL for managing business data and reporting
  • Capable of technical deep-dives into code and architecture
  • Ability to design, build and manage data pipelines for data structures encompassing data transformation, data models, schemas, metadata and workload management
  • Experience working with data science teams to refine and optimise data science and machine learning models and algorithms
  • Effective communication skills
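
As a hint of the infrastructure-as-code side of the role, below is a minimal, hypothetical AWS CDK (v2, Python) sketch that wires a Lambda-backed extract step into a Step Functions state machine. The construct names, asset path and handler are illustrative assumptions, not the EIU's actual stack.

    # Hypothetical CDK v2 stack: one Lambda step orchestrated by Step Functions
    from aws_cdk import Stack, Duration, aws_lambda as _lambda
    from aws_cdk import aws_stepfunctions as sfn, aws_stepfunctions_tasks as tasks
    from constructs import Construct

    class ForecastPipelineStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Lambda that performs the extract step (code lives in ./lambda/extract.py)
            extract_fn = _lambda.Function(
                self, "ExtractFn",
                runtime=_lambda.Runtime.PYTHON_3_11,
                handler="extract.handler",
                code=_lambda.Code.from_asset("lambda"),
                timeout=Duration.minutes(5),
            )

            # Single-step state machine; a real pipeline would chain further tasks
            extract_step = tasks.LambdaInvoke(self, "Extract", lambda_function=extract_fn)
            sfn.StateMachine(
                self, "ForecastPipeline",
                definition_body=sfn.DefinitionBody.from_chainable(extract_step),
            )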
