Data Engineer I

Job Locations India-HR-Gurgaon




The Economist Group is the leading source of analysis on international business and world affairs. We deliver world-class, thought-provoking content through a range of formats, from web and app to newspapers and magazines, conferences, film and audio. What ties us together is the objectivity of our opinion, the originality of our insight and our advocacy of economic and political freedom around the world.


With a growing global subscriber base and a reputation for insightful analysis and opinion on every aspect of world events, The Economist is one of the most widely recognised and well-read current affairs publications and the foundation of our digital consumer product portfolio.


The Role

We are recruiting data engineers across all levels to create a best-in-class global data engineering hub based in Gurgaon. This group will build the data platform and products to support the analytics, insights and data science needs of our core subscription business, the Economist Intelligence Unit, Client Solutions and our Events business. Data has always been at the heart of The Economist Group, and ultimately we aim to help the business make data-driven decisions based on real-time, actionable insights. This position is for a Data Engineer I (Big Data).


How you will contribute: 


  • Partner with data product managers, analysts and data scientists to build and enhance data pipelines and data warehouses/lakes for analytics, reporting and AI/ML use cases.
  • Collaborate with your Tech Lead to build data pipelines and products using Spark, Hive, Hadoop and Airflow.
  • Work on a multi-TB-scale data platform on Snowflake/AWS.
  • Learn to use data quality and monitoring tools, CI/CD and test frameworks to maintain data engineering excellence.

Experience, skills and professional attributes

The ideal candidate for this role:

  • You have a degree in software engineering, computer science or a similar field, and 1-2 years of data engineering experience building distributed data systems/platforms or backend services.
  • You have some experience programming in Python or a JVM-based language (Scala/Java).
  • You understand database design and modelling techniques.
  • You are good at writing complex SQL queries.
  • You have worked on big data pipelines using Spark, Hive, Hadoop and Kafka.
  • You have experience working with Agile/Scrum methodologies.


It would be an advantage if you also have:

  • Experience with enterprise ETL and orchestration tools such as Talend or Informatica.

