Data Engineer, Analytics Team
We believe in keeping our teams lean and powerful, and we value results-oriented team players who take real ownership of their work. We are now looking for a Data Engineer to join our Analytics team. This role is at the core of our analytics for Player Support, Trust & Safety and other operations vital to the game’s health.
The data we collect about our operations plays an integral part in many key areas, from understanding and improving the way we do player support to keeping our games fair and safe for everyone. In this position you will be responsible for making sure our ETL processes and data pipelines are rock solid and follow modern practices. You are also a proactive and entrepreneurial problem solver with a passion for using data and facts to identify opportunities and keep improving everything from how we implement data collection to how we understand our impact on the games.
We pride ourselves on the thoughtfulness, quality and focus put into every aspect of our work. Be it the games themselves or how we communicate them to our players, we are passionate about never compromising on quality and about taking a long-term approach to making Supercell a truly global games company that will last for many decades to come.
All of this is way easier said than done. It takes vision, commitment, and super talented (and perhaps slightly crazy) people who will pursue only the very best work possible. If that sounds like you (including the “slightly crazy” part), then we welcome you to apply.
What you will do
- Support data analysts and other functions with ad hoc requests related to data validity and semantics
- Define what data is collected to best serve our business needs
- Maintain data pipelines for various internal and external data sources
- Proactively suggest and implement improvements that increase scalability, robustness and availability of data systems
- Contribute to developing the company-wide vision and strategy for data engineering practices
What we are looking for
- 5+ years of experience in data engineering or another relevant domain
- Expert knowledge of Python and SQL
- Track record of maintaining large scale ETL processes
- Experience with the modern data stack
- Familiarity with large-scale data warehouse technologies (Databricks, BigQuery, Spark, Redshift, Snowflake)
- Familiarity with build, deployment and orchestration tools (Azkaban or Airflow, Jenkins, Terraform)
- Ability to innovate and work independently
- Passion for delivering the highest-quality data in a fast-paced environment
- Fluency in English