PagerDuty is growing, and we are looking for a Data Engineer III to join our global Enterprise Data team in IT to manage and contribute to the software and services that we provide to our users. As a Data Engineer III, you will be responsible for designing, building, deploying, and supporting solutions for teams across PagerDuty's growing global user base. You are scrappy, independent, and excited about having a big impact on a small but growing team.
Together with the other members of the Data Platform team, you will have the opportunity to redefine how PagerDuty designs, builds, integrates, and maintains a growing set of software and SaaS solutions. In this role, you will work cross-functionally with business domain experts, analytics, and engineering teams to redesign and reimplement our Data Warehouse models. The ideal candidate will have experience as a technical team lead, excellent English communication skills (both written and verbal), and the ability to provide technical direction to the team in Chile.
KEY RESPONSIBILITIES
- Translate business requirements into data models that are easy to understand and used by different disciplines across the company. Design, implement, and build pipelines that deliver data with measurable quality within SLAs.
- Partner with business domain experts, data analysts, and engineering teams to build foundational data sets that are trusted, well understood, aligned with business strategy, and enable self-service.
- Champion the overall strategy for data governance, security, privacy, quality, and retention that satisfies business policies and requirements. Assist with initiatives to formalize data governance and management practices, rationalize our information lifecycle, and standardize key company metrics.
- Own and document foundational company metrics with a clear definition and data lineage.
- Identify, document, and promote best practices. Contribute to the data ecosystem and build a strong data foundation for the company.
- Design, implement, and scale data pipelines that transform billions of records into actionable data models that enable data insights.
- Provide hands-on technical support to build trusted and reliable domain-specific datasets and metrics.
BASIC QUALIFICATIONS
- 5+ years of experience working in data integration, pipelines, and data modeling.
- Experience designing and deploying code on data platforms in cloud-based, Agile environments.
- Knowledge of and experience with relational databases, including the ability to write complex SQL.
- Experience working with a cloud-based data warehousing platform such as Snowflake, AWS, or Databricks.
- Experience with Python and SQL.
- Experience working with AWS S3 and ETL/automation workflow tools such as Workato.
PREFERRED QUALIFICATIONS
- Bachelor's degree in Computer Science, Engineering, or related field, or equivalent training, fellowship, or work experience.
- Knowledge of at least one data visualization tool, such as Tableau or Power BI.
- Excellent written and verbal communication and interpersonal skills, able to effectively collaborate with technical and business partners.
- Ability to quickly pick up and jump into any new technology as needed, and a strong team player.
- Hands-on experience with generative AI projects using tools such as Snowflake or Databricks is a big plus.
PagerDuty is a flexible, hybrid workplace. We embrace and encourage in-person working as an integral part of our culture. Both our employees and external research tell us that co-located collaboration strengthens connections, drives innovation, and accelerates learning.
This role is expected to work from our Santiago office 2 days per week, so you can thrive in your new role and fully embrace being a Dutonian!