If interested, please send a Word version of your resume to firstname.lastname@example.org
THE COMPANY: A leading media and entertainment company whose new streaming entertainment offering is the culmination of some of the most innovative new technology and greatest creative talent in the industry.
Your primary responsibility will be to develop the data models and semantic layer used in the analyses and dashboards that drive decision-making across the organization.
You will define key business metrics, automate data transformation & testing, and advance standards for data quality and self-service analytics.
DAY TO DAY:
Build data models in Snowflake and Looker that support flexible querying and data visualization
Advance automation efforts in data processing & testing that help the team spend less time manipulating & validating data and more time analyzing it
Participate in the creation and support of analytics development standards and best practices for our Airflow, Snowflake, and Looker environments
Rapidly deliver on concepts through prototypes that can be presented for feedback
Train fellow employees on best practices for analytics data modeling and help others act as successful stewards of our data and tools
Assist with the selection, implementation, and integration of new data tools
QUALIFICATIONS:
5+ years of relevant experience in analytics/data engineering
5+ years of writing clean, optimized SQL
Significant experience with general-purpose programming (e.g., Python, Java, Go) and with a variety of data structures, algorithms, and serialization formats
Experience in transforming flawed/changing data into consistent, trustworthy datasets, and in developing DAGs to batch-process millions of records
Expertise in data-warehousing concepts such as star schemas, slowly changing dimensions, ELT/ETL, and MPP databases
Advanced ability to build reports and dashboards with business-intelligence tools (such as Looker and Tableau)
Proficiency with Git (or similar version control) and CI/CD best practices
Ability to write clear, concise documentation, and to communicate generally with a high degree of precision
Ability to solve ambiguous problems independently
Ability to manage multiple projects and time constraints simultaneously
Care for the quality of the input data and how the processed data is ultimately interpreted and used
Experience with big-data technologies (e.g. Spark, Kafka, Hive)
Bachelor’s Degree in Computer Science, Engineering, Mathematics, or similar required
Master’s degree is a plus
The Company is an equal opportunity employer and makes employment decisions on the basis of merit and business needs. The Company will consider all qualified applicants for employment without regard to race, color, religious creed, citizenship, national origin, ancestry, age, sex, sexual orientation, genetic information, physical or mental disability, veteran or marital status, or any other class protected by law. To comply with applicable laws ensuring equal employment opportunities to qualified individuals with a disability, the Company will make reasonable accommodations for the known physical or mental limitations of an otherwise qualified individual with a disability who is an applicant or an employee unless undue hardship to the Company would result.