Wavicle designs and delivers data and analytics solutions that reduce the time, cost, and risk of companies’ data projects, improving the quality of their analytics and decisions now and into the future. As a privately held consulting services organization with popular, name-brand clients across multiple industries, Wavicle offers exciting opportunities for data scientists, solutions architects, developers, and consultants to jump right in and contribute to meaningful, innovative solutions.
Our 250+ local, nearshore, and offshore consultants, data architects, cloud engineers, and developers build cost-effective, right-fit solutions leveraging our team’s deep business acumen and knowledge of cutting-edge data and analytics technology and frameworks.
At Wavicle, you’ll find a challenging and rewarding work environment where we enjoy working as a team to exceed client expectations. Employees appreciate being part of something meaningful at Wavicle. Wavicle has been recognized by industry leaders as follows:
- Chicago Tribune’s Top Workplaces
- Inc 500 Fastest Growing Private Companies in the US
- Crain’s Fast 50 fastest growing companies in the Chicago area
- Talend Expert Partner recognition
- Microsoft Gold Data Platform competency
About the Role:
We are looking for a Data Engineer with strong real-world experience building data pipelines using emerging technologies.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using technologies such as Hadoop, Spark, and AWS Lambda.
- Perform data integration on AWS using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda across S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems.
- Develop in Python, especially PySpark, in an AWS Cloud environment.
- Design, develop, test, deploy, maintain, and improve data integration pipelines.
- Develop pipeline objects using Apache Spark with PySpark, Python, or Scala.
- Design and develop data pipeline architectures using Hadoop, Spark, and related AWS services.
- Load and performance test data pipelines built using the above-mentioned technologies.
Requirements:
- At least 5 years of hands-on professional work experience with AWS and Python programming, including Python frameworks (e.g., Django, Flask, Bottle), is required.
- Expert-level knowledge of SQL, including writing complex, highly optimized queries across large volumes of data, is required.
- Working experience implementing ETL pipelines using AWS services such as Glue, Lambda, EMR, Athena, S3, SNS, Kinesis, and Data Pipeline, along with PySpark, is required.
- Hands-on experience with a programming language such as Scala, Python, R, or Java is required.
- Hands-on professional work experience with emerging technologies (Snowflake, Matillion, Talend, ThoughtSpot, and/or Databricks) is highly desirable.
- Knowledge of or experience with architectural best practices for building data lakes.
- Strong problem-solving and troubleshooting skills with the ability to exercise mature judgment.
- Bachelor’s or Master’s degree in Computer Science or a related field is required.
- Must reside in Knoxville, TN, or be open to relocation.
- Relocation assistance is available for full-time, salaried hires.
Equal Opportunity Employer
Wavicle is an Equal Opportunity Employer and is committed to creating an inclusive environment for all employees. We welcome and encourage diversity in the workplace regardless of race, color, religion, national origin, gender, pregnancy, sexual orientation, gender identity, age, physical or mental disability, genetic information, or veteran status.
Wavicle Data Solutions