Sr. Data Engineer
We are fast-paced, forward-thinking, and driven by data.
We are accelerating the used car industry.
We are looking for creative, talented, hard-working individuals to join us.
Buckle up. It's going to be a great ride.
Based in Chicago, DRIVIN is accelerating the used car industry by bringing data and technology together in a spectacular, first-of-its-kind fashion to help dealers acquire the right cars, at the right price, right now. We are committed to delivering data-driven solutions with a high-touch, personalized level of service to each of our clients. We are looking for people who will help us create a culture that is exciting, empowering, motivating and challenging. Are you interested in learning more?
DRIVIN is looking to expand our data team as we continue to grow our data platform. The candidate should have a strong background in Python and SQL. As a member of the data team, your main responsibilities will be implementing and maintaining ETL jobs, using Python to ingest external data sources into the Data Warehouse, and working closely with the Product and Data Science teams to deliver data in usable formats to the appropriate data stores.
DRIVIN has a polyglot data model built on many cutting-edge data platforms. We currently use MPP Postgres (Greenplum, Netezza, DBX) as our Data Warehouse, Elasticsearch for location-based searching, and Postgres for transactional data.
The candidate should be a self-starter who is interested in learning new systems and environments and in building new solutions. The candidate will also work closely with the Data Science team to identify useful data points for modeling.
DRIVIN's tech stack is cutting edge. MPP Postgres drives the Data Warehouse, Elasticsearch enables our location-based searching and metrics, and Apache Spark is used to train our models. All environments run on AWS EC2/RDS/S3, and our data processing framework is written in Python.
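For a sense of the ingestion work involved, a daily ETL job might follow a simple extract/transform/load shape like the sketch below. This is illustrative only: the feed format, field names, and the in-memory list standing in for a warehouse insert are hypothetical, not DRIVIN's actual pipeline.

```python
import json

def extract(raw_feed):
    """Parse a raw JSON feed of vehicle listings (hypothetical external source)."""
    return json.loads(raw_feed)

def transform(records):
    """Normalize fields and drop listings missing a VIN or price."""
    cleaned = []
    for r in records:
        if r.get("vin") and r.get("price") is not None:
            cleaned.append({
                "vin": r["vin"].upper(),
                "price_usd": round(float(r["price"]), 2),
                "mileage": int(r.get("mileage", 0)),
            })
    return cleaned

def load(rows, warehouse):
    """Append cleaned rows to the warehouse table (a list standing in for a DB insert)."""
    warehouse.extend(rows)
    return len(rows)

raw = '[{"vin": "1hgcm82633a004352", "price": "7995.5", "mileage": "82000"}, {"vin": null, "price": "5000"}]'
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
# loaded == 1; the listing without a VIN is dropped
```

In production such a job would write to Postgres (e.g. via bulk COPY) rather than a Python list, and would be scheduled to run daily.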
Responsibilities:
- Architect and implement ETL jobs for various functions
- Support and maintain daily ETL jobs
- Support the development teams by optimizing data access
- Work with data science teams to deliver metrics to consumers
Requirements:
- Experience with SQL
- Experience with Python or Java
- Experience with Postgres is preferred
- Experience with Spark and Scala preferred
- Experience working in an Agile environment preferred