MileIQ
Headquarters: San Francisco, CA
https://www.mileiq.com/

Lead DataOps Engineer

What we are looking for in this role:

A DataOps Engineer applies agile development and DevOps principles to the data discipline. He or she orchestrates and automates data pipelines to make them more flexible while maintaining a high level of quality. The DataOps Engineer uses tooling to break down the barriers between operations, analytics, and data science, unlocking a high level of productivity for the entire team.

Why we feel this is a unique opportunity:

Over 2 million users trust MileIQ, the top-grossing finance app in both app stores, to automatically log their mileage for their largest deductions and reimbursements ever.

At MileIQ, we work in small, independent, cross-functional Scrum teams to craft innovative software solutions for our customers' needs.

Our Services team is fast-paced and agile, with weekly production pushes. We are a community of talented engineers who are passionate about building highly scalable, data-intensive solutions on an open-source software stack.


We make architecture decisions collaboratively and iterate quickly. We keep alive an inherently startup-like, risk-taking culture, but we also take seriously the responsibility of carrying one of the world's most recognizable software brand names: Microsoft.

Join us in reinventing the future of productivity!

What you will have ownership for delivering:

  • Manage, organize, and test fault-tolerant data and ML pipelines that process large amounts of data from many diverse storage systems.
  • Drive the release cycle of our internal data products, using automation to ensure high-quality code and accelerate the pace of development.
  • Troubleshoot issues, including optimizing SQL queries, ETL jobs, and ML pipelines.
  • Collaborate with data scientists and engineers to deploy new machine learning and deep learning models into complex, mission-critical production systems. Select the right tool(s) for the job and make them work in production.
  • Promote a culture of self-serve data analytics by minimizing technical barriers to data access and understanding.
  • Maintain a relentless focus on automation all around.
  • Stay current with the latest research and technology, and communicate your knowledge throughout the enterprise.
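To give a flavor of the fault-tolerance and automation work described above, here is a minimal, hypothetical sketch (not MileIQ's actual code) of a pipeline step that retries transient failures before surfacing them; the step name and retry parameters are illustrative assumptions:

```python
import time
from functools import wraps

def with_retries(max_attempts=3, backoff_seconds=0.1):
    """Retry a pipeline step a few times before giving up."""
    def decorator(step):
        @wraps(step)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return step(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # retries exhausted: surface the failure
                    time.sleep(backoff_seconds * attempt)  # linear backoff
        return wrapper
    return decorator

@with_retries(max_attempts=3)
def load_batch(records):
    # Placeholder for a real load into a data store.
    return len(records)
```

Wrapping each step this way keeps transient storage or network errors from killing a whole pipeline run, while still failing loudly once retries are exhausted.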

What you will need to be successful:

  • 3+ years’ industry experience working with data in a production environment.
  • Very strong Linux skills and a passion for automating everything.
  • System administration, performance tuning, and troubleshooting experience with distributed data stores (SQL data warehouses, NoSQL databases).
  • Ability to perform backup and restore of data.
  • Experience with CI/CD tools (e.g., VSTS or Jenkins).
  • A good understanding of metric collection for monitoring and alerting; experience with monitoring tools such as New Relic or Graphite is a plus.
  • Strong programming skills in a variety of languages (e.g., Python, Bash).
  • Understanding of relational database systems (e.g., MS SQL Server, SQL Data Warehouse).
  • Experience with distributed computing frameworks (e.g., Spark or TensorFlow).
  • Passion for data democratization.
  • Experience working with containers (Docker, Kubernetes) or other related technologies.
  • BS/MS in Computer Science or another engineering discipline.
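Since Graphite comes up under monitoring above, here is a minimal sketch of pushing one metric sample over Graphite's plaintext protocol (one `path value timestamp` line per sample, sent to the default TCP listener on port 2003); the metric name and host are placeholders, not part of this posting:

```python
import socket
import time

def graphite_line(metric, value, timestamp=None):
    """Format one sample in Graphite's plaintext protocol: 'path value ts\\n'."""
    ts = int(timestamp if timestamp is not None else time.time())
    return f"{metric} {value} {ts}\n"

def send_metric(metric, value, host="localhost", port=2003):
    # 2003 is Graphite's default plaintext (Carbon) listener port.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(graphite_line(metric, value).encode("ascii"))
```

A pipeline step might call something like `send_metric("pipeline.rows_loaded", 1200)` after each batch, which is the kind of metric collection for alerting the requirement describes.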

Salary: $120,000 to $160,000 per year