About Crowe Data Science
Everything we do is about making the future of human work more purposeful. We do this by leveraging state-of-the-art machine learning, modern architectures, and industry experts to create AI-powered software-as-a-service.
Our world-class Data Science professionals give Crowe the ability to capture and harness the power of data. A combination of data scientists and machine learning engineers creates and deploys AI functionality in software to solve complex problems. This type of group, rarely seen anywhere except large tech companies, focuses on building scalable AI products, not just one-time insights. You will learn and build on a wide range of technologies – distinguishing Crowe in the market and driving the firm’s technology and innovation strategy.
The future is powered by AI. Come build it with us.
About The Team
We believe machine learning is software. Our Data Scientists experiment in Jupyter notebooks, but notebooks are not the end of our machine learning pipelines. Our team uses modern frameworks and tools (git, Docker, Kubernetes) to engineer machine learning solutions.
We foster great Software Engineering. You are given the time and resources you need to shepherd software best practices in your project, and you will guide fellow machine learning engineers, data scientists, and other Crowe staff through your work.
We want to make good engineers better. Engineers have plenty of options on the job market. We want to be one of the best ones. We have regular opportunities for team members to teach each other, support for conferences, and recurring reading groups. Moreover, our team offers generous PTO, a flexible work from home policy, and a strong commitment to work-life balance. Company-wide, Crowe routinely wins awards for our support of our staff, including [to be filled in by Talent Services]
About The Role
We work on medium-term machine learning projects (4-8 months) with a high degree of autonomy.
We are not external facing. We harness the subject matter expertise of other Crowe teams to build machine learning products.
We architect and build complex distributed computing systems utilizing High Performance Computing (HPC) resources on-premise and in the cloud.
We design and implement efficient and scalable machine learning systems in Python, from data collection and training to deployment and real-time serving.
We write and review code for productionizing machine learning models from the RESTful API specification to deployment automation.
Education. An advanced degree in Computer Science or a related technical field.
Docker practitioner. Minimum two years of experience with container orchestration and virtualization frameworks. Experience with a Kubernetes package manager such as Helm is a plus.
Linux experience. Configuring and troubleshooting Linux-based systems. You will be performing daily work with a Linux laptop.
Continuous Integration and Deployment. Implementing multi-stage CI/CD pipelines preferably in GitLab with automated deployments.
Cloud Computing familiarity. Experience with at least one cloud computing platform; experience with Microsoft Azure is a plus.
Distributed Systems Know-How. Designing, deploying, and troubleshooting distributed systems.
Networking chops. Strong networking knowledge (OSI network layers, working with proxies, API management).
Interface Savvy. Creating and interacting with RESTful HTTPS APIs, websockets, and webhooks.
Programming experience. Your job will require you to code - we write Python and Go, so experience in these languages is preferred. Python web framework experience is a plus.
DevOps experience. You enjoy DevOps. You've had some experience monitoring and maintaining mission-critical systems and services with uptime requirements.
Capacity for autonomy. You won't have to hit the ground running on day one, but you will have to manage your time between staying on the cutting edge, debugging issues with existing deployments, and delivering current sprint work.
Strong communication skills. You can both dive deep into DevOps, software engineering, and infrastructure topics and explain those concepts to a non-technical audience.
Scrappiness. You like to code and have an interest in learning new technologies without significant outside direction. You aren't afraid to take a stab at firing up a proof-of-concept of an experimental piece of technology before it becomes widely used and documented. Using Docker and Kubernetes as regular parts of your workflow sounds exciting.
Lifelong Engineering. You want to stay sharp as a practicing engineer. Continuous improvement is a mantra for you: any system you work with should gradually improve over time.