The Machine Learning Developer is responsible for developing and deploying advanced analytics to improve decision-making at Staples. The individual plays a key role in the development of state-of-the-art systems which utilize Data Science techniques such as machine learning, statistical analysis, and mathematical optimization. These systems’ requirements will come from all parts of Staples and will include challenges like:
• Architecting and developing SKU demand prediction and item sourcing engines that drive growth and profitability while minimizing Staples’ investment in inventory.
• Building and deploying applications that reduce order fulfillment costs across the warehouse and transportation network through robotic automation, process control, and capacity optimization.
• Engineering and implementing a planning platform to ensure the right products are offered to customers and stocked in the right locations with the right replenishment policies.
• Work closely with key partners to understand business requirements, and then architect, develop, deploy, and maintain analytical systems.
• Investigate design approaches, prototype new technology, and evaluate technical feasibility.
• Deliver data science software solutions against aggressive schedules.
• 0-3 years of industry experience plus a PhD, 3-5 years plus an MS, or 5-7 years plus a Bachelor’s degree in Computer Science, Computer Engineering, or a related field
• Hands-on experience developing advanced applications that use machine learning, data mining, statistical inference, mathematical modeling, and mathematical optimization techniques and technologies
• Experience building scalable infrastructure and large-scale systems using multi-threaded programs and parallel, high-performance distributed computing for commercial online services
• Strong fundamentals in algorithms and data structures. Keen attention to detail, ability to dive deep, and easily adaptable to changes in environment and prioritization
• Experience architecting machine learning infrastructure and developing and deploying large-scale data science applications (e.g., forecasting, search, personalization, recommendation, fraud detection engines, A/B testing/experimentation platforms, benchmarking systems) using microservices
• Experience working with development toolsets such as Spark, Hadoop, Python, C++, Java, Docker, Azure/AWS/Google Cloud
• Experience deploying advanced supervised and unsupervised learning techniques, natural language processing, robotics, computer vision, Markov decision processes, deterministic and non-deterministic mathematical programming, constraint satisfaction, and control theory
• Experience productionizing machine learning methodologies such as regression, classification, clustering, matrix factorization, predictive analytics, decision trees, support vector machines, neural networks / deep learning
• Understanding of data science-relevant technologies such as Scala, Flask, R, Lucene/Solr, Elasticsearch, Spark MLlib/Spark ML, Spark Streaming, Kafka, ZooKeeper, YARN, Hive, H2O, JupyterHub, TensorFlow, GPU computing, MySQL, Teradata, MongoDB, NoSQL, Power BI, Tableau
• Participation in competitions/hackathons and contributions to open source projects
• The ability to think big, manage ambiguity to move quickly, and deliver results in an entrepreneurial environment
• Fearless but thoughtful in their pursuit of results