This year’s plenary speakers include Dr. Kate Darling, a leading expert in social robotics and a research specialist at the MIT Media Lab, and Chris Wiggins, Associate Professor of Applied Mathematics at Columbia University and Chief Data Scientist at The New York Times.
During the show, we will be presenting our simulation and reinforcement learning work with a model focused on automated guided vehicle (AGV) management in a factory, as well as our collaborative work with Engineering USA. We will be live for Q&A following each event.
Pathmind & Engineering USA: Solving Complex Business Problems with Simulation and AI
Summary: Engineering USA presents technology partner Pathmind. Businesses using simulation modeling to solve problems need an optimization method capable of handling the complexity and variability of real-world operations. When traditional optimizers and heuristics underperform, Pathmind’s AI can help. Using a branch of AI called reinforcement learning (RL), Pathmind can beat your heuristic and offer new insights into speed and profitability. Pathmind makes it easy to adopt reinforcement learning, even for teams without AI experts or experience with neural networks. During this showcase, you will:
- Learn key RL terms
- Examine how Pathmind RL is implemented in a model
- See the steps of uploading a model to Pathmind and training an AI policy
- Compare how a Pathmind AI policy performs against the heuristic
All attendees will be invited to create a free Pathmind account. Learn more…
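To give a taste of the key RL terms the showcase covers — state, action, reward, and policy — here is a minimal, self-contained sketch of tabular Q-learning on a toy "corridor" environment. Nothing here is Pathmind’s API; the environment, rewards, and hyperparameters are purely illustrative.

```python
import random

# Toy environment: an agent walks a corridor of positions 0..5 and is
# rewarded for reaching the far end. All values are illustrative.
N_STATES = 6          # positions 0..5; reaching 5 yields reward
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    """Environment dynamics: return (next state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    # Optimistic initialization (1.0) nudges the agent to try both actions early.
    q = {(s, a): 1.0 for s in range(N_STATES) for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(s, i)])
            nxt, r, done = step(s, ACTIONS[a])
            best_next = 0.0 if done else max(q[(nxt, b)] for b in range(len(ACTIONS)))
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

def greedy_policy(q, state):
    """The learned policy: pick the highest-value action in each state."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[(state, i)])]
```

After training, following `greedy_policy` from position 0 walks straight to the rewarded end of the corridor — the policy has internalized the reward signal.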
Harnessing Deep Reinforcement Learning to Coordinate Automated Guided Vehicles
Summary: We present a deep reinforcement learning (DRL) policy trained to control a fleet of automated guided vehicles (AGVs). The policy determines the loading and drop-off tasks of AGVs as they move products through a factory, with the goal of maximizing product throughput. Notably, the policy shows a 50% improvement in throughput over a shortest-queue heuristic while lowering AGV utilization by 15%. We attribute the improvement to the policy’s intelligent management of congestion. Interestingly, the policy chooses to “hide” AGVs away from the center of the factory floor, resulting in both lower AGV utilization and lower congestion. Our presentation will detail the adaptation of the AGV simulation as a DRL problem, the policy training process, and the quantitative results. Additionally, we will discuss how the same workflow can be readily applied to a variety of other use cases. Learn more…
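For readers curious how an AGV simulation becomes a DRL problem, the sketch below shows one plausible framing — not the actual model from the talk. The observation, the action space (including a "park" action that mirrors the learned hiding behavior), the reward shaping, and the baseline's exact form are all assumptions made for illustration.

```python
# Illustrative framing of AGV dispatch as an RL problem. Station counts,
# the congestion penalty, and the heuristic below are NOT from the talk.

NUM_STATIONS = 4
PARK = NUM_STATIONS  # extra action: park the AGV away from the busy floor center

def observe(queue_lengths, agvs_on_floor):
    """Observation the policy sees: per-station queue lengths plus a simple
    congestion proxy (how many AGVs are currently on the floor)."""
    return tuple(queue_lengths) + (agvs_on_floor,)

def reward(products_delivered, agvs_on_floor, congestion_penalty=0.1):
    """Reward shaping: credit throughput, penalize crowding the floor.
    A policy maximizing this can learn that parking idle AGVs (lower
    utilization) sometimes raises overall throughput."""
    return products_delivered - congestion_penalty * agvs_on_floor

def shortest_queue_action(assigned_counts):
    """Baseline heuristic: send the idle AGV to the station with the fewest
    AGVs already assigned (one common reading of a shortest-queue rule)."""
    return min(range(len(assigned_counts)), key=lambda i: assigned_counts[i])
```

The key design choice this illustrates: by putting congestion into the reward rather than hand-coding a rule about it, the trained policy is free to discover counterintuitive behaviors such as idling vehicles off the main floor.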
Deep Reinforcement Training and Machine Learning Applications for Industry 4.0: Optimization of Field Service Management and Manufacturing Operations
Summary: Making the right decision now, while considering all of its future implications, is not an easy task. Digital Twins are useful support tools for exploring the potential impact of decisions from a systemic perspective. They leverage simulation modeling techniques and usually rely on heuristics to replicate the behavior and logic of how systems evolve. But when it comes to searching for an optimal solution, especially when the goal lies many decision steps in the future, there is a need to explore the solution space. Without an automated approach, this can be nearly impossible; mathematical optimization techniques can be compelling, but they are extremely difficult to implement when dealing with long-term decision-making and an environment rich with uncertainty. In this space, deep reinforcement learning (DRL) is gaining attention. Why? Because it can deliver a policy for sequential decision-making in even the most complex, non-linear environments. The two real cases presented aim to highlight the benefits that a DRL policy can bring compared with established heuristics. In the first case, we explore the application of DRL to identify an optimal Operations & Maintenance strategy for a wind farm equipped with Prognostics & Health Management capabilities. In the second case, we explore how DRL methodologies enable a Food & Beverage distributor to make smarter decisions about production order sequencing, reducing processing time by 16%. Learn more…
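To make the order-sequencing case concrete, here is a toy instance showing why sequencing is a genuinely sequential decision problem: with sequence-dependent changeover times, a greedy "cheapest next changeover" heuristic can be beaten by a better ordering — the kind of gap a DRL policy is trained to close. The orders and the changeover matrix are invented for illustration and have no connection to the real case.

```python
from itertools import permutations

# Made-up instance: 4 production orders with fixed processing times and
# sequence-dependent changeover times between them.
PROCESS = [4, 3, 5, 2]  # minutes of pure processing per order
# CHANGEOVER[i][j]: minutes to switch the line from order i to order j
CHANGEOVER = [
    [0, 1, 2, 9],
    [1, 0, 9, 9],
    [2, 9, 0, 1],
    [9, 9, 1, 0],
]

def makespan(seq):
    """Total time: all processing plus the changeovers along the sequence."""
    total = sum(PROCESS[i] for i in seq)
    total += sum(CHANGEOVER[a][b] for a, b in zip(seq, seq[1:]))
    return total

def greedy(start=0):
    """Myopic heuristic: always run the order with the cheapest changeover next."""
    seq = [start]
    remaining = [j for j in range(len(PROCESS)) if j != start]
    while remaining:
        nxt = min(remaining, key=lambda j: CHANGEOVER[seq[-1]][j])
        seq.append(nxt)
        remaining.remove(nxt)
    return seq

def optimum():
    """Exact best sequence by brute force (fine for a handful of orders)."""
    return min(permutations(range(len(PROCESS))), key=makespan)
```

On this instance the greedy heuristic's first cheap move leads it into expensive changeovers later, while the optimal ordering is noticeably shorter — exactly the long-horizon trade-off that makes sequencing a natural fit for a learned sequential-decision policy rather than a myopic rule.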