Our aim was to implement a more cost-effective and environmentally friendly way to deliver parcels from companies to customers by letting the parcel ‘hitchhike’ with taxis that are already in traffic. This reduces transport costs, pollution, and traffic congestion. To train this ‘intelligent parcel’, we used Reinforcement Learning on historical Manhattan taxi trip data. For more information, see our GitHub repository and our project site.
Team: Lukas Kirchdorfer, Noah Mautner, Cosmina Ionut, Denisa Dragota, Agnieszka Lenart, Maren Ehrgott
In today's delivery systems, customers order at a store, and the store then uses its own delivery service or an external provider (e.g., Lieferando for food delivery) to deliver the order to the customer within a certain deadline. While these delivery services are fast and reliable for customers, they come with multiple issues. They cause a large number of commercial trips, put more vehicles on the roads, and require parking space, which is scarce in big cities. The deliveries also increase environmental pollution through their emissions. For small local stores, running their own delivery service or paying an external provider may be too expensive, yet offering delivery can be critical to a store's success as fast online retail becomes increasingly important. To overcome these problems, an alternative to traditional delivery systems is needed.
In our project, we aim to establish a delivery system based on the decentralized dispatching of hitchhiking boxes containing customer orders. The boxes are to be delivered from any store to any customer location (given only these start and final coordinates) within a certain time period in New York City. For this, we implement multi-hop ride sharing (i.e., a box rides on a vehicle for some time and then transfers to another ride) using taxi trips provided by the City of New York. The system is trained on historical trip data and tested on random and specific custom orders. This approach is meant to satisfy society's mobility and logistics needs (e.g., high demand and lower costs for a ride), challenge traditional work organization (e.g., saving the money otherwise spent on external delivery providers), and improve environmental protection as well as urban quality of life (e.g., less traffic).
As New York City is the setting of the hitchhiking system, we constructed the city's street network as a graph with nodes (each a location consisting of latitude, longitude, and a node ID) and edges (annotated with speed limits and edge lengths). In this graph we strategically place hubs according to store locations, the customer population distribution, and the nodes most frequently travelled by taxis. The graph provides the environment in which our boxes are delivered. The start and final delivery locations are given as coordinates, which are mapped to the nearest node and from there to the nearest hub; this mapping is necessary because we currently only support hub-to-hub delivery.
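As a rough illustration, the graph construction and coordinate-to-hub mapping could look like the sketch below, using the osmnx and networkx libraries. The hub coordinates here are hypothetical placeholders, and our actual implementation may differ in detail.

```python
import osmnx as ox
import networkx as nx

# Build a drivable street network of Manhattan; nodes carry lat/lon and an ID,
# edges carry lengths and (after enrichment) speeds and travel times.
G = ox.graph_from_place("Manhattan, New York City, New York, USA", network_type="drive")
G = ox.add_edge_speeds(G)        # fill in edge speeds (km/h) where missing
G = ox.add_edge_travel_times(G)  # edge length / speed -> travel time in seconds

# Hypothetical hub locations (in practice chosen from store locations,
# customer density, and the most-travelled taxi nodes).
hub_coords = [(40.7580, -73.9855), (40.7128, -74.0060)]
hub_nodes = [ox.distance.nearest_nodes(G, lon, lat) for lat, lon in hub_coords]

def to_hub(lat: float, lon: float) -> int:
    """Map an arbitrary coordinate to its nearest node, then to the nearest hub."""
    node = ox.distance.nearest_nodes(G, lon, lat)
    return min(
        hub_nodes,
        key=lambda h: nx.shortest_path_length(G, node, h, weight="length"),
    )

start_hub = to_hub(40.7484, -73.9857)  # e.g., a store near the Empire State Building
```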
Having mapped the start and final positions onto the graph, we initialize the time with the order time and track the deadline (24 hours) by which the box has to be delivered; otherwise, the delivery is considered failed.
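A minimal sketch of this bookkeeping follows; the class and field names are our illustration, not the project's actual code.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

DEADLINE = timedelta(hours=24)  # delivery window described above

@dataclass
class BoxOrder:
    start_hub: int       # graph node ID of the pickup hub
    final_hub: int       # graph node ID of the destination hub
    placed_at: datetime  # order time; the clock starts here

    def deadline(self) -> datetime:
        return self.placed_at + DEADLINE

    def has_failed(self, now: datetime) -> bool:
        # The delivery counts as failed once the 24-hour deadline has passed.
        return now > self.deadline()
```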
Given the current position and time as input, we aim to push the box in the direction of the final hub using only available shared rides. The available rides come from the historical taxi trip data of the City of New York, which we pre-processed and saved into a database. The database is accessed efficiently via SQL views and queries that return the available trips (with their respective timestamps).
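For illustration, a lookup of the trips available at a hub within a time window might look as follows. The table and column names are assumptions, since the schema is not spelled out here, and the real views and queries may differ.

```python
import sqlite3
from datetime import datetime, timedelta

def available_trips(conn: sqlite3.Connection, hub_id: int,
                    now: datetime, window_hours: int = 1):
    """Return trips departing from hub_id within the next window_hours.

    Assumes a pre-processed `trips` table keyed by pickup/dropoff hubs and
    timestamps; the actual schema and view names may differ.
    """
    horizon = now + timedelta(hours=window_hours)
    return conn.execute(
        """
        SELECT trip_id, pickup_hub, dropoff_hub, pickup_datetime, dropoff_datetime
        FROM trips
        WHERE pickup_hub = ?
          AND pickup_datetime BETWEEN ? AND ?
        ORDER BY pickup_datetime
        """,
        (hub_id, now.isoformat(sep=" "), horizon.isoformat(sep=" ")),
    ).fetchall()
```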
Having access to the available trips at a certain hub at a certain time, the box autonomously decides whether to wait, take a trip to some hub, or, if the deadline is only two hours away, book its own trip directly to the final hub. For this, the box is implemented as a Reinforcement Learning agent. It is trained on the historical trip data, and its performance during training is measured with multiple metrics. To finally test performance on new random and specific custom orders (which we generate), an agent can be compared with benchmarks and other RL agents across these metrics, and the agent's performance and the routes it takes are visually displayed on a dashboard.
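The decision logic could be sketched as an epsilon-greedy tabular Q-learning agent, as below. This is a simplified illustration of the described behavior (wait, ride, or book an own trip near the deadline), not the project's exact algorithm; state encoding, rewards, and names are our assumptions.

```python
import random
from collections import defaultdict

WAIT = ("wait", None)

class BoxAgent:
    """Q-learning sketch: state = (hub, coarse time-to-deadline bucket)."""

    def __init__(self, final_hub, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.final_hub = final_hub
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, trips, hours_left):
        # Rule from the text: with <= 2 hours left, book an own trip
        # straight to the final hub instead of gambling on shared rides.
        if hours_left <= 2:
            return ("book_own", self.final_hub)
        # Trips are assumed to be hashable tuples (e.g., database rows).
        actions = [WAIT] + [("ride", trip) for trip in trips]
        if random.random() < self.epsilon:          # explore
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state, next_actions):
        # Standard one-step Q-learning backup.
        best_next = max((self.q[(next_state, a)] for a in next_actions), default=0.0)
        key = (state, action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```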