**Authors: Stella Stoyanova, Shiv Saxena**

**Introduction**

Bogota, Lima, Mexico City, and Rio de Janeiro are among the most congested cities in the world. General mobility trends in the region, together with population growth in urban areas, suggest that current levels of congestion will not be reversed soon ^{1}. The logistics industry, particularly last-mile logistics, bears a disproportionate share of this burden, especially with the growth of e-commerce in the wake of the COVID-19 pandemic.

Optimized deployment of existing resources (vehicles and drivers) can improve the level of service for customers, reduce the carbon footprint, enrich the well-being and livelihood of drivers, and lessen the industry’s overall impact on urban congestion. With the opportunity to create such an impact, the Omdena collaborators and partner Carryt came together to address the issue at hand.

**About Carryt**

Carryt, a Colombian technology company and Omdena partner, wants to optimize routes to improve logistics using artificial intelligence and route planning. Originally a field-services technology company, Carryt has recently become a last-mile logistics provider with a technology product that empowers the gig economy, provides drivers with a livable wage, and offers delivery services to customers. Carryt conducts operations in Mexico, and soon in Brazil, with more than 200K deliveries per month in 2021 and an aim to increase to 1 million deliveries per month in 2022 ^{2}.

In this article, we describe the data processing and modeling teams’ journey working with proprietary geospatial data, exploring alternative modeling approaches, and using the open-source software suite called Operational Research tools (OR-Tools) from Google AI. The suite offered the flexibility to consider the constraints, restrictions, and transportation regulations while finding near-optimal routes for two kinds of vehicle routing problems: one where deliveries are picked up by vehicles from a depot and delivered to various parts of the city, and another where items are first picked up and then dropped off in various parts of the city.

**Data wrangling**

The project the collaborators worked on during the 10 weeks involved two main parts. Part one was processing the datasets provided by the partner in order to create a map of the city of Bogota for routing. Part two was building a routing solution on top of that map. As the partner provided two exhaustive datasets for the work, the team did not need to engage in data gathering.

**Creating a city map for Bogota from partner’s datasets**

The routing solution for Carryt works on a custom-built map created from a shapefile and an OSM file provided by the partner.

The **shapefile** format is a geospatial vector data format for geographic information system (GIS) software ^{3}. The shapefile for this work contained proprietary information (such as the average speed for each line segment) incorporated by Carryt; this information was essential for finding the shortest paths during routing. Shapefiles can be read into a GeoPandas GeoDataFrame – GeoPandas is a package that extends pandas to allow working with geospatial data. The partner’s file consisted of LineStrings (vectors of coordinates between points) and their essential attributes – direction, max speed, average speed, and length. We extracted the first and last coordinates of each LineString, added a “weight” column with the time it takes to travel each LineString (length in meters / 1000 / average speed, which gives the travel time in hours for speeds in km/h), and added “start” and “end” columns containing the starting and ending node of each LineString. Below is an example of what the GeoPandas dataframe looked like.
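The derived columns themselves are simple to compute; here is a minimal sketch (assuming lengths in meters and average speeds in km/h, matching the attributes above):

```python
def travel_time_hours(length_m: float, avg_speed_kmh: float) -> float:
    """Weight of a LineString edge: meters / 1000 / average speed (km/h) -> hours."""
    return length_m / 1000 / avg_speed_kmh

def start_end_nodes(coords):
    """The first and last coordinates of a LineString become its start/end nodes."""
    return coords[0], coords[-1]

# A 1.5 km segment driven at 30 km/h takes 0.05 hours (3 minutes).
line = [(-74.08, 4.60), (-74.07, 4.61)]
print(start_end_nodes(line))          # ((-74.08, 4.6), (-74.07, 4.61))
print(travel_time_hours(1500, 30))    # 0.05
```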

The team decided to use the Python package NetworkX to create the routing map. NetworkX is a versatile framework for working with any type of network, including geospatial data. It can read a shapefile directly and create a DiGraph from it (the **read_shp** function). A DiGraph is a graph with directed edges: an edge from a to b is considered different from an edge from b to a, which is important for taking road direction into account. Since the team wanted more control and transparency over the map creation, we built the DiGraph from the shapefile indirectly by adding all the directed edges defined by our LineStrings. We added edges (legally allowable driving road segments) from the shapefile based on the “oneway” field: if the value for the respective LineString is “B”, we added both a to-from and a from-to edge; if the value is “TF”, we added only a to-from edge; if the value is “FT”, only a from-to edge; and if the value is “N”, we did not add an edge. The total number of edges is 335,590 and the total number of nodes is 153,164. Below is the map of the city of Bogota, as well as some exploratory graphics created from the shapefile:

**Table 1: One-way field: value counts**

| One-way type | Count | Notes |
| --- | --- | --- |
| B (both ways) | 142,883 | |
| N (pedestrian) | 56,141 | Not used in the map |
| FT (from-to) | 50,706 | |
| TF (to-from) | 22 | Matching OSM file, ordered from-to primarily |
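The oneway-field logic above can be sketched as a small helper (field values per Table 1):

```python
def directed_edges(from_node, to_node, oneway):
    """Map a LineString's 'oneway' value to the directed edges to add.

    "B": both directions; "FT": from->to only; "TF": to->from only;
    "N": pedestrian-only, so no driving edge is added.
    """
    if oneway == "B":
        return [(from_node, to_node), (to_node, from_node)]
    if oneway == "FT":
        return [(from_node, to_node)]
    if oneway == "TF":
        return [(to_node, from_node)]
    return []  # "N" or unknown

print(directed_edges("a", "b", "B"))   # [('a', 'b'), ('b', 'a')]
print(directed_edges("a", "b", "N"))   # []
```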

The OSM file was an OpenStreetMap ^{4} (OSM) extract produced by the partner to cover Carryt’s area of operations in Bogota. This free, open-source map contains information on turn restrictions (e.g. “no-left-turn”, “only-right-turn”) that couldn’t be captured in the shapefile alone. We used PyOsmium, the Python bindings to the Osmium library for working with OpenStreetMap data.

The main goal was to translate each OSM turn restriction into its corresponding set of nodes from the shapefile so that we could exclude these illegal turns in our map. This required us to find the “via” node and, from the “to” and “from” ways, the two nodes closest to the “via” that exist in the shapefile. An OSM “way” consists of a progression of multiple nodes (typically 5-15 per way). A way can be split into several connecting LineStrings in the shapefile, from which we had to find the correct partial ways containing the “via” node. The team was able to extract about 99% of the OSM restrictions present in the shapefile and to incorporate about 92%. Below is an example of an OSM restriction.

We incorporated the OSM restrictions into the routing solution by modifying the function that looks for the best path in routing (the A* algorithm). We can pass that modified function a set of restrictions (a set of ways, consisting of from-via-to nodes that are not allowed), and the function will exclude those from consideration while allowing all other paths through the “via” (intersection) node. To do that, the extracted restrictions were saved as multiple rows linked to the same restriction. In cases where multiple rows were tagged with the same osm_id, we had to do further calculations to locate the “via” node.

**Modeling part**

Over ten weeks from January to April 2022, Omdena collaborators and the Omdena project partner Carryt built algorithms and solutions for finding optimal routes for a fleet of vehicles that either deliver packages picked up from a depot to a set of locations, or pick up and deliver packages from and to a set of locations. Here, “optimal” means routes with the least total distance or time. In addition, several constraints are attached to this **optimization** problem:

1. Vehicle start and stop geolocations (specified as latitude and longitude)

2. Vehicle capacity (in units), specifying the maximum number of items the vehicle can carry

3. The pickup time window for each pickup location (start and end times in HH:MM, e.g. 08:00 to 14:00); the vehicles must visit the pickup locations within the specified intervals

4. The dropoff time window for each dropoff location (start and end times in HH:MM); the vehicles must visit the dropoff locations within the specified intervals

5. Time spent at each location (accounting for the time it takes to service a location)

6. Priority, where the vehicles must pay a penalty for each visit that is dropped (the higher the priority, the higher the penalty for dropping the visit)

The most famous routing problem is the Traveling Salesman Problem (TSP): find the shortest route for a salesperson who needs to visit customers at different locations and return to the starting point. A TSP can be represented by a graph in which the nodes correspond to the locations, and the edges denote direct travel between locations.

For example, the graph in figure m0 shows four locations labeled A, B, C, and D. The distance between any two locations is given by the number next to the edge joining them. By calculating the distances of all possible routes, you can see that the shortest route is ACDBA (total distance 35 + 30 + 15 + 10 = 90). Here there are only 6 routes, but the problem gets harder as locations are added: for 10 locations the number of routes is 362,880, i.e. factorial(9) – in general, the number of routes is factorial(number of locations – 1).
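To make the combinatorics concrete, here is a brute-force check of the figure m0 example. Note that the two distances not listed in the text (A–D and B–C) are hypothetical values chosen here for illustration:

```python
from itertools import permutations

# Distances from figure m0 (A-B=10, A-C=35, C-D=30, B-D=15);
# A-D and B-C are not given in the text, so the values below are assumed.
dist = {("A", "B"): 10, ("A", "C"): 35, ("C", "D"): 30, ("B", "D"): 15,
        ("A", "D"): 40, ("B", "C"): 50}

def d(u, v):
    return dist[(u, v)] if (u, v) in dist else dist[(v, u)]

def all_tours(start="A", others=("B", "C", "D")):
    """Enumerate every (n-1)! closed tour from the start with its total distance."""
    tours = {}
    for perm in permutations(others):
        route = (start, *perm, start)
        tours[route] = sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))
    return tours

tours = all_tours()
print(len(tours))           # 6, i.e. factorial(3) routes
print(min(tours.values()))  # 90 -- achieved by ACDBA (and its reverse ABDCA)
```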

Here we are only minimizing total distance, and you can already see an explosion of possibilities. This phenomenon is called the **combinatorial explosion** in mathematics: a rapid growth in the complexity of a problem driven by how the combinatorics of the problem is affected by its input, constraints, and bounds.

A more general version of the TSP is the vehicle routing problem (VRP), in which there are multiple vehicles (fleet). VRPs have constraints: for example, vehicles have capacities for the maximum weight or volume of items they can carry, or drivers are required to visit locations during specified time windows requested by customers.

VRPs are inherently intractable: the time it takes to solve them grows exponentially with the size of the problem. For sufficiently large problems, it could take any routing software years to find the optimal solution. So instead, the software must return a solution that is good but not provably optimal. The routing software also needs to handle the constraints listed above, which go beyond those of a pure TSP.

**State of the Art**

The VRP is one of the most intensely studied combinatorial optimization problems, for which numerous models and algorithms have been proposed – after all, the booming e-commerce industry has led to exponential growth in transportation and logistics.

VRP research has traditionally been confined to the Operations Research (OR) community. However, with advances in machine learning (ML), researchers have attempted to tackle the complexities, uncertainties, and dynamics of real-world VRP applications using ML methods in combination with analytical approaches (as shown in Figure m1 above). Despite some progress, these methods often suffer from issues such as a lack of generalization across scenarios, inefficient use of data, and an inability to discover insights and interpret solution structures.

For this challenge, we surveyed the literature on the use of reinforcement learning and other machine learning techniques for VRPs. As the authors Bai et al. conclude in their survey (linked above in Figure m1), it was evident that tackling a practical, real-life VRP application for the city of Bogota, Colombia with the existing ML models and algorithms would be very difficult. Instead, we focused on providing a comprehensive end-to-end solution using OR-Tools from Google, which was designed to solve VRPs and their variations, such as the Capacitated VRP (CVRP) and the CVRP with Time Windows (CVRPTW).

**Google OR-Tools**

OR-Tools is an open-source software suite for optimization, tuned for tackling the world’s toughest problems in vehicle routing, flows, integer and linear programming, and constraint programming. OR-Tools won four gold medals in the 2021 MiniZinc Challenge, the international constraint programming competition ^{7}.

The software employs local search – a heuristic method for solving computationally hard optimization problems – to find a solution maximizing or minimizing a criterion among a number of candidate solutions (the search space). The algorithm moves from solution to solution in the search space by applying local changes, until a solution deemed optimal is found or a time bound has elapsed. In addition, it employs search methods called metaheuristics, which sit on top of the local search algorithm. One example of a metaheuristic is Guided Local Search (GLS): GLS builds up penalties during a local search and uses them to help the local search escape local minima and plateaus ^{8}.
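As a toy illustration of the local-search idea (not OR-Tools’ internal implementation), a 2-opt move reverses a segment of a tour and keeps the change only if it shortens the route, repeating until no local change helps:

```python
def tour_length(route, d):
    return sum(d(route[i], route[i + 1]) for i in range(len(route) - 1))

def two_opt(route, d):
    """Repeatedly reverse inner segments while doing so shortens the closed tour."""
    route = list(route)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                candidate = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
                if tour_length(candidate, d) < tour_length(route, d):
                    route, improved = candidate, True
    return route

# Hypothetical points on a line at positions 0, 3, 1, 2; the optimal closed
# tour must simply go out and back, with length 2 * (max - min) = 6.
coords = {"A": 0, "B": 3, "C": 1, "D": 2}
d = lambda u, v: abs(coords[u] - coords[v])
best = two_opt(["A", "B", "C", "D", "A"], d)
print(tour_length(best, d))   # 6
```

This is exactly the “local changes” loop described above; metaheuristics such as GLS sit on top of it to push the search out of such local stopping points.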

**NetworkX**

NetworkX is an open-source Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. It provides data structures for graphs, digraphs, and multigraphs, and many standard graph algorithms such as those used for finding the shortest weighted paths from source to target nodes in a graph.

Besides Google OR-Tools and NetworkX, our implementation uses a number of other Python packages in the modeling pipeline described below. These are used for transforming the set of constraints specified in a JSON file into the input data structure required by Google OR-Tools.

**Modeling Pipeline**

The input to the modeling pipeline (shown in Figure m2 above) is the NetworkX graph output by the data wrangling/EDA step. In addition, the pipeline receives the set of route restrictions derived from the OSM file. These two inputs are provided once, to initialize the modeling software.

Once the software is initialized, it accepts input constraints in JSON files. The JSON file provides the following input information:

1. The fleet of vehicles, each with a start and end location ID, start and stop time of service, and capacity

2. The list of nodes to be visited, each with a location ID, start and stop time window, duration of service, pickup or dropoff indicator, demand, and priority

3. The list of network nodes, each with an ID (matching the location IDs in 1 and 2 above), latitude, longitude, name, and address

The pipeline transforms this set of constraints from the input JSON into a data structure that can be input to the Google OR-Tools library. The routing solver uses dimensions to keep track of the quantities that accumulate along a vehicle’s route, such as travel time and the total weight it is carrying.

For example, in CVRP, we need to create a dimension for the demands (e.g. weights or volumes of packages to be picked up) which tracks the cumulative load the vehicle is carrying along its route. The solver uses the dimension to enforce the constraint that a vehicle’s load can’t exceed its capacity. In VRPTW, we create a dimension to track each vehicle’s cumulative travel time. The solver uses the dimension to enforce the constraint that a vehicle can only visit a location within the location’s time window.
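Conceptually, a capacity dimension accumulates demand along the route and rejects any route whose running load ever exceeds the vehicle capacity. A pure-Python sketch of that bookkeeping (not the OR-Tools API itself; the stop names are hypothetical):

```python
def route_is_capacity_feasible(route, demand, capacity):
    """Walk a route accumulating demand; pickups add load, dropoffs subtract it."""
    load = 0
    for stop in route:
        load += demand.get(stop, 0)
        if load > capacity:
            return False
    return True

demand = {"pickup1": 4, "pickup2": 3, "drop1": -4}
print(route_is_capacity_feasible(["pickup1", "drop1", "pickup2"], demand, 5))  # True
print(route_is_capacity_feasible(["pickup1", "pickup2", "drop1"], demand, 5))  # False
```

The time-window dimension works the same way, except the accumulated quantity is travel plus service time, checked against each location’s window.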

Finally, the output of the OR-Tools solve method is transformed back into a JSON string and returned to the caller of the modeling pipeline. The output format matches that of Carryt’s current solution provider.

**Build a Google OR-Tools Solver**

Several steps are required to build the Google OR-Tools model from the input JSON, as shown below in figure m3.

**Nodes, Edges, Weights in the NetworkX Directed Graph G**

The input NetworkX graph G is a directed graph with nodes and edges. The direction of an edge specifies whether a pair of nodes is connected both ways or one way, just like a grid of roads and intersections in a typical city. The nodes in the provided graph are usually intersections, but not always; they may also be waypoints along a stretch of road. The weight of an edge corresponds to the time it takes to traverse it (distance/speed).

The locations in the input JSON do not always coincide with existing nodes in the graph G, yet the shortest path algorithms (Dijkstra, A*) always find routes between two nodes of the graph. To bridge this gap, our first attempt was to find the nearest node for each location in the input JSON using the haversine distance metric.

**Optimization Technique 1: Divide and Conquer using a grid of cells**

A naive algorithm to find the nearest node to a given location would loop through the list of nodes in graph G and pick the node closest to the location under the haversine distance metric. This is an O(nm) algorithm (for n nodes and m locations) and would not scale well.

Instead, during initialization, we divide the graph into a rectangular grid of cells, and for each cell, keep a list of nodes that are in that cell. Then for a given location with a latitude and longitude, we can quickly figure out the row and column in the grid that this location belongs to and then search for the nearest node in a much shorter list of nodes. This ensures that the nearest node lookup is fast (figure m4).
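A minimal sketch of this grid-bucketing idea (the cell size and node coordinates below are illustrative; a production version would widen the search when the neighboring cells are empty):

```python
import math
from collections import defaultdict

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

class NodeGrid:
    """Bucket graph nodes into ~1 km cells so a nearest-node query scans few nodes."""
    def __init__(self, nodes, cell_deg=0.01):
        self.nodes = nodes                      # node_id -> (lat, lon)
        self.cell_deg = cell_deg
        self.cells = defaultdict(list)
        for nid, (lat, lon) in nodes.items():
            self.cells[self._cell(lat, lon)].append(nid)

    def _cell(self, lat, lon):
        return (int(lat // self.cell_deg), int(lon // self.cell_deg))

    def nearest(self, lat, lon):
        """Scan the location's cell and its 8 neighbours for the closest node."""
        row, col = self._cell(lat, lon)
        best, best_d = None, float("inf")
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                for nid in self.cells.get((row + dr, col + dc), ()):
                    nlat, nlon = self.nodes[nid]
                    dkm = haversine_km(lat, lon, nlat, nlon)
                    if dkm < best_d:
                        best, best_d = nid, dkm
        return best, best_d

nodes = {"a": (4.60, -74.08), "b": (4.61, -74.08)}
grid = NodeGrid(nodes)
print(grid.nearest(4.602, -74.08)[0])   # prints: a
```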

**The trouble with routing to the nearest node**

As shown in the figure below (figure m5), routing to the nearest node fails to correctly find the route to the black marker (the actual drop-off location). Instead, the route found goes through the nearest node (green marker). In a grid of one-way streets in a big city, this could lead to a significant underestimation of the time required for a drop-off.

**Optimization Technique 2: Finding the nearest edge**

The correct route to the drop-off location using the NetworkX graph requires an additional node to be temporarily added to a copy of the graph G. This involves a series of computations using the basic haversine formula and additional geometry calculations required for a spherical earth ^{9}.
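The geometric core of finding the nearest edge is projecting the location onto a candidate road segment. A sketch under a local planar (equirectangular) approximation – sufficient over city-block distances, with the full spherical formulas in ^{9}:

```python
import math

def project_point_to_edge(p, a, b):
    """Project point p onto segment a-b; all points are (lat, lon) pairs.

    Uses a local equirectangular approximation: longitudes are scaled by
    cos(latitude) so one unit of x and y spans the same ground distance.
    Returns the projected (lat, lon) and the parameter t in [0, 1].
    """
    lat0 = math.radians(a[0])
    to_xy = lambda q: (q[1] * math.cos(lat0), q[0])
    px, py = to_xy(p); ax, ay = to_xy(a); bx, by = to_xy(b)
    dx, dy = bx - ax, by - ay
    seg2 = dx * dx + dy * dy
    t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg2))
    # Interpolate back in (lat, lon) space with the same parameter t.
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])), t

# A point just off the middle of an equatorial segment projects halfway along it.
print(project_point_to_edge((0.001, 0.5), (0.0, 0.0), (0.0, 1.0)))
# ((0.0, 0.5), 0.5)
```

The projected point is the temporary node spliced into the copy of G, connected to the segment’s endpoints with proportionally split weights.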

With the addition of the temporary node, the routing was correct as shown below (figure m6).

**Optimization Technique 3: Incorporating OSM Restriction Set in A***

A* is a graph traversal and path search algorithm, which is often used in artificial intelligence due to its completeness, optimality, and optimal efficiency ^{10}.

NetworkX provides an implementation of the algorithm that returns the list of nodes on the shortest weighted path between source and target nodes. However, it offers no way to impose additional restrictions such as those in the OSM restriction set.

Note that these restrictions cannot simply be incorporated into a directed graph by dropping an edge. For example, consider a 4-way traffic intersection in a city where each road segment leading into and out of the intersection is bidirectional. It is still not uncommon to have specific turn restrictions based on where you are coming from. These restrictions are typically formed as a combination of three nodes – *from*, *intersection*, and *to* – i.e. the turn to the *to* node is not allowed at the *intersection* node when coming from the *from* node.
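A sketch of how such a from-via-to restriction set filters candidate moves at an intersection (the node names are hypothetical):

```python
def allowed_next_nodes(prev_node, via_node, neighbors, restrictions):
    """Drop any neighbor whose (prev, via, neighbor) triple is a forbidden turn."""
    return [n for n in neighbors
            if (prev_node, via_node, n) not in restrictions]

# Hypothetical 4-way intersection X: the turn from A via X to C is forbidden.
restrictions = {("A", "X", "C")}
print(allowed_next_nodes("A", "X", ["B", "C", "D"], restrictions))  # ['B', 'D']
print(allowed_next_nodes("B", "X", ["A", "C", "D"], restrictions))  # ['A', 'C', 'D']
```

Note that reaching C via X is still allowed when coming from B – which is exactly why dropping the edge (X, C) outright would be wrong.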

We modified the open-source implementation of the A* algorithm in NetworkX to incorporate the OSM restriction set, keeping the modifications minimal in order to preserve the highly optimized nature of the algorithm.

**Optimization Technique 4: Modifying A* algorithm to keep track of the optimal cost of edges instead of nodes**

There is yet another problem with the default implementation of the A* algorithm once the OSM restriction set is incorporated (as explained above): applying the restriction set to the road network could make specific locations in the graph unreachable.

This is because the A* algorithm tracks the cost of reaching nodes and does not consider alternate paths to a node once a cheaper path to that node has been found. If a node was reached on the way to a destination in an early iteration, but the final turn was forbidden because the triple (*from*, *intersection*, *to*) was in the OSM restriction set, then no alternate path to that node would be considered in later iterations.

If that node must be traversed to reach the destination, the destination becomes unreachable! The solution is to track the cost of visited edges instead of nodes: in a directed graph, the node can then be re-entered via a different edge, restoring the path to the destination.

The code with the modified A* algorithm is provided in the Appendix.

**The disadvantage of using the A* algorithm**

The time complexity of A* depends on the heuristic. The team explored suitable heuristics (such as Euclidean distance) but could not complete their evaluation in the presence of the OSM and other restrictions on the map of Bogota, Colombia. An inadmissible heuristic can lead to less-than-optimal solutions, hence the need to evaluate the penalty of using a given heuristic.

In the absence of a good heuristic, the worst-case time complexity of A* is O(b^d), where b is the branching factor (the average number of neighbors per node) and d is the depth of the solution (the length of the shortest path). This is a significant bottleneck, and there is substantial academic research aimed at improving on the exponential worst case of the A* algorithm.

Road networks exhibit hierarchical properties: some streets are more important (more widely traveled) than others. Commercial navigation systems often exploit this with heuristic hierarchical approaches that perform bidirectional searches. Inside a local area around the source and target, the forward/backward search considers all roads of the network; outside these areas, the search is restricted to a highway network consisting of the ‘important’ roads. This idea can be iterated and applied to a hierarchy of several levels. Such classification requires manual tuning of the data and a delicate trade-off between the speed and the suboptimality of the computed routes.

The modeling team had begun exploring this approach but could not complete the task due to the steep learning curve and the limited availability of volunteer ML engineers.

**Optimization Technique 5: Redis Cache**

As a fallback, we used Redis in the application stack, as shown in figure m7 below, both as an in-memory data cache and as a message broker. Before making a compute-intensive call to the A* algorithm, we first check whether this computation was already performed, keyed by the latitude and longitude of the source and destination. If it was, we simply reuse the cached value; otherwise, a call to A* is made and its result is cached. This caching technique effectively builds a hierarchy in which we hope many pickup and dropoff locations will repeat across the various input JSON files (though we ran out of time to do a detailed study of the effectiveness of this approach across the JSON files used by Carryt).
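A memoization sketch of this get-or-compute pattern, with a plain dict standing in for Redis (the real stack uses a Redis client, e.g. redis-py, with the same logic; `fake_astar` below is a stand-in for the expensive A* call):

```python
def make_cached_router(shortest_path_fn, cache=None, precision=6):
    """Wrap a shortest-path call with a get-or-compute cache keyed on coordinates."""
    cache = {} if cache is None else cache
    def route(src, dst):
        # Round lat/lon so tiny float differences map to the same cache key.
        key = tuple(round(c, precision) for c in (*src, *dst))
        if key not in cache:
            cache[key] = shortest_path_fn(src, dst)  # the expensive A* call
        return cache[key]
    return route

calls = []
def fake_astar(src, dst):
    calls.append((src, dst))
    return [src, dst]

route = make_cached_router(fake_astar)
route((4.60, -74.08), (4.65, -74.05))
route((4.60, -74.08), (4.65, -74.05))   # second call is served from the cache
print(len(calls))                        # 1
```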

**Sample output of the modeling and algorithm code**

The input JSON file was chosen by Carryt to explore and showcase the feasibility of using the shapefile, OSM restrictions, and Google OR-Tools to solve a pickup-dropoff problem.

The solution shown below in figure m8 was approved by Carryt. The depot is the bottom green marker, the pickup location is the next green marker along the one-way road, and the drop-off location is the red marker on the same one-way road but before that green marker. The correct solution identifies the two round trips required: first the pickup, then the drop-off, and finally the return to the depot.

With a solution identified for optimizing routes with the customized OR-Tools solver while meeting the restrictions, the team then explored the best approach for deploying it.

**Future work**

1. Implement the ability to handle routing problems that have no feasible solution due to constraints. For example, if we are given a CVRP problem in which the total demand at all locations exceeds the total capacity of the vehicles, no solution is possible, and the vehicles must drop visits to some locations. The problem is deciding which visits to drop. The Google OR-Tools API supports this through the notion of penalties (which can be proportional to the priority: the higher the priority, the higher the penalty for dropping the visit). This has not been implemented yet.

2. Explore additional parameters of the Google OR-Tools in order to be more effective in providing feedback to the users of the algorithm. In its current implementation, we simply return either success with a valid output JSON file or failure with the error message “Failed to find a solution”. We need to understand the OR-Tools API better in order to provide more actionable feedback in case of failure to find a solution.

3. We should provide alternate solutions by running the solver more than once with different meta-heuristics, solver run time (currently set to 30 seconds), etc. This is analogous to how Google Maps provides you with a list of “alternate” routes and leaves it to the driver to choose the best route.

4. Build a more efficient caching strategy. Our current strategy caches all results of the A* algorithm in an in-memory Redis cache, which will eventually run out of memory.

5. Implement a hierarchical A* path search algorithm. The shapefile has a hierarchy (‘jerarquia’) field that we can use to identify paths that are traveled more frequently than others. We can then split the map into hierarchical layers, run A* within each layer, and recursively find the shortest path between two points ^{11}.

**Appendix**

The modified A* search, keyed on edges rather than nodes (imports added so the function runs standalone):

```python
from heapq import heappush, heappop
from itertools import count

import networkx as nx
from networkx.algorithms.shortest_paths.weighted import _weight_function


def astar_path_custom_edges(G, source, target, heuristic=None, weight="weight", restrictions=set()):
    if source not in G or target not in G:
        msg = f"Either source {source} or target {target} is not in G"
        raise nx.NodeNotFound(msg)

    if heuristic is None:
        # The default heuristic is h=0 - same as Dijkstra's algorithm
        def heuristic(u, v):
            return 0

    push = heappush
    pop = heappop
    weight = _weight_function(G, weight)

    # The queue stores priority, edge (current node and destination node),
    # cost to reach, and parent. Uses Python heapq to keep in priority order.
    # Add a counter to the queue to prevent the underlying heap from
    # attempting to compare the nodes themselves. The counter breaks ties in
    # the priority and is guaranteed unique for all entries.
    c = count()
    queue = []
    for neigh, _ in G[source].items():
        queue.append((0, next(c), source, neigh, 0, None))

    # Maps enqueued edges to the distance of discovered paths and the
    # computed heuristics to target. We avoid computing the heuristics
    # more than once and inserting the edge into the queue too many times.
    enqueued = {}
    # Maps explored edges to the parent edge closest to the source.
    explored = {}

    while queue:
        # Pop the smallest item from queue.
        _, __, curnode, dstnode, dist, fromedge0 = pop(queue)

        if dstnode == target:
            path = [dstnode]
            node = curnode
            explored[(curnode, dstnode)] = (fromedge0, curnode)
            while node != source:
                path.append(node)
                node, dstnode = explored[(node, dstnode)]
            path.append(source)  # the loop above stops before adding the source
            path.reverse()
            return path

        if (curnode, dstnode) in explored:
            if explored[(curnode, dstnode)] is None:
                continue
            # Skip bad paths that were enqueued before finding a better one
            qcost, h = enqueued[(curnode, dstnode)]
            if qcost < dist:
                continue

        explored[(curnode, dstnode)] = (fromedge0, curnode)

        for neighbor, e in G[dstnode].items():
            if (curnode, dstnode, neighbor) in restrictions:
                # This (from, via, to) turn is forbidden by the OSM restriction set
                continue
            ncost = dist + weight(dstnode, neighbor, e)
            if (dstnode, neighbor) in enqueued:
                qcost, h = enqueued[(dstnode, neighbor)]
                # If qcost <= ncost, a less costly path over this edge was
                # already determined. Therefore, we won't attempt to push
                # this edge to the queue again.
                if qcost <= ncost:
                    continue
            else:
                h = heuristic(neighbor, target)
            enqueued[(dstnode, neighbor)] = ncost, h
            push(queue, (ncost + h, next(c), dstnode, neighbor, ncost, curnode))

    raise nx.NetworkXNoPath(f"Node {target} not reachable from {source}")
```

**References**

** ^{1}** “Urban road congestion in Latin America and the Caribbean.” https://publications.iadb.org/publications/english/document/Urban-Road-Congestion-in-Latin-America-and-the-Caribbean-Characteristics-Costs-and-Mitigation.pdf. Accessed 30 Mar. 2022.

** ^{2} **“Carryt Nueva Logística de Última Milla.” https://www.carryt.co/. Accessed 30 Mar. 2022.

** ^{3}** Wikipedia, https://en.wikipedia.org/wiki/Shapefile

** ^{4} **See https://www.openstreetmap.org/about

** ^{5} **“Vehicle Routing | OR-Tools | Google Developers.” 12 Aug. 2021. https://developers.google.com/optimization/routing. Accessed 8 Apr. 2022.

** ^{6} **“Analytics and Machine Learning in Vehicle Routing Research – arXiv.” 19 Feb. 2021. https://arxiv.org/abs/2102.10012. Accessed 8 Apr, 2022.

** ^{7} **“OR-Tools | Google Developers.” https://developers.google.com/optimization. Accessed 8 Apr. 2022.

** ^{8} **“OR-Tools | Google Developers.” https://developers.google.com/optimization. Accessed 8 Apr. 2022.

** ^{9} **“Calculate distance and bearing and more between two Latitude/Longitude points.” https://www.movable-type.co.uk/scripts/latlong.html. Accessed 8 Apr. 2022.

** ^{10} **“A* search algorithm – Wikipedia.” https://en.wikipedia.org/wiki/A*_search_algorithm. Accessed 8 Apr. 2022.

** ^{11} **“Route Planning in Road Networks.” http://algo2.iti.kit.edu/schultes/hwy/. Accessed 8 Apr. 2022.