Contact Us

1 Create Way
Singapore, 138602
Singapore

A research project within TUM CREATE, focussing on the modelling and optimisation of architecture and infrastructure. The main research interests are urban systems simulations, such as traffic and power. Apart from that, a cognitive systems group deals with human-computer interaction.

Research Projects

Solving the Computational Challenges of City-Scale Microscopic Traffic Simulation

David Eckhoff

[Figure: computationalChallenges1.jpg]

Optimising parallel computing for traffic simulation

City-scale microscopic traffic simulation can require tremendous computing power. Normal desktop computers are often insufficient to provide fast results for large-scale simulations. To alleviate this problem, the simulation needs to be parallelised so more computing power can be utilised, e.g., as provided by supercomputers.

[Figure: computation_0.png]

In parallel simulation, the workload is divided and assigned to multiple CPUs on the same computer or even on different computers. However, adding more and more CPUs does not always improve the speed of the simulation. The two main challenges are load balancing among processors and the minimisation of inter-process communication. To address these challenges, the hardware assignment problem is formulated as a graph partitioning problem. Unfortunately, no methods are known to solve this type of NP-hard problem optimally within a reasonable amount of time.
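
To make these two objectives concrete, the following minimal Python sketch (all names, weights and numbers are illustrative and not taken from the actual simulator) scores a candidate assignment of road-network nodes to processes by its load imbalance and its edge cut, i.e., the communication that crosses process boundaries.

```python
from collections import defaultdict

# Hypothetical toy road network: node -> estimated workload (e.g. expected
# number of vehicles); edge -> (u, v, traffic flow between the two segments).
node_load = {"A": 120, "B": 80, "C": 200, "D": 150, "E": 50}
edges = [("A", "B", 30), ("B", "C", 45), ("C", "D", 60), ("D", "E", 10), ("A", "E", 5)]

def evaluate(partition, num_parts):
    """Return (load imbalance, edge cut) for a node -> process-id mapping."""
    loads = defaultdict(int)
    for node, part in partition.items():
        loads[part] += node_load[node]
    # Load imbalance: heaviest process relative to a perfectly even distribution.
    ideal = sum(node_load.values()) / num_parts
    imbalance = max(loads.values()) / ideal
    # Edge cut: total weight of edges whose endpoints run on different processes,
    # a proxy for the inter-process communication volume.
    cut = sum(w for u, v, w in edges if partition[u] != partition[v])
    return imbalance, cut

# Example: one candidate assignment of the five nodes to two processes.
candidate = {"A": 0, "B": 0, "C": 1, "D": 1, "E": 0}
print(evaluate(candidate, num_parts=2))
```

A partitioning heuristic then searches for an assignment that keeps both values low at the same time.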

To still achieve high simulation performance, we propose efficient graph partitioning heuristics for the acceleration of city-scale traffic simulations. We are able to reduce a simulation study from days to hours.

[Figure: singapore-nrgg.png]

For parallel simulations of the city of Singapore, the previously proposed Neighbour-Restricting Graph-Growing (NRGG) algorithm, which reduces inter-process communication by minimising the number of neighbouring partitions, outperformed not only graph partitioning methods such as METIS and Buffoon, but also stripe spatial partitioning, for the synchronisation protocol used. NRGG is currently being improved to achieve even higher efficiency.
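
NRGG itself is described in the corresponding publications; the sketch below only illustrates, under simplifying assumptions, the two ingredients mentioned above: a plain greedy graph-growing pass and the per-partition neighbour count that a neighbour-restricting strategy tries to keep small. The graph, seeds and capacity are hypothetical toy values.

```python
from collections import defaultdict

# Hypothetical toy adjacency list of a road-network graph: node -> neighbours.
graph = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["A", "D", "F"], "F": ["E"],
}

def grow_partitions(graph, seeds, capacity):
    """Plain greedy graph growing: each partition grows outwards from a seed
    node, one frontier node at a time, until it holds `capacity` nodes."""
    assignment = {}
    for part, seed in enumerate(seeds):
        frontier, size = [seed], 0
        while frontier and size < capacity:
            node = frontier.pop(0)
            if node in assignment:
                continue
            assignment[node] = part
            size += 1
            frontier.extend(n for n in graph[node] if n not in assignment)
    # Attach any leftover nodes to the partition of an already-assigned neighbour.
    for node in graph:
        if node not in assignment:
            assignment[node] = next(assignment[n] for n in graph[node] if n in assignment)
    return assignment

def neighbouring_partitions(graph, assignment):
    """Number of distinct neighbouring partitions per partition -- the quantity a
    neighbour-restricting strategy keeps small, because every pair of neighbouring
    partitions has to exchange synchronisation messages during the simulation."""
    neighbours = defaultdict(set)
    for u in graph:
        for v in graph[u]:
            if assignment[u] != assignment[v]:
                neighbours[assignment[u]].add(assignment[v])
    return {part: len(s) for part, s in neighbours.items()}

assignment = grow_partitions(graph, seeds=["A", "D"], capacity=3)
print(assignment)
print(neighbouring_partitions(graph, assignment))
```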

[Figure: computation_3.png]

STATE FAST-FORWARDING AND MODEL PREEMPTION

[Figure: fastForward1.png]

Once a parallelised simulation fully exploits the available hardware, it can be accelerated further by reducing the computational work. The objective is to avoid unnecessary simulation steps without affecting the simulation results. At AIDA, two methods were developed to achieve this goal.

FAST-FORWARDING AGENT STATES

Microscopic traffic simulation usually updates each vehicle at fixed time-steps, which often makes large-scale simulations time-consuming. Generally, a vehicle’s movement through a road network is affected by nearby vehicles, but it becomes predictable if there are no vehicles around.

We exploit this observation to reduce the simulation time. First, we identify currently isolated vehicles and determine the earliest possible time at which each of them could share a road segment with another vehicle. Then, we predict the vehicle’s position at that time and move it there immediately (“fast-forward”) without any updates in between.
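
The following Python sketch illustrates the idea with a deliberately simplified 1-D geometry (all names, speeds and the safety bound are illustrative assumptions, not the simulator’s actual logic): a conservative lower bound on the time until the next possible interaction is computed, and the isolated vehicle is advanced by that many steps in a single jump.

```python
from dataclasses import dataclass

DT = 1.0  # simulation time-step in seconds

@dataclass
class Vehicle:
    pos: float    # position along its route (m)
    speed: float  # current speed (m/s), assumed constant while isolated

def earliest_possible_meeting(vehicle, others, max_speed):
    """Lower bound on the number of time-steps before `vehicle` could share a
    road segment with any other vehicle, assuming every vehicle is limited by
    `max_speed` (hypothetical, simplified 1-D geometry)."""
    steps = float("inf")
    for other in others:
        gap = abs(other.pos - vehicle.pos)
        closing_speed = vehicle.speed + max_speed  # worst case: head-on approach
        steps = min(steps, gap / (closing_speed * DT))
    return int(steps)

def fast_forward(vehicle, others, max_speed):
    """Move an isolated vehicle ahead by the safe number of steps in one jump,
    instead of updating it at every single time-step."""
    steps = earliest_possible_meeting(vehicle, others, max_speed)
    vehicle.pos += vehicle.speed * DT * steps  # predictable free-flow movement
    return steps                               # steps the simulator can skip

# Toy example: one isolated vehicle far away from all other traffic.
isolated = Vehicle(pos=0.0, speed=15.0)
others = [Vehicle(pos=900.0, speed=20.0)]
skipped = fast_forward(isolated, others, max_speed=25.0)
print(f"fast-forwarded {skipped} steps to position {isolated.pos:.1f} m")
```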

[Figure: preemption_1.png]

Our experiments showed that a speedup of up to 2.8x can be achieved in sparse traffic conditions. However, on highly congested roads, vehicles are isolated only for short periods of time. A more fine-grained scanning method, currently under development, focuses on fast-forwarding in dense traffic.

ACCELERATED DESIGN SPACE EXPLORATION

Due to the large parameter space, simulation-based optimisation, e.g. of traffic signal timings, may require thousands of long simulation runs to achieve a high-quality result.

We observed that it is often evident early in a simulation run that a certain configuration leads to undesirable results. Therefore, we propose model preemption: the simulation outcome is predicted from an intermediate simulation state, and low-quality runs are terminated early.
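
The sketch below illustrates the control flow under simple assumptions (the step function, the linear outcome predictor, the checkpoint interval and the 10% margin are all hypothetical): a run is checked periodically, its final cost is extrapolated from the intermediate state, and the run is aborted once the prediction is clearly worse than the best configuration found so far.

```python
import random

def run_with_preemption(simulate_step, predict_outcome, best_so_far,
                        total_steps, check_every=1000, margin=1.10):
    """Run one simulation configuration, but abort it as soon as the predicted
    final cost exceeds the best known cost by more than `margin` (illustrative)."""
    state = None
    for step in range(1, total_steps + 1):
        state = simulate_step(state)
        if step % check_every == 0 and step < total_steps:
            predicted_cost = predict_outcome(state, step, total_steps)
            if predicted_cost > margin * best_so_far:
                return None, step        # preempted: configuration looks poor
    return state, total_steps            # run to completion

def simulate_step(state):
    """Toy stand-in for one simulation step: accumulate vehicle delay."""
    state = state or {"total_delay": 0.0}
    state["total_delay"] += random.uniform(0.5, 1.5)
    return state

def predict_outcome(state, step, total_steps):
    """Naive predictor: extrapolate the accumulated delay linearly to the end."""
    return state["total_delay"] * total_steps / step

result, steps_used = run_with_preemption(simulate_step, predict_outcome,
                                         best_so_far=5000.0, total_steps=10_000)
print("completed" if result is not None else "preempted", "after", steps_used, "steps")
```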

[Figure: fastForward2.png]