Zheli Xiong2, Defu Lian2*, Enhong Chen2, Gang Chen3 and Xiaomin Cheng2,3

Email: zlxiong@mail.ustc.edu.cn

*Defu Lian is the corresponding author.

2School of Data Science, University of Science and Technology of China, Hefei, China

Email: {liandefu,cheneh,wh5606}@ustc.edu.cn

3Yangtze River Delta Information Intelligence Innovation Research Institute, China

Email: cheng@ustc.win

###### Abstract

OD matrix estimation is a critical problem in the transportation domain. The principal methods use traffic-sensor measurements such as traffic counts to estimate the traffic demand represented by the OD matrix. The problem is divided into two categories: static OD matrix estimation and dynamic OD matrix sequence (OD sequence for short) estimation. Both face the underdetermination problem caused by the abundance of estimated parameters and the insufficiency of constraint information. In addition, OD sequence estimation faces a lag challenge: due to varying traffic conditions such as congestion, identical vehicles will appear on different road sections during the same observation period, so identical OD demands correspond to different trips. To this end, this paper proposes an integrated method, which uses deep learning to infer the structure of the OD sequence and uses structural constraints to guide traditional numerical optimization. Our experiments show that a neural network (NN) can effectively infer the structure of the OD sequence and provide practical constraints for numerical optimization, leading to better results. Moreover, the experiments show that the provided structural information contains constraints not only on the spatial structure of the OD matrices but also on the temporal structure of the OD sequence, which mitigates the lag problem well.

###### Index Terms:

OD matrix estimation, deep learning, neural network

## I Introduction

With the development of big traffic data, large amounts of traffic data have been widely used in traffic applications such as route planning, flow prediction and traffic light control. Traffic demand describes the trips between divided areas and is commonly referred to as the OD (Origin-Destination) matrix. The OD matrix has significant value for various traffic tasks such as traffic flow prediction[1], trajectory prediction[2] and location recommendation[3]. On the one hand, changes in traffic demand between regions affect the flow on road sections, leading to changes in the optimal vehicle path. On the other hand, OD estimation leads to more effective traffic regulation, since the known OD demands can provide rationale for management strategies. However, traffic demand cannot be directly observed by sensors, and the matrix must be estimated from other traffic data, such as traffic flow.

OD matrix estimation is mainly divided into two categories: static OD matrix estimation[4] and dynamic OD sequence estimation[5]. Static OD estimation uses aggregated traffic counts to estimate the total traffic demand over a period. In comparison, dynamic OD sequence estimation uses time-varying traffic counts over a sequence of intervals to estimate the corresponding OD matrix sequence. The bi-level framework is commonly used for OD estimation[6]; it is divided into upper and lower levels. The upper level adjusts the OD matrices by minimizing the numerical gap between real and estimated traffic counts, solving a least squares problem. The optimization method is mainly gradient-based[7] and updates the OD matrices by the steepest descent method. The lower level assigns traffic demand to road sections by analytical[8] or simulation[4] methods. The upper and lower levels update iteratively and converge to an optimum.

Due to the limited number of road sections and the abundance of parameters to be estimated in the OD matrices, the optimization problem is heavily underdetermined[9]. Some researchers have alleviated this problem by adding other observable information, such as travel speed[10], cellular probes[11], and bluetooth probes[4], to the estimator. In addition to the above challenge, dynamic OD sequence estimation faces another, the lag challenge: due to varying traffic conditions such as congestion, identical vehicles will appear on different road sections during the same observation period, so identical OD demands correspond to different trips[12], and the current OD matrix will relate to different time-varying traffic counts depending on traffic conditions. Furthermore, traffic conditions are caused by the OD matrices before and after the current OD matrix, which induces a temporal relationship between these OD matrices. To this end, some studies propose a fixed maximum lag[13], assuming that vehicles complete their trips within a fixed maximum number of intervals, to eliminate the influence of the number of observation intervals. Others, like [12], propose using the temporal overlap of consecutive estimation intervals to alleviate the influence between OD matrices, as shown in Fig. 1.

To address the challenges above, we use deep learning to fit the mapping between time-varying traffic counts and the structure of the OD sequence, learning both the impact of lagged traffic counts on the OD structure and the relationships within the OD sequence.

Note that working in the raw numerical space makes the learning space too large for deep learning, whereas a distribution constrains the size of the learning space while still reflecting the structural information. Furthermore, simply using a NN to infer numerical matrices would lose the important assignment information captured by the bi-level framework. Therefore, we propose a method that integrates deep learning and numerical optimization, merging the inferred structural information into the upper-level estimator of the bi-level framework. Providing spatial and temporal constraints for the optimization effectively avoids falling into a local optimum prematurely and helps reach a better result.

In this paper, we further deliver the following contributions.

- We propose a deep learning model to infer the OD sequence structure, which effectively extracts spatio-temporal constraints from time-varying traffic counts.

- We integrate the deep learning method and the bi-level framework in a novel way to solve OD sequence estimation.

- Our experiments verify that the structural knowledge inferred by deep learning provides substantial help for numerical optimization.

## II Related Works

OD matrix estimation can be mainly divided into static OD matrix estimation and dynamic OD matrices estimation.

### II-A Static OD matrix estimation

To solve static OD matrix estimation, the gravity model adopts a "gravitational behavior" for trip demand and builds a linear or nonlinear regression model[14]. The maximum likelihood technique[15] estimates the OD matrix from a sampled matrix on the presumption that OD pairs follow independent Poisson distributions. The entropy maximizing/information minimization method[16] seeks an estimated OD matrix that adds as little information as possible beyond the traffic counts in order to handle the underdetermined problem. The Bayesian method[17] additionally resolves OD matrices by maximizing the posterior probability, which combines the prior OD matrix and the observations. The entropy maximizing/information minimization approach is a particular instance of the Bayesian method when the prior is given only a minimal amount of confidence. Generalized least squares (GLS)[18] builds a method that explicitly considers both observed flow biases and the target OD matrix. All of the aforementioned techniques require a prior OD matrix, which may be out of date and lead to estimation bias. Additionally, travel time[19], travel speed[10], and turning proportions[20] are also employed directly in OD estimation thanks to the availability of massive traffic data and traffic simulation.

### II-B Dynamic OD matrices estimation

Static OD matrix estimation can only estimate the OD matrix for a single period. When a period is divided into multiple intervals, researchers turn to dynamic OD matrix estimation, which aims to estimate the OD matrix for each interval and thus the entire OD sequence, using road traffic counts and other information. However, as mentioned above, the lagged flow may vary across successive intervals according to traffic conditions, so simply applying static OD matrix estimation is no longer applicable. Dynamic OD matrix estimation is divided into two cases: off-line and on-line[6].

#### II-B1 Off-line case

The off-line case mainly concerns the direct estimation of the OD sequence given only the corresponding observation sequence. [5] discusses off-line estimation methods, such as the simultaneous and sequential methods, and [21] further proposes a correction method based on the average OD matrix. The simultaneous method establishes an estimator that optimizes all OD matrix slices at once; its idea is similar to static OD matrix estimation. Compared to the simultaneous method, the sequential method estimates one matrix slice at a time and infers the next OD matrix slice based on the historical OD matrices already estimated. The average-based correction method estimates an average OD matrix over the observation period, then estimates coefficients that are multiplied by the average OD matrix to obtain the final OD sequence. Since each interval corresponds to an OD matrix slice, there is a large number of parameters with non-negativity constraints to be estimated. The main optimization methods proposed by researchers are the gradient projection method[22] and Simultaneous Perturbation Stochastic Approximation (SPSA)[23]. Similar to static OD estimation, some researchers have integrated various traffic observations, such as vehicle transit time and travel speed, into the estimators[24] to improve accuracy.

#### II-B2 On-line case

The on-line case has been widely studied. Unlike the off-line case, it requires the historical periodic OD sequence and the current observation. Traditional modeling methods include the Kalman filter (KF)[25][26], the LSQR algorithm[27], and Canonical Polyadic (CP) decomposition[28]. [29] used principal component analysis (PCA) combined with several machine learning models, and [30][13] utilized Dynamic Mode Decomposition (DMD) based methods. At the same time, given the excellent performance of deep learning in prediction, some researchers use deep learning methods for the dynamic prediction of OD sequences. For example, [31] used a Graph Neural Network (GNN) to capture the spatial topology of the graph structure and combined it with the traditional KF algorithm to improve the accuracy of OD matrix prediction. [32][33] used a Recurrent Neural Network (RNN) to capture the temporal features of prior OD sequence evolution to predict the current OD matrix. The primary relationship between the two settings is that off-line methods can provide a better initialization for on-line methods[34].

TABLE I: Notation

| Symbol | Description |
| --- | --- |
| $n_{od}$ | the number of OD nodes |
| $n_{sec}$ | the number of road sections |
| $I$ | the number of estimation intervals |
| $o$ | the number of observation intervals |
| $\boldsymbol{\epsilon}_{\tau}$ | a vector of traffic counts during observation interval $\tau$, $\boldsymbol{\epsilon}_{\tau}\in\mathbb{R}^{n_{sec}}$ |
| $\boldsymbol{E}$ | a tensor composed of $\boldsymbol{\epsilon}_{1}$ to $\boldsymbol{\epsilon}_{o}$, $\boldsymbol{E}\in\mathbb{R}^{o\times n_{sec}}$ |
| $\boldsymbol{n}_{i}$ | a vector representing OD node $i$, $\boldsymbol{n}_{i}\in\{-1,0,1\}^{n_{sec}}$ |
| $\boldsymbol{N}$ | a tensor composed of $\boldsymbol{n}_{1}$ to $\boldsymbol{n}_{n_{od}}$, $\boldsymbol{N}\in\{-1,0,1\}^{n_{od}\times n_{sec}}$ |
| $M_{ijt}$ | the number of traffic trips from $\boldsymbol{n}_{i}$ to $\boldsymbol{n}_{j}$ during estimation interval $t$ |
| $\boldsymbol{T}$ | a tensor transformed from the OD sequence, $\boldsymbol{T}\in\mathbb{R}^{I\times n_{od}^{2}}$ |
| $\tilde{\boldsymbol{T}}$ | an initial OD sequence used as the starting point of optimization |
| $\hat{\boldsymbol{T}}$ | the optimized OD sequence during the optimization phase |
| $\tilde{\boldsymbol{p}}_{t}$ | production flow, a vector of trips leaving each node during estimation interval $t$, $\tilde{\boldsymbol{p}}_{t}\in\mathbb{R}^{n_{od}}$ |
| $\tilde{\boldsymbol{a}}_{t}$ | attraction flow, a vector of trips arriving at each node during estimation interval $t$, $\tilde{\boldsymbol{a}}_{t}\in\mathbb{R}^{n_{od}}$ |
| $\boldsymbol{p}$ | global production flow, concatenated from $\tilde{\boldsymbol{p}}_{1}$ to $\tilde{\boldsymbol{p}}_{I}$, $\boldsymbol{p}\in\mathbb{R}^{I\cdot n_{od}}$ |
| $\boldsymbol{a}$ | global attraction flow, concatenated from $\tilde{\boldsymbol{a}}_{1}$ to $\tilde{\boldsymbol{a}}_{I}$, $\boldsymbol{a}\in\mathbb{R}^{I\cdot n_{od}}$ |
| $\boldsymbol{D}_{E}$ | the distribution of traffic counts obtained by normalizing $\boldsymbol{E}$ |
| $\boldsymbol{d}_{p\ or\ a}$ | the distribution obtained by normalizing $\boldsymbol{p}$ or $\boldsymbol{a}$ |
| $\bar{\boldsymbol{d}}_{p\ or\ a}$ | the inferred distribution of production or attraction flow from the deep learning model |
| $\boldsymbol{d}_{p\ or\ a}^{*}$ | the inferred distribution of production or attraction flow from the best trained deep learning model during the inference phase |
| $\hat{\boldsymbol{d}}_{p\ or\ a}$ | the optimized distribution of production or attraction flow from the bi-level framework during the optimization phase |

## III Preliminary

### III-A Definitions

As shown in Table I, an OD node is a cluster created by grouping the intersections of road sections in the city network, and the roads connecting the same OD pair are aggregated into one road section (see Fig. 3). Consider a city network consisting of $n_{od}$ OD nodes and $n_{sec}$ road sections. We use a vector of length $n_{sec}$ to represent OD node $\boldsymbol{n}_{i}$, with $n_{ij}=1$ if road section $j$ enters node $\boldsymbol{n}_{i}$, $-1$ if it exits from $\boldsymbol{n}_{i}$, and $0$ if it is not connected to $\boldsymbol{n}_{i}$.
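As a toy illustration of this incidence encoding, the sketch below builds $\boldsymbol{n}_{i}$ for a hypothetical 3-node network; the `node_vector` helper and the edge-list encoding of road sections are our own illustrative assumptions, not from the paper.

```python
import numpy as np

def node_vector(node_id, sections):
    """Build the incidence vector n_i for one OD node.

    `sections` is a list of (origin_node, destination_node) pairs, one per
    directed road section (a hypothetical toy encoding). Entry j is +1 if
    section j enters the node, -1 if it exits it, and 0 otherwise.
    """
    n_i = np.zeros(len(sections), dtype=int)
    for j, (orig, dest) in enumerate(sections):
        if dest == node_id:
            n_i[j] = 1
        elif orig == node_id:
            n_i[j] = -1
    return n_i

# Toy network: 3 nodes, 4 directed road sections.
sections = [(0, 1), (1, 0), (1, 2), (2, 1)]
print(node_vector(1, sections))  # [ 1 -1 -1  1]
```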

An estimation period is divided into $I$ equal intervals $t=1,2,...,I$, and an observation period is divided into $o$ equal intervals $\tau=1,2,...,o$. $\boldsymbol{\epsilon}_{\tau}$ denotes the traffic counts of all road sections during observation interval $\tau$. The traffic trips from $\boldsymbol{n}_{i}$ to $\boldsymbol{n}_{j}$ during estimation interval $t$ are denoted by $M_{ijt}$. In addition, we transform the OD sequence into a tensor denoted by $\boldsymbol{T}\in\mathbb{R}^{I\times n_{od}^{2}}$, and $\boldsymbol{E}$ denotes the tensor composed of $\boldsymbol{\epsilon}_{1}$ to $\boldsymbol{\epsilon}_{o}$. We normalize the traffic count of each road section with respect to all observation intervals as $D_{E_{ij}}=\frac{E_{ij}}{\sum\limits_{i}^{o}\sum\limits_{j}^{n_{sec}}E_{ij}}$.

Production flow $\tilde{\boldsymbol{p}}_{t}$ is a vector recording the number of trips leaving each node during estimation interval $t$, i.e., $\tilde{p}_{ti}=\sum\limits_{j}^{n_{od}}M_{ijt}$. Attraction flow $\tilde{\boldsymbol{a}}_{t}$ is a vector recording the number of trips arriving at each node during estimation interval $t$, i.e., $\tilde{a}_{tj}=\sum\limits_{i}^{n_{od}}M_{ijt}$.

Additionally, the global production flow $\boldsymbol{p}$ is a vector concatenated from $\tilde{\boldsymbol{p}}_{1}$ to $\tilde{\boldsymbol{p}}_{I}$, and the global attraction flow $\boldsymbol{a}$ is a vector concatenated from $\tilde{\boldsymbol{a}}_{1}$ to $\tilde{\boldsymbol{a}}_{I}$; they denote the production and attraction flows of the OD sequence, respectively. Moreover, we normalize the flow of each OD node with respect to all OD nodes and all estimation intervals as $d_{pi}=\frac{p_{i}}{\sum\limits_{k}^{I\times n_{od}}{p}_{k}}$ and $d_{ai}=\frac{a_{i}}{\sum\limits_{k}^{I\times n_{od}}{a}_{k}}$, respectively.
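The flow and distribution definitions above can be sketched in numpy; a randomly generated trip tensor $M_{ijt}$ and the small sizes stand in for a real OD sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_od, I = 4, 3
M = rng.integers(1, 10, size=(n_od, n_od, I)).astype(float)  # trips M[i, j, t]

# Per-interval production / attraction flows (sum over destinations / origins).
p_tilde = [M[:, :, t].sum(axis=1) for t in range(I)]
a_tilde = [M[:, :, t].sum(axis=0) for t in range(I)]

# Global flows: concatenation over all estimation intervals.
p = np.concatenate(p_tilde)  # shape (I * n_od,)
a = np.concatenate(a_tilde)

# Normalize over all nodes and intervals to obtain the distributions.
d_p = p / p.sum()
d_a = a / a.sum()
print(d_p.sum(), d_a.sum())  # both normalize to 1.0
```

Note that $\boldsymbol{p}$ and $\boldsymbol{a}$ always sum to the same total (every trip leaves one node and arrives at one node), so the two distributions share a common normalizer.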

In the inference phase, $\bar{\boldsymbol{d}}_{p\ or\ a}$ denotes the distribution of production or attraction flow inferred by the deep learning model during training, and the best inferred distribution is denoted by $\boldsymbol{d}_{p\ or\ a}^{*}$.

In the optimization phase, $\hat{\boldsymbol{T}}$ denotes the tensor of the optimized OD sequence from the bi-level framework at each iteration, and correspondingly, $\hat{\boldsymbol{d}}_{p\ or\ a}$ denotes the optimized distribution of production or attraction flow.

### III-B Bi-level framework

In the bi-level framework, the estimation starts from an initialized OD sequence $\tilde{\boldsymbol{T}}$. In the lower level, for each observation interval, the trips of every OD pair are allocated to road sections in an analytical or simulative way, yielding an allocation tensor $\boldsymbol{P}$. $\boldsymbol{P}_{\tau t}$ represents the matrix of proportions with which OD matrix $\boldsymbol{T}_{t}$ is allocated to road sections during observation interval $\tau$, so $\boldsymbol{P}_{\tau t}\boldsymbol{T}_{t}$ gives the corresponding traffic counts. In the upper level, the traffic counts assigned by $\boldsymbol{T}_{1},\boldsymbol{T}_{2},...,\boldsymbol{T}_{t}$ are summed to obtain the traffic counts $\boldsymbol{\epsilon}_{\tau}$, as shown in Fig 1. Optimizing the least squares estimator reduces the gap between the optimized traffic counts $\hat{\boldsymbol{\epsilon}}_{\tau}$ and the real traffic counts $\boldsymbol{\epsilon}_{\tau}$ over the whole observation period $\tau=1,2,...,o$ to find a better estimated OD sequence. By iteratively alternating between the upper and lower levels, the final estimated OD sequence is obtained at convergence. The least squares estimator is formulated as follows:

$\displaystyle Z(\hat{\boldsymbol{T}})=\min_{\hat{\boldsymbol{T}}_{1}\geq 0,...,\hat{\boldsymbol{T}}_{I}\geq{0}}\sum_{\tau=1}^{o}\frac{1}{2}(\boldsymbol{\epsilon}_{\tau}-\hat{\boldsymbol{\epsilon}}_{\tau})^{\mathrm{T}}(\boldsymbol{\epsilon}_{\tau}-\hat{\boldsymbol{\epsilon}}_{\tau})$ | (1) |

$\displaystyle where\ \hat{\boldsymbol{\epsilon}}_{\tau}=\sum_{k=1}^{t}\boldsymbol{P}_{\tau k}\hat{\boldsymbol{T}}_{k}$ |

The assignment tensor $\boldsymbol{P}\in\mathbb{R}^{o\times I\times n_{sec}\times n_{od}^{2}}$ can be derived analytically, for example by considering stochastic user equilibrium on traffic counts, or by simulation using a simulator such as SUMO[35]. It records the proportion of each OD trip assigned to each road section during the estimation intervals. Similar to [4], our assignment tensor $\boldsymbol{P}$ is computed by a back-calculation technique based on the traffic counts generated by the simulator at each iteration:

$\boldsymbol{P}=\left(\begin{array}{ccc}\boldsymbol{P}_{11}&\ldots&\boldsymbol{P}_{1I}\\ \boldsymbol{P}_{21}&\ldots&\boldsymbol{P}_{2I}\\ \vdots&\ddots&\vdots\\ \boldsymbol{P}_{o1}&\ldots&\boldsymbol{P}_{oI}\end{array}\right),\quad\boldsymbol{P}_{oI}=\left(\begin{array}{ccc}p_{11}&\ldots&p_{1\,n_{od}^{2}}\\ p_{21}&\ldots&p_{2\,n_{od}^{2}}\\ \vdots&&\vdots\\ p_{n_{sec}\,1}&\ldots&p_{n_{sec}\,n_{od}^{2}}\end{array}\right)$ |

### III-C Optimization

Since Eq(1) is a problem with abundant non-negativity constraints on $\hat{\boldsymbol{T}}$, the gradient projection method is commonly used[22]. The idea is to evaluate the directional derivatives of the objective function at the current point and to obtain a descent direction by projecting onto the non-negativity constraints.

It is worth noting that we adjust the update step $\boldsymbol{T}^{k+1}:=\boldsymbol{T}^{k}\oplus(\lambda^{k}\odot\boldsymbol{d}^{k})$ to $\boldsymbol{T}^{k+1}:=\boldsymbol{T}^{k}\odot(\boldsymbol{e}+\lambda^{k}\odot\boldsymbol{d}^{k})$. It has been shown that, compared with the ordinary update step, this adjustment significantly improves the optimization speed of OD matrices[7], following the principle that update steps for larger variables should be greater. For the upper bound of the step size $\lambda^{k}_{max}$ at iteration $k$, we give the corresponding adjustment $\lambda^{k}_{max}=\mathop{\min}\{\frac{-1}{\boldsymbol{d}^{k}_{i}}\ |\ \forall i:\boldsymbol{d}^{k}_{i}<0\}$. Since at the $(k+1)^{th}$ iteration $\mathbf{A}_{2}\boldsymbol{T}^{k}>0$ and we must ensure $\mathbf{A}_{2}\boldsymbol{T}^{k+1}>0$, we require $\mathbf{A}_{2}\boldsymbol{T}^{k}\odot(\boldsymbol{e}+\lambda^{k}\odot\boldsymbol{d}^{k})>0$, which implies $\boldsymbol{e}+\lambda^{k}\odot\boldsymbol{d}^{k}>0$. Therefore $\lambda^{k}<\frac{-1}{\boldsymbol{d}^{k}_{i}},\forall i:\boldsymbol{d}^{k}_{i}<0$, and then we have $\lambda^{k}_{max}=\min\{\frac{-1}{\boldsymbol{d}^{k}_{i}}\ |\ \forall i:\boldsymbol{d}^{k}_{i}<0\}$.

Finally, we search for the optimal step size $\lambda^{*k}$ at iteration $k$ based on Eq(2) and then determine the executable step size $\lambda^{k}$ from $\lambda^{*k}$ and $\lambda^{k}_{max}$: if $\lambda^{*k}<\lambda^{k}_{max}$, set $\lambda^{k}=\lambda^{*k}$; otherwise, set $\lambda^{k}=\lambda^{k}_{max}$.

$\min_{\lambda^{k}}Z(\hat{\boldsymbol{T}}^{k}\odot(\boldsymbol{e}+\lambda^{k}\odot\boldsymbol{d}^{k}))$ | (2) |

where $\odot$ denotes the element-wise product and $\boldsymbol{e}$ is a tensor of ones with the same dimensions as $\hat{\boldsymbol{T}}^{k}$.
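The adjusted multiplicative step and its step-size cap can be sketched as follows; the `projected_step` helper and the toy numbers are our own illustration under the stated rule, not the paper's code.

```python
import numpy as np

def projected_step(T, d, lam_star):
    """Multiplicative update T <- T * (1 + lam * d), with lam capped at
    lam_max = min{-1/d_i : d_i < 0} so no entry of T becomes negative.
    (In practice a slightly smaller step keeps entries strictly positive.)
    """
    neg = d < 0
    lam_max = np.min(-1.0 / d[neg]) if neg.any() else np.inf
    lam = min(lam_star, lam_max)
    return T * (1.0 + lam * d), lam

T = np.array([2.0, 3.0])
d = np.array([0.5, -0.25])       # descent direction
T_new, lam = projected_step(T, d, lam_star=10.0)
print(T_new, lam)  # [6. 0.] 4.0 -- the line-search step 10.0 is capped at lam_max = 4
```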

## IV Method

This section describes the pipeline of our proposed method in detail, including sampling the probe flow to compose the datasets, training and inference of the NN models, and combining the inferred spatio-temporal structural distributions into the numerical optimization.

### IV-A Probe Traffic Sampling

Firstly, as presented in our previous work on static OD estimation, most of the important trips are concentrated in a small fraction of OD pairs, so the values of the other OD pairs are relatively small. The production and attraction flows of a real OD matrix are therefore uneven, and this property carries the structural information of the OD matrix. Moreover, as mentioned in Section I, we infer distributions rather than raw numbers to reflect the structure, so the exact values are not a concern. Therefore, it is feasible to use a sparse matrix with a limited number of non-zero values to reconstruct the specific structural information of a real OD matrix. To this end, we set $m$ to represent the maximum value of an OD pair and keep it relatively small to speed up the sampling process.

Secondly, since traffic congestion causes the lag problem, we need probe vehicles to explore the traffic congestion. We form an original matrices sequence (OMS) by collecting $m$ vehicles from the important OD pairs (those with more than $m$ trips) of the real OD sequence, as shown in the left part of Fig. 2. These trips can be a combination of various data, such as car-hailing service data and GPS data, since such vehicles can all be seen as probe vehicles. Then, we resample each OD pair at a scale of 0.0-1.0 from the OMS to obtain a dataset of generated OD sequences, and calculate the corresponding global distributions $\boldsymbol{d}_{p}$, $\boldsymbol{d}_{a}$ and $\boldsymbol{D}_{E}$. The advantage of doing so is that, although we sample from a small number $m$ of vehicles, we can still reconstruct the relationship between the various traffic count distributions and their corresponding global distributions under real traffic conditions.

Finally, $\boldsymbol{D}_{E}$ is used as the input of the NN and $\boldsymbol{d}_{p}$ or $\boldsymbol{d}_{a}$ as the labels; we form the datasets $(\boldsymbol{D}_{E},\boldsymbol{d}_{p})$ and $(\boldsymbol{D}_{E},\boldsymbol{d}_{a})$ and train the two models separately.

### IV-B Dynamic Distribution Inference

The spatio-temporal evolution of traffic counts can effectively characterize the OD sequence. We utilize the traffic counts of observation intervals $t\times k$ to $t\times k+\delta$ (for $t\in[0,I-1]$, $k=\frac{o}{I}$, $\delta>0$) as input to characterize the OD matrix of estimation interval $t$. If $\delta>k$, the observations of two consecutive estimation intervals overlap, as shown in Fig 1; this indicates that, due to the lag problem, the OD matrix of the current estimation interval $t$ affects the traffic counts of the observation intervals up to $t\times k+\delta$.

To obtain the global distributions, we need to consider the mutual influence of the OD nodes in space and time. For example, trips of node $n_{i}$ at observation interval $t\times k+\delta_{1}$ and of node $n_{j}$ at $t\times k+\delta_{2}$ may both need to pass through road section $e$ at $t\times k+\delta$, whose traffic count is given ($\delta>\delta_{1}$, $\delta>\delta_{2}$); hence there are pairwise spatio-temporal dependencies between OD nodes.

#### IV-B1 DCGRU

In the deep learning field, many studies have shown that GNN+RNN based methods can extract spatio-temporal features well[36]. Therefore, we choose the DCGRU model as the feature extractor for each OD matrix. It combines the Diffusion Convolutional Network (DCN, a spatial-based GNN model) and the Gated Recurrent Unit (GRU, an improved RNN model) and has been shown to have outstanding performance in capturing the long-term spatio-temporal evolution characteristics of traffic flow[1].

DCN can handle dependencies between objects in non-Euclidean spaces according to the node features $\boldsymbol{\chi}$ and the adjacency matrix $\boldsymbol{W}$. The $K$-step graph diffusion convolution extracts the upstream and downstream spatial dependencies between each node and its $K$-order neighbors and forms an integrated graph signal, which is formulated as below:

$\displaystyle\boldsymbol{\chi}_{:,e\star g}=\sum_{k=0}^{K-1}(\theta_{k,1}(\boldsymbol{D}_{o}^{-1}\boldsymbol{W})^{k}+\theta_{k,2}(\boldsymbol{D}_{I}^{-1}\boldsymbol{W}^{T})^{k})\boldsymbol{\chi}_{:,e}$ |

$\displaystyle for\ e\ \in\{1,...,n_{sec}\}$ | (3) |

$\boldsymbol{\chi}_{\star g}$ is the graph signal obtained after each OD node fuses its $K$-order neighbors in every dimension $e$. $\boldsymbol{\theta}\in\mathbb{R}^{K\times 2}$ are the learnable parameters of the filter. $\boldsymbol{D}_{o}$ and $\boldsymbol{D}_{I}$ are the diagonal out-degree and in-degree matrices of graph $g$, and $\boldsymbol{D}_{o}^{-1}\boldsymbol{W}$ and $\boldsymbol{D}_{I}^{-1}\boldsymbol{W}^{T}$ are the transition matrices of the diffusion process and its reverse, respectively.
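Eq(3) can be sketched directly in numpy; the `diffusion_conv` helper and the tiny symmetric toy graph are our own illustrative assumptions (with dense matrix powers, whereas practical implementations use sparse recursions).

```python
import numpy as np

def diffusion_conv(X, W, theta, K):
    """K-step diffusion convolution of Eq. (3), a numpy sketch.

    X:     node features, shape (n_nodes, n_feat)
    W:     weighted adjacency matrix, shape (n_nodes, n_nodes)
    theta: filter parameters, shape (K, 2)
    """
    P_fwd = W / W.sum(axis=1, keepdims=True)      # D_o^{-1} W  (forward diffusion)
    P_bwd = W.T / W.T.sum(axis=1, keepdims=True)  # D_I^{-1} W^T (reverse diffusion)
    out = np.zeros_like(X)
    for k in range(K):
        out += (theta[k, 0] * np.linalg.matrix_power(P_fwd, k)
                + theta[k, 1] * np.linalg.matrix_power(P_bwd, k)) @ X
    return out

W = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # toy 3-node graph
X = np.arange(6, dtype=float).reshape(3, 2)
out = diffusion_conv(X, W, np.array([[1.0, 0.0]]), K=1)
print(out)  # with K=1 and theta=[[1,0]], the k=0 term is the identity: out == X
```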

GRU feeds the integrated graph signals into the cell in order, capturing the temporal dependencies. The feature update process in the GRU cell is as follows:

$\displaystyle\boldsymbol{r}^{(\tau)}=\sigma(\boldsymbol{\Theta}_{r\star g}[\boldsymbol{\chi}^{(\tau)},\boldsymbol{H}^{(\tau-1)}]+\boldsymbol{b}_{r})$ | (4) |

$\displaystyle\boldsymbol{u}^{(\tau)}=\sigma(\boldsymbol{\Theta}_{u\star g}[\boldsymbol{\chi}^{(\tau)},\boldsymbol{H}^{(\tau-1)}]+\boldsymbol{b}_{u})$ | (5) |

$\displaystyle\boldsymbol{C}^{(\tau)}=tanh(\boldsymbol{\Theta}_{C\star g}[\boldsymbol{\chi}^{(\tau)},\boldsymbol{r}^{(\tau)}\odot\boldsymbol{H}^{(\tau-1)}]+\boldsymbol{b}_{c})$ | (6) |

$\displaystyle\boldsymbol{H}^{(\tau)}=\boldsymbol{u}^{(\tau)}\odot\boldsymbol{H}^{(\tau-1)}+(1-\boldsymbol{u}^{(\tau)})\odot\boldsymbol{C}^{(\tau)}$ | (7) |

where $\boldsymbol{\chi}^{(\tau)}$ and $\boldsymbol{H}^{(\tau)}$ denote the input and output at observation interval $\tau$, and $\boldsymbol{r}^{(\tau)}$ and $\boldsymbol{u}^{(\tau)}$ are the reset gate and update gate at $\tau$, respectively. $\star g$ denotes the diffusion convolution defined in Eq(3), and $\boldsymbol{\Theta}_{r}$, $\boldsymbol{\Theta}_{u}$, $\boldsymbol{\Theta}_{C}$ are the learnable parameters of the corresponding filters.
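The gating recursion of Eqs. (4)-(7) can be sketched as a single cell step; for brevity we assume a plain linear map in place of the diffusion-convolution filter $\Theta\star g$, omit the biases, and invent the toy sizes, so this is an illustration of the gating logic rather than the full DCGRU.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dcgru_cell(x, h, Wr, Wu, Wc, gconv):
    """One DCGRU-style step (Eqs. 4-7). `gconv(W, z)` stands in for the
    diffusion-convolution filter; any callable with that shape works."""
    z = np.concatenate([x, h], axis=-1)
    r = sigmoid(gconv(Wr, z))                                    # Eq. (4), reset gate
    u = sigmoid(gconv(Wu, z))                                    # Eq. (5), update gate
    c = np.tanh(gconv(Wc, np.concatenate([x, r * h], axis=-1)))  # Eq. (6), candidate
    return u * h + (1.0 - u) * c                                 # Eq. (7), new state

rng = np.random.default_rng(0)
n, dx, dh = 5, 3, 4
gconv = lambda W, z: z @ W        # placeholder for the graph filter
x = rng.standard_normal((n, dx))
h = np.zeros((n, dh))             # initial hidden state
Wr, Wu, Wc = (rng.standard_normal((dx + dh, dh)) for _ in range(3))
h1 = dcgru_cell(x, h, Wr, Wu, Wc, gconv)
print(h1.shape)  # (5, 4)
```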

#### IV-B2 Multihead Self-Attention(MSA)

Standard $\boldsymbol{qkv}$ self-attention (SA)[37] computes the attention weights $A$ over the values $\boldsymbol{v}$ of all elements to capture the pairwise dependencies between elements of the input sequence $\boldsymbol{\zeta}\in\mathbb{R}^{(I\cdot n_{od}^{2})\times d}$, where $A$ is based on the queries $\boldsymbol{q}$ and keys $\boldsymbol{k}$ of the elements.

$[\boldsymbol{q},\boldsymbol{k},\boldsymbol{v}]=\boldsymbol{\zeta}\boldsymbol{U}_{qkv},\ \boldsymbol{U}_{qkv}\in\mathbb{R}^{d\times 3d_{h}}$ | (8) |

$A=softmax(\boldsymbol{qk}^{\mathrm{T}}/\sqrt{d_{h}})$ | (9) |

$SA(\boldsymbol{\zeta})=A\boldsymbol{v}$ | (10) |

MSA runs $h$ SA operations in parallel and projects their concatenated outputs. The dimensionality is kept constant by setting $d_{h}=d/h$, where $h$ is the number of heads.

$\displaystyle MSA(\boldsymbol{\zeta})=[SA_{1}(\boldsymbol{\zeta});SA_{2}(\boldsymbol{\zeta});...;SA_{h}(\boldsymbol{\zeta})]\boldsymbol{U}_{msa},$ |

$\displaystyle\boldsymbol{U}_{msa}\in\mathbb{R}^{h\cdot d_{h}\times d}$ | (11) |

$\boldsymbol{U}_{qkv}$ and $\boldsymbol{U}_{msa}$ above are learnable parameters.
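Eqs. (8)-(11) can be sketched in numpy as below; the `msa` helper, the stacking of all heads' projections into one `U_qkv`, and the toy sizes are our own illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def msa(Z, U_qkv, U_msa, h):
    """Multihead self-attention: run h scaled dot-product SA heads
    in parallel and project the concatenated outputs (Eqs. 8-11)."""
    n, d = Z.shape
    d_h = d // h
    heads = []
    for i in range(h):
        U = U_qkv[:, 3 * d_h * i: 3 * d_h * (i + 1)]  # this head's slice of U_qkv
        q, k, v = np.split(Z @ U, 3, axis=-1)         # Eq. (8)
        A = softmax(q @ k.T / np.sqrt(d_h))           # Eq. (9)
        heads.append(A @ v)                           # Eq. (10)
    return np.concatenate(heads, axis=-1) @ U_msa     # Eq. (11)

rng = np.random.default_rng(0)
n, d, h = 6, 8, 2
Z = rng.standard_normal((n, d))
U_qkv = rng.standard_normal((d, 3 * (d // h) * h))  # all heads' projections stacked
U_msa = rng.standard_normal((d, d))                 # h * d_h = d, so output keeps dim d
out = msa(Z, U_qkv, U_msa, h)
print(out.shape)  # (6, 8)
```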

#### IV-B3 Distribution Learner

We element-wise multiply the node vector $\boldsymbol{n}_{i}$ with the distribution of traffic counts to obtain the feature of OD node $i$ at observation interval $\tau$. To get the features $\boldsymbol{\chi}$ of all OD nodes over all $o$ observation intervals, we expand the dimensions of $\boldsymbol{N}$ to $n_{od}\times 1\times n_{sec}$ and of $\boldsymbol{D_{E}}$ to $1\times o\times n_{sec}$, respectively, then apply the broadcast operation $\otimes$ on these two tensors as shown in Eq(12), giving $\boldsymbol{\chi}$ the shape $(o,n_{od},n_{sec})$. Then, along dimension $o$ of the tensor, we take slices $t\times k$ to $t\times k+\delta$ (for $t\in[0,I-1]$, $k=\frac{o}{I}$, $\delta>0$) each time to obtain an input tensor of shape $(\delta,n_{od},n_{sec})$. In our case, we estimate an OD sequence over 12 hours, with an estimation interval of one hour and an observation interval of 10 minutes, so $I=12$, $o=72$, $k=6$.

$\boldsymbol{\chi}=\boldsymbol{N}\otimes\boldsymbol{D}_{E}$ | (12) |

Subsequently, taking $\{\boldsymbol{\chi}^{(t\times k)},...,\boldsymbol{\chi}^{(t\times k+\delta)}\ |\ t\in[0,I-1]\}$ as the input of the DCGRU module in order, a hidden tensor $\boldsymbol{H}=\{\boldsymbol{H}^{(t\times k+\delta)}\ |\ t\in[0,I-1]\}$ of shape $(I,n_{od},n_{sec})$ is produced as the output. $\boldsymbol{H}$ is then flattened along dimension $I$ to shape $(I\times n_{od},n_{sec})$ as the input of the Transformer encoder. With shared position embedding parameters added before and after the Transformer encoder, we refer to the outputs as mutual vectors; there are $I\times n_{od}$ mutual vectors, each representing the spatio-temporal mutual information of the corresponding OD node. Lastly, we apply element-wise addition $\oplus$ over all mutual vectors to obtain a single vector containing the global information, take the inner product between each mutual vector and this global information vector to get a scalar for each node, and then apply softmax to obtain the inferred global distribution $\bar{\boldsymbol{d}}_{p\ or\ a}$.
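The readout described above (sum, inner products, softmax) can be sketched as follows; the random matrix `H` stands in for the encoder's mutual vectors, and the feature width `d` is a made-up size.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
I, n_od, d = 12, 15, 32
H = rng.standard_normal((I * n_od, d))  # one mutual vector per (interval, node)

g = H.sum(axis=0)        # element-wise sum -> global information vector
scores = H @ g           # inner product of each mutual vector with g
d_bar = softmax(scores)  # inferred global distribution over I * n_od entries
print(d_bar.shape, d_bar.sum())
```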

For model training, we choose the Jensen-Shannon Divergence (JSD) as the loss function, as in Eq(13); it measures the distance between two distributions symmetrically.

$\displaystyle Loss_{p\ or\ a}=JS(\bar{\boldsymbol{d}}_{p\ or\ a}||\boldsymbol{d}_{p\ or\ a})$ | (13) |
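The JSD loss of Eq(13) can be computed as below; the `smooth` term guarding against zero entries is our own numerical-stability assumption.

```python
import numpy as np

def kl(p, q, smooth=1e-12):
    """KL divergence with a small smoothing term to avoid log(0)."""
    p, q = p + smooth, q + smooth
    return np.sum(p * np.log(p / q))

def js_divergence(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded distance
    between two distributions, used here as the training loss."""
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.3, 0.5])
print(js_divergence(p, p))                         # 0.0 for identical distributions
print(js_divergence(p, q) == js_divergence(q, p))  # symmetric by construction
```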

### IV-C Estimator

Like other studies on static OD estimation and the off-line case of dynamic OD sequence estimation, we adopt the bi-level framework. The difference is that the least squares approach at the upper level merely seeks to reduce the gap between observed and simulated traffic counts, which only encourages numerical similarity between the estimated and real OD sequences. Therefore, we incorporate into the optimization the best inferred global distributions $\boldsymbol{d}_{p}^{*}$ and $\boldsymbol{d}_{a}^{*}$ and use the KL divergence (KLD)[38] in the objective function as follows.

$\displaystyle R(\hat{\boldsymbol{T}})=\min_{\hat{\boldsymbol{T}}_{1}\geq 0,...,\hat{\boldsymbol{T}}_{I}\geq{0}}\alpha N(\hat{\boldsymbol{T}})+(1-\alpha)S(\hat{\boldsymbol{T}})$ | (14) |

$\displaystyle N(\hat{\boldsymbol{T}})=\sum_{\tau=1}^{o}\frac{1}{2}(\boldsymbol{\epsilon}_{\tau}-\hat{\boldsymbol{\epsilon}}_{\tau})^{\mathrm{T}}(\boldsymbol{\epsilon}_{\tau}-\hat{\boldsymbol{\epsilon}}_{\tau})$

$\displaystyle S(\hat{\boldsymbol{T}})=KL(\hat{\boldsymbol{d}}_{p}||\boldsymbol{d}^{*}_{p})+KL(\hat{\boldsymbol{d}}_{a}||\boldsymbol{d}^{*}_{a})$

$\displaystyle\text{where}\ \hat{\boldsymbol{\epsilon}}_{\tau}=\sum_{k=1}^{I}\boldsymbol{P}_{\tau k}\hat{\boldsymbol{T}}_{k}$
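The combined objective of Eq. (14) can be sketched as below. The layout of the assignment tensor and the way the estimate's production/attraction marginals are normalized into $\hat{\boldsymbol{d}}_{p}$ and $\hat{\boldsymbol{d}}_{a}$ are our assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return np.sum(p * np.log((p + eps) / (q + eps)))

def objective(T_hat, P, counts, d_p_star, d_a_star, alpha):
    """R(T_hat) = alpha * N(T_hat) + (1 - alpha) * S(T_hat), Eq. (14).

    T_hat  : (I, n_od, n_od) estimated OD sequence
    P      : (o, I, n_sec, n_od**2) assignment tensor (layout assumed)
    counts : (o, n_sec) observed traffic counts
    """
    flows = T_hat.reshape(T_hat.shape[0], -1)          # flatten each OD matrix
    eps_hat = np.einsum('okst,kt->os', P, flows)       # simulated counts per observation
    n_term = 0.5 * np.sum((counts - eps_hat) ** 2)     # least-squares term N
    # global production/attraction distributions of the current estimate
    d_p_hat = T_hat.sum(axis=2).ravel(); d_p_hat = d_p_hat / d_p_hat.sum()
    d_a_hat = T_hat.sum(axis=1).ravel(); d_a_hat = d_a_hat / d_a_hat.sum()
    s_term = kl(d_p_hat, d_p_star) + kl(d_a_hat, d_a_star)  # structural term S
    return alpha * n_term + (1 - alpha) * s_term
```

Setting `alpha=1` recovers the purely numerical least-squares objective used in the final relaxation stage.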

Our optimization process is shown in Algorithm 2. It is worth noting that since the optimization is guided by the approximate distributions rather than the real ones, the structure may not be optimal when $S(\hat{\boldsymbol{T}})$ converges. Therefore, we then relax the structural constraint (set $\alpha$=1), which performs only numerical optimization afterwards and leads to a better point.

## V EXPERIMENTS AND RESULTS

We test our method on a large-scale real city network with a synthetic dataset. The effectiveness of the NN for distributional inference is validated first. Then, a comparative experiment demonstrates the advantage of our optimization method over traditional numerical optimization. The project is implemented in Python and relies on TensorFlow [39] for gradient calculation and NN modeling; the Sklearn library [40] and Scipy [41] are used for clustering and the optimization routine, respectively.

### V-A Study Network

We selected the $400km^{2}$ area around Cologne, Germany, as our study network and used SUMO as the simulation program to test on a large-scale city network [42]. The network consists of 71368 road sections and 31584 intersections (Fig. 3(a)). We employ the K-means [43] technique to cluster OD nodes on the network according to Euclidean distance. In this case, we select $n_{od}$=15 (Fig. 3(b)) and aggregate the directed road sections from 782 to 64.
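The zone aggregation step can be sketched with a plain Lloyd's-algorithm k-means in NumPy (the paper uses scikit-learn's KMeans [43]; here random 2-D points stand in for the real road-section coordinates of the Cologne network):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm clustering points by Euclidean distance."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to the nearest center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical road-section coordinates in a 20 km x 20 km area (metres)
pts = np.random.default_rng(1).uniform(0, 20000, size=(500, 2))
labels, centers = kmeans(pts, 15)   # n_od = 15 zones, as in the paper
```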

OD matrix (o'clock) | $6-7$ | $7-8$ | $8-9$ | $9-10$ | $10-11$ | $11-12$ | $12-13$ | $13-14$ | $14-15$ | $15-16$ | $16-17$ | $17-18$
---|---|---|---|---|---|---|---|---|---|---|---|---
total travels | 76972 | 86580 | 40131 | 31971 | 34211 | 41065 | 50015 | 48899 | 48124 | 74760 | 95042 | 79205
density ($>m$) | 0.3689 | 0.3644 | 0.2844 | 0.2489 | 0.2533 | 0.2667 | 0.3111 | 0.3200 | 0.3022 | 0.3378 | 0.3644 | 0.3511
the largest OD pair | 9097 | 11048 | 4918 | 3395 | 3145 | 4523 | 5664 | 4643 | 4314 | 7937 | 10315 | 8816

### V-B Dataset

Synthetic data closely resembling the actual conditions of urban traffic serves as the ground truth. Refer to [42] for more information.

As shown in Table 2, we took the ground-truth OD matrices for the 12 hours from 6:00 to 18:00 out of a 24-hour traffic simulation. We set the OD estimation interval to 1 hour and the traffic-count observation interval to 10 minutes. For the input of the DCGRU module, we set $\delta$=4, meaning that the traffic counts of the four observation intervals following the start of the current OD estimation interval are used as input to characterize the current OD matrix. Therefore, the whole length of observation intervals is 72 (12$\times$6, since each OD estimation interval contains six observation intervals).

We set $m$=50 and sample the OMS from the ground-truth OD matrices. We then resample from the original matrix sequence to obtain various OD sequences and generate the corresponding observed traffic counts $\boldsymbol{\epsilon}$ to form the datasets $(\boldsymbol{D}_{E},\boldsymbol{d}_{p})$ and $(\boldsymbol{D}_{E},\boldsymbol{d}_{a})$, each of size 10k sample pairs.
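A hypothetical sketch of this sampling pipeline is given below; the multiplicative perturbation model and the fixed linear count-assignment map `A` are stand-ins for illustration, not the paper's SUMO-based generation:

```python
import numpy as np

rng = np.random.default_rng(0)
I, n_od, n_sec, o = 12, 15, 64, 72
m = 50  # maximum trips value

# Hypothetical ground-truth OD sequence with entries up to m
T_true = rng.integers(0, m + 1, size=(I, n_od, n_od)).astype(float)
# Hypothetical fixed linear map from OD flows to section counts
A = rng.random((n_sec, n_od * n_od)) / n_od

def make_sample(T, noise=0.1):
    """Perturb an OD sequence and derive one (D_E, d_p) training pair."""
    T_s = np.clip(T * rng.normal(1.0, noise, T.shape), 0.0, None)
    # counts per observation interval (simplification: constant within each
    # estimation interval, repeated o/I times)
    counts = np.stack([A @ T_s[t].ravel()
                       for t in range(I) for _ in range(o // I)])
    D_E = counts / counts.sum(axis=1, keepdims=True)   # per-interval distribution
    d_p = T_s.sum(axis=2).ravel()                      # production marginals
    return D_E, d_p / d_p.sum()                        # global production distribution

D_E, d_p = make_sample(T_true)
```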

We split the dataset into training and validation sets in an 8:2 ratio. A model is trained on the training data, and its generalization performance is assessed on the validation data.

### V-C OD estimation evaluation

Referring to [4], we use the numerical indicator $RMSN(\hat{\boldsymbol{T}}_{t},\boldsymbol{T}_{t})$ [44] and the structural indicator $\rho(\hat{\boldsymbol{T}}_{t},\boldsymbol{T}_{t})$ [45] to measure the gap between the estimated OD matrix $\hat{\boldsymbol{T}}_{t}$ and the real OD matrix $\boldsymbol{T}_{t}$.

$RMSN(\hat{\boldsymbol{T}}_{t},\boldsymbol{T}_{t})=\frac{\sqrt{n_{od}^{2}\sum\limits^{n_{od}^{2}}_{i}(\boldsymbol{T}_{ti}-\hat{\boldsymbol{T}}_{ti})^{2}}}{\sum\limits^{n_{od}^{2}}_{i}\boldsymbol{T}_{ti}}$ | (15a) |

$\rho(\hat{\boldsymbol{T}}_{t},\boldsymbol{T}_{t})=\frac{(\boldsymbol{T}_{t}-\boldsymbol{\mu})^{T}(\hat{\boldsymbol{T}}_{t}-\hat{\boldsymbol{\mu}})}{\sqrt{(\boldsymbol{T}_{t}-\boldsymbol{\mu})^{T}(\boldsymbol{T}_{t}-\boldsymbol{\mu})}\sqrt{(\hat{\boldsymbol{T}}_{t}-\hat{\boldsymbol{\mu}})^{T}(\hat{\boldsymbol{T}}_{t}-\hat{\boldsymbol{\mu}})}}$ | (15b) |

where $\boldsymbol{\mu}\in\mathbb{R}^{n_{od}^{2}}_{\geq{0}}$ is a vector with each element equal to the mean of $\boldsymbol{T}_{t}$, and $\hat{\boldsymbol{\mu}}$ is defined analogously for $\hat{\boldsymbol{T}}_{t}$.
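Eqs. (15a) and (15b) translate directly into NumPy over the flattened OD matrices; a minimal sketch:

```python
import numpy as np

def rmsn(T_hat, T):
    """Eq. (15a): root mean squared normalized error over the n_od^2 entries."""
    T, T_hat = np.ravel(T).astype(float), np.ravel(T_hat).astype(float)
    return np.sqrt(T.size * np.sum((T - T_hat) ** 2)) / np.sum(T)

def structural_rho(T_hat, T):
    """Eq. (15b): Pearson correlation between flattened OD matrices."""
    T, T_hat = np.ravel(T).astype(float), np.ravel(T_hat).astype(float)
    a, b = T - T.mean(), T_hat - T_hat.mean()
    return (a @ b) / (np.sqrt(a @ a) * np.sqrt(b @ b))
```

Note that $\rho$ is scale-invariant: it rewards a correct structure even when the magnitudes are off, which is exactly why it complements RMSN.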

### V-D Parameter settings

Parameter | Value
---|---
head number $h$ | 6
encoder layer $N$ | 2
learning rate $r$ | 1E-4
dimension $d$ | 128
diffusion convolution step $K$ | 2
maximum trips value $m$ | 50
OD estimation intervals $I$ | 12
observation intervals $o$ | 12 and 72
sequence length $\delta$ | 4

### V-E Results and analysis

#### V-E1 Training

Firstly, as shown in Fig. 4, we set the real OD sequence as our test set; the convergence of the test-set curve indicates that our sampled training data is effective for model training. Moreover, the results on the test set are not as good as on the training and validation sets, since the distributions of the real and sampled data are not perfectly consistent. This is a common problem in deep learning and implies that the gap could be further reduced through more appropriate sampling methods.

Secondly, we tested the impact of different observation window lengths $\delta$ on the inference results. Our experiment in Fig. 5 shows that the results inferred by the NN model are best when $\delta=4$. This means our model does not fully utilize all the observation-interval information within one estimation interval (which would require $\delta=6$), and when the windows overlap ($\delta>6$), the results get worse. It indicates that our current model does not effectively extract all the information introduced by longer observation windows; a more advanced NN model could further improve the inference results.


#### V-E2 Distribution Inference

Firstly, Fig. 6(a) shows the best inference results of our NN model on the global production distribution $\boldsymbol{d}_{p}^{*}$, reflecting the ability of our model to give the global spatio-temporal distribution of the OD sequence. Secondly, we pick the OD matrix at 12-13 o'clock from the global distribution, as shown in Fig. 6(b); the model performs well on spatial distribution inference and clearly reflects the proportions between different OD nodes within the same OD matrix. Finally, we picked the distributions of four OD nodes over the whole estimation period, as shown in Fig. 7: the model also infers the temporal evolution trend of a specific OD node very well. Moreover, in Fig. 7 we demonstrate proportions of different magnitudes (from 1e-2 to 1e-5), which shows that our model has high sensitivity, except at 1e-5, where the proportions are too small and cause the model to infer a proportion of 0 for the node.

#### V-E3 Optimization

Firstly, we compare two traditional optimization methods. These methods use only numerical optimization with two different observation interval settings: one hour and 10 minutes, respectively. The one-hour setting yields 12 observation intervals, since the whole estimation period spans 12 hours, so we refer to this method as Traditional($o$=12); the other is Traditional($o$=72), since the 10-minute setting yields 72 observation intervals. Although the 10-minute setting has the smallest scale and can provide more constraints to alleviate the underdetermination problem, as mentioned in [5], it also introduces more noise into the optimization process, which may lead to a poor optimization result. As seen in Table 3, the final optimization result obtained with the 10-minute setting is worse than with the one-hour setting, which indicates that the noise in the optimization process offsets the benefit of the additional observation intervals. Secondly, our method, named Ours with $\boldsymbol{d}^{*}$, can mine significant information from these small-scale observation data to infer accurate global distributions $\boldsymbol{d}_{p}^{*}$ and $\boldsymbol{d}_{a}^{*}$ and then guide the optimization process to a better result. In addition, we also provide the results of Ours with $\boldsymbol{d}$, which uses the real global distributions $\boldsymbol{d}_{p}$ and $\boldsymbol{d}_{a}$ as the guide and provides the upper bound of our method, as shown in Fig. 8. This indicates that our method has strong extensibility: better data sampling methods or better models could infer more accurate distributions and obtain better optimization results.

OD matrix (o'clock) | $6-7$ | $7-8$ | $8-9$ | $9-10$ | $10-11$ | $11-12$ | $12-13$ | $13-14$ | $14-15$ | $15-16$ | $16-17$ | $17-18$ | Average
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$RMSN$ |  |  |  |  |  |  |  |  |  |  |  |  |
Traditional($o$=12) | 1.77 | 1.88 | 1.76 | 1.81 | 1.96 | 2.02 | 2.00 | 1.90 | 1.82 | 2.11 | 2.17 | 1.60 | 1.90
Ours with $\boldsymbol{d}^{*}$ | 1.68 | 1.92 | 1.27 | 1.21 | 1.32 | 1.55 | 1.60 | 1.46 | 1.43 | 2.04 | 2.30 | 1.57 | 1.61
Traditional($o$=72) | 2.09 | 2.06 | 2.49 | 1.62 | 1.84 | 2.20 | 1.81 | 1.25 | 2.52 | 2.39 | 1.40 | 1.45 | 1.93
Ours with $\boldsymbol{d}$ | 1.33 | 1.51 | 1.01 | 1.51 | 1.43 | 1.41 | 1.41 | 1.35 | 1.15 | 1.44 | 1.51 | 1.18 | 1.35
$\rho$ |  |  |  |  |  |  |  |  |  |  |  |  |
Traditional($o$=12) | 0.8470 | 0.8358 | 0.8149 | 0.7729 | 0.7266 | 0.7502 | 0.7681 | 0.7539 | 0.7573 | 0.7465 | 0.7615 | 0.8583 | 0.7828
Ours with $\boldsymbol{d}^{*}$ | 0.8567 | 0.8167 | 0.9096 | 0.9066 | 0.8851 | 0.8614 | 0.8517 | 0.8585 | 0.8560 | 0.7618 | 0.7182 | 0.8610 | 0.8453
Traditional($o$=72) | 0.7496 | 0.8677 | 0.5921 | 0.8245 | 0.7661 | 0.6735 | 0.8020 | 0.9255 | 0.5535 | 0.6182 | 0.9055 | 0.8999 | 0.7649
Ours with $\boldsymbol{d}$ | 0.9067 | 0.8824 | 0.9435 | 0.8477 | 0.8579 | 0.8764 | 0.8818 | 0.8763 | 0.9064 | 0.8745 | 0.8685 | 0.9188 | 0.8867

Inference | Inferred | Optimization | Traditional($o$=12) | Ours with $\boldsymbol{d}^{*}$ | Traditional($o$=72) | Ours with $\boldsymbol{d}$
---|---|---|---|---|---|---
$KL(\boldsymbol{d}_{p}^{*}||\boldsymbol{d}_{p})$ | 0.0711 | $KL(\hat{\boldsymbol{d}}_{p}||\boldsymbol{d}_{p})$ | 0.0756 | 0.0560 | 0.0966 | 0.0105
$KL(\boldsymbol{d}_{a}^{*}||\boldsymbol{d}_{a})$ | 0.0826 | $KL(\hat{\boldsymbol{d}}_{a}||\boldsymbol{d}_{a})$ | 0.0772 | 0.0591 | 0.0684 | 0.0102

In Fig. 9, we show the estimation results of four OD pairs along the time series, spanning four different orders of magnitude (1e1 to 1e4), to illustrate that our method performs better on OD sequence estimation tasks across magnitudes. Here we only compare Traditional($o$=12) and Ours with $\boldsymbol{d}^{*}$, since Traditional($o$=12) performs best among the numerical methods, and Ours with $\boldsymbol{d}^{*}$ is the only practical way to use the approximate distributions $\boldsymbol{d}^{*}_{p}$ and $\boldsymbol{d}^{*}_{a}$ from our distribution learner. In Fig. 10 we show x-y plots of the estimation results for four different OD matrices.

As shown in Table 4, it is worth noting that the KLD of the best production distribution $\boldsymbol{d}_{p}^{*}$ inferred by our NN model is 0.0711, while the actual result after the final optimization is $KL(\hat{\boldsymbol{d}}_{p}||\boldsymbol{d}_{p})$=0.0560. Likewise, the KLD of the best attraction distribution $\boldsymbol{d}_{a}^{*}$ inferred by our NN model is 0.0826, while the actual result after the final optimization is $KL(\hat{\boldsymbol{d}}_{a}||\boldsymbol{d}_{a})$=0.0591. Both are smaller than those of the approximate distributions used as the guide, since the approximate distributions only steer the optimization process. Moreover, our convergence is also faster than that of the traditional methods. This indicates that using the approximate distributions as a guide helps the optimization converge to a better point, and faster.

## VI CONCLUSION

In this paper, we propose a deep learning method that learns the relationship between traffic counts and OD structural information by mining a small amount of mixed observational traffic data. We then use this structural information to constrain the traditional least-squares numerical optimization method within the bi-level framework. We validate that our method outperforms traditional numerical-only optimization methods on 12 hours of synthetic data from a large-scale city. Moreover, we point out room for future improvement of our method through better sampling methods and deep learning models.

## References

- [1] Y. Li, R. Yu, C. Shahabi, and Y. Liu, "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting," *arXiv preprint arXiv:1707.01926*, 2017.
- [2] F. Altché and A. de La Fortelle, "An LSTM network for highway trajectory prediction," in *2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)*. IEEE, 2017, pp. 353–359.
- [3] D. Lian, Y. Wu, Y. Ge, X. Xie, and E. Chen, "Geography-aware sequential location recommendation," in *Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, 2020, pp. 2009–2019.
- [4] K. N. Behara, A. Bhaskar, and E. Chung, "A novel methodology to assimilate sub-path flows in bi-level OD matrix estimation process," *IEEE Transactions on Intelligent Transportation Systems*, 2020.
- [5] E. Cascetta, D. Inaudi, and G. Marquis, "Dynamic estimators of origin-destination matrices using traffic counts," *Transportation Science*, vol. 27, no. 4, pp. 363–373, 1993.
- [6] E. Bert, "Dynamic urban origin-destination matrix estimation methodology," EPFL, Tech. Rep., 2009.
- [7] H. Spiess, "A gradient approach for the OD matrix adjustment problem," *a A*, vol. 1, p. 2, 1990.
- [8] M. J. Maher, X. Zhang, and D. Van Vliet, "A bi-level programming approach for trip matrix estimation and traffic control problems with stochastic user equilibrium link flows," *Transportation Research Part B: Methodological*, vol. 35, no. 1, pp. 23–40, 2001.
- [9] P. Robillard, "Estimating the OD matrix from observed link volumes," *Transportation Research*, vol. 9, no. 2-3, pp. 123–128, 1975.
- [10] B. Jaume and L. Montero, "An integrated computational framework for the estimation of dynamic OD trip matrices," in *2015 IEEE 18th International Conference on Intelligent Transportation Systems*. IEEE, 2015, pp. 612–619.
- [11] F. Calabrese, G. Di Lorenzo, L. Liu, and C. Ratti, "Estimating origin-destination flows using opportunistically collected mobile phone location data from one million users in Boston metropolitan area," 2011.
- [12] H. Tavana, *Internally Consistent Estimation of Dynamic Network Origin-Destination Flows from Intelligent Transportation Systems Data Using Bi-level Optimization*. The University of Texas at Austin, 2001.
- [13] Z. Cheng, M. Trepanier, and L. Sun, "Real-time forecasting of metro origin-destination matrices with high-order weighted dynamic mode decomposition," *Transportation Science*, 2022.
- [14] D. E. Low, "New approach to transportation systems modeling," *Traffic Quarterly*, vol. 26, no. 3, 1972.
- [15] H. Spiess, "A maximum likelihood model for estimating origin-destination matrices," *Transportation Research Part B: Methodological*, vol. 21, no. 5, pp. 395–412, 1987.
- [16] H. J. Van Zuylen and L. G. Willumsen, "The most likely trip matrix estimated from traffic counts," *Transportation Research Part B: Methodological*, vol. 14, no. 3, pp. 281–293, 1980.
- [17] M. J. Maher, "Inferences on trip matrices from observations on link volumes: a Bayesian statistical approach," *Transportation Research Part B: Methodological*, vol. 17, no. 6, pp. 435–447, 1983.
- [18] E. Cascetta, "Estimation of trip matrices from traffic counts and survey data: a generalized least squares estimator," *Transportation Research Part B: Methodological*, vol. 18, no. 4-5, pp. 289–299, 1984.
- [19] J. Barceló, L. Montero, L. Marqués, and C. Carmona, "Travel time forecasting and dynamic origin-destination estimation for freeways based on Bluetooth traffic monitoring," *Transportation Research Record*, vol. 2175, no. 1, pp. 19–27, 2010.
- [20] H. Alibabai and H. S. Mahmassani, "Dynamic origin-destination demand estimation using turning movement counts," *Transportation Research Record*, vol. 2085, no. 1, pp. 39–48, 2008.
- [21] H. Spiess and D. Suter, *Modelling the Daily Traffic Flows on an Hourly Basis*, 1990.
- [22] J. T. Lundgren and A. Peterson, "A heuristic for the bilevel origin–destination-matrix estimation problem," *Transportation Research Part B: Methodological*, vol. 42, no. 4, pp. 339–354, 2008.
- [23] C. Antoniou, B. Ciuffo, L. Montero, J. Casas, J. Barceló, E. Cipriani, T. Djukic, V. Marzano, M. Nigro, M. Bullejos *et al.*, "A framework for the benchmarking of OD estimation and prediction algorithms," in *93rd Transportation Research Board Annual Meeting*, 2014.
- [24] M. Bullejos, J. Barceló Bugeda, and L. Montero Mercadé, "A DUE based bilevel optimization approach for the estimation of time sliced OD matrices," in *Proceedings of the International Symposia of Transport Simulation (ISTS) and the International Workshop on Traffic Data Collection and its Standardisation (IWTDCS), ISTS'14 and IWTCDS'14*, 2014, pp. 1–19.
- [25] G. Bishop, G. Welch *et al.*, "An introduction to the Kalman filter," *Proc of SIGGRAPH, Course*, vol. 8, no. 27599-23175, p. 41, 2001.
- [26] N. J. van der Zipp and R. Hamerslag, "Improved Kalman filtering approach for estimating origin-destination matrices for freeway corridors," *Transportation Research Record*, no. 1443, 1994.
- [27] M. Bierlaire and F. Crittin, "An efficient algorithm for real-time estimation and prediction of dynamic OD tables," *Operations Research*, vol. 52, no. 1, pp. 116–127, 2004.
- [28] J. Ren and Q. Xie, "Efficient OD trip matrix prediction based on tensor decomposition," in *2017 18th IEEE International Conference on Mobile Data Management (MDM)*. IEEE, 2017, pp. 180–185.
- [29] J. Liu, F. Zheng, H. J. van Zuylen, and J. Li, "A dynamic OD prediction approach for urban networks based on automatic number plate recognition data," *Transportation Research Procedia*, vol. 47, pp. 601–608, 2020.
- [30] P. J. Schmid, "Dynamic mode decomposition of numerical and experimental data," *Journal of Fluid Mechanics*, vol. 656, pp. 5–28, 2010.
- [31] X. Xiong, K. Ozbay, L. Jin, and C. Feng, "Dynamic origin–destination matrix prediction with line graph neural networks and Kalman filter," *Transportation Research Record*, vol. 2674, no. 8, pp. 491–503, 2020.
- [32] K.-F. Chu, A. Y. Lam, and V. O. Li, "Deep multi-scale convolutional LSTM network for travel demand and origin-destination predictions," *IEEE Transactions on Intelligent Transportation Systems*, vol. 21, no. 8, pp. 3219–3232, 2019.
- [33] Z. Pan, Y. Liang, W. Wang, Y. Yu, Y. Zheng, and J. Zhang, "Urban traffic prediction from spatio-temporal data using deep meta learning," in *Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining*, 2019, pp. 1720–1730.
- [34] J. Barceló, L. Montero, M. Bullejos, and M. Linares, "A practical proposal for using origin-destination matrices in the analysis, modeling and simulation for traffic management," in *93rd TRB Annual Meeting Compendium of Papers*, 2014, pp. 14–3793.
- [35] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y.-P. Flötteröd, R. Hilbrich, L. Lücken, J. Rummel, P. Wagner, and E. Wießner, "Microscopic traffic simulation using SUMO," in *2018 21st International Conference on Intelligent Transportation Systems (ITSC)*. IEEE, 2018, pp. 2575–2582.
- [36] A. Pareja, G. Domeniconi, J. Chen, T. Ma, T. Suzumura, H. Kanezashi, T. Kaler, T. Schardl, and C. Leiserson, "EvolveGCN: Evolving graph convolutional networks for dynamic graphs," in *Proceedings of the AAAI Conference on Artificial Intelligence*, vol. 34, no. 04, 2020, pp. 5363–5370.
- [37] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in *Advances in Neural Information Processing Systems*, 2017, pp. 5998–6008.
- [38] C. M. Bishop and N. M. Nasrabadi, *Pattern Recognition and Machine Learning*. Springer, 2006, vol. 4, no. 4.
- [39] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard *et al.*, "TensorFlow: a system for large-scale machine learning," in *12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)*, 2016, pp. 265–283.
- [40] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg *et al.*, "Scikit-learn: Machine learning in Python," *The Journal of Machine Learning Research*, vol. 12, pp. 2825–2830, 2011.
- [41] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright *et al.*, "SciPy 1.0: fundamental algorithms for scientific computing in Python," *Nature Methods*, vol. 17, no. 3, pp. 261–272, 2020.
- [42] S. Uppoor, O. Trullols-Cruces, M. Fiore, and J. M. Barcelo-Ordinas, "Generation and analysis of a large-scale urban vehicular mobility dataset," *IEEE Transactions on Mobile Computing*, vol. 13, no. 5, pp. 1061–1075, 2013.
- [43] D. Arthur and S. Vassilvitskii, "k-means++: The advantages of careful seeding," Stanford, Tech. Rep., 2006.
- [44] C. Antoniou, M. Ben-Akiva, and H. N. Koutsopoulos, "Incorporating automated vehicle identification data into origin-destination estimation," *Transportation Research Record*, vol. 1882, no. 1, pp. 37–44, 2004.
- [45] T. Djukic, S. Hoogendoorn, and H. Van Lint, "Reliability assessment of dynamic OD estimation methods based on structural similarity index," Tech. Rep., 2013.