Since random processes are collections of random variables, you already possess the theoretical knowledge necessary to analyze them. We now turn to methods and tools that are useful in studying random processes. Recall that expectation and variance were among the important statistics that we considered for random variables; here, we extend those concepts to random processes.
Mean Function of a Random Process
For a random process {X(t),t∈J}, the mean function μX(t):J→R is defined as
μX(t)=E[X(t)]
The above definition is valid for both continuous-time and discrete-time random processes. In particular, if {Xn,n∈J} is a discrete-time random process, then
μX(n)=E[Xn], for all n∈J.
Some books denote the mean function by mX(t) or MX(t).
Here, we use μX(t) to avoid confusion with moment generating functions.
The mean function gives us an idea about how the random process behaves on average as time evolves.
For example, if X(t) is the temperature in a certain city, the mean function μX(t) might look like the function shown in Figure 10.3. As we see, the expected value of X(t) is lowest in the winter and highest in the summer.
Figure 10.3 - The mean function, μX(t), for the temperature in a certain city.
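To see how a mean function can be estimated in practice, the following sketch simulates many sample paths of a hypothetical seasonal process (a deterministic yearly trend plus Gaussian noise; the model and all parameters are illustrative assumptions, not the actual process behind Figure 10.3) and averages across the paths at each time point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical seasonal model (illustrative assumption, not the process
# of Figure 10.3): a deterministic yearly trend plus Gaussian noise.
t = np.arange(365)                                  # one year, daily
trend = 15 - 10 * np.cos(2 * np.pi * t / 365)       # E[X(t)] by design
paths = trend + rng.normal(0, 3, size=(10_000, t.size))

# Estimate the mean function by averaging across sample paths at each t.
mu_hat = paths.mean(axis=0)
print(np.abs(mu_hat - trend).max())                 # small error, by the LLN
```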
Example
Find the mean functions for the random processes given in Examples 10.1 and 10.2.
For the random process given in Example 10.1,
\begin{align*}
\mu_X(n) &= E[X_n] \\
&= 1000\, E[Y^n] && \text{(where } Y = 1+R \sim \textrm{Uniform}(1.04, 1.05)\text{)} \\
&= 1000 \int_{1.04}^{1.05} 100\, y^{n}\, dy && \text{(by LOTUS)} \\
&= \frac{10^5}{n+1} \Big[\, y^{n+1} \,\Big]_{1.04}^{1.05} \\
&= \frac{10^5}{n+1} \Big[ (1.05)^{n+1} - (1.04)^{n+1} \Big], \quad \text{for all } n \in \{0,1,2,\cdots\}.
\end{align*}
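As a sanity check, a short simulation can confirm this closed form, assuming (consistent with the derivation above) that Xn = 1000·Yⁿ with Y = 1 + R ∼ Uniform(1.04, 1.05):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.uniform(1.04, 1.05, size=1_000_000)         # samples of Y = 1 + R

for n in [0, 1, 5, 10]:
    mc = 1000 * np.mean(y**n)                       # Monte Carlo E[X_n]
    exact = 1e5 / (n + 1) * (1.05**(n + 1) - 1.04**(n + 1))
    print(n, mc, exact)                             # the two agree closely
```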
For the random process given in Example 10.2,
\begin{align*}
\mu_X(t) &= E[X(t)] \\
&= E[A+Bt] \\
&= E[A] + E[B]\,t \\
&= 1 + t, \quad \text{for all } t \in [0,\infty).
\end{align*}
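The same kind of check works here. The distributions of A and B are not restated in this excerpt; the sketch below assumes A and B are independent with E[A] = E[B] = 1 (for instance, N(1,1)), which is all the derivation above uses.

```python
import numpy as np

rng = np.random.default_rng(2)
size = 1_000_000
# Illustrative assumption: A and B independent with mean 1, e.g. N(1, 1);
# the derivation above only uses E[A] = E[B] = 1.
A = rng.normal(1, 1, size=size)
B = rng.normal(1, 1, size=size)

for t in [0.0, 0.5, 2.0]:
    print(t, np.mean(A + B * t), 1 + t)             # Monte Carlo vs. 1 + t
```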
Autocorrelation and Autocovariance
The mean function μX(t) gives us the expected value of X(t) at time t, but it does not tell us how X(t1) and X(t2) are related. To gain insight into the relation between X(t1) and X(t2), we define the correlation and covariance functions.
For a random process {X(t),t∈J}, the autocorrelation function or, simply, the correlation function, RX(t1,t2), is defined by
RX(t1,t2)=E[X(t1)X(t2)], for t1,t2∈J.
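In simulation, RX(t1,t2) is naturally estimated by averaging the product X(t1)X(t2) over many independent sample paths. A minimal sketch of such an estimator (the function name and the white-noise test process are illustrative, not from the text):

```python
import numpy as np

def autocorrelation(paths, i1, i2):
    """Estimate R_X(t1, t2) = E[X(t1) X(t2)] from sample paths.

    paths has shape (num_paths, num_times); columns i1 and i2
    correspond to the time points t1 and t2.
    """
    return np.mean(paths[:, i1] * paths[:, i2])

# Usage on white noise, where R_X(t1, t2) = 0 for t1 != t2:
rng = np.random.default_rng(3)
noise = rng.normal(size=(100_000, 10))
print(autocorrelation(noise, 2, 7))                 # close to 0
print(autocorrelation(noise, 3, 3))                 # close to E[X(t)^2] = 1
```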
For a random process {X(t),t∈J}, the autocovariance function or, simply, the covariance function, CX(t1,t2), is defined by
CX(t1,t2)=Cov(X(t1),X(t2))=RX(t1,t2)−μX(t1)μX(t2).
If t1≠t2, then the covariance function CX(t1,t2) gives us some information about how X(t1) and X(t2) are statistically related. In particular, note that
CX(t1,t2)=E[(X(t1)−E[X(t1)])(X(t2)−E[X(t2)])].
Intuitively, CX(t1,t2) shows how X(t1) and X(t2) move relative to each other.
If large values of X(t1) tend to imply large values of X(t2), then (X(t1)−E[X(t1)])(X(t2)−E[X(t2)]) is positive on average. In this case, CX(t1,t2) is positive, and we say X(t1) and X(t2) are positively correlated. On the other hand, if large values of X(t1) imply small values of X(t2), then (X(t1)−E[X(t1)])(X(t2)−E[X(t2)]) is negative on average, and we say X(t1) and X(t2) are negatively correlated.
If CX(t1,t2)=0, then X(t1) and X(t2) are uncorrelated.
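As a quick numerical illustration of these cases, the toy variables below (a purely illustrative construction, not from the examples in this chapter) are built so that one moves with a common component z and the other against it; the estimated covariances come out positive and negative, respectively.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(size=1_000_000)
x1 = z                                              # plays the role of X(t1)
x2_pos = z + rng.normal(size=z.size)                # moves with X(t1)
x2_neg = -z + rng.normal(size=z.size)               # moves against X(t1)

print(np.cov(x1, x2_pos)[0, 1])                     # > 0: positively correlated
print(np.cov(x1, x2_neg)[0, 1])                     # < 0: negatively correlated
```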
Example
Find the correlation functions and covariance functions for the random processes given in Examples 10.1 and 10.2.
For the random process given in Example 10.1,
\begin{align*}
R_X(m,n) &= E[X_m X_n] \\
&= 10^6\, E[Y^m Y^n] = 10^6\, E[Y^{m+n}] && \text{(where } Y = 1+R \sim \textrm{Uniform}(1.04, 1.05)\text{)} \\
&= 10^6 \int_{1.04}^{1.05} 100\, y^{m+n}\, dy && \text{(by LOTUS)} \\
&= \frac{10^8}{m+n+1} \Big[\, y^{m+n+1} \,\Big]_{1.04}^{1.05} \\
&= \frac{10^8}{m+n+1} \Big[ (1.05)^{m+n+1} - (1.04)^{m+n+1} \Big], \quad \text{for all } m,n \in \{0,1,2,\cdots\}.
\end{align*}
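The same Monte Carlo check as before applies, under the same assumption that Xn = 1000·Yⁿ:

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.uniform(1.04, 1.05, size=1_000_000)         # samples of Y = 1 + R

m, n = 3, 7
mc = 1e6 * np.mean(y**(m + n))                      # Monte Carlo E[X_m X_n]
exact = 1e8 / (m + n + 1) * (1.05**(m + n + 1) - 1.04**(m + n + 1))
print(mc, exact)                                    # the two agree closely
```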
For the random process given in Example 10.2,
\begin{align*}
R_X(t_1,t_2) &= E[X(t_1)X(t_2)] \\
&= E[(A+Bt_1)(A+Bt_2)] \\
&= E[A^2] + E[AB](t_1+t_2) + E[B^2]\,t_1 t_2 \\
&= 2 + E[A]E[B](t_1+t_2) + 2\,t_1 t_2 && \text{(since } A \text{ and } B \text{ are independent)} \\
&= 2 + t_1 + t_2 + 2\,t_1 t_2, \quad \text{for all } t_1,t_2 \in [0,\infty).
\end{align*}
Finally, to find the covariance function for X(t), we can write
\begin{align*}
C_X(t_1,t_2) &= R_X(t_1,t_2) - E[X(t_1)]E[X(t_2)] \\
&= 2 + t_1 + t_2 + 2\,t_1 t_2 - (1+t_1)(1+t_2) \\
&= 1 + t_1 t_2, \quad \text{for all } t_1,t_2 \in [0,\infty).
\end{align*}
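As a final check, simulating A and B (again under the illustrative assumption that they are independent N(1,1), consistent with the moments used above) reproduces CX(t1,t2) = 1 + t1t2:

```python
import numpy as np

rng = np.random.default_rng(6)
size = 1_000_000
# Same illustrative assumption as before: A, B independent N(1, 1).
A = rng.normal(1, 1, size=size)
B = rng.normal(1, 1, size=size)

t1, t2 = 0.5, 2.0
x1, x2 = A + B * t1, A + B * t2
c_mc = np.mean(x1 * x2) - np.mean(x1) * np.mean(x2)  # R_X - mu_X(t1) mu_X(t2)
print(c_mc, 1 + t1 * t2)                             # matches 1 + t1*t2
```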