What makes a matrix stochastic

A square matrix is stochastic when its entries are nonnegative and each of its columns sums to 1; it is a positive stochastic matrix when all of its entries are strictly positive. Such matrices model systems that redistribute a fixed total among several states, as in the examples below.
A steady state vector of a stochastic matrix A is an eigenvector w with eigenvalue 1 whose entries are positive and sum to 1. The Perron–Frobenius theorem describes the long-term behavior of a difference equation represented by a stochastic matrix; its proof is beyond the scope of this text. Let A be a positive stochastic matrix. Then A admits a unique steady state vector w, which spans the 1-eigenspace. Moreover, for any vector v_0 with entries summing to some number c, the iterates v_t = A v_{t-1} approach cw as t grows.
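In symbols, for a positive stochastic matrix A the theorem says:

```latex
% Perron--Frobenius for a positive stochastic matrix A:
% w is the unique steady state, and every trajectory converges to a multiple of it.
Aw = w, \qquad w_i > 0, \qquad \sum_i w_i = 1,
\qquad \lim_{t\to\infty} A^t v_0 = c\,w \quad\text{where } c = \sum_i (v_0)_i .
```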

One should think of a steady state vector w as a vector of percentages: no matter the starting distribution of movies, the long-term distribution approaches the steady state vector. The sum c of the entries of v_0 is the total number of things in the system being modeled. The total number does not change, so the long-term state of the system must approach cw: it is a multiple of w because it is contained in the 1-eigenspace, and the entries of cw sum to c. Here is how to compute the steady-state vector of A: find any eigenvector v with eigenvalue 1, by solving (A - I)v = 0, then divide v by the sum of its entries.
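The recipe can be sketched in a few lines of NumPy. The matrix below is a hypothetical stand-in (the book's actual matrix is not reproduced in this text); any positive stochastic matrix works the same way.

```python
import numpy as np

# Hypothetical positive stochastic matrix: columns are nonnegative and sum to 1.
A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

# Recipe: take an eigenvector for eigenvalue 1, then divide by the sum of its entries.
eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1))].real  # eigenvector for eigenvalue 1
w = v / v.sum()                                      # normalize entries to sum to 1

print(w)           # steady-state vector: positive entries summing to 1
print(A @ w - w)   # approximately zero, since Aw = w
```

Dividing by `v.sum()` also fixes the sign: if the solver returns the eigenvector with all negative entries, the quotient is still positive.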

The above recipe is suitable for calculations by hand, but it does not take advantage of the fact that A is a stochastic matrix. In practice, it is generally faster to approximate the steady-state vector by computer: start with any vector whose entries are positive and sum to 1, and repeatedly multiply it by A until the iterates stop changing. Continuing with the Red Box example, we can illustrate the Perron–Frobenius theorem explicitly for the matrix A. Notice that the eigenvalue 1 is strictly greater in absolute value than the other eigenvalues, and that it has algebraic (hence geometric) multiplicity 1.

The eigenvector u_1 necessarily has positive entries; dividing it by the sum of its entries gives the steady-state vector. Iterating multiplication by A in this way shows that A^t x approaches a_1 u_1, which is an eigenvector with eigenvalue 1, as guaranteed by the Perron–Frobenius theorem.
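The computer recipe above amounts to power iteration. A minimal sketch, again using a hypothetical stochastic matrix in place of the one from the text:

```python
import numpy as np

# Hypothetical positive stochastic matrix (stand-in for the A of the text).
A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

# Start from any vector with entries summing to 1 and repeatedly multiply by A.
x = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    x = A @ x

print(x)  # approximately the steady-state vector w
```

Because the other eigenvalues are strictly smaller than 1 in absolute value, their contributions shrink geometrically, so a few dozen iterations already agree with w to machine precision.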

What do the above calculations say about the number of copies of Prognosis Negative in the Atlanta Red Box kiosks? Suppose that the kiosks start with 100 copies of the movie: 30 copies at kiosk 1, 50 copies at kiosk 2, and 20 copies at kiosk 3. Of course it does not make sense to have a fractional number of movies; the decimals are included here to illustrate the convergence.
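The kiosk trajectory can be simulated directly. The matrix below is a hypothetical stand-in for the book's kiosk matrix; the point is that the total count is preserved at every step while the distribution converges to 100w.

```python
import numpy as np

# Hypothetical stand-in for the book's kiosk matrix A.
A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

v = np.array([30.0, 50.0, 20.0])  # 100 movies: 30, 50, 20 per kiosk
for t in range(5):
    print(t, v)  # fractional counts along the way illustrate the convergence
    v = A @ v

# Each column of A sums to 1, so the total number of movies stays 100 forever,
# and the iterates approach 100 times the steady-state vector w.
```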

The steady-state vector says how the movies will eventually be distributed among the kiosks, as percentages; moreover, this distribution is independent of the beginning distribution of movies in the kiosks. Now we turn to visualizing the dynamics of multiplication by A. If A = CDC^{-1} is a diagonalization, then A does the same thing as the diagonal matrix D, but with respect to the coordinate system defined by the columns u_1, u_2, u_3 of C. This is the geometric content of the Perron–Frobenius theorem. Internet searching in the 1990s was very inefficient.
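The diagonalization picture is easy to verify numerically. A minimal check, using the same hypothetical stand-in matrix as above:

```python
import numpy as np

# Hypothetical stand-in matrix; the point is the diagonalization A = C D C^{-1}.
A = np.array([[0.3, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.4, 0.2, 0.2]])

eigvals, C = np.linalg.eig(A)  # columns of C are the eigenvectors u_1, u_2, u_3
D = np.diag(eigvals)

# A acts like D in the coordinate system given by the columns of C:
print(np.allclose(A, C @ D @ np.linalg.inv(C)))

# The eigenvalue 1 dominates the others in absolute value:
print(np.abs(eigvals))
```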

Yahoo or AltaVista would scan pages for your search text, and simply list the results with the most occurrences of those words. Larry Page and Sergey Brin invented a way to rank pages by importance. They founded Google based on their algorithm.

Here is roughly how it works. Each web page has an associated importance, or rank: a positive number determined by the following rule: if page P links to n pages, each of those pages receives 1/n of P's rank, and the rank of any page is the sum of the contributions it receives from the pages linking to it. The vector of ranks is then a steady-state vector of a stochastic matrix, which is why the 1-eigenspace of a stochastic matrix is so important. For stochastic matrices in particular, we further require that the steady-state vector be normalized so that its entries are non-negative and sum to 1.
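The rank rule can be sketched on a tiny invented web. Everything below (the three-page graph and its link matrix) is a made-up example, not data from the text:

```python
import numpy as np

# A tiny hypothetical web of three pages. If page j links to n pages, each
# linked page receives 1/n of j's rank, so column j has 1/n in those rows.
# Links here: page 0 -> pages 1 and 2; page 1 -> page 2; page 2 -> page 0.
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

# The rank vector is a steady state of this stochastic matrix;
# approximate it by repeated multiplication, as in the recipe above.
r = np.ones(3) / 3
for _ in range(100):
    r = L @ r

print(r)  # ranks: positive numbers summing to 1
```

Page 2, which collects links from both other pages, ends up with the same rank as page 0, which page 2 links to exclusively; rank flows along links rather than just counting them.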

In fact, the set of all nonsingular stochastic matrices over a field forms a group under matrix multiplication. This group is called the stochastic group.
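Closure under multiplication is easy to check by hand or by machine: if the columns of A and B each sum to 1, so do the columns of AB. A small numerical illustration with two made-up matrices:

```python
import numpy as np

# Two hypothetical stochastic matrices (columns sum to 1).
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
B = np.array([[0.5, 0.3],
              [0.5, 0.7]])

P = A @ B
print(P.sum(axis=0))       # each column of the product still sums to 1
print(np.linalg.det(P))    # nonzero, so P is invertible within the group
```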

The following tables give the number of distinct stochastic matrices and distinct nonsingular stochastic matrices over for small.