In this paper, the minimum fill-in problem, which arises in the application of sparse matrix methods to sparse linear systems, is discussed from a graph-theoretic viewpoint, and the author gives some results that can be directly applied to the design of a so-called optimal elimination ordering algorithm, i.e., an ordering that yields the minimum fill-in (the number of zero entries of the coefficient matrix that become non-zero during the elimination process). Throughout this investigation, only graphs are treated instead of the coefficient matrices of the linear systems, and the elimination process for a matrix is identified with vertex elimination in the corresponding graph. The results of the theoretical investigation are summarized as follows:

1. Optimal elimination of each subgraph, obtained by appropriately subdividing the whole graph, leads to the global optimum.
2. In each subgraph there are only two kinds of eliminations.

Furthermore, some numerical experiments reveal the characteristics of the vertex subsets that separate a subgraph from the rest of the graph.
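To make the matrix-to-graph correspondence concrete, the following is a minimal sketch (not the author's algorithm) of vertex elimination: removing a vertex makes its remaining neighbours pairwise adjacent, and each edge added this way corresponds to one fill-in entry of the matrix. The function name and graph representation are illustrative assumptions.

```python
from itertools import combinations

def fill_in(adj, order):
    """Count fill edges produced by eliminating vertices in the given order.

    adj: dict mapping each vertex to the set of its neighbours (undirected graph).
    order: an elimination ordering, i.e. a permutation of the vertices.
    """
    # Work on a copy so the caller's graph is left untouched.
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    fill = 0
    for v in order:
        nbrs = g.pop(v)
        # Eliminating v makes its remaining neighbours pairwise adjacent;
        # every edge that must be added is a fill edge (a zero turned non-zero).
        for a, b in combinations(nbrs, 2):
            if b not in g[a]:
                g[a].add(b)
                g[b].add(a)
                fill += 1
        # Remove v from the adjacency sets of the remaining vertices.
        for u in nbrs:
            g[u].discard(v)
    return fill

# Example: on a 4-cycle a-b-c-d, any elimination ordering produces
# exactly one fill edge (the cycle's missing chord).
cycle = {'a': {'b', 'd'}, 'b': {'a', 'c'},
         'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(fill_in(cycle, ['a', 'b', 'c', 'd']))  # → 1
```

The dependence of fill-in on the ordering is easy to see on a star graph: eliminating the centre first joins all leaves pairwise, while eliminating leaves first produces no fill at all, which is why the choice of elimination ordering matters.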