Hardwood Quality

Graph-Based Morphology as Used in SilviScan

SilviScan is a world-first instrument for the rapid and non-destructive assessment of wood quality in trees. The technology measures the key wood and fibre properties used for predicting paper and sawn timber properties. It can be used in breeding trees for special industrial uses, and for the speedy assessment of forest resources. SilviScan won the 2001 CSIRO Chairman's Gold Medal.

Introduction

One of the projects in our Quantitative Imaging Group is the development of a fully-automated segmentation procedure for wood micrographs. These images are not easy to segment because the objects of interest are easily confused with other image features when viewed on a local scale. For this reason, classical segmentation procedures working at the pixel level are not very successful. However, a graph-based technique can be used to explore spatial relationships between image features on a global scale within the image, and has therefore proved more successful.

A typical wood cross-section contains four types of feature: large elliptical objects, often blocked with bright cell-wall debris or membranes (V); long, linear objects (R); small, circular objects (F); and small circular objects similar to F, only brighter and with less well-defined walls (P). We refer to these objects by letter because this project is commercial-in-confidence. The segmentation procedure must segment all four object types with no human intervention. Shown below is an example of the type of image that we wish to segment (only a portion of the full 1024 x 1024 image is shown), with examples of the four objects marked.

img1

Figure 1:  Subset of an example of a wood micrograph with examples of image objects F, P, V and R as marked.

Our graph-based approach is to segment the walls that separate the F objects and then to construct a graph where each node corresponds to an F object. The graph algorithms were developed within the image analysis package Z-IMAGE. In the context of image analysis, a graph is a compact representation of an image that contains only the information necessary to carry out a particular imaging task. It is defined as a set of nodes connected by a set of edges. Typically the nodes correspond to an image feature that is repeated throughout the image (in our case, the F objects), while the edges determine the neighbourhood of each node.

The nodes of the graph are classified using a tree-based procedure called CART (classification and regression tree) to find those nodes that represent the genuine type F objects in the image. This involves assigning various attributes to the nodes in the graph and then using CART to discriminate the true F objects from other candidate objects via the derived classification tree. When an image is represented as a graph, it is often easier to extract features that span large regions of the image. For example, the graph can be used to determine the alignment of the F and R objects in the image, and this information can then be used by CART in the segmentation procedure. Moreover, the processing time for a graph is significantly less than that for the original image.

Once the F objects have been classified by CART, we then have an image that is divided into two distinct regions: that filled by the F objects and that filled by the remaining R, V and P objects. The search for R, V and P objects can be restricted to this latter region, thus reducing the possibility of segmentation errors. The segmentation of the R objects is accomplished using a distance transform followed by reconstruction. However, the segmentation of the V and P objects is much more difficult because the V objects cannot always be clearly distinguished from the P objects. We have approached the problem using a graph-based method similar to that for the F objects and also an alternative pixel-based method.

To begin with, it is necessary to clean the input image to remove noise introduced during data acquisition. The input image is first processed with a vertical rank order statistic filter of length 3 to remove horizontal striping. The rank order statistic filter reduces the striping while preserving the edges between the F objects. A convolution with a two-dimensional binomial kernel is then used to further smooth noise in the image.
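
A minimal sketch of this cleaning step using SciPy (the function and variable names are ours, and taking the median as the rank statistic is our assumption; the original filter may use a different rank):

```python
import numpy as np
from scipy import ndimage

def clean(image):
    # Vertical rank-order filter of length 3 (here the median of each
    # 3x1 column window) suppresses horizontal striping while keeping
    # the vertical edges between F objects intact.
    destriped = ndimage.rank_filter(image, rank=1, footprint=np.ones((3, 1)))
    # Separable 2-D binomial kernel [1, 2, 1]/4 for the final smoothing.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    smoothed = ndimage.convolve1d(destriped.astype(float), k, axis=0)
    smoothed = ndimage.convolve1d(smoothed, k, axis=1)
    return smoothed

noisy = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
cleaned = clean(noisy)
```

The binomial kernel is applied separably (rows then columns), which is equivalent to convolving with the full two-dimensional kernel but cheaper.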

Segmentation of the Walls

We then attempt a segmentation of the walls surrounding the F objects. As these walls are dark, a pixel belonging to a wall can be characterised by the fact that its value is lower than those of the adjacent pixels on either side of it. We consider adjacent pixels along the four principal directions of the digital image: horizontal, vertical and the two diagonals. The result for our example image is shown below.

img2

Figure 2: Grey-scale image of object walls.

As is apparent from this result, the procedure not only finds the walls surrounding the F objects, but all other wall-like features as well, including the walls surrounding the P objects. To remove this unwanted information, we first construct a binary image of the walls by thresholding the grey-level image at a level of 10. This level is less than the lowest level of the walls between the F objects, but greater than the level of the walls surrounding the P objects and other unwanted information. The resultant binary image is shown in Fig. 3 below.

img3

Figure 3: Binary image of walls.
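
The wall detection and thresholding just described can be sketched together in NumPy. The response formula below (how far a pixel dips below both of its neighbours in some direction) is one plausible reading of the test above, not the confidential implementation, and wrap-around at the image borders is ignored for brevity:

```python
import numpy as np

def wall_response(img):
    """Grey-scale wall image: for each of the four directions, a pixel
    scores min(neighbour_a, neighbour_b) - centre, i.e. how far it dips
    below BOTH adjacent pixels; the response is the maximum score over
    the four directions (0 where there is no dip)."""
    img = img.astype(float)
    out = np.zeros_like(img)
    # Offsets for horizontal, vertical and the two diagonal directions.
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
        a = np.roll(img, (dy, dx), axis=(0, 1))     # neighbour on one side
        b = np.roll(img, (-dy, -dx), axis=(0, 1))   # neighbour on the other
        out = np.maximum(out, np.minimum(a, b) - img)
    return np.clip(out, 0, None)

demo = np.full((5, 5), 50.0)
demo[:, 2] = 20.0                  # a dark vertical "wall"
resp = wall_response(demo)
walls = resp > 10                  # the binary wall image of Fig. 3
```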

The walls surrounding the F objects are all fully connected, but those surrounding the P objects have been broken up into disconnected pieces. We can therefore apply a skeletonisation procedure to prune the fragments and barbs from the image, thus yielding an image consisting of the walls surrounding the F objects. The result is shown in Fig. 4 below.

img4

Figure 4: Homotopically thinned image of walls.

Construction of the Graph

The homotopic thinning produces a tessellated image in which each tessellate represents either an F object or some other type of object. It is now necessary to classify the tessellates in order to find those that represent the true object type F in the image. We begin by constructing a graph from the tessellated image: each tessellate is assigned a node, and edges are placed between nodes that represent neighbouring tessellates. The graph constructed from the tessellated image in Fig. 4 is shown in Fig. 5. In this example, there are 544 nodes in the graph (in a full 1024 x 1024 image, there would typically be around 10,000 nodes).

img5

Figure 5: Graph constructed from thinned image.
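
A minimal sketch of the graph construction, assuming the thinned walls are available as a binary array. The adjacency test used here (two regions meeting across a single wall pixel) is our simplification of the neighbourhood definition:

```python
import numpy as np
from itertools import combinations
from scipy import ndimage

def build_graph(walls):
    """Nodes = connected regions (tessellates) of the non-wall area;
    an edge joins two nodes whenever their regions meet across a
    single wall pixel."""
    labels, n = ndimage.label(~walls)
    edges = set()
    ys, xs = np.nonzero(walls)
    for y, x in zip(ys, xs):
        patch = labels[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        around = np.unique(patch[patch > 0]).tolist()   # labels touching this wall pixel
        edges.update(combinations(around, 2))
    return labels, n, edges

# Two cells separated by a vertical one-pixel wall.
walls = np.zeros((5, 7), bool)
walls[:, 3] = True
labels, n_nodes, edges = build_graph(walls)
```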

The tessellates in the thinned image give an accurate representation of the shape and area of the segmented objects, so attributes such as tessellate shape and area can easily be assigned to the nodes in the graph. Other node attributes might include object brightness and variance, number of neighbours, or average wall strength. Attributes can also be assigned to the edges in the graph, for example the variation in grey-level between neighbouring objects, the wall strength between objects, or the distance between neighbouring objects. The analysis of some of the simpler node attributes could perhaps be accomplished using classical pixel-based image analysis techniques. However, it would be very difficult to find a pixel-based counterpart for the attributes associated with the edges between nodes, such as difference in grey-level, wall strength or distance.
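
For illustration, per-node attributes such as area, brightness and variance can be read straight off the labelled image with SciPy (a toy two-tessellate example; the attribute list is taken from the text):

```python
import numpy as np
from scipy import ndimage

# Toy grey-scale image and an aligned label image with two tessellates.
rng = np.random.default_rng(1)
image = rng.integers(0, 256, (6, 6)).astype(float)
labels = np.zeros((6, 6), int)
labels[:, :3] = 1
labels[:, 3:] = 2

index = np.arange(1, labels.max() + 1)
# One attribute vector per node, indexed by label.
area = ndimage.sum_labels(np.ones_like(labels), labels, index)
brightness = ndimage.mean(image, labels, index)
variance = ndimage.variance(image, labels, index)
```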


Determining the Alignment of the R Objects

The R and F objects are aligned at some fixed angle in the image and the value of this angle is required in the procedure for classifying the F objects. The alignment can be found with the aid of the graph. As the edges in the graph connect F objects in the image, the direction of alignment can be determined by examining the angle of the edges in the graph. Fig. 6 shows a smoothed histogram of the angles of the edges in the graph in Fig. 5 (the angles range from -89 through to 90 degrees; in this case, the angle of alignment is 88 degrees). It is apparent that even using a graph derived from only a subset of the full image, the maximum peak in the histogram provides a strong indication of the angle of alignment of the R and F objects in the image.

img6

Figure 6: Histogram of edge angles, where the peak indicates the angle of alignment of the F and R objects.
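
The angle estimate can be sketched as follows, assuming each node's centroid is known. The one-degree bin width and the mapping into (-90, 90] are our choices:

```python
import numpy as np

def alignment_angle(centroids, edges):
    """Estimate the alignment angle (degrees) as the peak of the
    histogram of edge angles, each edge's angle taken from the line
    joining its two node centroids."""
    angles = []
    for a, b in edges:
        dy, dx = np.subtract(centroids[b], centroids[a])
        angles.append(np.degrees(np.arctan2(dy, dx)) % 180.0)
    hist, bin_edges = np.histogram(angles, bins=180, range=(0.0, 180.0))
    peak = bin_edges[np.argmax(hist)] + 0.5        # bin centre of the peak
    return peak if peak <= 90 else peak - 180      # map into (-90, 90]

# Nodes stacked almost vertically: alignment close to 90 degrees.
centroids = {1: (0.0, 0.0), 2: (10.0, 0.3), 3: (20.0, 0.1)}
edges = [(1, 2), (2, 3), (1, 3)]
angle = alignment_angle(centroids, edges)
```

(Centroids are given in row, column order, so the sign convention of the angle is purely illustrative. In practice the histogram would first be smoothed, as in Fig. 6.)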

In addition to the simple attributes described above, it is possible to derive sub-graphs which provide higher-level relationship information (a sub-graph is a subset of a given graph, in which particular nodes and edges have been removed). For instance, it is known that cell development occurs along the R axis, so that F objects are similar to their neighbours along this axis but not perpendicular to it. A sub-graph can be constructed (shown in Fig. 7) wherein each node is connected only to the two neighbours most closely aligned with the R axis, one above and one below each node. It is then possible to derive attributes associated with this new definition of neighbours which more truly reflect the known characteristics of the F objects. Again, it would be very difficult to access such information using pixel-based processing.
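
A sketch of this sub-graph construction, under the same centroid assumption as before. The cosine score used to rank neighbours against the R axis is our choice:

```python
import numpy as np

def r_axis_neighbours(centroids, edges, axis_deg):
    """For each node, pick the neighbour most closely aligned with the
    R axis on each side of it (at most one above, one below)."""
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, []).append(b)
        neighbours.setdefault(b, []).append(a)
    axis = np.radians(axis_deg)
    u = np.array([np.sin(axis), np.cos(axis)])       # unit (dy, dx) along R
    chosen = {}
    for node, nbrs in neighbours.items():
        best = {}
        for nbr in nbrs:
            d = np.subtract(centroids[nbr], centroids[node]).astype(float)
            proj = float(d @ u)
            side = 1.0 if proj >= 0 else -1.0        # above vs below the node
            score = abs(proj) / np.linalg.norm(d)    # cosine of angle to R axis
            if score > best.get(side, (-1.0, None))[0]:
                best[side] = (score, nbr)
        chosen[node] = sorted(nbr for _, nbr in best.values())
    return chosen

# Node 1 has neighbours above (2), below (3) and to the side (4).
centroids = {1: (0, 0), 2: (10, 1), 3: (-10, 0), 4: (1, 10)}
edges = [(1, 2), (1, 3), (1, 4)]
chosen = r_axis_neighbours(centroids, edges, axis_deg=90.0)
```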

Deriving the classification tree involved the following procedure. First, a small training set of wood images, representative of the variability of the wood to be classified, was selected. Each image was manually segmented into the four classes F, R, P and V. Since the F objects have the most consistent appearance, these were classified first. For each node in the graph, its classification (i.e. whether or not it was an F object) and its set of attributes were fed into the CART software package and a classification tree generated. The resulting classification accuracy for the training set was very high; because of commercial considerations, we are not at liberty to give details. The classification accuracy when the tree was applied to other images not included in the training set was also high, indicating that the tree performed well on new data. Figure 8 shows the classification result on the sample image (which was not part of the training set).

img8

Figure 8: The classification result for the F objects, outlined in white.
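
CART itself is a commercial package, but the split selection at the heart of any classification tree can be sketched in a few lines. The data below are a hypothetical node attribute, not taken from the project:

```python
import numpy as np

def best_split(x, y):
    """Minimal CART-style step: exhaustively choose the threshold on
    attribute x that minimises the weighted Gini impurity of the two
    resulting groups of binary labels y."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = np.mean(labels)
        return 2.0 * p * (1.0 - p)
    best = (np.inf, None)
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[0]:
            best = (score, t)
    return best[1]

# Toy node attribute (e.g. tessellate area) vs. class (1 = true F object).
area = np.array([5.0, 6.0, 7.0, 20.0, 22.0, 25.0])
is_f = np.array([0, 0, 0, 1, 1, 1])
threshold = best_split(area, is_f)
```

The full algorithm recursively applies this split selection over all attributes and then prunes the resulting tree; the sketch shows a single split on a single attribute.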


Segmentation of the R, V and P Objects

With the F objects classified by CART, we now have an image that is divided into two distinct regions: that filled by the F objects and that filled by the remaining R, V and P objects. As detailed in this section, the search for R, V and P objects can be restricted to this latter region, thus reducing the possibility of segmentation errors.

Segmentation of the R Objects

We begin by segmenting the R objects. A typical image-analysis approach to this problem would be to apply a white top hat with a linear structuring element in the direction of alignment of the R objects. However, this approach distorts the shape of the segmented R objects. Instead, we use a distance transform on the regions classified by CART as belonging to the R/V/P region. After removing the boundary walls between all the R, V and P objects, a distance transform is performed on the region. Thresholding this distance transform at a value of 60 removes the R objects, leaving only the centres of the V/P regions. The assumption here is that R objects are never wider than 2 x 60 pixels, whereas the combined V/P objects are. Shown in Fig. 9 below is the result of this procedure, where the centre of the V/P region is shown in grey and the rest of the R/V/P region is shown in white.

img9

Figure 9: Centre of P/V region shown in grey, found using distance transform.
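
The width test can be sketched directly with SciPy's Euclidean distance transform. The toy region below is scaled down, so a threshold of 3 pixels stands in for the 60-pixel threshold quoted above:

```python
import numpy as np
from scipy import ndimage

# A narrow, R-like strip attached to a wide, V/P-like blob.
region = np.zeros((20, 30), bool)
region[9:11, 2:28] = True          # strip only 2 pixels wide
region[2:18, 12:20] = True         # blob 8 pixels wide

# Distance to the nearest background pixel: small everywhere in the
# strip, large only deep inside the blob.
dist = ndimage.distance_transform_edt(region)
centres = dist > 3                 # survives only at the blob centre
```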

The remaining outer portions of the V/P regions are recovered by doing a reconstruction. The reconstruction is limited to be within a fixed distance so that R objects that are attached to V/P regions are not reconstructed. The result is shown in Fig. 10 below. The segmented R objects, shown in white, have not been distorted in any way.

img10

Figure 10: Reconstruction of the V/P region, limited to a specified distance. The R objects are shown in white.
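
Distance-limited reconstruction can be expressed as a conditional dilation run for a fixed number of iterations. The 8-connected structuring element and the toy distance of 3 are our choices for the sketch:

```python
import numpy as np
from scipy import ndimage

def limited_reconstruction(marker, mask, distance):
    """Conditional dilation of the marker inside the mask, stopped
    after `distance` iterations, so the V/P centres grow back to the
    full V/P extent without leaking far into attached R objects."""
    return ndimage.binary_dilation(marker, structure=np.ones((3, 3), bool),
                                   iterations=distance, mask=mask)

mask = np.zeros((9, 30), bool)
mask[3:6, :] = True                # the region available for growth
marker = np.zeros_like(mask)
marker[4, 4] = True                # seed from the distance-transform step
recovered = limited_reconstruction(marker, mask, distance=3)
```

An unlimited reconstruction would fill the whole mask; capping the iteration count is what keeps attached R objects out of the recovered V/P region.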

Segmentation of the V and P Objects

Some work has been done to assess the feasibility of classifying the V and P objects themselves using CART. The problem with this approach is that the attributes of these objects are not sufficiently consistent for them to be reliably classified (in contrast to the F objects, which, for example, were much more consistent in area and grey-scale). The problem is compounded by the fact that the walls between these objects are often missing, so the nodes in the graph do not always correspond to distinct objects. As it transpired, the results were unsatisfactory and indicated that this was not a suitable approach.

An algorithm for the segmentation of the V objects was proposed by Glasbey during a feasibility study for this project. The procedure involves ellipse fitting using a generalised Hough transform. However, there were a number of problems with the approach that we decided could not be resolved: (i) the algorithm can give many spurious V objects where there are none; (ii) the ellipses do not always follow the contours of the V object properly (ultimately this will always be a problem, as the V objects are not always perfectly elliptical in shape); (iii) the algorithm does not deal satisfactorily with V objects that are not fully contained in the image. We have attempted two other approaches which have proved more successful; one of these is detailed below.

The Pixel-Based Method

One of our approaches to segmenting the V and P objects is the following pixel-based method. The basic idea is to place markers in the regions of P objects and markers in the centres of the V objects. These regions are then grown outwards until they touch, thus segmenting the two regions at their boundaries. This pixel-based approach is much simpler than the graph-based method detailed in the previous section. First, no classification step is required to discriminate between the V and P objects, as this is taken care of at the outset by the choice of seed points. Second, as there is no graph, there is no need to form a tessellated image using wall and edge detection (the latter in particular is notorious for its connectivity problems). The disadvantage of the approach, however, is that its success relies entirely on the choice of seeds. If they are chosen well, a good segmentation results; if they are not, or a seed is missing for a given V object, then the procedure will fail for that particular V/P region.

We have found that we need only use the outer perimeter of the V/P region to obtain stable markers for the P objects. The use of the perimeter obviates the need for a selection process to find P objects. In contrast, the V object markers are much harder to select and consequently most of our work has been directed at finding good V object markers. This work is ongoing, but one possibility we are examining at present is to use attribute openings and closings. To find markers for dark V objects in the image, a top hat based on an attribute closing with an area constraint is used (conversely, to find markers for light V objects, the top hat is based on an attribute opening). The resulting markers for the V and P regions are shown in Fig. 11 below, where the P marker is shown in black and the V marker in grey.

img11

Figure 11: Markers used for V and P regions.
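
A rough sketch of the marker selection. A plain grey-scale closing stands in here for the attribute closing used in the text (its window size playing the role of the area constraint), and the top-hat depth is illustrative:

```python
import numpy as np
from scipy import ndimage

def vp_markers(image, vp_region, window=9, depth=10):
    """P marker: the outer perimeter of the V/P region. V marker: a
    dark top hat (closing minus image) thresholded at `depth`, which
    responds to dark pits smaller than the closing window."""
    p_marker = vp_region & ~ndimage.binary_erosion(vp_region)
    closed = ndimage.grey_closing(image, size=(window, window))
    v_marker = vp_region & ((closed - image) > depth)
    return p_marker, v_marker

image = np.full((20, 20), 100.0)
image[8:13, 8:13] = 40.0           # a dark V-like pit
vp_region = np.ones((20, 20), bool)
p_marker, v_marker = vp_markers(image, vp_region)
```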

The seeded region growing is then performed using these markers; the result is shown in Fig. 12 below.

img12

Figure 12: Result of the seeded region growing.
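
The growing stage can be sketched as alternating dilation of the labelled seeds. The real procedure grows by grey-level similarity; plain dilation is used here only so that the seeds meet midway and the sketch stays short:

```python
import numpy as np
from scipy import ndimage

def grow(seeds, region):
    """Very simple seeded region growing: the labelled seeds are
    repeatedly dilated into still-unlabelled pixels of the region
    until they meet, which places the V/P boundary between them."""
    labels = seeds.copy()
    while True:
        grown = ndimage.grey_dilation(labels, size=(3, 3))
        update = region & (labels == 0) & (grown > 0)
        if not update.any():
            return labels
        labels[update] = grown[update]

region = np.ones((7, 11), bool)
seeds = np.zeros((7, 11), int)
seeds[3, 1] = 1                    # P seed
seeds[3, 9] = 2                    # V seed
segmented = grow(seeds, region)
```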

The boundary of the segmented V object can be smoothed into an oval shape using, for example, the convex hull. The result is shown superimposed in white on the original image in Fig. 13 below. Although the procedure has worked well in this example, in general there are problems separating the V objects from the surrounding P objects and the network of walls. This is mainly because a global threshold is used in the top hat rather than a dynamic threshold. Further development of this method will involve extending the attribute top hats to incorporate the notion of dynamic thresholds.

img13

Figure 13: Segmented V object superimposed on the original image.
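
The convex-hull smoothing can be sketched with SciPy's Delaunay triangulation: a pixel lies inside the convex hull of the blob exactly when it falls within some simplex of the triangulation of the blob's pixel coordinates:

```python
import numpy as np
from scipy.spatial import Delaunay

def convex_hull_mask(mask):
    """Smooth a segmented blob into its convex hull."""
    pts = np.argwhere(mask)                    # pixel coordinates of the blob
    tri = Delaunay(pts)
    ys, xs = np.indices(mask.shape)
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
    # find_simplex returns -1 for points outside the triangulation.
    return (tri.find_simplex(coords) >= 0).reshape(mask.shape)

# A roughly oval blob with a notch cut out of it; the hull fills the notch.
blob = np.zeros((9, 9), bool)
blob[2:7, 2:7] = True
blob[3:6, 4] = False               # the notch
hull = convex_hull_mask(blob)
```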