Semantic Segmentation Training Case Tutorial

Function Description: An example tutorial for training a deep learning model using images and label files generated with annotation tools.

Steps

1. Prepare Data

Prepare a sufficient amount of training data, as shown below. Ideally there should be more than 500 images.

In this case, 500 panoramic images are prepared.

OpenPointCloud
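Before labeling, it can help to verify that the folder actually holds enough images. This is a minimal Python sketch, independent of the software itself; the extension list is an assumption for illustration.

```python
# Minimal sketch: count candidate training images in a folder before labeling.
# The extension list is an assumption; adjust it to your data.
from pathlib import Path

def count_images(folder, exts=(".jpg", ".jpeg", ".png")):
    """Count files in `folder` whose extension marks them as an image."""
    return sum(1 for p in Path(folder).iterdir()
               if p.is_file() and p.suffix.lower() in exts)
```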

2. Determine Labeling Classes

This case is a road defect segmentation task, with two target segmentation classes: crack and pothole.
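For reference, if you later move the same task to a YOLO-style training setup outside the tool, the two classes could be declared in a dataset config like the one below. The file name `data.yaml` and the paths are assumptions for illustration, not files produced by the software.

```yaml
# Hypothetical YOLO-style dataset config for this tutorial's two classes.
path: ./road_defects     # dataset root (illustrative)
train: images/train
val: images/val
names:
  0: crack
  1: pothole
```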

3. Start Labeling

Use the Labeling tool to annotate the data.

1) Click the Start Labeling button to open the annotation function, then click the Open Dir button to open the prepared image data.

OpenPointCloud

2) Click the Create Polygon button to annotate the cracks and potholes in the images, as shown below:

OpenPointCloud

3) After all the cracks and potholes in the images have been annotated, the labeled data looks as shown below.

OpenPointCloud
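For context, polygon annotations for segmentation are commonly stored as a class ID followed by normalized vertex coordinates (the YOLO segmentation label convention). A sketch of that conversion, assuming a hypothetical class mapping of 0 = crack and 1 = pothole and pixel-coordinate polygons:

```python
# Sketch: turn one annotated polygon into a YOLO-segmentation label line.
# Class IDs (0 = crack, 1 = pothole) are an assumed mapping for illustration.

def polygon_to_yolo_line(class_id, polygon, img_w, img_h):
    """Return 'class x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    coords = []
    for x, y in polygon:
        coords.append(x / img_w)   # normalize by image width
        coords.append(y / img_h)   # normalize by image height
    return " ".join([str(class_id)] + [f"{c:.6f}" for c in coords])

# One triangular crack polygon in a 640x480 image:
line = polygon_to_yolo_line(0, [(100, 200), (300, 200), (300, 400)], 640, 480)
```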

4. Training

1) Click Raster -> Train Image Deep Learning Model. Since this task is semantic segmentation, choose the Yolo-Segmentation model for training.

OpenPointCloud

2) Click Next and select Automatically Split Training Dataset and Validation Datasets.

OpenPointCloud
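Conceptually, the automatic split shuffles the annotated images and divides them into a training set and a validation set. A sketch of that idea, assuming an 80/20 ratio (the actual ratio the software uses may differ):

```python
# Sketch of an automatic train/validation split.
# The 80/20 ratio and fixed seed are illustrative assumptions.
import random

def split_dataset(image_names, train_ratio=0.8, seed=0):
    """Randomly split image names into (train, val) lists."""
    names = sorted(image_names)          # sort for reproducibility
    random.Random(seed).shuffle(names)   # seeded shuffle
    n_train = int(len(names) * train_ratio)
    return names[:n_train], names[n_train:]

train, val = split_dataset([f"img_{i:03d}.jpg" for i in range(500)])
```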

3) Click the Add button to add the annotated data folder from step 3, then click Next.

OpenPointCloud

4) Set parameters according to Train Image Deep Learning Model. In this example, select GPU, set batch size to 4, and train for 100 epochs.

OpenPointCloud

5) Click Next, then click the Start button to begin training. This may take a while; training is complete when the progress bar reaches 100%, as shown below.

OpenPointCloud

6) The loss curve flattens toward the end of training, which indicates that the model has converged.

Loss curve:

OpenPointCloud
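The convergence judgment can be made concrete: if the last few loss values vary only slightly, the curve has flattened. A small sketch with illustrative window and tolerance values:

```python
# Sketch: numeric check that the tail of a loss curve has flattened.
# The window size and tolerance are illustrative, not values from the tool.

def has_converged(losses, window=10, tol=0.01):
    """True if the last `window` losses vary by less than `tol`."""
    if len(losses) < window:
        return False
    tail = losses[-window:]
    return max(tail) - min(tail) < tol

# A loss curve that decays and then stabilizes:
losses = [1.0 / (1 + 0.5 * epoch) for epoch in range(100)]
```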

5. Inference

1) Click Raster -> Detect or Segment Objects with Trained Model to open the inference interface. Select the inference data and the trained model.

OpenPointCloud
OpenPointCloud

2) Click the GPU button to choose whether to use the GPU or CPU for inference.

OpenPointCloud

3) Set the batch size based on the available memory; refer to Detect or Segment Objects with Trained Model for details.
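As a rough illustration of "batch size based on available memory": divide free GPU memory by an estimated per-image cost. The per-image figure below is purely an assumption for the sketch, not a value from the software's documentation.

```python
# Illustrative rule of thumb only: batch size from available GPU memory.
# per_image_gb is an assumed cost; measure it for your own model and images.

def suggest_batch_size(gpu_mem_gb, per_image_gb=1.5, max_batch=16):
    """Suggest a batch size that fits in memory, clamped to [1, max_batch]."""
    return max(1, min(max_batch, int(gpu_mem_gb // per_image_gb)))
```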

4) Click Next to go to the inference page.

OpenPointCloud

5) Click Start to begin inference. You can also click the Stop button to interrupt inference at any time.

OpenPointCloud

6. View Inference Results in the Labeling Tool

Open the inference folder and copy the latest inference file to the same directory level as the images. Then click Raster -> Start Labeling -> Open Dir and select the inference folder to view the results.

OpenPointCloud
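The copy step above can also be scripted. This sketch copies the files from the most recently modified inference run next to the images so the Labeling tool can overlay them; all folder names here are illustrative assumptions.

```python
# Sketch: copy the newest inference run's files next to the original images.
# Folder layout (one subfolder per inference run) is an assumption.
import shutil
from pathlib import Path

def copy_latest_inference(inference_root, image_dir):
    """Copy files of the most recently modified run folder into image_dir."""
    runs = sorted((p for p in Path(inference_root).iterdir() if p.is_dir()),
                  key=lambda p: p.stat().st_mtime)
    latest = runs[-1]  # most recently modified inference run
    for f in latest.iterdir():
        if f.is_file():
            shutil.copy(f, Path(image_dir) / f.name)
    return latest.name
```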

7. View Inference Results in the Panorama Window

If the opened project contains panoramic images, you can select their path to run inference on the panoramic image data. The operation steps are the same as in the inference section above. After inference completes, click the Panorama Image page, check the labels, and select the latest inference folder.

OpenPointCloud

You can view the results in the panorama window.

OpenPointCloud
