
WildTrain Model Registration Configuration

Reference for classifier and detector registration YAML configuration files.

Overview

Registration configs control how trained models are registered to the MLflow Model Registry, including weight export format, MLflow tracking URI, and model metadata.

Usage:

```bash
# Register classifier
wildtrain register classifier -c configs/registration/classifier_registration_example.yaml

# Register detector
wildtrain register detector -c configs/registration/detector_registration_example.yaml
```

Classifier Registration

Configuration Fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `weights` | str | — | Path to the classifier checkpoint file (`.ckpt`) |
| `processing.name` | str | `classifier` | Model name in the MLflow registry |
| `processing.batch_size` | int | `8` | Batch size for inference (used in export) |
| `processing.mlflow_tracking_uri` | str | `http://localhost:5000` | MLflow tracking server URI |
| `processing.export_format` | str | `torchscript` | Export format: `torchscript` or `onnx` |
| `processing.dynamic` | bool | `true` | Use dynamic axes for ONNX export |

Example

```yaml
weights: checkpoints/classification/best.ckpt

processing:
  name: "classifier"
  batch_size: 8
  mlflow_tracking_uri: "http://localhost:5000"
  export_format: "torchscript"
  dynamic: true
```
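Before running registration, it can be useful to sanity-check a config against the field table above. The sketch below mirrors the example config as a Python dict and validates it; the `validate_classifier_config` helper is illustrative only, not part of WildTrain:

```python
# Classifier registration config mirrored as a Python dict
# (field names and values follow the example above).
classifier_cfg = {
    "weights": "checkpoints/classification/best.ckpt",
    "processing": {
        "name": "classifier",
        "batch_size": 8,
        "mlflow_tracking_uri": "http://localhost:5000",
        "export_format": "torchscript",
        "dynamic": True,
    },
}

def validate_classifier_config(cfg: dict) -> list[str]:
    """Return a list of problems; an empty list means the config looks valid."""
    errors = []
    # weights must point to a Lightning checkpoint, per the field table.
    if not str(cfg.get("weights", "")).endswith(".ckpt"):
        errors.append("weights must point to a .ckpt checkpoint")
    proc = cfg.get("processing", {})
    # Only the two documented export formats are accepted.
    if proc.get("export_format") not in ("torchscript", "onnx"):
        errors.append("export_format must be 'torchscript' or 'onnx'")
    if not isinstance(proc.get("batch_size"), int) or proc.get("batch_size", 0) < 1:
        errors.append("batch_size must be a positive integer")
    return errors
```

Running `validate_classifier_config(classifier_cfg)` on the example config returns an empty list.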

Detector Registration

The detector registration config supports a two-model architecture (localizer + classifier).

Configuration Fields

Classifier Section

| Field | Type | Description |
| --- | --- | --- |
| `classifier.weights` | str | Path to the classifier checkpoint |
| `classifier.processing.batch_size` | int | Batch size for export |
| `classifier.processing.export_format` | str | Export format: `torchscript` or `onnx` |

Localizer Section

| Field | Type | Description |
| --- | --- | --- |
| `localizer.yolo.weights` | str | Path to the YOLO weights file (`.pt`) |
| `localizer.yolo.imgsz` | int | Input image size |
| `localizer.yolo.device` | str | Device: `cuda` or `cpu` |
| `localizer.yolo.conf_thres` | float | Confidence threshold |
| `localizer.yolo.iou_thres` | float | NMS IoU threshold |
| `localizer.yolo.max_det` | int | Maximum detections per image |
| `localizer.yolo.overlap_metric` | str | Overlap metric: `IOU` |
| `localizer.yolo.task` | str | YOLO task: `detect` or `obb` |
| `localizer.processing.export_format` | str | Export format: `pt` |
| `localizer.processing.batch_size` | int | Batch size for export |
| `localizer.processing.dynamic` | bool | Dynamic axes (ONNX) |

Processing Section

| Field | Type | Description |
| --- | --- | --- |
| `processing.name` | str | Model name in the MLflow registry |
| `processing.mlflow_tracking_uri` | str | MLflow tracking server URI |

Example

```yaml
classifier:
  weights: "checkpoints/classification/best.ckpt"
  processing:
    batch_size: 8
    export_format: "torchscript"

localizer:
  yolo:
    weights: "runs/detect/train/weights/best.pt"
    imgsz: 800
    device: "cuda"
    conf_thres: 0.1
    iou_thres: 0.3
    max_det: 300
    overlap_metric: "IOU"
    task: "detect"
  processing:
    export_format: "pt"
    batch_size: 32
    dynamic: false

processing:
  name: "detector"
  mlflow_tracking_uri: "http://localhost:5000"
```
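The two-model layout above can also be sanity-checked before registration. The sketch below mirrors the detector example as a Python dict and verifies the sections the tables describe; `check_detector_config` is an illustrative helper, not a WildTrain API:

```python
# Detector registration config mirrored as a Python dict: a classifier
# section, a localizer (YOLO) section, and top-level processing metadata,
# matching the example above.
detector_cfg = {
    "classifier": {
        "weights": "checkpoints/classification/best.ckpt",
        "processing": {"batch_size": 8, "export_format": "torchscript"},
    },
    "localizer": {
        "yolo": {
            "weights": "runs/detect/train/weights/best.pt",
            "imgsz": 800,
            "device": "cuda",
            "conf_thres": 0.1,
            "iou_thres": 0.3,
            "max_det": 300,
            "overlap_metric": "IOU",
            "task": "detect",
        },
        "processing": {"export_format": "pt", "batch_size": 32, "dynamic": False},
    },
    "processing": {
        "name": "detector",
        "mlflow_tracking_uri": "http://localhost:5000",
    },
}

def check_detector_config(cfg: dict) -> list[str]:
    """Sanity-check the two-model layout; empty list means it looks valid."""
    errors = []
    # All three top-level sections must be present.
    for section in ("classifier", "localizer", "processing"):
        if section not in cfg:
            errors.append(f"missing section: {section}")
    yolo = cfg.get("localizer", {}).get("yolo", {})
    # Only the two documented YOLO tasks are accepted.
    if yolo.get("task") not in ("detect", "obb"):
        errors.append("localizer.yolo.task must be 'detect' or 'obb'")
    # Confidence thresholds are probabilities.
    if not 0.0 <= yolo.get("conf_thres", -1.0) <= 1.0:
        errors.append("conf_thres must be in [0, 1]")
    return errors
```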

Inference Server Config

The inference server config (`configs/inference.yaml`) controls the LitServe-based model server:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `port` | int | `4141` | Server port |
| `workers_per_device` | int | `1` | Number of workers per GPU |
| `mlflow_registry_name` | str | — | MLflow model registry name |
| `mlflow_alias` | str | — | Model version alias |
| `mlflow_local_dir` | str | — | Local directory for model download |
| `mlflow_tracking_uri` | str | — | MLflow tracking server URI |

Example

```yaml
port: 4141
workers_per_device: 1
mlflow_registry_name: detector
mlflow_alias: production
mlflow_local_dir: models-registry
mlflow_tracking_uri: http://localhost:5000
```
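The registry name and alias together identify the model version the server loads. MLflow's registry addresses alias-pinned models with URIs of the form `models:/<name>@<alias>`; whether WildTrain resolves the model exactly this way is an assumption, so treat this as an illustrative sketch:

```python
# Inference server config mirrored as a Python dict (values from the
# example above).
server_cfg = {
    "port": 4141,
    "workers_per_device": 1,
    "mlflow_registry_name": "detector",
    "mlflow_alias": "production",
    "mlflow_local_dir": "models-registry",
    "mlflow_tracking_uri": "http://localhost:5000",
}

def model_uri(cfg: dict) -> str:
    """Build an MLflow registry URI ("models:/<name>@<alias>") from the
    registry name and version alias in the server config."""
    return f"models:/{cfg['mlflow_registry_name']}@{cfg['mlflow_alias']}"
```

With the example config, `model_uri(server_cfg)` yields `models:/detector@production`, pointing the server at whichever registered version currently carries the `production` alias.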

Usage:

```bash
wildtrain run-server -c configs/inference.yaml
```

See also: