Quick Start Guide
Get up and running with WildDetect in minutes! This guide shows you the fastest path to running your first wildlife detection.
Prerequisites
Before starting, ensure you have:
- Installed all packages (Installation Guide)
- Activated your Python environment
- Some aerial images to process
- A pre-trained model (or use our example)
Quick Start: Detection
1. Using the CLI
The simplest way to run detection:
wildetect detect /path/to/images --model model.pt --output results/
2. Using a Script (Windows)
Edit the configuration file, then run:
cd wildetect
scripts\run_detection.bat
Quick Start: Census Campaign
Run a complete census analysis:
wildetect census campaign_2024 /path/to/images \
--model model.pt \
--output campaign_results/ \
--species "elephant,giraffe,zebra"
This will:
- Detect all animals in your images
- Generate population statistics
- Create geographic visualizations
- Export reports in JSON and CSV
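Conceptually, the statistics step boils down to counting detections per species and writing the totals out in both formats. A simplified sketch, assuming each detection record carries a `species` field (`summarize_census` is an illustrative helper, not WildDetect's actual API — the real output schema is richer):

```python
import csv
import json
from collections import Counter

def summarize_census(detections, out_json, out_csv):
    """Aggregate per-detection species labels into population counts."""
    counts = Counter(d["species"] for d in detections)
    with open(out_json, "w") as f:
        json.dump(dict(counts), f, indent=2)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["species", "count"])
        for species, n in sorted(counts.items()):
            writer.writerow([species, n])
    return counts

# Three detections across two species:
counts = summarize_census(
    [{"species": "elephant"}, {"species": "zebra"}, {"species": "elephant"}],
    "stats.json",
    "stats.csv",
)
```

The `--species` filter above restricts which labels end up in these counts.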
Quick Start: Data Management
Import a Dataset
# Import COCO format
wildata import-dataset annotations.json \
--format coco \
--name my_dataset
# Import YOLO format
wildata import-dataset data.yaml \
--format yolo \
--name my_dataset
Visualize Data
# Launch FiftyOne viewer
wildetect fiftyone --action launch --dataset my_dataset
# Or use the script
scripts\launch_fiftyone.bat
Quick Start: Model Training
Train a Classifier
cd wildtrain
wildtrain train classifier -c configs/classification/classification_train.yaml
Train a Detector (YOLO)
wildtrain train detector -c configs/detection/yolo_configs/yolo.yaml
Using the Web UI
Each package has a Streamlit-based web interface:
WildDetect UI
wildetect ui
# Or: scripts\launch_ui.bat
Features:
- Run detections interactively
- Configure detection parameters
- View results in real-time
- Export to various formats
WilData UI
cd wildata
streamlit run src/wildata/ui.py
# Or: launch_ui.bat
Features:
- Import and export datasets
- Create ROI datasets
- Update GPS metadata
- Visualize data
WildTrain UI
cd wildtrain
streamlit run src/wildtrain/ui.py
# Or: launch_ui.bat
Features:
- Configure training runs
- Monitor training progress
- Evaluate models
- Register models to MLflow
Configuration Files
All operations can be configured via YAML files:
Detection Config Example
Edit config/detection.yaml:
model:
  mlflow_model_name: "detector"
  mlflow_model_alias: "production"
  device: "cuda"
processing:
  batch_size: 32
  tile_size: 800
  overlap_ratio: 0.2
  pipeline_type: "raster"
output:
  directory: "results"
  dataset_name: "my_detections"
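`tile_size` and `overlap_ratio` together determine how a large raster is split. A sketch of the arithmetic (an illustration only, not WildDetect's tiling code; `tile_stride` and `num_tiles_1d` are hypothetical helpers):

```python
import math

def tile_stride(tile_size: int, overlap_ratio: float) -> int:
    """Stride between tile origins for a given fractional overlap."""
    return int(tile_size * (1.0 - overlap_ratio))

def num_tiles_1d(length: int, tile_size: int, stride: int) -> int:
    """Number of tiles needed to cover `length` pixels along one axis."""
    if length <= tile_size:
        return 1
    return math.ceil((length - tile_size) / stride) + 1

stride = tile_stride(800, 0.2)  # 640
```

With `tile_size: 800` and `overlap_ratio: 0.2`, adjacent tiles start 640 pixels apart and share a 160-pixel band, so animals cut by one tile edge appear whole in a neighbor.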
Dataset Import Config Example
Edit wildata/configs/import-config-example.yaml:
source_path: "annotations.json"
source_format: "coco"
dataset_name: "my_dataset"
root: "data"
split_name: "train"
transformations:
  enable_tiling: true
  tiling:
    tile_size: 800
    stride: 640
    min_visibility: 0.7
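`min_visibility` decides whether an annotation that straddles a tile edge is kept: the fraction of its area falling inside the tile must meet the threshold. A sketch of that rule, assuming `(x, y, w, h)` pixel boxes (`visible_fraction` is an illustrative helper, not wildata's API):

```python
def visible_fraction(box, tile):
    """Fraction of a box's area that lies inside a tile. Both are (x, y, w, h)."""
    bx, by, bw, bh = box
    tx, ty, tw, th = tile
    ix = max(0, min(bx + bw, tx + tw) - max(bx, tx))
    iy = max(0, min(by + bh, ty + th) - max(by, ty))
    return (ix * iy) / (bw * bh) if bw > 0 and bh > 0 else 0.0

def keep_annotation(box, tile, min_visibility=0.7):
    return visible_fraction(box, tile) >= min_visibility

# A 100x100 box half inside an 800x800 tile fails the 0.7 threshold:
kept = keep_annotation((750, 0, 100, 100), (0, 0, 800, 800))  # False
```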
Common Workflows
Workflow 1: Detection on New Images
# 1. Run detection
wildetect detect images/ --model model.pt --output results/
# 2. View results
wildetect fiftyone --action launch
# 3. Export results
wildetect analyze results/detections.json --output analysis/
Workflow 2: Prepare Training Data
# 1. Import annotations
wildata import-dataset annotations.json --format coco --name train_data
# 2. Apply transformations
wildata import-dataset annotations.json \
--format coco \
--name augmented_data \
--enable-tiling \
--enable-augmentation
# 3. Export for training
wildata export-dataset augmented_data --format yolo
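At its core, the COCO-to-YOLO export rewrites absolute `[x_min, y_min, width, height]` boxes as normalized center-based coordinates. A minimal sketch of that conversion (an illustration, not wildata's implementation):

```python
def coco_to_yolo(box, img_w, img_h):
    """COCO box [x_min, y_min, w, h] in pixels -> YOLO [cx, cy, w, h] in 0-1."""
    x, y, w, h = box
    return [
        (x + w / 2) / img_w,   # box center x, normalized by image width
        (y + h / 2) / img_h,   # box center y, normalized by image height
        w / img_w,
        h / img_h,
    ]

# A 200x100 box at (100, 50) in an 800x400 image:
yolo_box = coco_to_yolo([100, 50, 200, 100], 800, 400)  # [0.25, 0.25, 0.25, 0.25]
```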
Workflow 3: Train and Deploy Model
# 1. Train model
cd wildtrain
wildtrain train detector -c configs/detection/yolo_configs/yolo.yaml
# 2. Evaluate model
scripts\eval_detector.bat
# 3. Register to MLflow
scripts\register_model.bat
# 4. Use for detection
cd ..
wildetect detect images/ --model-name my_detector --output results/
Environment Variables
Create a .env file in the project root:
# MLflow Configuration
MLFLOW_TRACKING_URI=http://localhost:5000
# Label Studio (optional)
LABEL_STUDIO_URL=http://localhost:8080
LABEL_STUDIO_API_KEY=your_api_key
# Model Storage
MODEL_REGISTRY_PATH=models/
# Data Storage
DATA_ROOT=D:/data/
# GPU Settings
CUDA_VISIBLE_DEVICES=0
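If you want to check what a `.env` file will set, here is a minimal loader sketch (the packages typically load this file themselves, e.g. via python-dotenv; `load_env_file` is a hypothetical stand-in, shown only to make the `KEY=VALUE` format concrete):

```python
import os

def load_env_file(path=".env"):
    """Parse KEY=VALUE lines into os.environ, skipping comments and blanks."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: values already in the environment win
            os.environ.setdefault(key.strip(), value.strip())
```

Lines starting with `#` are comments; values are taken verbatim, so quoting is unnecessary for simple URLs and paths like the ones above.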
Launching Services
MLflow UI
Track experiments and manage models:
scripts\launch_mlflow.bat
# Access at http://localhost:5000
Label Studio
Annotate images:
scripts\launch_labelstudio.bat
# Access at http://localhost:8080
WilData API
REST API for data operations:
cd wildata
scripts\launch_api.bat
# Access at http://localhost:8441
# Docs at http://localhost:8441/docs
Inference Server
Deploy model as API:
scripts\launch_inference_server.bat
# Access at http://localhost:4141
Quick Reference
Detection Commands
# Basic detection
wildetect detect images/ --model model.pt
# With tiling for large images
wildetect detect large_image.tif --model model.pt --tile-size 800
# Census with statistics
wildetect census campaign images/ --model model.pt
# Analyze results
wildetect analyze results.json
Data Commands
# Import
wildata import-dataset source --format coco --name dataset
# List datasets
wildata dataset list
# Export
wildata dataset export dataset --format yolo
# Create ROI dataset
wildata create-roi annotations.json --format coco
Training Commands
# Train classifier
wildtrain train classifier -c config.yaml
# Train detector
wildtrain train detector -c config.yaml
# Evaluate
wildtrain eval classifier -c config.yaml
# Register model
wildtrain register model_path --name my_model
Getting Help
Command Help
Every command has a --help flag:
wildetect --help
wildetect detect --help
wildata import-dataset --help
wildtrain train --help
Package Information
# System info
wildetect info
# Use CLI for version check
wildetect --version
Next Steps
Now that you've run your first commands:
- Deep Dive: Follow the End-to-End Detection Tutorial
- Understand Architecture: Read the Architecture Overview
- Configure: Explore Configuration Files
- Learn More: Check out all Tutorials
Questions? Check the Troubleshooting Guide or reach out via GitHub Issues.