app¶
SegFormer evaluation pipeline for semantic segmentation tasks.
This module orchestrates the evaluation of SegFormer models with various quantization methods on the Scene Parse 150 dataset. It handles model loading, quantization, dataset preparation, evaluation, and result logging using Weights & Biases.
The pipeline supports multiple quantization levels and efficient processing through dataset sharding. Results are tracked and visualized for performance analysis across different model configurations.
Usage
python app.py
Environment variables
WANDB_API_KEY: API key for Weights & Biases logging
WANDB_PROJECT: Name of the W&B project
WANDB_ENTITY: Name of the W&B entity (team or user)
Functions¶
main()¶
Main execution function for the SegFormer evaluation pipeline.
This function orchestrates the entire evaluation process, including model loading, dataset preparation, evaluation, and logging results.
Source code in src/app.py
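The pieces documented below compose roughly as follows. This is a hedged sketch rather than the actual src/app.py: the checkpoint id, save paths, metric name, shard count, and run group are illustrative assumptions.

```python
# Hypothetical wiring of the documented functions; paths, the checkpoint id,
# the metric name, and the shard count are assumptions, not project config.
import os

import torch

from utils.data_processing import load_dataset_custom
from utils.evaluator import evaluate_model
from utils.model_loader import load_base_model, load_image_processor
from utils.quantization import quantize_models
from utils.wandb_utils import create_wandb_run, log_wandb_results

MODEL_NAME = "nvidia/segformer-b0-finetuned-ade-512-512"  # assumed checkpoint


def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dataset = load_dataset_custom("./data/scene_parse_150", "scene_parse_150")
    processor = load_image_processor(MODEL_NAME, "./artifacts/processor")
    base_model = load_base_model(MODEL_NAME, "./artifacts/base",
                                 torch.float32, device)

    models = {"base": base_model}
    models.update(quantize_models(base_model, MODEL_NAME, "./artifacts", device))

    num_shards = 8  # assumed shard count
    for variant, model in models.items():
        run = create_wandb_run(os.environ["WANDB_PROJECT"],
                               os.environ["WANDB_ENTITY"],
                               name=variant, group="segformer-eval")
        for idx in range(num_shards):
            shard = dataset.shard(num_shards=num_shards, index=idx)
            results = evaluate_model(model, shard, processor, device,
                                     "mean_iou", model.config.id2label)
            log_wandb_results(results, model)
        run.finish()


if __name__ == "__main__":
    main()
```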
config¶
Configuration settings for the SegFormer evaluation pipeline.
utils.data_processing¶
Data processing module for SegFormer evaluation pipeline.
This module contains functions for loading, preprocessing, and handling dataset operations for semantic segmentation tasks using SegFormer models.
Functions:

Name | Description
---|---
load_dataset_custom | Load or download and save the dataset.
get_processed_inputs | Process and prepare inputs for model inference.
convert_to_RGB | Convert dataset images and annotations to RGB and grayscale respectively.
The module uses the Hugging Face datasets library and image processing tools to prepare data for SegFormer model evaluation.
Functions¶
convert_to_RGB(dataset)¶
Convert dataset images and annotations to RGB and grayscale respectively.
Parameters:

Name | Type | Description | Default
---|---|---|---
dataset | Dict | Dataset containing 'image' and 'annotation' keys. | required

Returns:

Type | Description
---|---
Dict[List, List] | Processed dataset with converted images and annotations.
Source code in src/utils/data_processing.py
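A minimal sketch of what this conversion typically looks like for PIL-backed dataset batches; the authoritative body lives in src/utils/data_processing.py:

```python
from typing import Dict, List


def convert_to_RGB(dataset: Dict) -> Dict[str, List]:
    # Images become 3-channel RGB; annotation masks become single-channel ("L").
    images = [image.convert("RGB") for image in dataset["image"]]
    annotations = [ann.convert("L") for ann in dataset["annotation"]]
    return {"image": images, "annotation": annotations}
```

Written this way, it would typically be applied with `dataset.map(convert_to_RGB, batched=True)`, running once per batch rather than once per example.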
get_processed_inputs(dataset, image_processor, device, bias_dtype=None)¶
Process and prepare inputs for model inference.
Parameters:

Name | Type | Description | Default
---|---|---|---
dataset | Dataset | The dataset to process. | required
image_processor | SegformerImageProcessor | The image processor to use. | required
device | device | The device to load tensors to. | required
bias_dtype | dtype | Dtype for bias, if any. | None

Returns:

Type | Description
---|---
Tuple[Tensor, Tensor] | Processed pixel values and labels.
Source code in src/utils/data_processing.py
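A plausible implementation using the standard SegformerImageProcessor call; the dtype cast is an assumption about how bias_dtype is applied:

```python
import torch
from transformers import SegformerImageProcessor


def get_processed_inputs(dataset, image_processor, device, bias_dtype=None):
    # Encode images and segmentation maps into model-ready tensors.
    encoded = image_processor(dataset["image"], dataset["annotation"],
                              return_tensors="pt")
    pixel_values = encoded["pixel_values"].to(device)
    labels = encoded["labels"].to(device)
    if bias_dtype is not None:
        # Assumed: match inputs to the dtype that quantized biases expect.
        pixel_values = pixel_values.to(bias_dtype)
    return pixel_values, labels
```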
load_dataset_custom(dataset_save_path, dataset_name)¶
Load or download and save the dataset.
Parameters:

Name | Type | Description | Default
---|---|---|---
dataset_save_path | str | Path to save/load the dataset. | required
dataset_name | str | Name of the dataset to load. | required

Returns:

Name | Type | Description
---|---|---
Dataset | Dataset | The loaded dataset.
Source code in src/utils/data_processing.py
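The local-first behavior the description implies could look like this sketch; the split choice is an assumption:

```python
import os

from datasets import load_dataset, load_from_disk


def load_dataset_custom(dataset_save_path, dataset_name):
    # Reuse the on-disk copy when it exists; otherwise download and cache it.
    if os.path.isdir(dataset_save_path):
        return load_from_disk(dataset_save_path)
    dataset = load_dataset(dataset_name, split="validation")  # assumed split
    dataset.save_to_disk(dataset_save_path)
    return dataset
```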
utils.evaluator¶
Evaluator module for SegFormer model inference and metric computation.
This module provides functions for model inference and evaluation on semantic segmentation tasks using SegFormer models.
Functions:

Name | Description
---|---
infer_model | Perform model inference and return loss and logits.
evaluate_model | Evaluate the model on a dataset shard and compute metrics.
The module uses PyTorch for model inference and the ‘evaluate’ library for computing semantic segmentation metrics.
Functions¶
evaluate_model(model, dataset_shard, image_processor, device, metric_name, id2label)¶
Evaluate the model on a dataset shard and compute metrics.
Parameters:

Name | Type | Description | Default
---|---|---|---
model | SegformerForSemanticSegmentation | The model to evaluate. | required
dataset_shard | Dataset | A shard of the dataset to evaluate on. | required
image_processor | SegformerImageProcessor | The image processor to use. | required
device | device | The device to run evaluation on. | required
metric_name | str | Name of the metric to use. | required
id2label | dict | Mapping of label IDs to label names. | required

Returns:

Type | Description
---|---
Dict[str, float] | Computed evaluation metrics where keys are metric names and values are the corresponding scores.
Source code in src/utils/evaluator.py
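Given the data-processing helpers above, the shard-level evaluation could look like this sketch. The ignore_index and reduce_labels values follow common ADE20K/mean_iou conventions and are assumptions here, as is processing the shard as a single batch:

```python
import evaluate
import torch

from utils.data_processing import get_processed_inputs


def evaluate_model(model, dataset_shard, image_processor, device,
                   metric_name, id2label):
    metric = evaluate.load(metric_name)  # e.g. "mean_iou"
    pixel_values, labels = get_processed_inputs(dataset_shard,
                                                image_processor, device)
    loss, logits = infer_model(model, pixel_values, labels)
    # SegFormer logits come out at reduced resolution; upsample before argmax.
    upsampled = torch.nn.functional.interpolate(
        logits, size=labels.shape[-2:], mode="bilinear", align_corners=False)
    predictions = upsampled.argmax(dim=1)
    return metric.compute(
        predictions=predictions.cpu().numpy(),
        references=labels.cpu().numpy(),
        num_labels=len(id2label),
        ignore_index=255,      # assumed void label
        reduce_labels=False,   # assumed
    )
```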
infer_model(model, pixel_values, labels)¶
Perform model inference and return loss and logits.
Parameters:

Name | Type | Description | Default
---|---|---|---
model | SegformerForSemanticSegmentation | The model to use for inference. | required
pixel_values | Tensor | Input pixel values. | required
labels | Tensor | Ground truth labels. | required

Returns:

Type | Description
---|---
Tuple[float, Tensor] | A tuple containing the model loss as a float and the logits as a torch.Tensor.
Source code in src/utils/evaluator.py
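The core is presumably a no-grad forward pass; Hugging Face models return a loss whenever labels are supplied:

```python
import torch


def infer_model(model, pixel_values, labels):
    # Inference only: disable autograd to save memory and compute.
    with torch.no_grad():
        outputs = model(pixel_values=pixel_values, labels=labels)
    return outputs.loss.item(), outputs.logits
```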
utils.general_utils¶
utils.model_loader¶
Model loading module for SegFormer evaluation pipeline.
This module provides functions for loading and initializing SegFormer models and image processors, with support for local and remote loading.
Functions:

Name | Description
---|---
load_base_model | Load or download and save the base SegFormer model.
load_image_processor | Load or download and save the image processor.
The module uses the transformers library for model and processor management.
Functions¶
load_base_model(model_name, model_save_path, compute_dtype, device)¶
Load or download and save the base SegFormer model.
Parameters:

Name | Type | Description | Default
---|---|---|---
model_name | str | Name of the model to load. | required
model_save_path | str | Path to save/load the model. | required
compute_dtype | dtype | Computation dtype for the model. | required
device | device | Device to load the model to. | required

Returns:

Name | Type | Description
---|---|---
SegformerForSemanticSegmentation | SegformerForSemanticSegmentation | The loaded model.
Source code in src/utils/model_loader.py
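A sketch of the local-first loading pattern the description implies; the eval() call is an assumption appropriate for an evaluation-only pipeline:

```python
import os

import torch
from transformers import SegformerForSemanticSegmentation


def load_base_model(model_name, model_save_path, compute_dtype, device):
    # Prefer the cached copy on disk; otherwise download and save it.
    source = model_save_path if os.path.isdir(model_save_path) else model_name
    model = SegformerForSemanticSegmentation.from_pretrained(
        source, torch_dtype=compute_dtype)
    if source == model_name:
        model.save_pretrained(model_save_path)
    return model.to(device).eval()
```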
load_image_processor(model_name, tokenizer_save_path)¶
Load or download and save the image processor.
Parameters:

Name | Type | Description | Default
---|---|---|---
model_name | str | Name of the model to load the processor for. | required
tokenizer_save_path | str | Path to save/load the processor. | required

Returns:

Name | Type | Description
---|---|---
SegformerImageProcessor | SegformerImageProcessor | The loaded image processor.
Source code in src/utils/model_loader.py
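Presumably the same local-first pattern as load_base_model, sketched here:

```python
import os

from transformers import SegformerImageProcessor


def load_image_processor(model_name, tokenizer_save_path):
    # Reuse the saved processor if present; otherwise download and cache it.
    if os.path.isdir(tokenizer_save_path):
        return SegformerImageProcessor.from_pretrained(tokenizer_save_path)
    processor = SegformerImageProcessor.from_pretrained(model_name)
    processor.save_pretrained(tokenizer_save_path)
    return processor
```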
utils.quantization¶
Quantization module for SegFormer models.
This module provides functions for quantizing SegFormer models using various quantization methods supported by the Quanto library.
Functions:

Name | Description
---|---
quantize_models | Quantize a base SegFormer model using multiple quantization levels.
The module uses Quanto for quantization and supports float8, int8, int4, and int2 quantization.
Functions¶
quantize_models(base_model, model_name, model_save_path, torch_device)¶
Quantize the base model using various quantization methods.
Parameters:

Name | Type | Description | Default
---|---|---|---
base_model | SegformerForSemanticSegmentation | The base model to quantize. | required
model_name | str | Name of the model. | required
model_save_path | str | Path to save quantized models. | required
torch_device | device | Device to load models to. | required

Returns:

Type | Description
---|---
Dict[str, SegformerForSemanticSegmentation] | Dictionary of quantized models.
Source code in src/utils/quantization.py
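With optimum-quanto, weight-only quantization at each documented level could be sketched as below; the dictionary keys and the omission of on-disk saving are assumptions:

```python
import copy

from optimum.quanto import freeze, qfloat8, qint2, qint4, qint8, quantize


def quantize_models(base_model, model_name, model_save_path, torch_device):
    levels = {"float8": qfloat8, "int8": qint8, "int4": qint4, "int2": qint2}
    quantized = {}
    for level, qtype in levels.items():
        model = copy.deepcopy(base_model)
        quantize(model, weights=qtype)  # swap weights for quantized versions
        freeze(model)                   # materialize the quantized weights
        quantized[level] = model.to(torch_device)
        # Persisting each variant under model_save_path is omitted here.
    return quantized
```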
utils.wandb_utils¶
Weights & Biases utility module for SegFormer evaluation pipeline.
Provides functions for managing W&B runs, including initialization, metadata setting, and result logging for the SegFormer evaluation pipeline.
Functions:

Name | Description
---|---
create_wandb_run | Initialize a new W&B run.
create_wandb_run_meta | Set metadata for a W&B run.
log_wandb_results | Log evaluation results to W&B.
Note: Requires WANDB_API_KEY, WANDB_PROJECT, and WANDB_ENTITY environment variables.
Functions¶
create_wandb_run(project, entity, name, group)¶
Initialize and create a new Weights & Biases run.
Parameters:

Name | Type | Description | Default
---|---|---|---
project | str | Name of the W&B project. | required
entity | str | Name of the W&B entity (team or user). | required
name | str | Name of the run. | required
group | str | Group name for the run. | required

Returns:

Type | Description
---|---
Run | The created W&B run object.
Source code in src/utils/wandb_utils.py
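At its simplest this is a thin wrapper over wandb.init, sketched here:

```python
import wandb


def create_wandb_run(project, entity, name, group):
    # wandb.init reads WANDB_API_KEY from the environment for authentication.
    return wandb.init(project=project, entity=entity, name=name, group=group)
```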
create_wandb_run_meta(wandb_run, model_name, dataset_name, torch_device, wandb_tag_mode, quant_used, model_used, ds_num_shards, ds_shards_mod)¶
Set metadata for the Weights & Biases run.
Parameters:

Name | Type | Description | Default
---|---|---|---
wandb_run | Run | The W&B run object. | required
model_name | str | Name of the model. | required
dataset_name | str | Name of the dataset. | required
torch_device | device | Device used for computation. | required
wandb_tag_mode | str | Tag for the run mode. | required
quant_used | str | Quantization method used, if any. | required
model_used | SegformerForSemanticSegmentation | The model being evaluated. | required
ds_num_shards | int | Number of dataset shards. | required
ds_shards_mod | float | Modulo for dataset shard logging. | required

Returns:

Type | Description
---|---
Run | The updated W&B run object.
Source code in src/utils/wandb_utils.py
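The metadata presumably lands in the run config and tags; the exact config keys below are illustrative assumptions:

```python
def create_wandb_run_meta(wandb_run, model_name, dataset_name, torch_device,
                          wandb_tag_mode, quant_used, model_used,
                          ds_num_shards, ds_shards_mod):
    # Record configuration on the run; tags make filtering easier in the UI.
    wandb_run.config.update({
        "model_name": model_name,
        "dataset_name": dataset_name,
        "device": str(torch_device),
        "quantization": quant_used or "none",
        "num_parameters": sum(p.numel() for p in model_used.parameters()),
        "ds_num_shards": ds_num_shards,
        "ds_shards_mod": ds_shards_mod,
    })
    wandb_run.tags = wandb_run.tags + (wandb_tag_mode,)
    return wandb_run
```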
log_wandb_results(results, model)¶
Log evaluation results to Weights & Biases.
Parameters:

Name | Type | Description | Default
---|---|---|---
results | dict | Evaluation results to log. | required
model | SegformerForSemanticSegmentation | The model being evaluated. | required

Returns:

Type | Description
---|---
None | None
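Logging plausibly filters the metric output down to scalars, since mean_iou also returns per-category arrays; that filtering, and leaving the model parameter for callers that attach extra context, are assumptions in this sketch:

```python
import numbers

import wandb


def log_wandb_results(results, model):
    # Log only scalar metrics; per-category arrays need separate handling.
    scalars = {k: v for k, v in results.items()
               if isinstance(v, numbers.Number)}
    wandb.log(scalars)
```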