Major Release

Artifactiq v5.0.0 (E7): 4.6x Performance Breakthrough

Published February 3, 2026 | By Artifactiq ML Team

Executive Summary

We're excited to announce Artifactiq v5.0.0 (internally E7), our best-performing model to date. Through optimized shard-based training with 3 epochs per shard, we achieved a 4.6x improvement in mAP50 compared to our previous best model (E2/v4 baseline).

Key Achievement: mAP50 improved from 0.000316 (E2) to 0.001454 (E7), a 4.6x improvement. The gain came from an important discovery: 3 epochs per shard is critical for convergence, while 1 epoch per shard causes regression.

Performance Metrics

At a glance: 4.6x improvement | 0.001454 mAP50 | 3.5h training time | 46 shards

Detailed Metrics

Metric      E7 (v5.0.0)   E2 (Baseline)   Improvement
mAP50       0.001454      0.000316        +360%
mAP50-95    0.000409      -               -
Precision   0.0022        -               -
Recall      0.042         -               -
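The headline numbers can be sanity-checked directly from the two mAP50 values in the table:

```python
# Sanity check: the 4.6x / +360% figures follow from the two mAP50 values.
e2_map50 = 0.000316  # E2 baseline
e7_map50 = 0.001454  # E7 (v5.0.0)

factor = e7_map50 / e2_map50       # ~4.60x
percent_gain = (factor - 1) * 100  # ~360%

print(f"{factor:.1f}x improvement ({percent_gain:.0f}% gain)")
```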

The Key Discovery: Epochs Per Shard Matter

Our training experiments revealed a critical insight: the number of epochs per shard dramatically affects model convergence.

Model Comparison

Model   Shards   Epochs/Shard   mAP50      Status
E2      22       1              0.000316   Previous best
E6      46       1              0.000170   Regressed
E7      46       3              0.001454   Current best

Critical Learning: E6 (46 shards x 1 epoch) actually regressed to 0.000170 mAP50, worse than E2. Increasing to 3 epochs per shard for E7 not only recovered the loss but achieved a 4.6x improvement over the E2 baseline.

Training Configuration

Hardware

Component   Specification
Device      Apple M4 Max
Backend     MPS (Metal Performance Shaders)
Framework   Ultralytics YOLO11m + PyTorch

Optimized Training Parameters

{
    # Shard configuration
    "shards": 46,
    "images_per_shard": 1500,
    "epochs_per_shard": 3,      # CRITICAL: Must be 3+

    # Speed optimizations
    "batch": 8,
    "mini_val": 100,            # Fast validation subset

    # Stability - Critical for MPS
    "freeze": 10,
    "cos_lr": True,
    "warmup_epochs": 3,

    # MPS Compatibility
    "workers": 0,
    "cache": False,
    "rect": False,
    "mosaic": 0.0,
    "mixup": 0.0,
    "copy_paste": 0.0,

    # Learning Rate
    "lr0": 0.001,
    "lrf": 0.01,
    "optimizer": "AdamW"
}
Key Optimizations:
  • freeze=10: Freezes backbone layers for stability on MPS
  • cos_lr=True: Cosine learning rate annealing for smoother convergence
  • mini_val=100: Small validation set (100 images) for fast per-shard validation
  • batch=8: Optimal batch size for M4 Max memory
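As a rough sketch, the config above maps onto per-shard training keyword arguments like this. The `train_kwargs` helper and the shard YAML path are hypothetical names for illustration; the keyword names themselves match the config table in this post (and Ultralytics' `train()` argument names):

```python
# Sketch: turning the config above into per-shard training kwargs.
# train_kwargs and the "shards/shard_N.yaml" manifest path are hypothetical;
# the keyword names come from the config table in this post.

CONFIG = {
    "epochs_per_shard": 3, "batch": 8, "freeze": 10, "cos_lr": True,
    "warmup_epochs": 3, "workers": 0, "cache": False, "rect": False,
    "mosaic": 0.0, "mixup": 0.0, "copy_paste": 0.0,
    "lr0": 0.001, "lrf": 0.01, "optimizer": "AdamW",
}

def train_kwargs(shard_idx: int) -> dict:
    """Build the keyword arguments for one shard's training run."""
    kw = {k: v for k, v in CONFIG.items() if k != "epochs_per_shard"}
    kw["epochs"] = CONFIG["epochs_per_shard"]      # the critical 3-epoch setting
    kw["data"] = f"shards/shard_{shard_idx}.yaml"  # hypothetical shard manifest
    kw["device"] = "mps"                           # Apple M4 Max backend
    return kw

# The real run would then do roughly:
#   from ultralytics import YOLO
#   YOLO(prev_checkpoint).train(**train_kwargs(i))
```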

Dataset

Item                Value
Training Images     67,991
Validation Images   7,570
Classes             39
Shard Size          1,500 images each
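The 46-shard figure follows directly from the dataset and shard sizes:

```python
import math

# 67,991 training images at 1,500 images per shard gives 46 shards,
# with the last shard only partially filled.
training_images = 67_991
images_per_shard = 1_500

shards = math.ceil(training_images / images_per_shard)
last_shard_size = training_images - (shards - 1) * images_per_shard

print(shards)           # 46
print(last_shard_size)  # 491
```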

Shard-Based Training Architecture

Our training uses a chained shard approach where each shard's output model becomes the input for the next shard:

Base Model (federated_final.pt)
    |
    v
Shard 0 (1500 images x 3 epochs) -> shard_0/weights/last.pt
    |
    v
Shard 1 (1500 images x 3 epochs) -> shard_1/weights/last.pt
    |
    v
... (44 more shards) ...
    |
    v
Shard 45 (1500 images x 3 epochs) -> e7_best.pt
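The chaining above can be sketched as a simple driver loop. Here `train_one_shard` is a hypothetical stand-in for the actual Ultralytics training call; only the checkpoint-chaining logic is shown:

```python
# Sketch of the chained-shard driver: each shard resumes from the previous
# shard's checkpoint. train_one_shard is a hypothetical stand-in for the
# real Ultralytics training call.

def train_one_shard(checkpoint: str, shard_idx: int) -> str:
    # Real code would load `checkpoint`, train 3 epochs on shard `shard_idx`,
    # and save new weights. Here we just return the output checkpoint path.
    return f"shard_{shard_idx}/weights/last.pt"

def run_chain(base_model: str, num_shards: int = 46) -> str:
    checkpoint = base_model
    for i in range(num_shards):
        checkpoint = train_one_shard(checkpoint, i)
    return checkpoint  # final weights, exported as e7_best.pt

final = run_chain("federated_final.pt")
print(final)  # shard_45/weights/last.pt
```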

This approach allows the full 67,991-image training set to be processed incrementally within a single machine's memory budget, with each shard's checkpoint carrying forward what the model learned on earlier shards.

Installation

Update to v5.0.0 using the Artifactiq CLI:

$ artifactiq model update
Checking for updates... v5.0.0 available (current: v4.0.0)
Downloading artifactiq-v5.0.0.onnx (76.8 MB)... done
Model updated successfully!

Or pin a specific model version when running an analysis:

$ artifactiq analyze --model artifactiq:v5.0.0 --input image.jpg

Download: v5.0.0 is available on GitHub Releases

Model Files

File                     Size     Format
e7_best.pt               40.5 MB  PyTorch
artifactiq-v5.0.0.onnx   76.8 MB  ONNX (deployed)

What's Next

With the E7 training breakthrough, we're planning:

  1. E8 Training: Experiment with 5 epochs per shard to see if further improvements are possible
  2. Cloud Training: Leverage A100 GPUs for faster iteration on hyperparameter tuning
  3. Quantization: INT8 ONNX export for faster inference on edge devices
  4. Expanded Classes: Adding more artifact categories based on user feedback

Changelog

Date         Event
2026-02-03   E7 released as v5.0.0
2026-02-03   E7 training complete (46 shards x 3 epochs)
2026-02-02   E6 regressed (1 epoch per shard insufficient)
2026-02-02   Optimized training: batch=8, mini_val=100, freeze=10
2026-01-31   E2 baseline established (mAP50: 0.000316)
2026-01-24   v4.0.0 released