Major Release
Artifactiq v5.0.0 (E7): 4.6x Performance Breakthrough
Executive Summary
We're excited to announce Artifactiq v5.0.0 (internally E7), our best-performing model to date. Through optimized shard-based training with 3 epochs per shard, we achieved a 4.6x improvement in mAP50 compared to our previous best model (E2/v4 baseline).
Key Achievement: mAP50 improved from 0.000316 (E2) to 0.001454 (E7) - a 4.6x improvement achieved through an important discovery: 3 epochs per shard is critical for convergence; 1 epoch causes regression.
Performance Metrics
4.6x
Improvement
0.001454
mAP50
3.5h
Training Time
46
Shards
Detailed Metrics
| Metric | E7 (v5.0.0) | E2 (Baseline) | Improvement |
|---|---|---|---|
| mAP50 | 0.001454 | 0.000316 | +360% |
| mAP50-95 | 0.000409 | - | - |
| Precision | 0.0022 | - | - |
| Recall | 0.042 | - | - |
The Key Discovery: Epochs Per Shard Matter
Our training experiments revealed a critical insight: the number of epochs per shard dramatically affects model convergence.
Model Comparison
| Model | Shards | Epochs/Shard | mAP50 | Status |
|---|---|---|---|---|
| E2 | 22 | 1 | 0.000316 | Previous best |
| E6 | 46 | 1 | 0.000170 | Regressed |
| E7 | 46 | 3 | 0.001454 | Current best |
Critical Learning: E6 (46 shards x 1 epoch) actually regressed to 0.000170 mAP50 - worse than E2. Increasing to 3 epochs per shard for E7 not only recovered but achieved 4.6x improvement over the E2 baseline.
Training Configuration
Hardware
| Component | Specification |
|---|---|
| Device | Apple M4 Max |
| Backend | MPS (Metal Performance Shaders) |
| Framework | Ultralytics YOLO11m + PyTorch |
Optimized Training Parameters
{
# Shard configuration
"shards": 46,
"images_per_shard": 1500,
"epochs_per_shard": 3, # CRITICAL: Must be 3+
# Speed optimizations
"batch": 8,
"mini_val": 100, # Fast validation subset
# Stability - Critical for MPS
"freeze": 10,
"cos_lr": True,
"warmup_epochs": 3,
# MPS Compatibility
"workers": 0,
"cache": False,
"rect": False,
"mosaic": 0.0,
"mixup": 0.0,
"copy_paste": 0.0,
# Learning Rate
"lr0": 0.001,
"lrf": 0.01,
"optimizer": "AdamW"
}
Key Optimizations:
- freeze=10: Freezes backbone layers for stability on MPS
- cos_lr=True: Cosine learning rate annealing for smoother convergence
- mini_val=100: Small validation set (100 images) for fast per-shard validation
- batch=8: Optimal batch size for M4 Max memory
Dataset
| Item | Value |
|---|---|
| Training Images | 67,991 |
| Validation Images | 7,570 |
| Classes | 39 |
| Shard Size | 1,500 images each |
Shard-Based Training Architecture
Our training uses a chained shard approach where each shard's output model becomes the input for the next shard:
Base Model (federated_final.pt)
|
v
Shard 0 (1500 images x 3 epochs) -> shard_0/weights/last.pt
|
v
Shard 1 (1500 images x 3 epochs) -> shard_1/weights/last.pt
|
v
... (44 more shards) ...
|
v
Shard 45 (1500 images x 3 epochs) -> e7_best.pt
This approach allows:
- Incremental training without loading entire dataset
- Progress checkpoints after each shard
- Real-time monitoring via ntfy.sh notifications
- Resume capability if training is interrupted
Installation
Update to v5.0.0 using the Artifactiq CLI:
$ artifactiq model update
Checking for updates... v5.0.0 available (current: v4.0.0)
Downloading artifactiq-v5.0.0.onnx (76.8 MB)... done
Model updated successfully!
Or specify the version explicitly:
$ artifactiq analyze --model artifactiq:v5.0.0 --input image.jpg
Download: v5.0.0 is available on GitHub Releases
Model Files
| File | Size | Format |
|---|---|---|
| e7_best.pt | 40.5 MB | PyTorch |
| artifactiq-v5.0.0.onnx | 76.8 MB | ONNX (deployed) |
What's Next
With the E7 training breakthrough, we're planning:
- E8 Training: Experiment with 5 epochs per shard to see if further improvements are possible
- Cloud Training: Leverage A100 GPUs for faster iteration on hyperparameter tuning
- Quantization: INT8 ONNX export for faster inference on edge devices
- Expanded Classes: Adding more artifact categories based on user feedback
Changelog
| Date | Event |
|---|---|
| 2026-02-03 | E7 released as v5.0.0 |
| 2026-02-03 | E7 training complete (46 shards x 3 epochs) |
| 2026-02-02 | E6 regressed (1 epoch per shard insufficient) |
| 2026-02-02 | Optimized training: batch=8, mini_val=100, freeze=10 |
| 2026-01-31 | E2 baseline established (mAP50: 0.000316) |
| 2026-01-24 | v4.0.0 released |