Wissam's Blog
SlurmTUI: A Terminal UI for Managing Slurm Jobs

During my PhD, I spent a lot of time running experiments on large HPC clusters: thousands of GPU and CPU nodes, massive hyperparameter sweeps, and job arrays that would balloon into hundreds of entries in the queue. If you’ve ever had to babysit Slurm jobs by repeatedly typing squeue -u $USER and squinting at the output, you know the pain. The existing options were either too barebones (raw Slurm commands) or too heavy (web-based dashboards that the cluster admins may or may not have set up). I wanted something I could run immediately after SSHing into any login node, so I built SlurmTUI. The sketch below shows the loop it replaces.
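
To make the pain concrete, here is roughly the watch-squeue loop that a tool like this automates. This is a minimal sketch, not SlurmTUI's actual implementation: it assumes a standard Slurm install with squeue on PATH, and the format string, 5-second refresh interval, and screen-clearing trick are illustrative choices.

```python
#!/usr/bin/env python3
"""Sketch of the manual "babysit squeue" loop, automated.

Assumes a standard Slurm install; the output format and refresh
interval are illustrative, not SlurmTUI's real internals.
"""
import getpass
import subprocess
import time

def my_jobs() -> list[str]:
    # squeue fields: %i = job ID, %j = name, %T = state, %M = elapsed
    out = subprocess.run(
        ["squeue", "-h", "-u", getpass.getuser(), "-o", "%i|%j|%T|%M"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

if __name__ == "__main__":
    while True:
        print("\033c", end="")  # ANSI full-screen reset
        for row in my_jobs():
            job_id, name, state, elapsed = row.split("|")
            print(f"{job_id:>10}  {state:<10}  {elapsed:>9}  {name}")
        time.sleep(5)  # poll again, forever
```

A real TUI adds what this loop cannot: interactive selection, cancelling or requeuing jobs, and tailing logs without leaving the view.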

  • Slurm
  • HPC
  • TUI
  • Python
  • PhD
  • Tool
  • Open-Source
Thursday, March 19, 2026 | 3 minute read
Fast Inference for Immich ML OCR with PaddleX

This guide provides step-by-step instructions for setting up PaddleX for fast GPU OCR inference in Immich ML. Following these steps can dramatically improve OCR performance in your Immich setup: with PaddleX, inference takes roughly 1 s per image with 10 concurrent requests on an RTX 3080 Ti (12 GB), versus 80 s+ with the ONNX Runtime GPU execution provider. All code snippets and configuration files mentioned in this guide can be found in this GitHub repository.

Context note: tested with Immich version 2.3.1. Immich ML uses ONNX Runtime as the default backend for model inference. However, for the OCR tasks introduced in Immich 2.3.x, users have reported very slow GPU performance, sometimes even slower than CPU inference (see the Reddit discussion and GitHub Issue #23462), even when using a powerful GPU or the mobile version of the OCR model.
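
For a sense of what running OCR through PaddleX looks like on its own, here is a minimal sketch. It assumes the PaddleX 3.x pipeline API and a CUDA-enabled paddlepaddle-gpu install; it is not the Immich ML integration from the guide, and doc.png is a placeholder path.

```python
# Minimal standalone OCR sketch, assuming the PaddleX 3.x pipeline API
# and a working paddlepaddle-gpu install. Not the Immich ML integration;
# "doc.png" is a placeholder input path.
from paddlex import create_pipeline

# Build the prebuilt OCR pipeline and pin it to the first GPU.
pipeline = create_pipeline(pipeline="OCR", device="gpu:0")

# predict() yields one result object per input image.
for res in pipeline.predict("doc.png"):
    res.print()                             # recognized text + confidence scores
    res.save_to_json(save_path="./output")  # persist results for inspection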

  • Immich
  • OCR
  • Inference
  • GPU
  • PaddleX
  • Guide
  • Self-Hosting
  • Bug
Sunday, December 7, 2025 | 11 minute read
Contact me:
  • wissam DOT antoun AT gmail DOT com
  • WissamAntoun
  • Wissam Antoun