Module commands
- audit
- zernel audit — Compliance audit trail and data lineage
- autopilot
- zernel autopilot — Autonomous training optimizer
- bench
- zernel bench — ML benchmark suite
- cloud
- zernel cloud — One-command GPU cluster management
- cluster
- zernel cluster — GPU cluster management
- cost
- zernel cost — GPU cost tracking
- data
- zernel data — Dataset management
- debug
- zernel debug — ML training debugger
- doctor
- env
- zernel env — Environment management
- exp
- fleet
- zernel fleet — GPU fleet management at scale
- gpu
- zernel gpu — GPU management CLI (nvidia-smi replacement)
- hub
- zernel hub — Private model & dataset hub
- init
- job
- job_k8s
- Kubernetes-based distributed training backend
- job_ssh
- SSH-based multi-node distributed training backend
- log
- marketplace
- zernel marketplace — Share and monetize ML models
- migrate
- zernel migrate — Live job migration between GPUs
- model
- notebook
- zernel notebook — Terminal notebook
- onboard
- zernel onboard — One-command developer onboarding
- optimize
- zernel optimize — ML training optimization tools
- power
- zernel power — Smart GPU power management & energy tracking
- pqc
- zernel pqc — Post-Quantum Cryptography tools
- profile
- zernel profile — Full training pipeline profiler
- run
- secure
- zernel secure — System hardening for production ML
- serve
- zernel serve — Unified inference server
- tune
- zernel tune — Adaptive kernel parameter tuning
- watch