Complete Guide · 2026

Top 100 Local AI Models For Privacy + Best Outputs

By Justin Murray · Model Reference Guide
[Image: mosaic of open-source model cards including Qwen, Llama, DeepSeek, FLUX, Mistral, and Gemma]

Many of us pay for subscription-based AI like Claude and Codex, but as those plans deliver less for the money and privacy concerns grow, local LLMs are the answer: no subscription fees (just electricity), full privacy, and a one-time hardware investment.

In this complete 2026 guide I cover the top 100 AI models — from general purpose to coding, images, video, voice, music, and embeddings — with VRAM requirements and benchmarks so you can match every model to your hardware.


Why Local AI in 2026

The gap between cloud and local AI has closed for most everyday tasks. GLM, Qwen, Kimi, and MiniMax show open-source is catching up fast. Privacy concerns are real — AI is increasingly a surveillance vector. VRAM and quantization improvements mean yesterday's impossible is today's default.
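
To sanity-check the VRAM figures quoted throughout this guide, you can estimate a quant's footprint from its parameter count before downloading anything. Here is a minimal sketch, assuming roughly 4.5 bits per weight for Q4_K_M GGUF files and a flat overhead allowance for KV cache and buffers (both are assumptions, and real usage grows with context length):

```python
# Back-of-envelope VRAM estimate for a quantized dense model.
# weights ≈ params × bits-per-weight / 8, plus KV cache and runtime overhead.

def estimate_vram_gb(params_b: float, bits_per_weight: float = 4.5,
                     overhead_gb: float = 1.5) -> float:
    """params_b: parameter count in billions.
    bits_per_weight: ~4.5 for Q4_K_M quants (assumption).
    overhead_gb: KV cache + buffers; grows with context length."""
    weights_gb = params_b * bits_per_weight / 8
    return weights_gb + overhead_gb

# A 27B dense model at Q4_K_M lands near the ~17GB cited for Qwen3.5-27B.
print(f"27B @ Q4_K_M ≈ {estimate_vram_gb(27):.1f} GB")  # ≈ 16.7 GB
```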

My rule: never buy the bare minimum. See my Budget, Mid-Range, and Elite build guides for spec recommendations. Confused about terminology? Check the full Glossary.

For a detailed cost breakdown vs. cloud subscriptions, read Local vs. Cloud Agents: The $15,000/year Cost Savings. For why VRAM headroom matters in agentic workflows, see Why AI Agents Need More VRAM.

🧠 General Purpose

#1

GLM-5 (Z.ai)

744B / 40B active (MoE) · MIT

HuggingFace ↗

#1 Artificial Analysis Intelligence Index (score 50). Trained on 100K Huawei Ascend 910B chips — zero US hardware. SWE-bench 77.8%.

VRAM: Multi-GPU 80GB+ · Context: 205K
#2

MiniMax M2.5

230B / 10B active (MoE) · Modified MIT

HuggingFace ↗

Most-used open-weight model on OpenRouter. Interleaved thinking. 60+ t/s on M5 Max 128GB.

VRAM: 128GB+ RAM · Context: 256K
#3

Qwen3.5-27B

27B dense · Apache 2.0

HuggingFace ↗

Community top pick for single 24GB GPU. 201 languages. Native vision-language. IFBench 76.5 · AIME 91.3.

VRAM: ~17GB (Q4_K_M) · Context: 262K
#4

Qwen3.5-397B-A17B

397B / 17B active (MoE) · Apache 2.0

HuggingFace ↗

Beats GPT-5.2 on IFBench (76.5 vs 75.4). BFCL-V4 tool use 72.2.

VRAM: 128GB+ RAM · Context: 262K
#5

DeepSeek V3.2

685B (MoE) · DeepSeek License

HuggingFace ↗

Surpasses GPT-5 on AIME/HMMT 2025. LiveCodeBench 90%.

VRAM: Multi-GPU · Context: 128K
#6

Llama 4 Scout

109B / 17B active (MoE) · Llama 4 Community

HuggingFace ↗

10 million token context — largest of any open-weight model.

VRAM: 48GB+ · Context: 10M
#7

Kimi-K2.5

1T / 32B active (MoE) · Modified MIT

HuggingFace ↗

#2 Intelligence Index (48). Powers Cursor Composer 2.

VRAM: Multi-GPU · Context: 256K
#8

GPT-OSS 120B

117B (MoE) · OpenAI open

HuggingFace ↗

OpenAI's first open-weight model since GPT-2. Matches o4-mini on AIME/MMLU.

VRAM: 80GB GPU · Context: 128K
#9

Llama 4 Maverick

400B / 17B active (MoE) · Llama 4 Community

HuggingFace ↗

Outperforms Scout on reasoning and math.

VRAM: 80GB+ · Context: 1M
#10

Mistral Large 3 (123B)

123B dense · Mistral Research

HuggingFace ↗

Top multilingual/European benchmark performer. Best for local RAG across 23+ languages.

VRAM: Dual 80GB · Context: 128K
#11

DeepSeek R2

236B / 21B active (MoE) · MIT

HuggingFace ↗

Hybrid thinking: toggleable chain-of-thought. MATH 96.6 · AIME 92.1.

VRAM: 64GB+ · Context: 128K
#12

Yi-Lightning-2 (01.AI)

200B (MoE) · Apache 2.0

HuggingFace ↗

Top LMSYS Arena. Strong Chinese/English bilingual and long-doc summarization.

VRAM: 128GB+ RAM · Context: 200K
#13

Falcon 3 (180B)

180B dense · TII Falcon (commercial)

HuggingFace ↗

UAE TII flagship. One of the few truly permissive commercial licenses at this scale.

VRAM: Multi-GPU 160GB+ · Context: 32K
#14

InternLM3-72B

72B dense · Apache 2.0

HuggingFace ↗

Best bilingual Chinese/English 72B. MATH 90.1 · CEval 92.1.

VRAM: ~44GB (Q4_K_M) · Context: 262K
#15

OLMo 2 (32B)

32B dense · Apache 2.0

HuggingFace ↗

Allen Institute. Fully open: weights, training data, code, and evals all public.

VRAM: ~20GB (Q4_K_M) · Context: 4K
#16

Mixtral 8x22B

141B / 39B active (MoE) · Apache 2.0

HuggingFace ↗

Gold standard MoE for performance-per-VRAM. Battle-tested JSON/function calling.

VRAM: ~48GB (Q4) · Context: 65K
#17

Gemma 3 27B

27B dense · Gemma Terms

HuggingFace ↗

Native vision + text. Outstanding at the 24GB tier. MMLU-Pro 67.5.

VRAM: ~17GB (Q4_K_M) · Context: 128K
#18

Command A (Cohere)

111B (MoE) · CC-BY-NC-4.0

HuggingFace ↗

Best tool-use and JSON output accuracy of any open model. Built for multi-step agentic pipelines.

VRAM: Dual 80GB · Context: 256K
#19

Aya Expanse 32B

32B dense · CC-BY-NC-4.0

HuggingFace ↗

23 simultaneous languages. Best open multilingual model for local translation.

VRAM: ~20GB (Q4_K_M) · Context: 128K
#20

Zephyr 141B-A39B

141B / 39B active (MoE) · Apache 2.0

HuggingFace ↗

HuggingFace DPO fine-tune on Mixtral 8x22B. Exceptionally helpful for consumer chat.

VRAM: ~48GB (Q4) · Context: 65K

💻 Coding + Agentic

See also: LM Studio Guide · Ollama + Open WebUI Setup · Why Agents Need More VRAM.
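
Every model in this section can also be driven from a script once it's pulled into Ollama, which is handy for wiring coding models into agents. Here is a minimal sketch against Ollama's default REST endpoint on port 11434; the model tag is an assumption, so substitute whatever `ollama list` shows on your machine:

```python
# Query a locally served coding model through Ollama's REST API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder:32b",  # assumed tag; any pulled model works
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,               # return one JSON object, not chunks
    },
    timeout=300,
)
print(resp.json()["response"])
```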

#21

GLM-4.7-Flash

355B / 30B active (MoE) · MIT

HuggingFace ↗

Winner of the 24GB VRAM agentic coding challenge. LiveCodeBench 89% on a single consumer GPU.

VRAM: 24GB · Context: 200K
#22

Qwen3-Coder-Next (80B)

80B (MoE) · Apache 2.0

HuggingFace ↗

Multi-file repo editing and agentic workflows. Best Cursor-style IDE integration.

VRAM: 32GB+ · Context: 128K
#23

Qwen 2.5 Coder 32B

32B dense · Apache 2.0

HuggingFace ↗

HumanEval 92%. Best single-GPU coding model for RTX 5090 users.

VRAM: ~20GB (Q4_K_M) · Context: 128K
#24

Qwen 2.5 Coder 14B

14B dense · Apache 2.0

HuggingFace ↗

HumanEval 85%. Best coding model for 16GB VRAM. Community favorite for LM Studio.

VRAM: ~8.8GB (Q4_K_M) · Context: 128K
#25

OmniCoder-9B (Tesslate)

9B (Qwen3.5 base) · Apache 2.0

HuggingFace ↗

Fine-tuned on 425K agentic coding trajectories for real software engineering.

VRAM: ~5.5GB · Context: 262K
#26

Devstral-2-123B

123B dense · Mistral Research

HuggingFace ↗

Best for deep refactoring, legacy codebase understanding, full-stack architecture.

VRAM: Multi-GPU · Context: 128K
#27

DeepCoder-V2-236B

236B (MoE) · MIT

HuggingFace ↗

SWE-bench-lite 61.2%. Native diff generation and test writing built-in.

VRAM: Multi-GPU · Context: 128K
#28

StarCoder 2 (15B)

15B dense · BigCode OpenRAIL-M

HuggingFace ↗

600+ programming languages. Gold standard for fill-in-the-middle (FIM) completion.

VRAM: ~9.5GB (Q4_K_M) · Context: 16K
#29

CodeLlama 70B

70B dense · Llama 2 Community

HuggingFace ↗

HumanEval 67.8%. Massive LoRA ecosystem for context-rich completion on large codebases.

VRAM: ~44GB (Q4_K_M) · Context: 100K
#30

SWE-Llama-3.1-70B

70B dense · Llama 3.1 Community

HuggingFace ↗

SWE-bench 43.8%. Specifically trained to resolve real GitHub issues end-to-end.

VRAM: ~44GB (Q4_K_M) · Context: 128K
#31

Jan-nano (Menlo, 4B)

4B · Apache 2.0

HuggingFace ↗

Plug-and-play for Jan.ai desktop. Best zero-config local code completion at 4B.

VRAM: ~3GB · Context: 32K
#32

Granite 3.3 Code (34B)

34B dense · Apache 2.0

HuggingFace ↗

IBM. 116 programming languages. HumanEval 78.3%. Best for compliance-bound organizations.

VRAM: ~21GB (Q4_K_M) · Context: 128K
#33

DeepSeek R1 Distill 32B

32B dense · MIT

HuggingFace ↗

Self-reflection fine-tune for complex multi-step coding challenges.

VRAM: ~20GB (Q4_K_M) · Context: 128K
#34

OpenCoder-8B-Instruct

8B dense · MIT

HuggingFace ↗

Fully transparent pipeline. HumanEval 83.5%. The OLMo of coding models.

VRAM: ~5GB (Q4_K_M) · Context: 8K
#35

MagicCoder-S-DS-6.7B

6.7B (DeepSeek Coder) · MIT

HuggingFace ↗

HumanEval 76.8% — beats models 10× its size. OSS-Instruct training methodology.

VRAM: ~4.2GB · Context: 16K

📱 Edge + Mobile

#36

Qwen3.5-9B

9B dense · Apache 2.0

HuggingFace ↗

Beats GPT-OSS-120B on GPQA Diamond (81.7 vs 71.5). Overperformer of 2026.

VRAM: ~5.5GB (Q4_K_M) · Context: 262K
#37

Phi-4-mini (3.8B)

3.8B dense · MIT

HuggingFace ↗

Fits 8GB RAM. Fast on M1 MacBook Air. Microsoft's best edge release.

VRAM: ~2.3GB · Context: 128K
#38

Qwen3.5-2B / 4B

2B / 4B dense · Apache 2.0

HuggingFace ↗

Runs on iPhone in airplane mode. Native multimodal at sub-3B — unprecedented.

VRAM: <2GB · Context: 32K
#39

Nanbeige4.1-3B

3B dense · Apache 2.0

HuggingFace ↗

First sub-4B model with native tool invocation across 500+ rounds and deep-search support.

VRAM: ~2GB · Context: 8K
#40

LFM-2.5-350M (Liquid AI)

350M · Liquid AI

HuggingFace ↗

Non-Transformer. Linear context scaling. 40,400 t/s on Apple Silicon.

VRAM: <1GB · Context: linear scaling
#41

Gemma 3 4B / 12B

4B / 12B dense · Gemma Terms

HuggingFace ↗

Top multilingual OCR. Multimodal vision at 4B. Best for mobile RAG pipelines.

VRAM: ~3–8GB · Context: 128K
#42

DeepSeek R1 7B Distill

7B dense · MIT

HuggingFace ↗

Best local reasoning for 6–8GB VRAM. MIT licensed.

VRAM: ~4.5GB (Q4_K_M) · Context: 128K
#43

Phi-4 (14B)

14B dense · MIT

HuggingFace ↗

MATH 80.4%. Outperforms many 70B models on structured reasoning tasks.

VRAM: ~8.8GB (Q4_K_M) · Context: 16K
#44

NVIDIA Nemotron Nano 8B

8B dense · NVIDIA

HuggingFace ↗

Math Index 91.0. Best math scores at the 8GB VRAM tier.

VRAM: ~5GB (Q4_K_M) · Context: 128K
#45

SmolLM2-1.7B

1.7B dense · Apache 2.0

HuggingFace ↗

Runs in-browser via WebGPU. Best for Electron apps and Raspberry Pi deployments.

VRAM: ~1.1GB · Context: 8K
#46

MobileLLM-125M (Meta)

125M · MIT

HuggingFace ↗

On-device Android/iOS. No server needed. Sub-200M state-of-the-art.

VRAM: ~100MB · Context: 4K
#47

H2O Danube 3 (4B)

4B dense · Apache 2.0

HuggingFace ↗

Document intelligence: parses invoices, tables, and financial statements locally.

VRAM: ~2.5GB · Context: 8K
#48

Orca Mini 3B

3B dense · CC-BY-NC-4.0

HuggingFace ↗

Trained on Microsoft Research synthetic data. Best instruction-following at 3B. Proven on IoT hardware.

VRAM: ~1.9GB · Context: 4K

🎨 Image Generation

See also: Best Local AI Image Generators Guide · Stable Diffusion XL VRAM vs Speed · Best GPU for Stable Diffusion.

#49

FLUX.2 Dev (32B)

32B diffusion · FLUX non-commercial

HuggingFace ↗

Best photorealism and text-in-image accuracy. Multi-reference conditioning.

VRAM: 24GB+
#50

FLUX.2 Klein (4B)

4B diffusion · Apache 2.0

HuggingFace ↗

Real-time generation. Fully commercial Apache 2.0 license.

VRAM: 8GB+
#51

FLUX.1 Schnell (12B)

12B diffusion · Apache 2.0

HuggingFace ↗

4-step generation. Fastest high-quality local image gen. Apache 2.0 commercial.

VRAM: 12GB+
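
If you prefer scripting to ComfyUI, Schnell runs through the standard diffusers pipeline. A minimal sketch assuming a recent diffusers release with FluxPipeline; the prompt and offload choice are illustrative, not from the model card:

```python
# 4-step FLUX.1 Schnell generation via Hugging Face diffusers.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keeps peak VRAM near the 12GB floor

image = pipe(
    "a photorealistic portrait of a lighthouse keeper at dawn",
    num_inference_steps=4,  # Schnell is distilled for 4-step sampling
    guidance_scale=0.0,     # Schnell runs without classifier-free guidance
).images[0]
image.save("flux_schnell.png")
```
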
#52

FLUX.1 Kontext Dev

12B diffusion · FLUX non-commercial

HuggingFace ↗

Precise region image editing guided by reference images.

VRAM: 16GB+
#53

Stable Diffusion 3.5 Large

8B diffusion · Stability AI

HuggingFace ↗

Strong text-in-image rendering. Largest open LoRA ecosystem. Essential for creative fine-tuning.

VRAM: 12GB+
#54

HunyuanImage-3.0

80B MoE diffusion · Tencent

HuggingFace ↗

Largest image MoE. Handles 1,000-word prompts with complete semantic fidelity.

VRAM: Multi-GPU 80GB+
#55

SDXL-Lightning (4-step)

~3.5B diffusion · Apache 2.0

HuggingFace ↗

Quality matching 50-step SDXL in 4 steps. Apache 2.0 commercial.

VRAM: 8GB+
#56

Kolors (Kuaishou)

~9B diffusion · Apache 2.0

HuggingFace ↗

Best Asian aesthetics, anime, and Chinese text-in-image rendering.

VRAM: 12GB+
#57

SDXL-Turbo

~3.5B diffusion · SDXL Turbo

HuggingFace ↗

Single-step adversarial distillation. 24fps live preview on RTX 4090.

VRAM: 8GB+
#58

PixArt-Σ (600M)

600M diffusion · Apache 2.0

HuggingFace ↗

Tiny model, 4K output. Best quality per VRAM dollar available.

VRAM: 6GB+
#59

Adobe Firefly Research

~12B diffusion · Research only

HuggingFace ↗

Trained exclusively on licensed content. Best for legally safe creative workflows.

VRAM: 16GB+
#60

InstaFlow (1-step)

~1.8B · CC-BY-NC-4.0

HuggingFace ↗

Rectified Flow. 1-step. 100× faster than diffusion. No multi-step sampling at all.

VRAM: 8GB+

🎬 Video Generation

See also: Best Local AI Video Generators Guide.

#61

Wan 2.2 (Alibaba)

14B video · Apache 2.0

HuggingFace ↗

Leading 2026 open video model. 720P, camera motion controls, best semantic consistency.

VRAM: 24GB+
#62

LTX-2 (Lightricks)

~12B video · LTX License

HuggingFace ↗

Audio + video in one pass. 4K native output.

VRAM: 24GB+
#63

HunyuanVideo 1.5

~13B video · Tencent

HuggingFace ↗

Most accessible cinematic video model: runs in 14GB with offloading.

VRAM: 14GB+ (offload)
#64

SkyReels V2

14B video · Apache 2.0

HuggingFace ↗

33 facial expressions, 400 natural movements. Film and TV production-grade.

VRAM: 24GB+
#65

CogVideoX-5B

5B video · Apache 2.0

HuggingFace ↗

6-second 720P clips. Best entry-point video gen on a single 16GB card.

VRAM: 16GB+
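
CogVideoX has first-class diffusers support, which makes it the easiest entry point here to script. A minimal sketch following the published model-card usage; the offload call is what keeps it inside a 16GB card, at the cost of speed:

```python
# Text-to-video with CogVideoX-5B via diffusers.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # trade speed for VRAM headroom

video = pipe(
    prompt="a paper boat drifting down a rain-soaked city street",
    num_frames=49,           # about six seconds at 8 fps
    num_inference_steps=50,
).frames[0]
export_to_video(video, "cogvideox.mp4", fps=8)
```
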
#66

Mochi 1 (Genmo)

~10B video · Apache 2.0

HuggingFace ↗

Apache 2.0 commercial. Strong photorealism and physics simulation.

VRAM: 24GB+
#67

Open-Sora

~4B video · Apache 2.0

HuggingFace ↗

Full open-source Sora replication. Weights + training code + data pipeline all public.

VRAM: 16GB+
#68

AnimateDiff-Lightning

~1.5B video · Apache 2.0

HuggingFace ↗

4-step video distillation. Near-real-time preview generation.

VRAM: 12GB+
#69

ModelScope T2V (1.7B)

1.7B video · CC-BY-NC-4.0

HuggingFace ↗

Most accessible text-to-video on a single mid-range GPU.

VRAM: 12GB+
#70

Pyramid Flow

~2B video · MIT

HuggingFace ↗

768P on single 24GB GPU. Lower compute than diffusion via autoregressive flow.

VRAM: 24GB+

🎙️ Voice / TTS + Lip Sync

#71

Voxtral TTS (Mistral)

~7B · Mistral

HuggingFace ↗

Better than ElevenLabs Flash. Voice cloning from a 3-second reference clip.

VRAM: 8GB or CPU
#72

Higgs Audio V2 (BosonAI)

~3B · Apache 2.0

HuggingFace ↗

Top trending TTS on HuggingFace. Exceptional emotional range and prosody control.

VRAM: 6GB+
#73

Qwen3-TTS

~3B · Apache 2.0

HuggingFace ↗

10 languages. Describe the voice you want in plain text. Commercial use.

VRAM: 6GB+
#74

Kokoro-82M

82M · Apache 2.0

HuggingFace ↗

CPU-only. Raspberry Pi compatible. Best quality-per-watt TTS.

VRAM: CPU only
#75

Bark (Suno AI)

~300M · MIT

HuggingFace ↗

MIT. Laughter, sighs, background noise, and music alongside speech.

VRAM: 4GB+
#76

Coqui XTTS-v2

~70M · Coqui Public

HuggingFace ↗

17-language zero-shot voice cloning. Faster than real time on CPU (RTF < 0.8). Best for dubbing pipelines.

VRAM: CPU
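
Coqui's TTS package wraps XTTS-v2 in a few lines. A minimal sketch; `ref.wav` is a placeholder for a few seconds of clean audio of the voice you want to clone:

```python
# Zero-shot voice cloning with Coqui XTTS-v2.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Local text to speech, no API key required.",
    speaker_wav="ref.wav",  # reference clip of the target voice
    language="en",          # one of the 17 supported languages
    file_path="cloned.wav",
)
```
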
#77

F5-TTS

~300M · MIT

HuggingFace ↗

Flow-matching TTS. No duration modeling. Most natural prosody for agent voices.

VRAM: CPU OK
#78

Parler-TTS Mini

~880M · Apache 2.0

HuggingFace ↗

Describe speaker characteristics in plain text. No voice reference needed.

VRAM: 4GB+
#79

WhisperX

~1.5B · MIT

HuggingFace ↗

De facto local STT standard. Word-level timestamps and speaker diarization.

VRAM: 4GB+
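
A minimal transcription-plus-alignment sketch following the patterns in the WhisperX README; the file path is a placeholder, and int8 compute is an assumption that keeps it comfortable on small GPUs:

```python
# Speech-to-text with word-level timestamps via WhisperX.
import whisperx

device = "cuda"  # or "cpu"
model = whisperx.load_model("large-v2", device, compute_type="int8")

audio = whisperx.load_audio("meeting.wav")
result = model.transcribe(audio)

# Second pass: align segments for word-level timestamps.
align_model, metadata = whisperx.load_align_model(
    language_code=result["language"], device=device
)
aligned = whisperx.align(result["segments"], align_model, metadata, audio, device)
print(aligned["segments"][0])
```
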
#80

MuseTalk (Tencent)

~0.5B · MIT

HuggingFace ↗

Photorealistic lip sync at 30+ FPS real time. Best for live avatar streaming.

VRAM: 6GB+
#81

Wav2Lip

~30M · MIT

HuggingFace ↗

Industry standard offline film dubbing. Integrates cleanly with ComfyUI.

VRAM: 4GB+
#82

SadTalker

~300M · MIT

HuggingFace ↗

One photo → talking head video with natural movement and blinks.

VRAM: 6GB+
#83

LivePortrait

~200M · MIT

HuggingFace ↗

Emotion-aware portrait animation. Per-facial-action-unit control.

VRAM: 4GB+
#84

LatentSync (ByteDance)

~500M · Apache 2.0

HuggingFace ↗

Diffusion-based lip sync. No flickering. Best quality for post-production pipelines.

VRAM: 6GB+

🎵 Music Generation

#85

ACE-Step 1.5

~2B · Apache 2.0

HuggingFace ↗

Best 2026 local music model. Up to 10 minutes. Genre/instrument/lyrics control.

VRAM: 8GB+
#86

HeartMuLa-oss-3B

3B · Apache 2.0

HuggingFace ↗

Best lyrics controllability. Fits 8GB VRAM.

VRAM: 8GB
#87

MusicGen Large (4B)

4B · CC-BY-NC-4.0

HuggingFace ↗

Meta. Melody conditioning from audio clips. Best for cinematic scoring.

VRAM: 12GB+
#88

AudioCraft Suite (Meta)

~2B · MIT

HuggingFace ↗

MusicGen + AudioGen + EnCodec in one package. Full local audio post-production.

VRAM: 8GB+

👁️ Vision + Embeddings

#89

Qwen3-VL (235B MoE)

235B MoE · Apache 2.0

HuggingFace ↗

Rivals Gemini 2.5 Pro. Document parsing, OCR, charts, visual reasoning.

VRAM: Multi-GPU · Context: 32K
#90

InternVL 3.0 (108B)

108B · MIT

HuggingFace ↗

3D vision perception and document digitization. Best open VLM for doc intelligence.

VRAM: Multi-GPU · Context: 8K
#91

LLaVA-Next (34B)

34B dense · Apache 2.0

HuggingFace ↗

Most battle-tested local VLM. CLIP vision + Mistral text. Structured image Q&A.

VRAM: ~21GB (Q4_K_M) · Context: 4K
#92

MiniCPM-V 2.6 (8B)

8B · Apache 2.0

HuggingFace ↗

Video + multi-image + text at 8B. Best vision model for 8GB VRAM setups.

VRAM: ~5.5GB (Q4_K_M) · Context: 128K
#93

Florence-2 (Microsoft)

770M · MIT

HuggingFace ↗

Captioning + detection + grounding + OCR + segmentation in one tiny model.

VRAM: ~0.5GB
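
Florence-2 selects its task with a prompt token rather than separate heads, so one checkpoint covers captioning, detection, and OCR. A minimal sketch following the model card's transformers usage; the image path is a placeholder:

```python
# One Florence-2 checkpoint, many vision tasks, chosen by task token.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True
)

image = Image.open("photo.jpg")
task = "<CAPTION>"  # swap for "<OD>" (detection) or "<OCR>"
inputs = processor(text=task, images=image, return_tensors="pt")
ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
)
raw = processor.batch_decode(ids, skip_special_tokens=False)[0]
print(processor.post_process_generation(raw, task=task, image_size=image.size))
```
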
#94

SAM 2 (Meta)

~300M · Apache 2.0

HuggingFace ↗

Universal segmentation. Click anywhere → instant object segment in image or video.

VRAM: 4GB+
#95

Depth Pro (Apple)

~300M · Apple Sample Code

HuggingFace ↗

Single image → metrically accurate 3D depth map. Free for research.

VRAM: 4GB+
#96

BGE-M3 (568M)

568M · MIT

HuggingFace ↗

Default local RAG embedding. 100 languages. Dense + sparse + multi-vec retrieval.

VRAM: CPU OK · Context: 8K
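
For plain dense retrieval, BGE-M3 loads through sentence-transformers (its sparse and multi-vector modes live in the FlagEmbedding package). A minimal sketch with made-up documents; CPU is fine at this size:

```python
# Dense-retrieval ranking with BGE-M3 embeddings.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-m3")

docs = [
    "Quantization shrinks model weights to fit consumer VRAM.",
    "Raspberry Pi recipes for weekend projects.",
]
query_vec = model.encode("how do I fit a big model on a small GPU?")
doc_vecs = model.encode(docs)

# Cosine similarity; the VRAM document should rank first.
scores = (doc_vecs @ query_vec) / (
    (doc_vecs**2).sum(axis=1) ** 0.5 * (query_vec**2).sum() ** 0.5
)
print(sorted(zip(scores, docs), reverse=True)[0])
```
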
#97

Qwen3-Embedding-8B

8B · Apache 2.0

HuggingFace ↗

Top self-hosted embedding. Outperforms all sub-72B models on MTEB English. 32K context.

VRAM: ~5GB (Q4_K_M) · Context: 32K
#98

ColPali (Vision RAG)

~3B · Apache 2.0

HuggingFace ↗

Encodes PDF pages as images — bypasses broken PDF parsers. Best for scanned docs.

VRAM: ~2GB
#99

nomic-embed-vision-v1.5

~137M · Apache 2.0

HuggingFace ↗

Compatible image + text vectors in one index. Zero pipeline changes for multimodal RAG.

VRAM: CPU OK
#100

DINO v2 (Meta)

~300M · Apache 2.0

HuggingFace ↗

Universal vision backbone for detection, segmentation, depth, and classification.

VRAM: ~600MB

🖥️ Hardware Quick Reference

VRAM | Best Models | GPU Options
8GB | Phi-4-mini, Gemma 3 4B, Qwen3.5-4B, Kokoro, PixArt-Σ | RTX 4070 Super
12GB | Qwen 2.5 Coder 14B, FLUX.1 Schnell, CogVideoX-5B, SDXL | RTX 3060 12GB
16GB | Gemma 3 27B (Q4), HunyuanVideo 1.5, LLaVA-Next 13B | RX 9070 XT · RTX 4080 Super
24GB | GLM-4.7-Flash, Qwen3.5-27B, FLUX.2 Dev, Wan 2.2 | RTX 4090 · RTX 3090
32GB+ | Qwen3.5-27B (full), Llama 4 Scout, Devstral 123B | RTX 5090
Multi-GPU | GLM-5, Kimi-K2.5, DeepSeek V3.2, HunyuanImage-3.0 | RTX 5090 vs 5080
Mac Unified | MiniMax M2.5, Llama 4 Scout, Mixtral 8x22B | Mac Studio M4 Max · M4 Ultra

🔧 Tools to Run Everything

Tool | Purpose | Link
Ollama | One-line LLM runner (CLI) | Website ↗ · Setup Guide
LM Studio | Desktop GUI for local LLMs | Website ↗ · Guide
Jan.ai | Offline-first desktop AI app | Website ↗
ComfyUI | Node-graph for image/video gen | Website ↗
Open WebUI | ChatGPT-style web interface | Website ↗
llama.cpp | CPU/GPU inference engine | Website ↗ · Glossary
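
Every GGUF quant referenced in this guide loads the same way through llama-cpp-python, the Python binding for the llama.cpp engine in the table above. A minimal sketch; the file name is a placeholder for whichever quant you downloaded:

```python
# Chat completion against a local GGUF model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-27b-q4_k_m.gguf",  # placeholder GGUF path
    n_ctx=8192,       # context window; a larger KV cache costs more VRAM
    n_gpu_layers=-1,  # offload all layers to GPU; 0 = pure CPU
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why does VRAM headroom matter?"}]
)
print(out["choices"][0]["message"]["content"])
```
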
Ready to build your local AI rig?

Use the build guides and hardware references above to find the right hardware for the models you want to run.


By Justin Murray · AI Computer Guide — VRAM-centric hardware validation for private, fast, local AI inference. All 100 models in this guide run fully offline, privately, with no subscription. Q2 2026.

About the Author: Justin Murray

Founder of AI Computer Guide, Justin has over a decade of AI and computer hardware experience. From leading the cryptocurrency mining hardware rush to repairing personal and commercial computer hardware, he has always had a passion for sharing knowledge and the cutting edge.