X Feed Intel beta

Relevant: 789
Topics: 273
Total Posts: 2290
Cost This Week: $1.633
Total Cost: $1.633
Last Fetch: 2026-02-23T23:00
Hardware Platforms

Apple hardware for AI inference and on-device ML

Apple M5 Ultra GPU compute, Mac Studio LLM inference, MLX-Audio-Swift SDK on Apple Silicon

4 posts · First seen 2026-02-23 · Last activity 2026-02-23
Time | Author | Post
2026-02-23T22:54 | @RayFernando1337 | RT @Prince_Canuma: You can now vibecode your own WisprFlow or Monologue alternative that runs completely locally on Apple Silicon using MLX…
2026-02-23T21:03 | @ivanfioravanti | RT @Prince_Canuma: Day 1 of 3 days of MLX: Introducing MLX-Audio-Swift SDK 🚀 A modular Swift SDK for voice agents and tasks on Apple Sili…
2026-02-23T18:03 | @TheAhmadOsman | Let me know when a scaled-up M5 Ultra for the Mac Studio is more than just a rumor. I will happily test it. If it performs well, I will say so. For now, though, there is no meaningful comparison with GPUs. https://t.co/aJGxtRNGlo
2026-02-23T17:38 | @mweinbach | @TheAhmadOsman @SIGKITTEN @ivanfioravanti The entire point of a Mac Studio vs a dGPU is memory. I can run a 250B model on a single Mac Studio. Prefill is slow and decode is fine, but it runs. Can't say that of a 3090 or 5090; they are faster, but you need 4-8 or more to match the same model. It's just different.
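The "4-8 or more" GPU claim in the last post follows from simple weight-memory arithmetic. A minimal sketch, assuming 4-bit quantized weights dominate memory use (KV cache and runtime overhead ignored) and typical retail VRAM capacities of 24 GB (RTX 3090) and 32 GB (RTX 5090):

```python
import math

def model_weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB: params * (bits / 8) bytes."""
    return params_billions * bits_per_weight / 8

def cards_needed(weight_gb: float, vram_gb: float) -> int:
    """Minimum GPUs to hold the weights, ignoring cache and overhead."""
    return math.ceil(weight_gb / vram_gb)

weights = model_weight_gb(250, 4)      # 250B params at 4-bit = 125.0 GB
print(weights)                         # 125.0 GB: fits in one Mac Studio's
                                       # unified memory (up to 512 GB)
print(cards_needed(weights, 24))       # RTX 3090 (24 GB): 6 cards
print(cards_needed(weights, 32))       # RTX 5090 (32 GB): 4 cards
```

At 8-bit the same model needs 250 GB, pushing the GPU count toward the top of the quoted 4-8 range, which is consistent with the post's point that the Mac Studio trades speed for capacity.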
