X Feed Intel beta

Relevant: 670
Topics: 263
Total Posts: 1841
Cost This Week: $1.088
Total Cost: $1.088
Last Fetch: 2026-02-23T21:39
Hardware Platforms

AMD Ryzen AI Max+ LLM inference deployment

Demonstrations and benchmarks of large language model inference running on AMD Ryzen AI Max+ processors in real-world deployment scenarios.

4 posts · First seen 2026-02-23 · Last activity 2026-02-23
Time · Author · Post
2026-02-23T18:16 @theRab RT @MarkSnowJr: Watch this: 80B param model flying on AMD Strix Halo 395 (Ryzen AI Max+ 395) via Lemonade Server in my classroom! 6 clients…
2026-02-23T17:35 @SasaMarinkovic 👏 @AMDRyzen https://t.co/jjH1jCgKQK
2026-02-23T17:34 @SasaMarinkovic 🔥👇 @AMDRyzen https://t.co/58Ks3hZe1B
2026-02-23T17:33 @SasaMarinkovic ❤️ ‘Epic CPU and GPU performance from @AMD AI Max+ PRO 395.’ https://t.co/Xp2gGWFA4g
