  • This is my use case exactly.

    I do a lot of analysis locally, and this is more than enough for my experiments and research. 64 to 96 GB of VRAM is exactly the window I need. Some of my workloads run for 2 or 3 days at 100% CPU/GPU, and dealing with that in the cloud is annoying.

    Plus this will replace GH Copilot for me. It’ll run voice models. I also have diffusion model experiments (not just image models) I plan to run that have been totally inaccessible to me locally.

    This basically frees me from paying for any cloud stuff in my personal life for the foreseeable future. I’m trying to localize as much as I can.

    I’ve got tons of ideas I can try out risk-free on this machine, and it’s the most affordable “entry level” solution I’ve seen.