Ask HN: Anyone have experience renting cloud GPUs?
Curious to learn from people with experience deploying and running AI workloads on non-hyperscaler cloud GPUs (think Voltage Park, Hyperstack, Akash, etc.).
What went into your choice of GPU cloud? What were the criteria? Have you experienced any frustrations/issues with their infra?
I'd be happy to share my own experiences with clouds in the comments too. Looking forward to hearing from you :)
I used to use Akash, but sometimes PyTorch would just crash mid training run.
Oh, that's rough. Did PyTorch itself crash, or did the entire node fall over?
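For what it's worth, when I was on flaky rented GPUs, wrapping the loop in a save-and-resume pattern meant a crash (process or node) only cost the last epoch. Minimal sketch below, assuming a plain PyTorch loop; the model, fake batch, and checkpoint path are placeholders, nothing Akash-specific:

    # Checkpoint-and-resume sketch for unreliable rented GPU nodes.
    # Model, data, and CKPT path are stand-ins; put CKPT on persistent storage.
    import os
    import torch
    import torch.nn as nn

    CKPT = "checkpoint.pt"  # hypothetical path

    model = nn.Linear(128, 10)  # stand-in model
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    start_epoch = 0

    # Resume if a previous run died mid-training.
    if os.path.exists(CKPT):
        state = torch.load(CKPT, map_location="cpu")
        model.load_state_dict(state["model"])
        opt.load_state_dict(state["opt"])
        start_epoch = state["epoch"] + 1

    for epoch in range(start_epoch, 100):
        # Fake batch standing in for a real dataloader.
        x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Save every epoch so a crash only loses one epoch of work.
        torch.save({"model": model.state_dict(),
                    "opt": opt.state_dict(),
                    "epoch": epoch}, CKPT)

If the whole node dies rather than just the process, you still need the checkpoint file on storage that survives the instance, but the resume logic is the same.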
I used to use Massed Compute. Don't.
why not?