You've also got to get servers to put in there. In a 42U rack with, say, nine 4U servers and a 2U managed switch, you have 4U left to handle power distribution, power backups, and any kind of external remote management (say an out-of-band KVM, or a second network for IPMI traffic, or anything else).
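To make that space budget concrete, here's a quick back-of-the-envelope in Python; the unit counts are just the assumptions above, not a spec:

    # Rack-space budget for the build sketched above (all counts are assumptions).
    RACK_UNITS = 42
    server_units = 9 * 4   # nine 4U GPU servers
    switch_units = 2       # one 2U managed switch
    remaining = RACK_UNITS - server_units - switch_units
    print(f"Left for PDUs, UPS, and out-of-band management: {remaining}U")  # 4U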
Also, all those servers need to talk to each other. Say the fancy, high-speed switch you need to route all that traffic is $2,000.
You wind up needing to spend $5,000 on auxiliary equipment and installation: e.g., power cables, network cables, Velcro, and that darn cable you forgot you needed.
Now we're up to $10,000 including the colo costs, and we haven't even gotten to servers yet.
We need nine 4U-tall servers. The reason for the 4U height is that it's the most space-efficient size that still takes full-height GPUs.
You'll want good base servers to slap your graphics cards into. I'm a little out of the loop on the latest and greatest in the server world, but we'll assume it's around $7,500 for a fairly moderate AMD EPYC system (EPYC because (a) they wind up being cheaper than their Intel counterparts and (b) they have many more PCIe lanes).
Nine $7,500 servers is $67,500, before any GPUs.
You don't want nVidia's consumer GPUs, because they're hostile to virtualization, so it's either AMD's consumer or enterprise cards, or nVidia's enterprise cards.
For GPUs you'd actually want to use, you'll be paying at least ~$600 apiece, and there are around 8-10 slots per server. $600 * 8 GPUs * 9 servers = $43,200.
I'm sure I've missed stuff (I haven't included data storage, for one), but we're already at $120,700. You'll probably want some new GPUs in a couple of years, and the total cost of servers over their lifespan winds up being around double the initial outlay.
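Tallying it all up, here's a rough sketch in Python; every dollar figure is just the assumption quoted above, not a vendor price:

    # Back-of-the-envelope build cost, using the figures above (all assumptions).
    colo_plus_aux = 10_000   # colo contract, switch, cables, installation
    server_cost = 7_500      # one moderate AMD EPYC base server
    servers = 9
    gpu_cost = 600           # per virtualization-friendly GPU
    gpus_per_server = 8      # low end of the 8-10 slot estimate

    base = servers * server_cost                  # $67,500
    gpus = servers * gpus_per_server * gpu_cost   # $43,200
    total = colo_plus_aux + base + gpus
    print(f"Initial outlay: ${total:,}")          # $120,700
    print(f"Rough lifetime cost (~2x): ${2 * total:,}")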
I wasn't suggesting doing it this way. I said to use the equivalent of having a system at home: no virtualization, consumer GPUs. That way it's an apples-to-apples comparison; you just get higher utilization and cheaper power. The cost of the switch and rack is minor when divided over 20 systems, as the quick sketch below shows.
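For a rough sense of that amortization (the $2,000 switch figure comes from the breakdown above; the rack price is a placeholder assumption):

    # Shared infrastructure amortized across the fleet (rack price is a guess).
    switch = 2_000
    rack = 1_000           # placeholder; real rack pricing varies widely
    systems = 20
    per_system = (switch + rack) / systems
    print(f"Shared gear per system: ${per_system:,.0f}")  # $150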
-Summer Glau