A couple of years ago, several companies introduced external GPUs: portable boxes holding a top-end GPU that could run the latest games on your small-form-factor PC or your otherwise under-equipped non-gaming PC. They could be used with just about any PC to add gaming or graphics power without the need for a new build or a major upgrade. With today's computers needing to keep up with the demands of AI computing, will we soon see add-on or scalable external AI accelerator boxes become the new thing?
The fundamentals of on-device AI computing are easy enough to understand. You put an NPU, an AI-ready GPU, etc. into a device and you are reasonably set. Most new laptops and desktops will come with these chips integrated. However, adding AI computing power to older machines, without a large-scale upgrade, might be the next big draw. Much like add-on GPUs, a stand-alone external AI accelerator NPU might be a great solution.
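To make that a bit more concrete, here is a minimal sketch of how software already discovers whatever accelerator happens to be present on a machine. It assumes the onnxruntime Python package is installed; the preference order and the "model.onnx" path are placeholders for illustration, not a specific product's setup.

```python
import os
import onnxruntime as ort

# List whatever execution providers this machine exposes
# (e.g. CPU, CUDA GPU, DirectML, or a vendor accelerator plugin).
available = ort.get_available_providers()
print("Available providers:", available)

# Hypothetical preference order: use an accelerator if present,
# otherwise fall back to the CPU.
preferred = ["CUDAExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
chosen = [p for p in preferred if p in available] or ["CPUExecutionProvider"]
print("Would run on:", chosen)

# "model.onnx" is a placeholder path for whatever model you want to run.
if os.path.exists("model.onnx"):
    session = ort.InferenceSession("model.onnx", providers=chosen)
    print("Running on:", session.get_providers())
```

An external AI box would, ideally, just show up as one more entry in that provider list, so existing software could use it without being rewritten.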
Think of it this way: you own a PC or laptop that is a few years old, and instead of buying something brand new, you add an external NPU or even a GPU box the size of a standard portable hard drive. You instantly boost your compute and/or graphics power. Now add that you could potentially run several of them in tandem and multiply your base computing power with every box in your configuration! You become your own distributed computing system running everything from one machine. You could then take those same devices and get the same level of computing from your laptop, your work PC, or possibly even your phone!
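As a rough illustration of the "tandem" idea, here is a minimal sketch assuming PyTorch and that each external box shows up as an ordinary CUDA device. It splits a batch of dummy data across every accelerator the system can see, falling back to the CPU if none are attached. A real external NPU would need its own driver and runtime, so treat this purely as the shape of the idea.

```python
import torch

# Enumerate every accelerator the machine exposes; an eGPU or AI box
# attached over Thunderbolt would typically appear here as another device.
if torch.cuda.is_available():
    devices = [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
else:
    devices = [torch.device("cpu")]  # fallback when no accelerator is attached

print("Using devices:", devices)

# Dummy workload: a batch of inputs and a single weight matrix.
batch = torch.randn(1024, 512)
weights = torch.randn(512, 256)

# Split the batch into one chunk per device, run each chunk on its own
# accelerator, then gather the results back on the CPU.
chunks = torch.chunk(batch, len(devices))
outputs = []
for chunk, device in zip(chunks, devices):
    result = chunk.to(device) @ weights.to(device)
    outputs.append(result.cpu())

combined = torch.cat(outputs)
print("Combined output shape:", combined.shape)
```

In practice, shuttling data over Thunderbolt or USB4 adds latency, so the win comes from workloads large enough to amortize the transfer cost.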
Unlike a normal PC, with an AI accelerator box you won't have to continuously upgrade. Instead, you scale your computing power with each new AI box you add or swap out. Your base footprint would still be smaller than a single high-end desktop, but with far more compute power!
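Back-of-the-envelope, that scaling is additive: each box contributes its own throughput on top of whatever is built in. A tiny sketch with made-up TOPS figures (purely illustrative, not vendor specs):

```python
# Hypothetical, illustrative TOPS figures only -- not real product numbers.
base_npu_tops = 40                      # built-in NPU in a current laptop/desktop
external_boxes_tops = [100, 100, 250]   # three imagined add-on accelerator boxes

total = base_npu_tops + sum(external_boxes_tops)
print(f"Aggregate compute: {total} TOPS across {1 + len(external_boxes_tops)} devices")
```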
We do have a few solutions today that resemble what I'm suggesting. ADLINK Tech has a portable NVIDIA RTX GPU, and companies like Zotac still sell external GPUs. However, none of these offer the true scalability and raw AI processing power I'm talking about. Eventually, I believe we'll see true standalone Intel- or AMD-based AI accelerator chips in external boxes set up to run as a daisy-chained system.