The GPU is the single most critical component when building a system for machine learning.
It’s true that deep learning models can run on a laptop, but the smaller your system, the longer your models will take to train. The first thing to consider is how much computation your task requires. Knowing how intensive the workload will be tells you what kind of computational power you need.
Training deep learning models largely comes down to multiplying vectors and matrices. CPUs have a handful of complex cores optimized for single-thread performance, whereas GPUs can handle thousands of concurrent tasks.
The difference is like choosing between traveling by fighter jet or by commercial airplane. Fighter jets are great: they are fast and can easily break the sound barrier. However, if hundreds of people need to be sent back and forth between destinations, the fighter jet would need too many trips to make it worthwhile.
Similarly, a GPU allows hundreds of tasks to run at the same time, whereas the CPU handles one task at a time.
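To make the matrix-multiplication point concrete, here is a minimal sketch of the core operation in a training step: multiplying a batch of inputs by a layer's weight matrix. The layer sizes here are arbitrary examples, and NumPy is used only for illustration; on a GPU, the same multiply-add work would be spread across thousands of cores.

```python
import numpy as np

# A batch of 64 samples, each with 1024 features (sizes are illustrative).
batch = np.random.rand(64, 1024)

# One layer's weight matrix, mapping 1024 features to 256 outputs.
weights = np.random.rand(1024, 256)

# The forward pass of this layer is a single matrix multiplication:
# 64 x 256 x 1024 independent multiply-adds, all parallelizable.
activations = batch @ weights

print(activations.shape)  # (64, 256)
```

Every one of those multiply-adds is independent of the others, which is exactly the kind of workload a GPU's many simple cores are built for.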
The more complex the problem you are solving, the more computational power you’ll need. If the problem isn’t too complex, you can get away with just the CPU.
List Of GPUs
Here are some GPU recommendations for building a deep learning machine. More powerful GPUs exist, but these strike the best balance between price and performance.
Once you have your GPU selected, you can then select the RAM you need. Doing so in the other order can lead to headaches and wallet-aches.