I’ve got an NVIDIA GeForce GTX 1650 in my laptop, and for a while I’ve been eager to harness its power for training deep learning models. Luckily, the GTX 1650 supports CUDA, which makes it well suited for the task. This tutorial is specifically for Windows users (native, not WSL).
PS: You can use it to train small models. Don’t try to use it to train very large ones.
First things first, let’s check if your GPU is recognized. Open your terminal and type:
nvidia-smi
If you don’t see something similar to the output above, you’ll need to install CUDA.
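If you’d rather script this check, here is a minimal sketch in Python. The banner-parsing regex is an assumption based on the usual `nvidia-smi` header format (`Driver Version:` and `CUDA Version:` on the first table line); the function names are mine, not part of any tool:

```python
import re
import subprocess

def parse_smi_banner(banner: str):
    """Extract (driver_version, cuda_version) from the nvidia-smi banner line.

    Assumes the usual banner format; returns None if the line doesn't match.
    """
    m = re.search(r"Driver Version:\s*([\d.]+).*CUDA Version:\s*([\d.]+)", banner)
    return m.groups() if m else None

def gpu_visible() -> bool:
    """Return True if nvidia-smi runs successfully (i.e. drivers are installed)."""
    try:
        out = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
        return out.returncode == 0
    except FileNotFoundError:
        return False

# Example banner line of the form nvidia-smi prints:
sample = "| NVIDIA-SMI 537.13    Driver Version: 537.13    CUDA Version: 12.2 |"
print(parse_smi_banner(sample))  # ('537.13', '12.2')
```

Note that the `CUDA Version` shown by `nvidia-smi` is the maximum version the driver supports, not necessarily what is installed.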
Begin by checking if you have the necessary drivers installed. If you have the NVIDIA Control Panel on your laptop, you likely have the drivers already installed. If not, head over to the link below to download and install the latest drivers:
1. Next, let’s set up a Conda environment:
conda create --name tf python=3.9
conda activate tf
Make sure the environment stays activated for the rest of the installation.
2. Install CUDA and cuDNN with conda
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
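These exact versions matter: per TensorFlow’s tested build configurations, TF 2.10 (the last release with native Windows GPU support) is built against CUDA 11.2 and cuDNN 8.1. A small sketch of that pairing — the table below is a subset I’ve filled in only for the versions relevant to this guide, and the helper function is illustrative:

```python
# Subset of TensorFlow's tested build configurations (Windows native GPU).
# See TensorFlow's install docs for the full, authoritative table.
TESTED_CONFIGS = {
    "2.10": {"cuda": "11.2", "cudnn": "8.1"},
    "2.9":  {"cuda": "11.2", "cudnn": "8.1"},
    "2.8":  {"cuda": "11.2", "cudnn": "8.1"},
}

def required_toolkit(tf_version: str):
    """Look up the CUDA/cuDNN pair tested against a given TF version."""
    key = ".".join(tf_version.split(".")[:2])  # '2.10.1' -> '2.10'
    return TESTED_CONFIGS.get(key)

print(required_toolkit("2.10.1"))  # {'cuda': '11.2', 'cudnn': '8.1'}
```

If the lookup returns None (e.g. for 2.11+), the version has no tested native-Windows GPU configuration, which is why the pip install below pins `tensorflow<2.11`.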
3. Upgrade pip
pip install --upgrade pip
4. Install TensorFlow
# Anything above 2.10 is not supported on the GPU on Windows Native
pip install "tensorflow<2.11"
5. Verify CPU setup
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
6. Verify GPU setup
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
If the terminal prints a list containing your GPU, e.g. `[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]`, the installation was successful. Now you can train your models on the GTX 1650.
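One last tip before training: the GTX 1650 has only 4 GB of VRAM, and by default TensorFlow tries to reserve nearly all GPU memory up front, which can cause out-of-memory errors alongside other GPU applications. A small configuration sketch to make allocation incremental instead (run it before building your model):

```python
import tensorflow as tf

# Enable memory growth so TensorFlow allocates VRAM as needed instead of
# grabbing it all at startup — helpful on a 4 GB card like the GTX 1650.
# Must be called before any GPU tensors are created.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

With this in place, smaller models and modest batch sizes should train comfortably; very large models will still exceed 4 GB, as noted at the top.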