
$ /gitrepo/cuda-samples/Samples/1_Utilities/deviceQuery$ make
$ /gitrepo/cuda-samples/bin/x86_64/linux/release$ ./deviceQuery

CUDA Device Query (Runtime API) version (CUDART static linking)

  CUDA Driver Version / Runtime Version:        11.6 / 11.6
  CUDA Capability Major/Minor version number:   6.1
  Total amount of global memory:                1992 MBytes (2089091072 bytes)
  (005) Multiprocessors, (128) CUDA Cores/MP:   640 CUDA Cores

NVRM version: NVIDIA UNIX x86_64 Kernel Module  510.47.03  Mon Jan 24 22:58:
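If TensorFlow is already installed, the device properties that deviceQuery prints (device name, compute capability) can be cross-checked from Python. A minimal sketch, assuming TensorFlow 2.4+ with GPU support; the output fields shown are just the ones relevant here:

import tensorflow as tf

# List the GPUs TensorFlow can see and print a few of their properties,
# roughly mirroring what deviceQuery reports.
for gpu in tf.config.list_physical_devices('GPU'):
    details = tf.config.experimental.get_device_details(gpu)
    print(gpu.name,
          details.get('device_name'),
          details.get('compute_capability'))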
Now open the bin folder and copy the path from the address bar. Then open the Start menu and type "env"; you will see the option "Edit the System Environment Variables", where you can add the copied path.

Step 6 – Check the successful installation of CUDA

Run the nvidia-smi command in your terminal. You can see in the top right corner: CUDA Version: 11.2.

Step 7 – Create a conda environment and install TensorFlow

Now open your terminal and create a new conda environment. Use the following command and hit "y". Here "gpu" is the name that I gave to my conda environment. Then activate the conda environment and install tensorflow-gpu. Now simply copy the code below and paste it into a file named test.py.

a = tf.constant(, shape=, name='a')
b = tf.constant(, shape=, name='b')
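The tf.constant arguments above are cut off in this excerpt. A minimal sketch of what a working test.py might look like: the matrix values, shapes, and the tf.matmul check are assumptions, following the usual TensorFlow GPU smoke test rather than the article's exact code:

import tensorflow as tf

# Confirm that TensorFlow can see the GPU.
print("GPUs detected:", tf.config.list_physical_devices('GPU'))

# Hypothetical values standing in for the truncated ones above:
# multiply a 2x3 matrix by a 3x2 matrix.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)

print(c)

If the GPU is set up correctly, the device list is non-empty and the matrix product prints without errors.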
Copy all the files from the cuDNN folder and paste them into C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 and replace the files in the destination.
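A quick way to confirm that Windows can locate the copied cuDNN library once the CUDA bin directory is on your PATH is to try loading it from Python. This is a sketch, not part of the article; the file name cudnn64_8.dll assumes cuDNN 8.x for CUDA 11.x, so adjust it if your version differs:

import ctypes

# Attempt to load the cuDNN DLL copied into the CUDA toolkit folder.
try:
    ctypes.WinDLL("cudnn64_8.dll")  # assumed name for cuDNN 8.x on Windows
    print("cuDNN DLL found on PATH")
except OSError as exc:
    print("cuDNN DLL not found:", exc)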
Your system may restart during the installation process.
Step 4 – Download Visual Studio 2019 Community

Log in to Microsoft, then search for Visual Studio 2019 and download the Community version. It will ask you to download workloads; just skip them and install only the Visual Studio Core Editor. Once you have successfully downloaded CUDA and cuDNN, install the CUDA toolkit by double-clicking on it, then choose Agree and Continue > Express (Recommended).