Anduin Xue

Anduin's Tech Blog

GPU


How to set up a CUDA environment for Docker on Ubuntu?

Setting up a CUDA environment for Docker on Ubuntu involves a structured process to enable GPU acceleration within containers. The journey begins by verifying that the system recognizes the NVIDIA GPU, a critical first step to avoid configuration pitfalls. Installing the correct drivers, whether for desktop or server environments, requires careful selection from the available versions, with automatic or manual installation options providing flexibility. Once drivers are in place, Docker must be configured to use NVIDIA's container toolkit, the bridge between the host hardware and containerized applications. This integration demands precise repository setup and package installation to ensure compatibility. Running `nvidia-smi` inside a Docker container confirms successful integration, while stress-testing tools like `gpu-burn` validate the GPU's performance under load. Advanced users can extend this configuration with Docker Compose to define GPU resources... --Qwen3
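
As a quick sanity check of the kind the post describes, here is a minimal Python sketch that shells out to `docker run --gpus all` and runs `nvidia-smi` inside a throwaway container. The CUDA base image tag is an assumption, not taken from the post; substitute whatever image matches your installed driver.

```python
import subprocess

# Assumed image tag for illustration -- pick one that matches your driver/CUDA version.
CUDA_IMAGE = "nvidia/cuda:12.2.0-base-ubuntu22.04"

def check_gpu_passthrough() -> bool:
    """Run nvidia-smi inside a temporary container to confirm that Docker
    can see the host GPU through the NVIDIA container toolkit."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--gpus", "all", CUDA_IMAGE, "nvidia-smi"],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    ok = check_gpu_passthrough()
    print("GPU passthrough OK" if ok else "GPU passthrough failed")
```

If the command prints the familiar `nvidia-smi` table, the driver, container toolkit, and Docker runtime are wired together correctly.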

NVIDIA CUDA GPU Docker Nvidia Drivers Docker GPU

How to install CUDA and cuDNN on Ubuntu 22.04 and test if it's installed successfully

This article walks through the complete process of setting up a CUDA and cuDNN deep-learning environment on Ubuntu 22.04, from verifying version compatibility to running the final tests. Every step hides decision points a developer needs to think through. For example, when `nvidia-smi` reports a driver version, can you accurately map it to the corresponding range in NVIDIA's CUDA compatibility table? That mapping reflects the complex compatibility design NVIDIA maintains across its hardware and software ecosystem. Installing cuDNN, with its dependency chain and file-path mapping, is a reminder that a seemingly simple library install can involve multiple layers of system permission management. When a PyTorch installation fails, do you realize that packages installed directly with pip may conflict implicitly with a specific CUDA version? These design choices are all worth deeper thought. The final testing stage forms a complete verification chain, from a simple hello-world parallel run to the matrix computations of mnistCUDNN and finally a PyTorch CUDA availability check. But does a passing test mean the environment is flawless? Is your GPU utilization reaching the expected performance? Only real workloads can answer that. When you see the "Test passed!" message, do you start thinking about migrating this environment to production, or about optimizing your code to exploit the GPU's compute power? The answers may well lie in the practice you are about to begin. --Qwen3
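
The last link of that verification chain, the PyTorch CUDA availability check, can be sketched as below. This assumes a CUDA-enabled PyTorch build is already installed; the tensor sizes and printed messages are illustrative, not the article's exact test.

```python
import torch

def check_cuda() -> None:
    """Confirm PyTorch sees CUDA and can run a small computation on the GPU."""
    if not torch.cuda.is_available():
        print("CUDA is not available to PyTorch -- check driver, toolkit, and package versions.")
        return

    print("CUDA build:", torch.version.cuda)
    print("Device:", torch.cuda.get_device_name(0))

    # A small matrix multiply on the GPU, loosely echoing the mnistCUDNN-style check.
    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")
    c = a @ b
    torch.cuda.synchronize()
    print("GPU matmul succeeded, result shape:", tuple(c.shape))

if __name__ == "__main__":
    check_cuda()
```

If `torch.cuda.is_available()` returns False even though `nvidia-smi` works on the host, the usual suspect is a pip-installed PyTorch wheel built against a different CUDA version than the one installed, which is exactly the implicit conflict the article warns about.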

Ubuntu NVIDIA vGPU CUDA cuDNN GPU
