Then, to understand how (or whether) it is related to your device, it is best
to test whether the device can be used for something else. As an example, you
can use your nvcc compiler to compile the attached script. If it works, then
the problem is probably not your device; if it gives an error, you know where
the problem is. And if your device works all right, we need to know where the
bad_alloc happened in order to proceed.
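In case it helps, this is one way to do that (assuming the attached script is
saved as device_test.cu — the filename is just an example):

```shell
# Compile the attached CUDA device-check script with nvcc and run it.
# 'device_test.cu' is an assumed filename; use whatever name you saved it under.
nvcc device_test.cu -o device_test
./device_test
```

If nvcc itself fails at the compile step, then the CUDA toolkit installation is
the more likely culprit than the device.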
Also, by that, I just meant that the CUDA installation itself might be the issue.
Thank you,
Ruochun
On Friday, October 11, 2024 at 11:10:55 AM UTC+8 [email protected] wrote:
> I think that I have the GPU devices and the GPU driver. If the reason is
> that the GPU devices are not accessible by the system, how can I solve this
> problem? Do you have any suggestions? By the way, I would like to know
> what "CUDA runtime installation" means.
>
> Thank you,
> Weigang
>
> On Thursday, October 10, 2024 at 8:40:09 PM UTC+8 Ruochun Zhang wrote:
>
>> The previous one appears to be a linkage problem with filesystem-related
>> libraries. If your compiler supports C++17, then I suggest you remove
>> everything in your build folder and try rebuilding, and in the ccmake
>> configuration, manually set the C++ standard to 17. If this does not help
>> then maybe there is a problem with your compiler installation...
>>
>> As for the bad_alloc, it usually indicates a failed memory allocation
>> attempt. Since this demo is small in scale, it is more likely that you have
>> no GPU devices, that the GPU devices are not accessible by the system, or
>> that there is no proper GPU driver or CUDA runtime installation.
>>
>> Thank you,
>> Ruochun
>>
>> On Thursday, October 10, 2024 at 1:33:06 PM UTC+8 [email protected]
>> wrote:
>>
>>> When I run the demo, there is also an error, as shown below.
>>> [image: 微信图片_20241010132851.jpg]
>>>
>>> On Thursday, October 10, 2024 at 10:53:23 AM UTC+8 Weigang Shen wrote:
>>>
>>>> Thank you! This problem has been solved. However, there is a new error
>>>> as shown below.
>>>> [image: 微信图片_20241010105032.jpg]
>>>>
>>>> Thank you,
>>>> Weigang
>>>>
>>>> On Thursday, October 10, 2024 at 9:46:36 AM UTC+8 Ruochun Zhang wrote:
>>>>
>>>>> You are probably using an old GCC version, as you'll need a
>>>>> C++17-compatible compiler. You may use GCC 8 or newer.
>>>>>
>>>>> Thank you,
>>>>> Ruochun
>>>>>
>>>>> On Wednesday, October 9, 2024 at 10:16:04 AM UTC+8 [email protected]
>>>>> wrote:
>>>>>
>>>>>> Hi Ruochun,
>>>>>>
>>>>>> I am now encountering a problem. Could you help me with it?
>>>>>> [image: 微信图片_20241009101157.jpg]
>>>>>>
>>>>>> Thank you,
>>>>>> Weigang
>>>>>>
>>>>>>
--
You received this message because you are subscribed to the Google Groups
"ProjectChrono" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/projectchrono/25ce75e2-87d9-4459-bcce-4963fd6383ben%40googlegroups.com.
#include <iostream>
#include <cuda_runtime.h>

void printDeviceInfo(int device) {
    cudaDeviceProp deviceProp;
    cudaError_t status = cudaGetDeviceProperties(&deviceProp, device);
    if (status != cudaSuccess) {
        std::cerr << "cudaGetDeviceProperties failed: "
                  << cudaGetErrorString(status) << std::endl;
        return;
    }
    std::cout << "Device #" << device << ": " << deviceProp.name << std::endl;
    std::cout << "  CUDA Capability: " << deviceProp.major << "." << deviceProp.minor << std::endl;
    std::cout << "  Total Global Memory: " << deviceProp.totalGlobalMem / (1024 * 1024) << " MB" << std::endl;
    std::cout << "  Multiprocessor Count: " << deviceProp.multiProcessorCount << std::endl;
    // Rough estimate: assumes Ampere (128 cores/SM) when major == 8, 64 cores/SM otherwise
    int coresPerSM = (deviceProp.major == 8 ? 128 : 64);
    std::cout << "  CUDA Cores/Multiprocessor: " << coresPerSM << std::endl;
    std::cout << "  Total CUDA Cores: " << deviceProp.multiProcessorCount * coresPerSM << std::endl;
    std::cout << "  Clock Rate: " << deviceProp.clockRate / 1000 << " MHz" << std::endl;
    std::cout << "  Memory Clock Rate: " << deviceProp.memoryClockRate / 1000 << " MHz" << std::endl;
    std::cout << "  Memory Bus Width: " << deviceProp.memoryBusWidth << "-bit" << std::endl;
    std::cout << "  L2 Cache Size: " << deviceProp.l2CacheSize / 1024 << " KB" << std::endl;
    std::cout << "  Maximum Threads per Block: " << deviceProp.maxThreadsPerBlock << std::endl;
    std::cout << "  Maximum Threads per Multiprocessor: " << deviceProp.maxThreadsPerMultiProcessor << std::endl;
    std::cout << "  Maximum Grid Size: (" << deviceProp.maxGridSize[0] << ", "
              << deviceProp.maxGridSize[1] << ", " << deviceProp.maxGridSize[2] << ")" << std::endl;
    std::cout << "  Maximum Block Dimensions: (" << deviceProp.maxThreadsDim[0] << ", "
              << deviceProp.maxThreadsDim[1] << ", " << deviceProp.maxThreadsDim[2] << ")" << std::endl;
}

bool testMemoryAllocation(int device) {
    cudaSetDevice(device);
    float* d_ptr = nullptr;
    // Try a small device allocation to check that the runtime can reach the GPU
    cudaError_t allocStatus = cudaMalloc(&d_ptr, 1024 * sizeof(float));
    if (allocStatus == cudaSuccess) {
        std::cout << "  Memory allocation test: SUCCESS" << std::endl;
        cudaFree(d_ptr);
        return true;
    } else {
        std::cout << "  Memory allocation test: FAILED - "
                  << cudaGetErrorString(allocStatus) << std::endl;
        return false;
    }
}

int main() {
    int deviceCount;
    cudaError_t error = cudaGetDeviceCount(&deviceCount);
    if (error != cudaSuccess) {
        std::cerr << "Error: " << cudaGetErrorString(error) << std::endl;
        return 1;
    }
    std::cout << "Number of CUDA Devices: " << deviceCount << std::endl;
    for (int i = 0; i < deviceCount; ++i) {
        printDeviceInfo(i);
        testMemoryAllocation(i);
        std::cout << "---------------------------------------------" << std::endl;
    }
    return 0;
}