Forum MicMac

This forum is dedicated to the community of MicMac users.


All times are UTC + 1 hour



Post new topic Reply to topic  [ 6 posts ] 
Author Message
Offline

Joined: Nov 2019
Posts: 5
Gender: None specified
Posted: 06 Sep 2020, 12:54 

Hello all,

I am trying to compile the latest version of MicMac from git on my Windows 10 workstation. I followed the instructions provided in the GitHub README.md. Compiling without CUDA support works perfectly well; compiling with the CMake CUDA_ENABLED option gives me the following error:
Code:
D:\opt\MicMac_CUDA\micmac\include\GpGpu\GpGpu_Data.h(925): error : identifier "__m128i" is undefined
          detected during instantiation of "__nv_bool CuHostData3D<T>::abMalloc() [with T=float]"

1 error detected in the compilation of "d:/temp/tmpxft_000032a8_00000000-10_GpGpu_Cuda_Correlation.cpp1.ii".
GpGpu_Cuda_Correlation.cu
CMake Error at GpGpuInterfMicMac_generated_GpGpu_Cuda_Correlation.cu.obj.Release.cmake:280 (message):
  Error generating file
  D:/opt/MicMac_CUDA/micmac/build_VS/src/CMakeFiles/GpGpuInterfMicMac.dir/uti_phgrm/GpGpu/Release/GpGpuInterfMicMac_generated_GpGpu_Cuda_Correlation.cu.obj

The same error also happens in the TestCUDA project in GpGpu_Data.h in:
Code:
bool CuHostData3D<int2>::abMalloc(void)


Additionally, there are several warnings in GpGpuOpt and GpGpuInterfMicMac. Does MicMac (or its CMake configuration) expect specific versions of the compiler or CUDA Toolkit?

Thanks for your help!
Florian

My configuration:

Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz

NVidia GeForce GT 640 - CC:3.0 - SMs:2@0MHz
driver R451.82 (r451_40-9) / 27.21.14.5182 (7-19-2020)

Microsoft Visual Studio Community 2019 version 16.7.2
nvcc: Cuda compilation tools, release 10.2, V10.2.89
boost 1.70.0


Top
  Profile 
 
Offline

Joined: Jul 2011
Posts: 1050
Gender: Male
Age: 31
Posted: 08 Sep 2020, 10:59 

Well, the CUDA functionality is still largely non-functional, IIRC. In any case, your machine would probably not benefit much from GPU acceleration anyway...

_________________
Join the MicMac community on Reddit : /r/MicMac/
Don't forget to check the wiki : http://micmac.ensg.eu


Offline

Joined: Nov 2019
Posts: 5
Gender: None specified
Posted: 08 Sep 2020, 13:22 

Luc,

thanks for your reply!

I am trying to compile MicMac on this testing machine, but I have a more powerful workstation with newer graphics cards available. That workstation would benefit greatly from GPU computing. As an alternative, its newer graphics card also supports OpenCL.

Is CUDA or OpenCL support available in any of the tools (especially the dense matching)? Or does it still have to be implemented?

Cheers,
Florian


Offline

Joined: Jul 2011
Posts: 1050
Gender: Male
Age: 31
Posted: 11 Sep 2020, 09:49 

Hei,

As far as I know, GPU utilization is quirky at best. For now, I consider MicMac a pure CPU program that LOVES high core counts and very fast SSD drives.

_________________
Join the MicMac community on Reddit : /r/MicMac/
Don't forget to check the wiki : http://micmac.ensg.eu


Offline

Joined: Mar 2018
Posts: 45
Gender: None specified
Posted: 21 Sep 2020, 15:15 

fsuj wrote:
Is CUDA or OpenCL support available in any of the tools (especially the dense matching)? Or does it still have to be implemented?

Hi,
Adding my tuppence-worth for information (admin: please delete if felt irrelevant). Having tested and processed with the CUDA-enhanced algorithms (on a Linux platform), there is a performance gain with Malt, but GPU memory management appears to be incomplete. If you have a reasonable number of CPU cores I wouldn't bother, as they will rapidly overload the GPU memory and cause a segfault (this is with an 11 GB GPU). It is also worth noting that the current implementation only uses a single GPU. The same can happen with large numbers of images or high-resolution images; getting it to work in those circumstances means reducing the CPU count to the point where it is slower than using CPUs only.
With PIMs there appears to be little merit/speedup, and the above issues occur more quickly.
As you seem to have relatively few cores (4 physical / 8 logical, right?), it might be worth persevering with on your setup; but with many more (i.e. 8+) and/or large numbers of images it is probably not worth it.

Ciaran


Offline

Joined: Nov 2019
Posts: 5
Gender: None specified
Posted: 23 Sep 2020, 17:34 

Thanks for the extensive explanation! It might be worth mentioning that in the wiki.

As I read it, enabling CUDA could lead to a performance gain in depth-map generation with the Malt tool if
  • the CPU has a small number of cores; and/or
  • the number of images processed is *not too high*.

In other cases, the trouble of getting CUDA support to run is not worth the effort.

