Forum MicMac

Reducing Tapioca processing time by modifying dat files?
Page 1 of 1

Author:  ccastillo [ 13 Jul 2018, 13:17 ]
Post subject:  Reducing Tapioca processing time by modifying dat files?

Hi. I am coming back to this issue of trying to reduce Tapioca processing time, since it is the bottleneck in my processing workflow. I was wondering if, in a similar manner to Photoscan, we could limit the maximum number of key points to be used, which could reduce the time the Ann matching algorithm takes to run. In my understanding, performing this would require:

1. Running Digeo to get the key points. I think this can be done, since Digeo appears as a separate executable in the bin folder.

2. Reading the key points from the binary dat files in order to reduce them (I think they are in the Pastis folder, the files starting with LBP*), and finally rewriting them to dat again. I found this thread with a Java snippet to encode and decode the key points:


The format is not very clear to me. Some time ago I succeeded in converting text files to dat files for Homol matches in Matlab, with code like this:
for k=1:size(C,1)

But with the key-point dat files I am not sure how to do it, because the code in the Java snippet is not clear to me. I could use Matlab's fread function, but I need to know the format more precisely.

3. Running Ann independently on the reduced key-point dat files. I'm not sure if this can be accomplished, since I did not see a separate executable in the bin folder.
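Just to illustrate what I have in mind for step 2: the record layout below is purely a guess on my part (I do not know the real Pastis/Digeo dat layout, it would have to be checked against the MicMac sources), but reading fixed-size records and keeping only the strongest ones could look like this in Python:

```python
import struct

# HYPOTHETICAL fixed-size record layout: x, y, scale, angle as
# little-endian float32, followed by a 128-float SIFT descriptor.
# The real Pastis/Digeo .dat layout must be verified first.
RECORD = struct.Struct("<4f128f")

def reduce_keypoints(path_in, path_out, max_points):
    """Keep only the max_points records with the largest scale field."""
    with open(path_in, "rb") as f:
        data = f.read()
    records = [RECORD.unpack_from(data, i * RECORD.size)
               for i in range(len(data) // RECORD.size)]
    # index 2 is the (assumed) scale; keep the strongest features first
    records.sort(key=lambda r: r[2], reverse=True)
    with open(path_out, "wb") as f:
        for r in records[:max_points]:
            f.write(RECORD.pack(*r))
    return min(len(records), max_points)
```

If the actual record size or field order differs, only the Struct format string and the sort key would need to change.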

Can anyone shed some light on this crucial topic?

Thanks a lot in advance,


Author:  Lisein Jonathan [ 17 Jul 2018, 11:52 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Hi Castillo,

The topic of reducing tie points, with the double aim of 1) obtaining a more appropriate tie-point set (well distributed in the image geometry and in the 3D model geometry, with high multiplicity (the number of images that "see" the tie point) and low reprojection error) and 2) making the process less time consuming, has been under the spotlight for some time. You may try to get information about:

-tool schnaps

-tool ratafia

- TiepTri and TaskCorrel pipeline (paper ... 5-2017.pdf )

Note that all these approaches focus not on reducing feature points but on reducing tie points. If you want to reduce feature points, you can still adapt the size of the image on which feature points are computed.

all the best,

Author:  ccastillo [ 17 Jul 2018, 22:28 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Hi Jo. Thanks a lot for the reply! Yes, I use Schnaps for tie-point reduction. But my problem is Tapioca: it takes a long time, since it produces hundreds of thousands of features that have to be matched. This is by far my bottleneck. The SIFT algorithm in VisualSFM typically produces 10 times fewer points than Tapioca, and it is much faster.

Reducing image resolution is not a good approach for me, because you are lowering the quality of your data and therefore the accuracy of your results.

That's why I was asking if we could go directly to the source by reducing the features prior to the matching algorithm, as PhotoScan allows.

Thank you for the response and the interest.



Author:  Lisein Jonathan [ 18 Jul 2018, 09:14 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Hi Castillo,

I am not sure how much you are lowering the quality of the data by computing tie points at low resolution. It is the resolution that is lowered, not the quality. Of course, resolution impacts quality somehow, but I do not know by how much.

About the TiepTri and TaskCorrel pipeline: the first iteration of tie-point and orientation computation is indeed of low quality (but quick in terms of computation time), but the second iteration is of high quality and does not require Tapioca/SIFT/Digeo.

all the best,

Author:  ccastillo [ 18 Jul 2018, 12:55 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Hi, Jo. In my view, when you resample, you are losing precision, since several pixels become one. Photogrammetric precision is measured in pixels; if a pixel represents a larger area in the scene, pixel errors become larger spatial errors. That's why I do not want to resample.
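To put numbers on that (the values here are made-up examples, not from a real project):

```python
def spatial_error_cm(reproj_error_px, gsd_cm_per_px, downsample_factor=1):
    """Map a reprojection error in pixels to ground units.
    Downsampling by a factor k multiplies the effective GSD by k."""
    return reproj_error_px * gsd_cm_per_px * downsample_factor

# e.g. a 0.5 px error at a 2 cm/px ground sampling distance:
full_res = spatial_error_cm(0.5, 2.0)        # 1.0 cm on the ground
half_res = spatial_error_cm(0.5, 2.0, 2)     # 2.0 cm at half resolution
```

So the same half-pixel image error doubles in ground units when the images are resampled to half size, which is exactly my concern.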

I hadn't heard of the TiepTri and TaskCorrel pipeline so far, thanks a lot for bringing that up. I'm going to check it today. If it worked, it could be an alternative to the standard workflow.

Many thanks, Jo,


Author:  Anthony [ 18 Jul 2018, 18:48 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Are you using -1 (full resolution) in Tapioca?
If so, that is not advised, because photographs are never sharp to pixel precision (motion blur, Bayer matrix, etc.).
It takes a while to compute, and Tapas will consequently take longer.

Are you using Digeo (which is slightly faster)?
Otherwise you could try modifying the SIFT ratio to decrease the number of points (keeping only the sharpest features). I know it's possible, but I don't remember how.
It's still true, and I agree, that Tapioca is the weakest point of MicMac regarding computation cost.
Lately we have been trying to use OpenMVG, and the speed is impressively better (from some hours down to some minutes on large datasets).
OpenMVG works the way you intend: no subsampling of images, but playing with a ratio to decrease the number of features.
We developed an interoperability extension to convert OpenMVG features into the MicMac-friendly Homol/Pastis hierarchy.
We might be OK to share this code if necessary ;)
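For reference, that ratio is exposed directly on the OpenMVG command line. A rough sketch of building the matching call (the path is a placeholder, and the options should be checked against your OpenMVG build):

```python
import subprocess

# Sketch: call OpenMVG's matching step with a stricter nearest-neighbour
# distance ratio (a lower ratio keeps fewer, more distinctive matches).
def compute_matches_cmd(matches_dir, ratio=0.6):
    return [
        "openMVG_main_ComputeMatches",
        "-i", f"{matches_dir}/sfm_data.json",
        "-o", matches_dir,
        "-r", str(ratio),   # nearest-neighbour ratio (default 0.8)
    ]

cmd = compute_matches_cmd("/path/to/matches", ratio=0.6)
# subprocess.run(cmd, check=True)  # uncomment to actually run it
```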


Author:  ccastillo [ 18 Jul 2018, 19:53 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Thanks a lot, Anthony! A lot of new information for me here:

1. If full resolution is not recommended, which sampling ratio would be advisable? That piece of information was indeed surprising to me!

2. Yes, I'm using Digeo, with NoMin=1, and Ann for matches.

3. I did not know about the Ratio option. Yes, I can see it in the manual now. I'll try that as well.

4. I'm starting to work with OpenMVG. I have some questions in the OpenMVG forum about exactly that: producing a matches text file to be used in combination with MicMac.

It only produces a binary matches file that I could not take advantage of, and so far, no response. So yes, I would love it if you could help with that. My main limitation with OpenMVG is that I am not knowledgeable in C; I only use Matlab. If the executables are there, I can use them, but I was not even capable of compiling modifications of the OpenMVG scripts. I had some issues with cmake, another question I posted in the OpenMVG forum.

Thanks a lot for all this valuable information.

All the best,


Author:  Anthony [ 19 Jul 2018, 11:18 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Dear Carlos

1. Trying several datasets, it appears that some projects fail using full resolution while they succeed even at low resolution. PhotoScan actually does the same subsampling: its low, mid and high parameters correspond to image scaling ratios (/8, /4 and /2), but it also limits the maximum number of key points (4,000 by default, if I remember well). The question is: how are the points filtered/selected inside the PhotoScan black box? In MicMac you can select the method (heuristic, spatial distribution, redundancy) through the tools mentioned by Jo, but that is another (still effective) step, and it does not solve your speed request. It is hard to define a good ratio, but at the current stage of development of our fully automated platform (based on MicMac), a photogrammetry-friendly dataset (good overlap and texture) works in most cases with a minimal subsampling of 1000 px wide. Usually increasing the ratio (like in PhotoScan) up to 75% of full resolution is enough; the main goal is to have sufficient key points (between 1,000 and 4,000), still regardless of quality (which is why filtering is currently being heavily developed).
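As a trivial illustration of the sizes involved (example numbers only, for a 4000 px wide image):

```python
def tapioca_size(full_width, ratio):
    """Width, in pixels, to pass to Tapioca for a given subsampling ratio."""
    return int(full_width * ratio)

# a 4000 px wide image at the ratios mentioned above:
sizes = {r: tapioca_size(4000, r) for r in (1/8, 1/4, 1/2, 0.75)}
# {0.125: 500, 0.25: 1000, 0.5: 2000, 0.75: 3000}
```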

To decrease your computation time, you might try this solution:
- Tapioca Line with only 1 (to 3) adjacent pictures
- Let Schnaps complete the missing correspondences automatically; I heard it works ;)
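In command form, that suggestion would look roughly like this (the pattern and size are example values, to be checked with mm3d's built-in help):

```python
import subprocess

# Sketch of the suggested pipeline: match each image against only a few
# adjacent ones, then let Schnaps tidy up the tie points. Example values.
def tapioca_line_cmd(pattern=".*\\.JPG", size=1500, n_adjacent=3):
    # Tapioca in Line mode: match each image with its n neighbours only
    return ["mm3d", "Tapioca", "Line", pattern, str(size), str(n_adjacent)]

def schnaps_cmd(pattern=".*\\.JPG"):
    # Schnaps: reduce and clean the resulting tie-point set
    return ["mm3d", "Schnaps", pattern]

for cmd in (tapioca_line_cmd(), schnaps_cmd()):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment inside a MicMac project
```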

2. Good for Digeo. Maybe NoMax can limit the number of key points, as in PhotoScan?

3. Let us know ;)

4. I'll see what we can do to make our interoperative chain between Open-MVG and MicMac available to all ;)

Good MicMac


Author:  ccastillo [ 19 Jul 2018, 13:12 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Thanks a lot for the helpful tips. I'll check the Line + Schnaps approach.

I'm not sure how NoMin and NoMax work. I always use NoMin and it speeds up the process. NoMin and NoMax cannot be combined. Does NoMax have a different effect from NoMin? I guess it's the same, halving the number of tie points?

Yes, the Ratio option works; it's pretty much what I was looking for!

I'll wait for the OpenMVG-MicMac compatibility tools.



Author:  Yoann Courtois [ 23 Aug 2018, 16:25 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Hi everyone !

Nice topic! I'm sad I didn't participate at the time!

I have indeed quite similar problems:
- My acquisition procedure, as well as the intended result, requires working at high resolution (a minimum of 3000 px width out of the 4000 px of the raw images).
- With that, the number of tie points is enough for accurate relative orientation. BUT, at first, I have more than 200,000 key points per image, which takes a huge amount of time to process...
- I handle tie-point filtering using Schnaps and Ratafia, but there is nothing to be done about key points...

From a PhotoScan user's point of view:
- I made it a rule to always orient pictures relatively at full resolution (the "High" parameter in PhotoScan). As has been said previously, errors are in pixels, so the smaller the pixels are during feature detection, the smaller the real errors observed will be.
(Due to MicMac processing time and expert suggestions, I now try to work at lower resolution, between 1/2 and 3/4.)
- There are parameters that cap the number of key points, and hence the number of tie points. Even using 0 (no limit), the key-point count is lower in PhotoScan than in MicMac (at equivalent resolution), whereas the tie-point counts are really similar.
- By the way, large analyses have been done by PhotoScan users. They show that beyond the 40,000-50,000 "best" key points per image, there is no real benefit. It remains to define what the "best" key points are.

If somebody has some tips to reduce the number of features before matching (every kind of filter is welcome), I would be glad to have a look!

Likewise, if somebody can give some information on the role of "NoMin" and "NoMax", I would like to know more!


Author:  Sergio [ 19 Mar 2021, 15:58 ]
Post subject:  Re: Reducing Tapioca processing time by modifying dat files?

Anthony wrote:
Dear Carlos

4. I'll see what we can do to make our interoperative chain between Open-MVG and MicMac available to all ;)

Good MicMac


Good evening,
has there been any development on data exchange from OpenMVG / OpenMVS to MicMac?

I saw that, in "mm3d TestLib", there is the command "ConvHomolVSFM2MM". Maybe it converts tie points and matches to the MicMac format? I have not found examples or documentation for this command.

When I try to run "mm3d TestLib ConvHomolVSFM2MM" I get the following error: "ERROR: Cannot find Conv_VSFM_MM tool"

How should it be used? With which parameters? Could you advise me on how to use data processed by OpenMVG in MicMac, to compute orientation and a 3D model with MicMac commands?

Thanks in advance
