(TIP) How to set your Block Size

Started by prehistory, February 02, 2009, 03:01:06 PM

Previous topic - Next topic

prehistory

You know in IMMerge, under the Advanced options, there is a Memory Management section with Block Size and Compression boxes?  The Block Size is the amount of memory (in Mb) that IMMerge will request for each merging job.  If the amount of memory is too small, it won't be enough to process the data, but if it is larger than what is currently available on your computer, IMMerge will crash.

You can improve your merging performance and avoid crashing by properly setting these values based on the computer you are using.  Here's the equation:

1. Take the amount of available RAM on your computer (e.g. 2 Gb), in megabytes (e.g. 2000 Mb).
2. Take the number of processors on your computer (e.g. 2).
3. Divide the Mb of RAM by the number of processors (e.g. 2000 / 2 = 1000).
4. Divide the result by 5 (e.g. 1000 / 5 = 200). This is the value that should be in the "Block Size" box.
5. Divide the Block Size by 10 (e.g. 200 / 10 = 20) for the Compression value.

If you have 2 Gb and 1 processor, you will set Block Size to (2000 Mb / 1 processor / 5 = 400) 400, and your Compression to 40.
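The rule of thumb above can be sketched in a few lines. This is only an illustration of the arithmetic in the steps — the function name and return format are my own, not anything from IMMerge:

```python
def suggested_settings(ram_mb, n_cpus):
    """Rule-of-thumb Block Size / Compression from the steps above.

    ram_mb: available RAM in megabytes; n_cpus: number of processors.
    Returns (block_size_mb, compression_mb).
    """
    block_size = ram_mb / n_cpus / 5   # RAM per processor, divided by 5
    compression = block_size / 10      # Compression is Block Size / 10
    return round(block_size), round(compression)

# 2000 Mb of RAM, 2 processors -> Block Size 200, Compression 20
print(suggested_settings(2000, 2))   # (200, 20)
# 2000 Mb of RAM, 1 processor  -> Block Size 400, Compression 40
print(suggested_settings(2000, 1))   # (400, 40)
```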

This is the number you can safely set, but you can boost it up if you get the warning "Block Size Too Small" during a merge.  Someone told me they successfully used the value 700, when 200 would have been suggested.  Just don't open any other apps then!

spike3d

I think it's worth noting that the block size is the size of the bucket used to process each specific merge job iteration; it is not directly tied to the amount of RAM or the number of processors you have. That said, you will hit a wall by running out of memory if you don't adhere somewhat to the suggested calculation above, give or take a factor of 2 or 3 :-) The required block size varies hugely depending on the type and density of the data, the amount of overlap, and the number of iterations used. Hopefully one of these days it will be calculated automatically by IMMerge.

prehistory

True. The Block Size is the size of the bucket requested by IMMerge (fusion.exe) for each specific merge job.  The actual amount of resources necessary to process the job is dependent upon the number and density (and overlap) of data within the specific chunk.  Ideally, you need slightly less than requested.  If you need more, IMMerge will crash ("Block Size Too Small").  The 1/5 rule for Block Size is an attempt to set the resource request high enough to successfully process most merging jobs without exceeding the resources of the computer.  Set the Block Size at 20,000 and you will certainly be on thin ice, as it could request 20 Gb of resources.

PW User

Hello,

While your suggestions are indeed very good, I might add a few things.

The Block Size is the size of the RAM block that will be requested EVERY time IMMerge needs more memory to process. For one iteration, it may request a block of the size you specified more than once.

From experience, the main parameter is the "# Merging Jobs" in the "Subdivision" section.

When you have a memory error inside PolyWorks, increasing the "# Merging Jobs" is THE parameter to start with. It is a power of 2, so 2, 4, 8, 16, 32, etc. It is not uncommon for me to use 512, 4096, or even 32768 (2^15). Note that this will slow down the start of the processing and lengthen the final step of putting the pieces together... but at least it will go through...
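The retry strategy described here — double the "# Merging Jobs" after each memory error — can be sketched as follows. The helper name and the loop are mine, purely for illustration; in practice you change the value in the IMMerge dialog by hand:

```python
def next_merging_jobs(current_jobs):
    """Next '# Merging Jobs' value to try after a memory error.

    The setting is a power of 2 (2, 4, 8, 16, ...), so the natural
    escalation is to double it each time.
    """
    return max(2, current_jobs * 2)

jobs = 256
for _ in range(3):           # e.g. three retries after memory errors
    jobs = next_merging_jobs(jobs)
print(jobs)                  # 2048

print(2 ** 15)               # 32768, the largest value mentioned above
```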

The only time this "Block Size" parameter had to be increased in my 4 years of usage of Polyworks is during a show, in Chicago, when a hardware manufacturer was showing off his device by scanning over and over again the same area of a car. So when I tried merging the resulting sets of scans that I aligned, the number of images of the same area was very large. The point density of a specific 3D zone was indeed very high. That's the only time the increase of the number of merging jobs was insufficient. I had to increase the block size to 1024 (I have 4Gb of RAM and 2 CPUs). But I still left my number of merging jobs to a high value of 4096.

The reason? The high density of scan in a specific area (let's call it a "cube of data") had to be processed in ONE block of memory. So no matter how small I made the cube of data by increasing the "# Merging Jobs" (that's what it does: separate the big problem into smaller ones), the specific cube of data was too large to fit into the block of 200Mb (the default on my computer). So I had to increase the block size to 1024.

If you scan relatively uniformly, you should not have to increase the block size. Increasing the number of merging jobs is the key to having a large scan go through in most cases. There is a very limited number of people that need to increase the block size. If you have an area of larger density for any reason, that's when you need to increase this value.

One problem with increasing the Block Size to a large value: the IMMerge process requests a CONTIGUOUS block of RAM of that size from the operating system. If you set the value too high, Windows may not have a CONTIGUOUS block of that size available... and will refuse the IMMerge process's request. Since Windows can be inefficient at keeping memory unfragmented, even if you have 2Gb of RAM free, it may be in chunks of only 300Mb of CONTIGUOUS memory... and a request for more than 300Mb will be turned down. A fresh reboot is a remedy for fragmented memory.
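The "one contiguous block or nothing" behavior can be mimicked in a small sketch: a single large allocation either succeeds whole or fails whole. This is only an analogy — Python's allocator and Windows' virtual memory manager differ in the details, and the function below is hypothetical, not part of IMMerge:

```python
def try_contiguous_alloc(size_mb):
    """Try to grab ONE contiguous buffer of size_mb megabytes, the way
    IMMerge requests its Block Size. Returns True on success."""
    try:
        buf = bytearray(size_mb * 1024 * 1024)  # single contiguous allocation
        del buf
        return True
    except MemoryError:
        return False

# A modest request (e.g. the 200 Mb default) will normally succeed;
# an absurdly large one fails even if the total free RAM would cover it
# in smaller, non-contiguous pieces.
print(try_contiguous_alloc(200))
```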

Weeew. Long explanation, isn't it?

Just remember that the number of merging jobs is the fix in 99% of cases, and increasing the Block Size a bit (up to 1024) is necessary only in rare cases of an overload of overlap (a new point cloud tongue twister... ;) )

Bernard

PW User

The newest release of PolyWorks (since V11.0.6) has new code that makes this discussion outdated. Memory management is now completely dynamic; you do not need to worry about this issue anymore.

Admin

I just upgraded from 11.0.4 to 11.0.12 (yes, I was a little behind) and the memory management is still set to what I was using in 11.0.4...

Is the memory management only automated when using cluster mode or in a macro???

PW User

Hello,

The new memory management technique is available everywhere, whether you use a cluster of PCs on a network or only your own computer (locally).

The only thing you cannot do with a local merge command is control the number of CPUs used for merging. When run locally, IMMerge will use ALL processors. With 2 processors and 4Gb of RAM this is not an issue, but if you have dual quad-core processors (thus 8 cores total) and 4Gb of RAM, 8 merge processes will start at once, and each of them can use 1Gb of RAM. Not very nice...

The operating system (Windows) will start using "swap" space, which is hard disk space used as RAM. Hardware memory speed is measured in nanoseconds (10^-9 seconds), while disk speed is measured in milliseconds (10^-3 seconds). And Windows is not very good at swapping. A crash may occur.

With the cluster merge tool, the IMMerge Agent, you can control the number of processors used in a merge job. So some people use the cluster merge even when they are on a single local machine and only use that machine.

I will not reproduce the installation instructions here, but here is the place where you can find the configuration file:
C:\Program Files\InnovMetric\IMMergeAgent (64-bit)\bin_win32\

The file is called : "immerge_agent.config" and there is a section in it:

    <NBCPU>1</NBCPU>

Change the 1 to the number of CPUs you want to use for the job.

This is rather complex for a single post, so if you need more details, let me know.