What is the motion vector format for RE:Vision Effects’ products?
We use pixel space motion and not 3D or 3D normalized coordinates. That is, each pixel value of the motion vector image should represent the motion of a pixel within one frame time, in image space.
In our coordinate system, the lower-left pixel represents the origin of the coordinate system, (0,0), with the positive Y-axis pointing up. That means a motion vector value of (1.0,1.0) represents a motion of 1.0 pixel in each of X and Y toward the top right corner of the image.
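As an illustration of this convention (a sketch of our own, not part of any RE:Vision API), the following Python snippet applies one motion vector step. It assumes image data stored top-down in row-major order, as most image libraries do, so the y axis must be flipped when converting between array row indices and this lower-left-origin coordinate system:

```python
def advect_pixel(x, y, dx, dy):
    """Move a pixel (x, y) by motion vector (dx, dy).

    (0, 0) is the lower-left pixel and +y points up, so
    (dx, dy) = (1.0, 1.0) moves one pixel toward the
    top-right corner of the image.
    """
    return x + dx, y + dy


def image_y_to_row(y, height):
    """Convert a y coordinate (origin at the bottom) to an array
    row index (origin at the top), for image data stored top-down."""
    return (height - 1) - y


# In a 480-pixel-tall image, y = 0 is the bottom scanline,
# which is array row 479 in a top-down buffer.
print(advect_pixel(10, 20, 1.0, 1.0))  # (11.0, 21.0)
print(image_y_to_row(0, 480))          # 479
```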
Working directly with pixel motion in screen space means RE:Vision products do not need the 3D camera (perspective transform), which obviates the need to reconstruct the specifics of any particular 3D system. Of course, it also means that RE:Vision products are currently limited to processing in pixel (screen) space.
At any particular frame in time there are two sets of motion vectors to consider: motion vectors from the current frame to the previous frame, and motion vectors from the current frame to the next frame. Most 3D systems only allow you to export backward motion (TO the previous frame), and therefore cannot produce motion for the first frame of your sequence.
We expect motion vectors to be rendered with no antialiasing, much like z-buffers from a 3D system are normally rendered. For example, if a single sphere is animated over a black background, the rendered COLOR image goes towards black at the silhouette of the sphere when antialiasing is turned on (so you get a nice rounded-looking shape). Should the motion vectors go towards zero (the motion vector equivalent of black) at the silhouette? Most definitely not. As such, motion vector renders from 3D systems should be performed with antialiasing turned off.

The Math
We presume that at each pixel the motion vector is represented by (dx,dy) (‘d’ for displacement). In a motion vector image we assume the x motion is stored in the red channel and the y motion is stored in the green channel. Each of our products that deals with motion vectors has a scaling factor, called MaxDisplace, that tells the product how to map the range of pixel values to motion vector values.
For example, in 8 bpc we map the range of pixel values (0, 254) to (-MaxDisplace, +MaxDisplace). For 16 bpc images we map the range of pixel values (0, 65534) to (-MaxDisplace, +MaxDisplace). And for floating point images (where often the color channels are in the range (0,1)) we map (-1, 1) to (-MaxDisplace, +MaxDisplace).

What should the value of MaxDisplace be?
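The three mappings above can be sketched as a small decoding helper. This is a hypothetical illustration; the function and parameter names are this sketch's own, not part of any RE:Vision API:

```python
def decode_motion(red, green, max_displace, depth):
    """Map stored red/green pixel values to a motion vector (dx, dy).

    depth is '8', '16', or 'float'. Per the mapping described above:
    8 bpc values (0..254) and 16 bpc values (0..65534) map linearly
    to (-max_displace, +max_displace); floating point values already
    span (-1, 1) and are simply scaled by max_displace.
    """
    if depth == '8':
        to_signed = lambda v: (v / 254.0) * 2.0 - 1.0    # 0..254  -> -1..+1
    elif depth == '16':
        to_signed = lambda v: (v / 65534.0) * 2.0 - 1.0  # 0..65534 -> -1..+1
    elif depth == 'float':
        to_signed = lambda v: v                          # already -1..+1
    else:
        raise ValueError("depth must be '8', '16', or 'float'")
    return to_signed(red) * max_displace, to_signed(green) * max_displace


# The mid-range integer value encodes zero motion:
print(decode_motion(127, 127, 32, '8'))        # (0.0, 0.0)
print(decode_motion(32767, 32767, 2048, '16')) # (0.0, 0.0)
```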
For 8 bpc (bits per channel) motion vector images: Note that 8 bpc images should not be used for motion vectors unless the motion is small (not more than +/- 32 pixels of movement in either x or y). As such, you can set the MaxDisplace value to 32 for 8 bpc images.
For 16 bpc motion vector images: On the other hand, 16 bpc images can fully represent motion with sufficient subpixel resolution for most practical purposes. For example, you can represent motion vectors of up to 1024 pixels in x or y with 1/32 subpixel precision. A rule of thumb for 16 bpc motion vector files: you can almost always safely set MaxDisplace to 2048. There is one exception: for images with more than 2048 pixels on a side, and with motion in the scene of more than 2048 pixels, set MaxDisplace to the greater of the width or height pixel dimension.
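These rules of thumb can be summarized in a small helper (a hypothetical sketch; the function and parameter names are our own, not a documented API):

```python
def recommended_max_displace(width, height, depth, max_scene_motion=None):
    """Rule-of-thumb MaxDisplace values, per the guidelines above.

    depth is '8', '16', or 'float'. max_scene_motion is the largest
    expected motion in pixels, if known (an assumed parameter used
    only to trigger the large-frame exception).
    """
    if depth == '8':
        return 32  # 8 bpc only suits motion within +/- 32 pixels
    longest = max(width, height)
    if (depth == '16' and longest > 2048
            and max_scene_motion is not None and max_scene_motion > 2048):
        return longest  # exception: very large frames with very large motion
    return 2048  # safe default for 16 bpc and floating point


print(recommended_max_displace(1920, 1080, '8'))         # 32
print(recommended_max_displace(1920, 1080, '16'))        # 2048
print(recommended_max_displace(4096, 2160, '16', 3000))  # 4096
```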
For floating point images: Set MaxDisplace to 2048 (although any positive value will do).

How is the alpha channel of motion vector images used?
The alpha channel represents the coverage of motion; that is, the alpha channel designates the image area that has valid motion vectors. At pixels where the alpha is less than full-on (255 for 8 bpc, 65535 for 16 bpc, 1.0 for float), the RE:Vision Effects plugins assume that the motion vectors are not defined.
Note that there is a significant difference between the following cases:
1. not knowing the motion amount at a particular pixel (that is, there is no object represented at that pixel); and,
2. setting the motion vector to (0,0) (that is, no motion) at a particular pixel.
For example, ReelSmart Motion Blur takes advantage of knowing which pixels contain valid motion vectors. Knowing which pixels contain an object’s motion enables the motion blur process in ReelSmart Motion Blur to properly blur the motion outside of an object’s boundary.

The blue channel is most often ignored by RE:Vision Effects products (see each product for details).
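To make the distinction concrete, here is a minimal sketch (our own hypothetical helper, not a RE:Vision API) that applies the full-on alpha convention described above:

```python
def motion_is_valid(alpha, depth):
    """True only where alpha is full-on: 255 (8 bpc), 65535 (16 bpc),
    or 1.0 (float).

    Anything less means the motion vector at that pixel is UNDEFINED,
    which is different from a (0, 0) vector: (0, 0) with full-on alpha
    means a valid object that is simply not moving.
    """
    full_on = {'8': 255, '16': 65535, 'float': 1.0}
    return alpha >= full_on[depth]


print(motion_is_valid(255, '8'))      # True:  valid motion here
print(motion_is_valid(254, '8'))      # False: motion undefined here
print(motion_is_valid(1.0, 'float'))  # True
```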