The Predict Engine texture component is designed to start a simulation manually or by script. It is best suited to VR applications or to integration into custom scripted behaviours. It works in the following way:
The component can be added anywhere in the scene. If the Predict simulation is to be displayed on a geometry in the scene (preferably a plane), the component must be added to this geometry.
The component settings are as follows:
The Sensor and Resolution fields define which camera in the scene will be used for the Predict Engine simulation and at which resolution;
If the component is placed on a geometry (a GameObject with a MeshRenderer component), the "Set Texture on this GO" field enables you to automatically create an Unlit material that uses the Predict simulation and assign it to the renderer;
If the Persistency toggle is enabled, the Predict simulation will be saved and still be available after the Engine process is stopped;
If the Lock Transform toggle is enabled, the sensor transform will not be updated once the process is started, even if it moves in the Unity scene;
The Auto Action field enables you to define an action that will be performed after a given amount of time or a given number of samples per pixel (spp). The action can be to pause the simulation, stop it, or save its output.
The start, stop, pause, reload, and save buttons at the top let you control the simulation manually. You can also do so by script by calling the component's "StartSimulation()", "PauseSimulation()", "StopSimulation()", "ReloadSimulation()", "SaveLDRSimulation()", and "SaveHDRSimulation()" functions. These functions can also be called from a UI button, a VR joystick, and so on.
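For example, a small script can forward UI input to these functions. The sketch below is only an illustration: the class name "PredictEngineTexture" is an assumed placeholder for the component's actual type, while the function names come from the list above.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: wires two UI Buttons to the simulation control functions.
// "PredictEngineTexture" is a placeholder for the actual component type name.
public class SimulationControls : MonoBehaviour
{
    public PredictEngineTexture predictTexture; // assign in the Inspector
    public Button startButton;
    public Button stopButton;

    void Start()
    {
        // Forward button clicks to the component's control functions.
        startButton.onClick.AddListener(() => predictTexture.StartSimulation());
        stopButton.onClick.AddListener(() => predictTexture.StopSimulation());
    }
}
```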
Once the process is started, you can get its state in the Process section and a preview of the texture in the Preview section. The state can also be retrieved by script if necessary.
The simulation texture can be retrieved using the component's mOutputTexture variable.
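As a rough illustration of reading this variable from another script (again assuming a placeholder "PredictEngineTexture" class name), the output texture can be polled each frame and assigned to a material:

```csharp
using UnityEngine;

// Minimal sketch: applies the simulation output texture to this GameObject's material.
// "PredictEngineTexture" is a placeholder type name; mOutputTexture is the variable
// mentioned above and may be null until the simulation has produced output.
[RequireComponent(typeof(Renderer))]
public class ApplySimulationTexture : MonoBehaviour
{
    public PredictEngineTexture predictTexture; // assign in the Inspector
    private Renderer targetRenderer;

    void Start()
    {
        targetRenderer = GetComponent<Renderer>();
    }

    void Update()
    {
        // Refresh the material's main texture with the current simulation output.
        if (predictTexture != null && predictTexture.mOutputTexture != null)
        {
            targetRenderer.material.mainTexture = predictTexture.mOutputTexture;
        }
    }
}
```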
This component's primary purpose is to be used in VR scenes:
Since the Predict simulation is not yet interactive enough to be used directly in a VR headset, we use Unity to provide interactive navigation in the scene. The user can then define a position from which they want to get the Predict Engine simulation and visualize it as if they were using an actual camera.
The Predict Engine simulation is then computed and displayed on a plane in the Unity scene.