Robotic Technology brings 3D Digital Twin to PTZ Inspections
A persistent limitation of conventional visual inspection is not image quality, but traceability. Findings captured as still images or video are often difficult to relocate precisely, complicating repeat inspections, comparison over time, and data-driven assessment. This gap becomes more pronounced in confined or complex assets where spatial orientation and inspection coverage are difficult to reconstruct after the fact.
The Mentor Zoom PTZ inspection system addresses this challenge by integrating robotic control with in‑situ three‑dimensional asset modeling. As the camera is deployed, the system generates a 3D representation of the inspection environment and associates each indication with exact spatial location data. Findings are automatically documented within this context, enabling inspectors to reliably return to the same areas during follow‑up inspections and supporting consistent comparison across inspection intervals. By linking visual data to a digital twin, inspection results become more repeatable, auditable, and suitable for long‑term analysis without adding complexity to the field workflow.
Pan-Tilt-Zoom Cameras Have Come a Long Way
Many industries operate equipment such as pressure vessels, reactors, and storage tanks. Even today, many internal inspections are carried out by professionals who take on the risk of entering these inherently dangerous spaces, but asset owners and operators have been transitioning to remotely operated tools such as cameras for the last three decades.
While the use of cameras clearly increases safety and significantly reduces the cost of visual inspections, the risk of missing critical flaws or defects drove the development of better PTZ camera systems over the years.
A long journey took available industrial PTZ cameras from "a few" flickering TV lines to Full HD imagers, and from dim incandescent lights to several thousand lumens of installed LED illumination.
Current industrial PTZ camera systems deliver excellent visual information from hard-to-reach hazardous locations to the experts outside of the danger zone.
Improving Image Quality
Good practice requires regularly verifying an inspection system's visual performance with respect to spatial resolution and color fidelity using a USAF test target, as shown in Figure 1.
Figure 1: USAF 1951 test target to verify spatial resolution and color accuracy. Included with the Mentor Zoom camera system.
The chart in Figure 2 shows how improvements in optics, illumination, signal transmission, and image display have, over the years, increased the achievable spatial resolution at a given inspection distance.
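On a USAF 1951 target, the resolvable spatial frequency is encoded by the group and element numbers of the smallest pattern the system can still separate. The standard formula can be expressed in a few lines (the function name is illustrative):

```python
def usaf_resolution_lp_per_mm(group: int, element: int) -> float:
    """Spatial frequency (line pairs per mm) of a USAF 1951 target element.

    Standard relation for the 1951 USAF chart:
    resolution = 2 ** (group + (element - 1) / 6).
    """
    return 2 ** (group + (element - 1) / 6)

# The smallest element the full imaging chain can still resolve gives
# its limiting resolution at the test distance.
print(round(usaf_resolution_lp_per_mm(2, 3), 2))  # group 2, element 3 -> 5.04
```

Comparing this value across inspection distances yields exactly the kind of resolution-over-distance curve shown in Figure 2.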
The Documentation Process
While resolution and illumination have improved, the process of documenting observations, findings, and their locations remains diverse across PTZ camera deployments, ranging from non-existent to unstructured to exhaustive and time-consuming. The resulting paperwork and collected data range from a simple signed document to folders filled with digital images and sometimes hours of video material.
In all these scenarios, the documentation hardly allows for precise repeat inspections, trend analysis, or risk-based maintenance planning. Every new inspection essentially starts fresh: previously identified and reported problem areas must be searched for again by slowly panning and tilting along the asset internals. Would it not be great to store those locations in such a way that finding them again becomes possible automatically?
Digital Twin from a Robotic PTZ Camera
Robots move around unknown areas by mapping out the environment using a combination of cameras and multiple, complex, and sometimes large sensors. This approach allows them to precisely remember a location within this map of the real world, to even recognize structures and objects, and to resume or repeat actions whenever their instructions call for it.
What if a PTZ camera, once placed inside a confined space, could generate a similar map? What if it could report findings reliably in three dimensions and even automatically return its focus to previously marked spots years later? What if the camera could generate and feed a digital twin? These were the questions the Waygate Robotics team was asked repeatedly.
The solution was not straightforward. Creating a three-dimensional model of a confined space surrounding a PTZ camera requires scanning technology. The simple idea of bolting a three-dimensional scanning LiDAR onto a PTZ camera does not work due to space constraints and, even more importantly, because the add-on would obstruct the field of view and thereby compromise the critical qualities of any PTZ camera: a clear field of view and good image quality.
A Map from a Camera
A natural solution to the scanner integration problem is to exploit the fact that a PTZ camera already is a scanner of sorts. The camera can, by design, point in almost every direction; an integrated distance sensor could therefore also point everywhere and build up an environment model from the collected distance measurements.
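The principle fits in a few lines: each distance measurement, taken together with the pan and tilt angles at which it was acquired, defines one 3D point. A minimal sketch of this spherical-to-Cartesian conversion (the axis convention is an assumption; real systems also calibrate sensor offsets):

```python
import math

def measurement_to_point(pan_deg: float, tilt_deg: float, distance: float):
    """Convert one range measurement at a given pan/tilt pose to a 3D point.

    Assumed convention: pan rotates about the vertical z-axis,
    tilt is the elevation above the horizontal plane.
    """
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = distance * math.cos(tilt) * math.cos(pan)
    y = distance * math.cos(tilt) * math.sin(pan)
    z = distance * math.sin(tilt)
    return (x, y, z)

# Sweeping pan and tilt over their full ranges yields a point cloud
# of the surrounding space.
print(measurement_to_point(0.0, 0.0, 2.5))  # straight ahead -> (2.5, 0.0, 0.0)
```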
However, industrial PTZ cameras have been developed to survive harsh conditions and deliver good images, not to be precise, automated, or equipped with complex sensors. Those camera heads are controlled manually and moved by slow electric motors responding to joystick-driven voltage changes sent over long wires inside their thick tether cables. Obtaining accurate, repeatable pan- or tilt-angle values from them is impossible due to noisy signal transmission and the tolerances of their analog circuits.
A Digital Robotic Integration Platform
Integrating scanning into a PTZ camera required a more radical approach: a complete redesign, a transformation towards digitally controlled “robotic” cameras with on-board closed-loop motor control and high bandwidth digital data transmission.
With digital communication in place and precise motor control available, a scanning sensor can be moved by the very same motors already used for panning and tilting. This design allows for in-situ creation of an environment map. The map is then populated with the inspection data—the digital twin is born.
Building the Map
Generating a point cloud by repeatedly measuring, in all directions, the time it takes infrared light pulses to reflect off a target surface is only the starting point. Once such a 3D point cloud is captured, the software applies an algorithm to convert this data into a so-called surface triangulation, essentially creating a 3D model consisting of connected triangle patches.
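The article does not name the triangulation algorithm; for a scan acquired from a single viewpoint on a regular pan/tilt grid, one common and simple scheme splits each grid cell into two triangles, directly yielding the connectivity of the mesh. A minimal sketch (function name and indexing are illustrative):

```python
def grid_triangulation(n_pan: int, n_tilt: int):
    """Connect a regular pan/tilt scan grid into triangles.

    Vertices are indexed row-major: index = row * n_pan + col.
    Each grid quad is split into two triangles, producing a surface
    mesh of connected triangle patches as described in the article.
    """
    tris = []
    for r in range(n_tilt - 1):
        for c in range(n_pan - 1):
            i = r * n_pan + c
            tris.append((i, i + 1, i + n_pan))              # lower-left half
            tris.append((i + 1, i + n_pan + 1, i + n_pan))  # upper-right half
    return tris

# A 13 x 7 scan grid has 12 * 6 quads, hence 144 triangles.
print(len(grid_triangulation(13, 7)))  # -> 144
```

Each triangle of angle samples then becomes a triangle of 3D points after the distance measurements are lifted to Cartesian coordinates.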
After capturing photos in all directions, the images can be projected onto the individual triangles, creating the texture effect shown in Figure 4. This texture greatly helps camera operators work efficiently with the digital twin.
Extending Maps with the On-board IMU
If an environment is not fully visible from a single camera placement, software can merge a second or third scan with the previously generated model. This process is called point-cloud registration, and a well-established algorithm for it is Iterative Closest Point (ICP), which step by step minimizes the distances between currently closest point pairs. ICP only works reliably if the two point clouds overlap sufficiently and are already roughly aligned; rotational alignment is especially critical. Overlap is usually less of a problem and can be controlled by the user; rotational alignment, however, is harder to control, since an operator may rotate the camera or deploy it at a substantially different angle to get the views needed for the inspection. A solution to the initial rotational alignment problem for ICP (or any other registration algorithm) comes from one more component used in robotic systems: the Inertial Measurement Unit (IMU).
A smartphone-grade IMU inside the camera tracks changes in orientation and provides the algorithm with an initial alignment that helps it converge. Once again, robotics technology makes all the difference.
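The interplay of ICP and the IMU-supplied initial rotation can be sketched compactly. Below is a minimal, illustrative point-to-point ICP (brute-force correspondences, Kabsch/SVD update); production registration pipelines are considerably more elaborate:

```python
import numpy as np

def icp(src, dst, init_rot=np.eye(3), iters=20):
    """Minimal point-to-point ICP aligning src (N x 3) onto dst (M x 3).

    init_rot is the coarse initial rotation (e.g. from the IMU); without
    a rough alignment, ICP can converge to a wrong local minimum.
    Returns (R, t) such that src @ R.T + t approximates dst.
    """
    R, t = init_rot.copy(), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # 1. Closest-point correspondences (brute force for clarity).
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = dst[d2.argmin(axis=1)]
        # 2. Best rigid motion for these pairs (Kabsch, via SVD).
        mu_m, mu_n = moved.mean(axis=0), nn.mean(axis=0)
        H = (moved - mu_m).T @ (nn - mu_n)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:        # guard against reflections
            Vt[-1, :] *= -1
            dR = Vt.T @ U.T
        dt = mu_n - dR @ mu_m
        R, t = dR @ R, dR @ t + dt
    return R, t

# Demo: a grid scan rotated 15 degrees about z and shifted, with the IMU
# supplying the initial rotation (here, the exact rotation, for clarity).
g = np.arange(3, dtype=float)
src = np.array([[x, y, z] for x in g for y in g for z in g])
a = np.radians(15.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.2, 0.1, 0.05])
R, t = icp(src, dst, init_rot=R_true)
print(round(float(np.abs(src @ R.T + t - dst).max()), 6))  # -> 0.0
```

With a good initial rotation, the correspondence search locks onto the correct point pairs immediately, which is exactly the role the IMU plays in the real system.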
Re-Localization
Furthermore, if a camera is moved to a different location, a fast scan movement generating only a coarse point cloud can use the same algorithm to determine the camera's new position before work resumes. This also works when re-deploying a camera into an already-mapped space.
All these processes are computationally demanding and require a modern computer with a GPU in the control unit.
Working in Three Dimensions
With a 3D map of the environment, a PTZ inspection gains many new capabilities. The operator can mark areas of interest directly in the 3D view and aim the camera at them with a simple touch on the screen. Captured images show up as markers in 3D, indicating which areas and locations have already been inspected. Most importantly, the captured photos now "live" in 3D space: they are tagged with their location, and those locations can be used for reporting or reloaded for future inspections.
Reports on Paper, in the Cloud, and on Digital Twins
The remaining question was: "What should the output of such a system be?" Conventional archives store collections of image and video files along with report documents. Data handling and storage requirements vary across industries and from company to company. How can the advantages of computerized data collection be leveraged to satisfy all current and future needs for data security, efficient sharing, and storage?
The answer was quite boring: simply do everything, since it is automated anyway. The control unit converts structured data into "paper reports" as PDF and office documents at the press of a button. The "old way" of exporting just the image and video files to a flash drive via USB remains available, as does secure wireless upload of the entire inspection context to a cloud solution. And once the data is safely in the cloud or on other storage, the digital twin becomes the basis of future inspections.
Efficiency Gains
Some of the changes made to the system design result in additional benefits to the users. For instance, closed-loop on-board motor control allows for precise and faster motion of the pan and tilt axes. While this is helpful and speeds up the process when using the joysticks, the faster movements truly shine when introducing touchscreen controls.
Touch Gestures
Let an operator simply touch a location on the camera image, and the camera will quickly adjust its angles to bring that point to the center of the screen. This control is very fast and increases efficiency in most cases. An excellent example is inspecting a weld seam while keeping the weld in the center of all captured images.
Touch controls based on the new design also include combined zoom and motion operations. Let an operator draw a rectangle on the screen, and the camera will move that area to the center and adjust the zoom so the selected region fills the screen, all in one swift maneuver. And with PTZ users accustomed to smartphone controls, it was only logical to bring pinch-zoom gestures to PTZ camera control as well.
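The touch-to-center gesture reduces to simple geometry: the pixel offset of the touched point from the image center maps to pan and tilt deltas through the current field of view. A minimal sketch assuming a pinhole model with square pixels (names and sign conventions are illustrative; hfov_deg changes with zoom):

```python
import math

def touch_to_pan_tilt(px, py, width, height, hfov_deg):
    """Map a touched pixel to the pan/tilt deltas (degrees) that center it.

    Assumes a pinhole camera with square pixels. hfov_deg is the current
    horizontal field of view; screen y grows downward, so tilt is negated.
    """
    f = (width / 2) / math.tan(math.radians(hfov_deg) / 2)  # focal length, px
    d_pan = math.degrees(math.atan((px - width / 2) / f))
    d_tilt = -math.degrees(math.atan((py - height / 2) / f))
    return d_pan, d_tilt

# Touching the right edge of a 1920 x 1080 image at 40 degrees HFOV
# pans by about half the field of view (roughly 20 degrees).
print(touch_to_pan_tilt(1920, 540, 1920, 1080, 40.0))
```

The rectangle gesture works the same way, with the rectangle's center driving the motion and its width driving the new zoom setting.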
Cable Handling
A not-so-digital, though probably impactful, consequence of the new hardware architecture is better cables. Instead of the multiple thick coaxial cables required for reliable analog video transmission, along with separate wires for motor control, lights, and encoders, the cables now only need to carry a high-bandwidth data link and DC power. The resulting cables are thin and flexible and allow for lossless video transmission over distances of up to 800 ft.
What the Future Holds
The industry requires quality inspections, reliable data, and consistent execution. With robotic technology embedded in PTZ systems, further automating the inspection process and moving toward automated feature detection are natural next steps. The Mentor Zoom control unit is already fitted with an AI model accelerator card, allowing for real-time execution of proven classifier models.
Software features such as inspection coverage lists, showing whether the actual features inside an asset (by type and location) have been inspected, are just around the corner. Automation programs, such as robotically capturing features like weld seams at a defined resolution, are also no longer far from reality.
