
About Cube Eye

Compact design and a small form factor fit a wide range of applications.

About Cube Eye ToF (Time of Flight)
meerecompany has developed the high-performance ToF (Time of Flight) 3D depth camera "Cube Eye" through many years of research and development. Cube Eye detects a broader depth of movement and measures the distance of objects more precisely.

Key strengths: Compact Design / Accuracy and Efficiency / Easy to Use / Convenient System Integration Options / Various Product Lineup

Cube Eye's iToF Platform Coverage

Cube Eye is a depth camera powered by a Time-of-Flight (ToF) image sensor. While it operates similarly to a conventional RGB camera in capturing 2D images, it also provides highly accurate 3D distance information. Unlike RGB cameras, which collect red, green, and blue color data from a subject, ToF cameras detect synchronized light signals emitted from their own light source. By measuring the time it takes for light to travel to the object and back, Cube Eye calculates precise distance information. (For detailed technical principles, please refer to section 1.1.4 of the technical documentation.)

The light signals received by the image sensor are processed internally in two main stages:
- Depth Calculation
- Image Post-Processing

These steps enable Cube Eye to generate more accurate and clearer depth data. All processing is performed within the camera module itself, which outputs the following data formats:
- Amplitude
- Depth Map
- Point Cloud

These outputs can be used as input for host-side algorithms, allowing seamless development of various applications. Cube Eye also supports integration through a range of interfaces and development tools:
- Driver: supports AP (application processor) integration
- Cube Eye SDK: enables camera control and software development
- Cube Eye Viewer: offers real-time visualization and debugging

BASE LINE PRODUCTS (ToF modules)

- S.Cube
S100D product page link ∞
S111D product page link ∞
S.Cube is a module based on Samsung LSI's image sensor and equipped with a proprietary depth-processing ASIC chip. The image sensor features a true VGA global shutter, enabling fast and accurate image capture. Thanks to its custom-designed processing chip, the module offers a compact size and low power consumption, ideal for embedded systems.

- I.Cube
I200D product page link ∞
I.Cube is a module built on an Infineon image sensor, known for its excellent performance under sunlight and a longer detection range than the other models.

Principle of Technology

- What is a Depth Camera?
Advancements in machine learning, artificial intelligence, embedded vision, and processing technology have helped innovators build autonomous machines that can navigate an environment with little human supervision. Examples include AMRs (Autonomous Mobile Robots), autonomous tractors, and automated forklifts. Making these devices truly autonomous requires them to move around without manual navigation, which in turn requires the ability to measure depth for mapping, localization, path planning, and obstacle detection and avoidance. This is where depth-sensing cameras come into play: a depth camera calculates the distance to objects in the scene and outputs it in pixel units.

- Major Criteria of Performance
Since a depth camera is fundamentally still a camera, it is evaluated on much the same characteristics as an RGB camera:
1) Accuracy / Depth Resolution
2) Spatial Resolution
3) Frame Rate
4) Field of View
5) Real-time Response / Latency
6) Motion Blur
7) Sunlight Robustness

- Output Data
ToF cameras generate a variety of output data types that describe the spatial characteristics of objects in a scene. From depth maps and point clouds to distance measurements and intensity images, these outputs enable a wide range of applications in fields such as robotics, augmented reality, autonomous vehicles, and industrial automation.
1) Amplitude: the strength of the reflected light signal received from the scene.
2) Depth Map: an image in which each pixel represents the distance from the camera to the corresponding point in the scene.
3) Point Cloud: a set of 3D points that together represent the surface of an object or a scene (see the conversion sketch below).
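As a brief illustration of how a depth map relates to a point cloud, here is a minimal sketch using a generic pinhole camera model. The intrinsic parameters (fx, fy, cx, cy) are illustrative placeholders, not Cube Eye calibration values.

```python
# Minimal sketch: converting a depth map to a point cloud with a pinhole
# camera model. Intrinsics here are placeholders, not calibration data.
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array of distances along the optical axis, in meters."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx            # back-project pixel columns to X
    y = (v - cy) * z / fy            # back-project pixel rows to Y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Example with a synthetic 480x640 (VGA) depth map:
depth = np.full((480, 640), 1.5)     # flat wall 1.5 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(cloud.shape)                   # (307200, 3)
```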
Types of 3D Depth Technology

Depth sensing means measuring the distance from a device to an object, or the distance between two objects. A 3D depth-sensing camera automatically detects nearby objects and measures the distance to them on the fly, which lets a device or equipment integrated with the camera move autonomously by making intelligent real-time decisions. Time-of-Flight (ToF) cameras produce a depth image in which each pixel encodes the distance to the corresponding point in the scene, so they can be used to estimate 3D structure directly.

- Strengths
Accurate Depth Sensing / Real-time Performance / Versatility / Robustness to Lighting Conditions / Compact and Low Power
- Weaknesses
Limited Range / Interference from Reflective Surfaces / Complexity of Data Processing

ToF vs. LiDAR: Technical Comparison of 3D Depth Cameras ∞

Principle of Operation: Pulsed ToF (Direct ToF) vs. AMCW (Amplitude Modulated Continuous Wave) Phase-Shift (Indirect ToF)

The pulsed method is straightforward. The light source illuminates for a brief period (Δt), and the reflected energy is sampled at every pixel, in parallel, using two out-of-phase windows, C1 and C2, each of the same width Δt. The electrical charges accumulated during these windows, Q1 and Q2, are measured, and their ratio encodes the round-trip delay.

In contrast, the AMCW method takes multiple samples per measurement, each phase-stepped by 90 degrees, for a total of four samples; from these, the phase angle between illumination and reflection is calculated. The procedure is:
- Emit a continuous wave of light, typically from a laser diode.
- Modulate the amplitude or frequency of the light wave.
- The light wave interacts with objects in the scene and is partially reflected back toward the camera.
- Measure the phase shift between the emitted and reflected light waves; the phase shift is proportional to the distance traveled by the light.
- Calculate the distance to the object from the measured phase shift.
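A minimal sketch of the two distance formulas implied above (illustrative, not the module's internal processing): the pulsed method recovers the round-trip delay from the charge ratio Q2/(Q1+Q2), while the AMCW method recovers the phase from four samples stepped by 90 degrees.

```python
# Minimal sketch of the two depth formulas described above. Q1..Q4 are
# accumulated window charges (arbitrary units); the values used below
# are illustrative, not readings from a real sensor.
import math

C = 299_792_458.0  # speed of light, m/s

def pulsed_depth(q1, q2, pulse_width_s):
    # Direct ToF: the share of charge falling in the second window C2
    # encodes the round-trip delay within the pulse width.
    return 0.5 * C * pulse_width_s * q2 / (q1 + q2)

def amcw_depth(q1, q2, q3, q4, f_mod_hz):
    # Indirect ToF: q1..q4 sampled at 0/90/180/270 degrees. The phase
    # shift is proportional to distance, with an unambiguous range of
    # c / (2 * f_mod).
    phase = math.atan2(q4 - q2, q1 - q3) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

print(pulsed_depth(0.5, 0.5, 50e-9))         # ~3.75 m
print(amcw_depth(0.5, 0.0, 0.5, 1.0, 20e6))  # ~1.87 m (phase = pi/2)
```

At a 20 MHz modulation frequency, the unambiguous range c / (2 f) works out to about 7.5 m; beyond that, the phase wraps around.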
iToF

With high accuracy, long-distance measurement capability, robust performance, high resolution, real-time operation, and versatility, AMCW ToF cameras are driving innovation across various industries, and as the technology advances they can be expected to play a crucial role in shaping the future of depth sensing and spatial imaging.

High Accuracy: AMCW ToF cameras offer high accuracy in depth sensing. By measuring the phase shift between emitted and reflected light waves, they provide precise distance measurements with minimal error, which suits applications requiring precise spatial information such as industrial automation, robotics, and 3D mapping.

Long-Distance Measurement: A notable feature of AMCW ToF cameras is their ability to measure distances over a wide range. From short distances to several meters away, they accurately capture depth information across different spatial scales, making them ideal for autonomous navigation, object detection, and surveillance.

Robust Performance: AMCW ToF cameras are designed to perform reliably in various environmental conditions. They operate effectively indoors and outdoors, including under challenging lighting or near reflective surfaces, ensuring consistent and accurate depth sensing even in complex, real-world scenarios.

Versatility: AMCW ToF cameras can be integrated into a wide range of devices and systems, with applications in automotive, healthcare, gaming, and consumer electronics. Whether enhancing human-computer interaction, enabling 3D sensing in smartphones, or improving the accuracy of medical imaging systems, they adapt to diverse use cases.

Challenges for iToF Technology

1) MPI (Multipath Interference) in Indirect Time of Flight (iToF)

- What is Multipath Interference?
An indirect Time of Flight (iToF) camera measures distance by emitting modulated light and calculating the phase difference between the emitted light and the reflected light received by the sensor. This phase difference corresponds to the travel time of the light, which is then converted into depth information. Multipath Interference (MPI) occurs when light from the source reaches a single pixel via multiple paths of different lengths before being detected. Because iToF assumes that all received light has traveled along a single path, the presence of additional paths distorts the measured phase, resulting in large depth errors or even completely incorrect distance readings (see the sketch after the list below).

- Common causes of MPI
* Inter-reflections: light bounces off intermediate surfaces (e.g., walls, ceilings, or nearby objects) before reaching the target, so indirect light mixes with the direct reflection. [Figure: reflection from the floor]
* Translucent objects: materials such as glass, plastic, or water let part of the light pass through while scattering or reflecting some of it back, creating additional delayed reflections.
* Diffuse inter-reflections: in areas such as corners, light may bounce multiple times between surfaces before returning, creating several overlapping paths.
* Lens flare or ghosting: internal reflections and scattering inside the camera lens, often triggered by bright or nearby objects, can introduce extra light paths to the sensor. [Figure: lens flare]
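To make the phase distortion concrete, here is a minimal sketch that models each return as a phasor (amplitude and phase); the sensor sees only their sum, so a secondary bounce pulls the measured phase away from the direct-path value. The amplitudes, distances, and 20 MHz modulation frequency are illustrative assumptions.

```python
# Minimal sketch of how multipath interference biases iToF depth.
import cmath, math

C = 299_792_458.0
F_MOD = 20e6                       # assumed 20 MHz modulation frequency

def phase_of(distance_m):
    return 4 * math.pi * F_MOD * distance_m / C

def depth_of(phase_rad):
    return C * phase_rad / (4 * math.pi * F_MOD)

direct = 1.0 * cmath.exp(1j * phase_of(2.0))   # direct return from 2.0 m
bounce = 0.3 * cmath.exp(1j * phase_of(3.5))   # floor bounce, 3.5 m path
measured = direct + bounce                      # superposition at one pixel

print(depth_of(cmath.phase(measured) % (2 * math.pi)))
# ~2.3 m instead of 2.0 m: the indirect path makes the pixel read too far.
```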
- How to Minimize Multipath Interference
MPI is scene-dependent and difficult to eliminate entirely, especially in complex environments. However, you can reduce its impact with the following strategies:

A. Optimize camera placement
* Install the camera away from walls, floors, and other large reflective surfaces.
* Avoid pointing it directly at highly reflective or translucent objects.

B. Control the measurement environment
* Remove or cover bright, shiny, or translucent surfaces in the scene.

C. Adjust camera settings
* Ensure the object of interest is positioned at an appropriate distance from the camera (not too close, and not at extreme ranges where the signal-to-noise ratio is low).
a) Reduce integration time: shorter integration times help prevent strong indirect reflections from saturating the sensor, especially in high-reflectivity environments.
b) Enable built-in filtering:
* Scattering filters: reduce the influence of light scattered by translucent objects.
* Depth error filters: identify and remove depth points with abnormal phase deviations likely caused by multipath effects.

For more details, refer to the following:
* Integration time setting: 9.1 Manual Light Intensity Control ∞
* Camera settings: Properties ∞
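As a rough illustration of what a depth error filter does (a simplified stand-in, not Cube Eye's built-in implementation), the following flags pixels whose depth deviates sharply from their local neighborhood, a typical signature of multipath-corrupted or flying pixels:

```python
# Simplified stand-in for a depth error filter: invalidate pixels that
# deviate strongly from the local median depth. The 5 cm threshold is
# an illustrative choice, not a Cube Eye default.
import numpy as np
from scipy.ndimage import median_filter

def depth_error_filter(depth, max_dev_m=0.05):
    """Set pixels deviating more than max_dev_m from the local
    3x3 median depth to NaN."""
    local_median = median_filter(depth, size=3)
    out = depth.astype(float)
    out[np.abs(depth - local_median) > max_dev_m] = np.nan
    return out
```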
2) MCI (Multi-Camera Interference)
Multi-camera interference occurs when two or more ToF cameras operate in the same environment at the same modulation frequency. Light transmitted from one camera is received by another, resulting in depth errors and degraded ToF performance.
* Preventing Multi-Camera Interference (MCI): MCS Hub ∞

Major Benefits
- Supports VGA resolution.
- Fast 3D capture at up to 30 fps.
- Improved depth calculation.
- High image quality.
- Small form factor, low cost.
- Built-in self-designed companion chip.

1) Fast 3D Capturing – Up to 30 fps
Unlike other types of depth cameras, ToF cameras do not require complex depth calculations, significantly reducing the processing load on the host system. This allows high-speed data processing and enables Cube Eye to capture 3D images at up to 30 frames per second, making it suitable for moving objects and real-time applications.

2) High Image Quality
Cube Eye supports true VGA output, delivering high image fidelity and reliable depth data. While many depth cameras tend to blur object edges, Cube Eye employs an integrated image post-processing algorithm to extract sharp, clean object boundaries. Measurement accuracy is also outstanding, with depth error typically less than 1% of the measured distance.

3) Built-in Self-Designed Companion Chip
Cube Eye features a custom-designed companion chip that enables efficient thermal management and a compact form factor without compromising performance. The chip is optimized for depth processing, contributing to cost efficiency and flexible system integration.

Components of a ToF Camera

- Optical lens
Gathers the reflected light and focuses it onto the image sensor. Unlike an ordinary optical lens, it requires a band-pass filter so that only light with the same wavelength as the illumination source can enter.

- Image sensor
The image sensor is the core of a ToF camera. Its structure is similar to that of an ordinary image sensor but more complex: it contains two or more shutters to sample the reflected light at different times. For this reason, the pixel size of a ToF chip is much larger than that of ordinary image sensors, generally about 100 µm.

- Control unit
The light pulse sequence triggered by the camera's electronic control unit is precisely synchronized with the opening and closing of the chip's electronic shutters. The control unit reads and converts the sensor charges and directs them to the analysis unit and the data interface.

- Calculation unit
The calculation unit records an accurate depth map, usually a grayscale image in which each value represents the distance between the light-reflecting surface and the camera. For better results, data calibration is usually performed.
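A minimal sketch of such a grayscale rendering; the 0.2 to 7.5 m working range is an illustrative assumption, not a Cube Eye specification.

```python
# Minimal sketch: rendering a metric depth map as the 8-bit grayscale
# image described above.
import numpy as np

def depth_to_gray(depth, d_min=0.2, d_max=7.5):
    """Map depth in meters to 0..255; nearer surfaces appear darker."""
    clipped = np.clip(depth, d_min, d_max)
    scaled = (clipped - d_min) / (d_max - d_min)
    return (scaled * 255).astype(np.uint8)
```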
