Sunday, September 24, 2017

FLIR Boson Teardown

SystemPlus publishes a reverse engineering report on the FLIR Boson low-cost LWIR camera module:

"The FLIR Boson camera core occupies only 4.9cm3 without its lens, including a 320×256 pixel microbolometer and an advanced processor. The system is made very compact and easy for integrators to handle. It includes a new chalcogenide glass for the lens and a powerful Vision Processing Unit for the first time.

The thermal camera uses 12µm pixels based on a vanadium oxide technology microbolometer, the ISC1406L, which features a 320×256 resolution and wafer-level packaging (WLP) to achieve a very compact design. The die is half the size of the one in the oldest ISC0901 model, but gives the same definition.
"

Saturday, September 23, 2017

Fly Vision vs Human Vision

BBC publishes an article "Why is it so hard to swat a fly?" comparing human vision with fly vision:

"...have a look at a clock with a ticking hand. As a human, you see the clock ticking at a particular speed. But for a turtle it would appear to be ticking at twice that speed. For most fly species, each tick would drag by about four times more slowly. In effect, the speed of time differs depending on your species.
This happens because animals see the world around them like a continuous video. But in reality, they piece together images sent from the eyes to the brain in distinct flashes a set number of times per second. Humans average 60 flashes per second, turtles 15, and flies 250.
"

Intel Project Alloy Cancelled

SlashGear: Intel's work on the Project Alloy "Merged Reality" headset featuring a RealSense 3D camera has been stopped. In a statement to RoadToVR, Intel says:

"Intel has made the decision to wind down its Project Alloy reference design, however we will continue to invest in the development of technologies to power next-generation AR/VR experiences. This includes: Movidius for visual processing, Intel RealSense depth sensing and six degrees of freedom (6DoF) solutions, and other enabling technologies..."

TechInsights Unveils iPhone 8 Plus Camera Surprises

TechInsights was quick to unveil a few findings from its iPhone 8 Plus reverse engineering:

The dual rear camera uses a 1.22um pixel in the 12MP wide-angle sensor and a 1.0um pixel in the 12MP tele sensor. The 7MP front camera has a 1.0um pixel.
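For context, the quoted pixel pitch and resolution give an estimate of the active imaging area, which can be compared with the die sizes TechInsights reports below. This is illustrative arithmetic only; the exact 4:3 pixel grid (4032 x 3024) is my assumption, not a TechInsights figure:

# Rough active-area estimate for the wide-angle sensor from the quoted pitch.
pixels_h, pixels_v = 4032, 3024   # assumed 4:3 layout for a 12MP sensor
pitch_um = 1.22

width_mm = pixels_h * pitch_um / 1000.0    # ~4.92 mm
height_mm = pixels_v * pitch_um / 1000.0   # ~3.69 mm
area_mm2 = width_mm * height_mm            # ~18.2 mm^2
print(f"active array: {width_mm:.2f} x {height_mm:.2f} mm = {area_mm2:.1f} mm^2")
# The reported 6.29 x 5.21 mm (32.8 mm^2) die then leaves roughly 14-15 mm^2
# for pads, peripheral circuits and the stacked-chip interconnect.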

"The dual camera module size is 21.0 mm x 10.6 mm x 6.3 mm thick. Based on our initial X-rays it appears the wide-angle camera uses optical image stabilization (OIS), while the telephoto camera does not (the same configuration as iPhone 7 Plus).

The wide-angle Sony CIS has a die size of 6.29 mm x 5.21 mm (32.8 mm2). This compares to a 32.3 mm2 die size for iPhone 7’s wide-angle CIS.

We do note a new Phase Pixel pattern, but the big news is the absence of surface artifacts corresponding to the through silicon via (TSV) arrays we’ve seen for a few years. A superficial review of the die photo would suggest it’s a regular back-illuminated (BSI) chip. However, we’ve confirmed it’s a stacked (Exmor RS) chip which means hybrid bonding is in use for the first time in an Apple camera!
"

Friday, September 22, 2017

Cameras with Black Silicon Sensors Reach the Market

It came to my attention that a number of Japanese camera companies have started selling cameras with SiOnyx Black Silicon sensors. One of these companies is Bitran, with the CS-64NIR cooled camera based on the XQE-0920 sensor. The company publishes a presentation with application examples for the new camera, which is said to be sensitive up to 1200nm.


Another company is ACH2 Technologies, selling the ACH100-NIR camera, which it says is sensitive up to 1400nm:


Yet another company is Artray, with two cameras: the 1.3MP ARTCAM-130XQE-WOM and the 0.92MP ARTCAM-092XQE-WOM.



It's very nice to see a new, radically different technology finally reaching the market.
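What makes the quoted 1200-1400nm sensitivity noteworthy is that ordinary crystalline silicon stops absorbing near its bandgap cutoff. A quick back-of-the-envelope check of that cutoff wavelength:

# Bandgap cutoff wavelength of silicon: lambda_c = h*c / Eg.
HC_EV_NM = 1239.84   # h*c in eV*nm
eg_si_ev = 1.12      # silicon bandgap at room temperature, eV

cutoff_nm = HC_EV_NM / eg_si_ev   # ~1107 nm
print(f"Si absorption cutoff: ~{cutoff_nm:.0f} nm")
# Sensitivity at 1200-1400nm is therefore beyond the normal single-photon
# absorption edge of silicon, which is what makes the black silicon claim stand out.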

Tractica Forecasts Rise of Enterprise AR

Tractica posts an article, "Augmented Reality: The Rise of Enterprise Use Cases," on its website. A few interesting statements:

"Smart glasses that replace or complement the desktop likely face a long-haul journey. There are a number of technical issues to overcome, including FOV, weight, ergonomics and comfort, and extended AR use.

The momentum for smart AR glasses has shifted toward mixed reality (MR) headsets, which offer a much more compelling user experience, using 3D depth sensing and positional tracking to immerse the user into a holographic world. Microsoft HoloLens is the first truly capable MR headset and is seeing rapid momentum in terms of trials and pilots. There are still questions about whether or not Microsoft has oversold the capabilities of the device, and if the enterprise market can scale to make it a commercially viable product. Tractica expects that Microsoft is likely to be committed to the enterprise market at least through the end of 2018 before it readies the HoloLens for consumer launch. If the pilots do not convert into meaningful volumes, Microsoft could find itself in an awkward place like Google did with Glass, eventually pulling the plug.

Tractica estimates that the monthly active users (MAUs) for smartphone/tablet enterprise AR will be 49 million by the end of 2022. In contrast, the installed base of enterprise smart glasses users at the end of 2022 will be approximately 19 to 21 million.
"

Thursday, September 21, 2017

Espros Presents its Pulsed ToF Solution

Espros CEO Beat De Coi presents the first results of his company's pulsed ToF (pToF) chip at AutoSens 2017 in Brussels, Belgium. Performance of the sensors includes a QE of 70% at 905nm, a sensitivity trigger level as low as 20 e- for object detection, 250MHz CCD sampling, and interpolation algorithms to reach centimeter accuracy. The sensors are said to operate in full sunlight without disturbance and to function under all weather conditions. The presentation is available for download at the Espros site.
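A rough feel for why interpolation on top of the 250MHz sampling is needed: one sample period corresponds to tens of centimeters of range, so centimeter accuracy requires sub-sample timing. Illustrative arithmetic, not taken from the Espros presentation:

# Range quantization of a pulsed ToF receiver sampled at 250 MHz.
C = 299_792_458.0   # speed of light, m/s
f_sample_hz = 250e6

t_sample_s = 1.0 / f_sample_hz               # 4 ns per sample
range_per_sample_m = C * t_sample_s / 2.0    # round trip -> ~0.60 m of range
print(f"one sample bin = {range_per_sample_m:.2f} m of range")
# Reaching ~1 cm accuracy therefore needs the sampled pulse shape to be
# interpolated by roughly a factor of 60, which is the role of the quoted
# interpolation algorithms.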

Beat De Coi says: «This new generation of pulsed time-of-flight sensors will show a performance that will boost autonomous driving efforts. I have been working on time-of-flight technology for 30 years and I am extremely proud that we reached this level with conventional silicon.»

A few slides from the presentation explain the operation of the new Espros chip:

Auger Excitation Shows APD-like Gains

A group of UCSD researchers publishes an open-access Applied Physics Letters paper "An amorphous silicon photodiode with 2 THz gain‐bandwidth product based on cycling excitation process" by Lujiang Yan, Yugang Yu, Alex Ce Zhang, David Hall, Iftikhar Ahmad Niaz, Mohammad Abu Raihan Miah, Yu-Hsin Liu, and Yu-Hwa Lo. The paper proposes an APD-magnitude gain mechanism by means of a 30nm-thick amorphous Si film deposited on top of the bulk silicon:


"APDs have relatively high excess noise, a limited gain-bandwidth product, and high operation voltage, presenting a need for alternative signal amplification mechanisms of superior properties. As an amplification mechanism, the cycling excitation process (CEP) was recently reported in a silicon p-n junction with subtle control and balance of the impurity levels and profiles. Realizing that CEP effect depends on Auger excitation involving localized states, we made the counter intuitive hypothesis that disordered materials, such as amorphous silicon, with their abundant localized states, can produce strong CEP effects with high gain and speed at low noise, despite their extremely low mobility and large number of defects. Here, we demonstrate an amorphous silicon low noise photodiode with gain-bandwidth product of over 2 THz, based on a very simple structure."

Wednesday, September 20, 2017

Yole on iPhone X 3D Innovations

Yole Developpement publishes its analysis of the iPhone X 3D camera design and its implications, "Apple iPhone X: unlocking the next decade with a revolution":


"The infrared camera, proximity ToF detector and flood illuminator seem to be treated as a single block unit. This is supplied by STMicroelectronics, along with Himax for the illuminator subsystem, and Philips Photonics and Finisar for the infrared-light vertical-cavity surface-emitting laser (VCSEL). Then, on the right hand of the speaker, the regular front-facing camera is probably supplied by Cowell, and the sensor chip by Sony. On the far right, the “dot pattern projector” is from ams subsidiary Heptagon... It combines a VCSEL, probably from Lumentum or Princeton Optronics, a wafer level lens and a diffractive optical element (DOE) able to project 30,000 dots of infrared light.

The next step forward should be full ToF array cameras. According to the roadmap Yole has published this should happen before 2020.
"

Luminar on Automotive LiDAR Progress

OSA publishes a digest of Luminar CTO Jason Eichenholz's talk at the 2017 Frontiers in Optics meeting. A few quotes:

"Surprisingly, however, despite this safety imperative, Eichenholz pointed out that the lidar system used (for example) in Uber’s 2017 self-driving demo has essentially the same technical specifications as the system of the winning vehicle in DARPA’s 2007 autonomous-vehicle grand challenge. “In ten years,” he said, “you have not seen a dramatic improvement in lidar systems to enable fully autonomous driving. There’s been so much progress in computation, so much in machine vision … and yet the technology for the main set of eyes for these cars hasn’t evolved.”

On the requirements side, the array of demands is sobering. They include, of course, a bevy of specific requirements: a 200-m range, to give the vehicle passenger a minimum of seven seconds of reaction time in case of an emergency; laser eye safety; the ability to capture millions of points per second and maintain a 10-fps frame rate; and the ability to handle fog and other unclear conditions.

But Eichenholz also stressed that an autonomous vehicle on the road operates in a “target-rich” environment, with hundreds of other autonomous vehicles shooting out their own laser signals. That environment, he said, creates huge challenges of background noise and interference. And he noted some of the same issues with supply chain, cost control, and zero error tolerance.

Eichenholz outlined some of the approaches and technical steps that Luminar has adopted in its path to meet those many requirements in autonomous-vehicle lidar. One step, he said, was the choice of a 1550-nm, InGaAs laser, which allows both eye safety and a good photon budget. Another was the use of an InGaAs linear avalanche photodiode detector rather than single-photon counting, and scanning the laser signal for field coverage rather than using a detector array. The latter two decisions, he said, substantially reduce problems of background noise and interference. “This is a huge part of our architecture.”
"
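The quoted 200-m range and seven-second reaction time imply a particular closing speed, which is easy to check. The speed below is my back-calculation from those two figures, not a number from the talk:

# Closing speed implied by 200 m of range and ~7 s of reaction time.
range_m = 200.0
reaction_s = 7.0

speed_ms = range_m / reaction_s   # ~28.6 m/s
speed_kmh = speed_ms * 3.6        # ~103 km/h, roughly highway speed
print(f"implied speed: {speed_ms:.1f} m/s (~{speed_kmh:.0f} km/h)")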


Wired UK publishes a video interview with Luminar CEO Austin Russell: