Wednesday, October 18, 2017

5 Things to Learn from AutoSens 2017

EMVA publishes "AutoSens Show Report: 5 Things We Learned This Year" by Marco Jacobs, VP of Marketing, Videantis. The five important things are:
  1. The devil is in the detail
    Sort of obvious. See some examples in the article.
  2. No one sensor to rule them all
    Different image sensors and LiDARs, each optimized for a different sub-task
  3. No bold predictions
    That is, nobody knows when autonomous driving will arrive on the market
  4. Besides the drive itself, what will an autonomous car really be like?
  5. Deep learning a must-have tool for everyone
    Sort of a common statement, although the approaches vary. Some put the intelligence into the sensors; others keep the sensors dumb and concentrate the processing in a central unit.

DENSO and Fotonation Collaborate

BusinessWire: DENSO and Xperi's FotoNation start joint technology development of cabin sensing based on image recognition. DENSO expects to significantly improve the performance of its Driver Status Monitor, an active safety product used in trucks since 2014. These improvements will also be used in next-generation passenger vehicles, including a system to help drivers return to driving mode in Level 3 autonomous driving.

Using FotoNation’s facial image recognition and neural network technologies, detection accuracy will be increased remarkably by detecting many more features, instead of using the conventional detection method based on the relative positions of the eyes, nose, mouth, and other facial regions. Moreover, DENSO will develop new functions, such as detecting the driver’s gaze direction and facial expressions more accurately, to understand the driver's state of mind and help create more comfortable vehicles.
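FotoNation's stack is proprietary, but the dense-landmark idea is easy to sketch with the open-source dlib library. A minimal illustration (the model file and input image below are hypothetical placeholders, and this is not DENSO's or FotoNation's code):

# Illustration of dense facial landmarks vs. a few eye/nose/mouth fiducials.
import dlib

detector = dlib.get_frontal_face_detector()
# Standard dlib 68-point model, downloaded separately (hypothetical path).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("driver.jpg")  # hypothetical in-cabin frame
for face in detector(img):
    shape = predictor(img, face)
    # 68 points covering jawline, brows, eyes, nose, and mouth -- many more
    # features than the handful of relative positions mentioned above.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(len(landmarks), "landmarks detected")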

“Understanding the status of the driver and engaging them at the right time is an important component for enabling the future of autonomous driving,” said Yukihiro Kato, senior executive director, Information & Safety Systems Business Group of DENSO. “I believe this collaboration with Xperi will help accelerate our innovative ADAS product development by bringing together the unique expertise of both our companies.”

“We are excited to partner with DENSO to innovate in such a dynamic field,” said Jon Kirchner, CEO of Xperi Corporation. “This partnership will play a significant role in paving the way to the ultimate goal of safer roadways through use of our imaging and facial analytics technologies and DENSO’s vast experience in the space.”

Tuesday, October 17, 2017

AutoSens 2017 Awards

The AutoSens conference, held on Sept. 20-21 in Brussels, Belgium, publishes its awards. Some of the image sensor relevant ones:

Most Engaging Content
  • First place: Vladimir Koifman, Image Sensors World (yes, this is me!)
  • Highly commended: Junko Yoshida, EE Times

Hardware Innovation
  • First place: Renesas
  • Highly commended: STMicroelectronics

Most Exciting Start-Up
  • Winner: Algolux
  • Highly commended: Innoviz Technologies

LG, Rockchip and CEVA Partner on 3D Imaging

PRNewswire: CEVA partners with LG to deliver a high-performance, low-cost smart 3D camera for consumer electronics and robotic applications.

The 3D camera module incorporates a Rockchip RK1608 coprocessor with multiple CEVA-XM4 imaging and vision DSPs to perform biometric face authentication, 3D reconstruction, gesture/posture tracking, obstacle detection, AR and VR.

"There is a clear demand for cost-efficient 3D camera sensor modules to enable an enriched user experience for smartphones, AR and VR devices and to provide a robust localization and mapping (SLAM) solution for robots and autonomous cars," said Shin Yun-sup, principal engineer at LG Electronics. "Through our collaboration with CEVA, we are addressing this demand with a fully-featured compact 3D module, offering exceptional performance thanks to our in-house algorithms and the CEVA-XM4 imaging and vision DSP."

Monday, October 16, 2017

Ambarella Loses Key Customers

The Motley Fool publishes an analysis of Ambarella's performance over the last year. The company lost some of its key customers, GoPro, Hikvision, and DJI, while the new Google Clips camera opted for a non-Ambarella processor as well:

"Faced with shrinking margins, GoPro needed to buy cheaper chipsets to cut costs. It also wanted a custom design which wasn't readily available to competitors like Ambarella's SoCs. That's why it completely cut Ambarella out of the loop and hired Japanese chipmaker Socionext to create a custom GP1 SoC for its new Hero 6 cameras.

DJI also recently revealed that its portable Spark drone didn't use an Ambarella chipset. Instead, the drone uses the Myriad 2 VPU (visual processing unit) from Intel's Movidius. DJI previously used the Myriad 2 alongside an Ambarella chipset in its flagship Phantom 4, but the Spark uses the Myriad 2 for both computer vision and image processing tasks.

Google also installed the Myriad 2 in its Clips camera, which automatically takes burst shots by learning and recognizing the faces in a user's life.

Ambarella needs the CV1 to catch up to the Myriad 2, but that could be tough with the Myriad's first-mover's advantage and Intel's superior scale.

To top it all off, Chinese chipmakers are putting pressure on Ambarella's security camera business in China.
"

Pikselim Demos Low-Light Driver Vision Enhancement

Pikselim publishes a night-time Driver Vision Enhancement (DVE) video captured with its low-light CMOS sensor behind the windshield of a vehicle with the headlights off (the sensor operates in 640x512 format at 15 fps in global shutter mode, with f/0.95 optics and off-chip de-noising):

Sunday, October 15, 2017

Yole on Automotive LiDAR Market

Yole Developpement publishes its AutoSens Brussels 2017 presentation "Application, market & technology status of the automotive LIDAR." A few slides from the presentation:

Sony Announces Three New Sensors

Sony added three new sensors to its product flyer table: the 8.3MP IMX334LQR, based on a 2um pixel, and the 2.9MP IMX429LLJ and 2MP IMX430LLJ, based on a 4.5um global shutter pixel. The new sensors are said to have high sensitivity and are aimed at security and surveillance applications.

Yole Image Sensors M&A Review

IMVE publishes the article "Keeping Up With Consolidation" by Pierre Cambou, Yole Developpement image sensor analyst. There is a nice chart showing the large historical mergers and acquisitions:


"For the source of future M&A, one should rather look toward the decent number of machine vision sensor technology start-ups, companies like Softkinetic, which was purchased by Sony in 2015, and Mesa, which was acquired by Ams, in 2014. There are a certain number of interesting start-ups right now, such as PMD, Chronocam, Fastree3D, SensL, Sionyx, and Invisage. Beyond the start-ups, and from a global perspective, there is little room for a greater number of deals at sensor level, because almost all players have recently been subject to M&A."

Saturday, October 14, 2017

Waymo Self-Driving Car Relies on 5 LiDARs and 1 Surround-View Camera

Alphabet's Waymo publishes a Safety Report with some details on its self-driving car sensors - 5 LiDARs and one 360-deg color camera:

LiDAR (Laser) System
LiDAR (Light Detection and Ranging) works day and night by beaming out millions of laser pulses per second—in 360 degrees—and measuring how long it takes to reflect off a surface and return to the vehicle. Waymo’s system includes three types of LiDAR developed in-house: a short-range LiDAR that gives our vehicle an uninterrupted view directly around it, a high-resolution mid-range LiDAR, and a powerful new generation long-range LiDAR that can see almost three football fields away.
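As a quick sanity check of the "three football fields" figure, the pulsed time-of-flight math is simple (my own back-of-the-envelope sketch, not Waymo's numbers):

# Pulsed time-of-flight: distance = c * round_trip_time / 2
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    return C * round_trip_s / 2.0

# "Almost three football fields" is roughly 300 m; the echo of such a
# pulse returns after about 2 microseconds:
print(tof_range_m(2.0e-6))  # -> ~300 m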

Vision (Camera) System

Our vision system includes cameras designed to see the world in context, as a human would, but with a simultaneous 360-degree field of view, rather than the 120-degree view of human drivers. Because our high-resolution vision system detects color, it can help our system spot traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles. Waymo’s vision system is comprised of several sets of high-resolution cameras, designed to work well at long range, in daylight and low-light conditions.


Half a year ago, Bloomberg published an animated GIF showing the cleaning of Waymo's 360-deg camera:

Chronocam Partners with Huawei

French sites L'Usine Nouvelle, InfoDSI, and Chine report that Chronocam partners with Huawei. Huawei is said to cooperate with Chronocam on face recognition technology for its smartphones, similar to Face ID in the iPhone X.

Friday, October 13, 2017

Hynix Proposes TrenchFET TG

SK Hynix patent application US20170287959 "Image Sensor" by Pyong-su Kwag, Yun-hui Yang, and Young-jun Kwon leverages the company's DRAM trench technology:

Omron Improves Its Driver Monitoring System

OMRON's driver monitoring system uses three barometers to judge whether the driver is capable of focusing on driving responsibilities: (1) whether the driver is observing the vehicle's operation (Eyes ON/OFF); (2) how quickly the driver will be able to resume driving (Readiness High/Mid/Low); and (3) whether the driver is behind the wheel (Seating ON/OFF). Additionally, the company's facial image sensing technology, OKAO Vision, now makes it possible to sense the state of the driver even when wearing a mask or sunglasses - something that had previously not been possible.

Magic Leap Seeks $1b Funding on $6b Valuation

Reuters reports that AR glasses startup Magic Leap filed with the SEC that it is seeking to raise $1b at a $6b valuation. The filing does not indicate the amount that Magic Leap has so far secured from investors; it may end up raising less than $1b.

Thursday, October 12, 2017

Compressed Sensing Said to Save Image Sensor Power

Pravir Singh Gupta and Gwan Seong Choi from Texas A&M University publish an open-access paper "Image Acquisition System Using On Sensor Compressed Sampling Technique." They say that "Compressed Sensing has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23%-65%."

The proposed sensor architecture implementing this claim is given below:


"Now we demonstrate the reconstruction results of our proposed novel system flow. We use both binary and non-binary block diagonal matrix to compressively sample the image. The binary block diagonal(ΦB) and non-binary block diagonal(ΦNB) sampling matrix are mentioned below."

EI 2018, "Image Sensors and Imaging Systems" Preliminary Program

Electronic Imaging 2018, "Image Sensors and Imaging Systems" Symposium is about to publish its preliminary program. I was given an early preview:

There will be five invited keynotes:
  • "Dark Current Limiting Mechanisms in CMOS Image Sensors"
    Dan McGrath, BAE Systems (California)
  • "Security imaging in an unsecure world"
    Anders Johannesson, Axis Communications AB (Sweden)
  • "Quantum Efficiency and Color"
    Jörg Kunze, Basler AG (Germany)
  • "Sub-Electron Low Noise CMOS image sensors"
    Angel Rodriguez-Vazquez, University of Seville (Spain)
  • "Advances in automotive image sensors"
    Boyd Fowler, OmniVision Technologies (California)
The regular papers are grouped into several sessions with the following themes (the exact names are still under discussion):
  • QE curves, color and spectral imaging
  • Depth sensing
  • High speed and ultra high speed imaging
  • Noise, performance and characterization
  • Technology and design for high performance image sensors
  • Image sensors and technologies for automotive and autonomous vehicles
  • Applications
  • Interactive posters
The program spans two days within the five-day Electronic Imaging symposium, which is held at the same time as Photonics West and the week after the P2020 meeting.

Intel Unveils D400 RealSense Camera Family

Intel publishes an official page for the D400 camera family, currently consisting of the D415 and D435 active stereo cameras. Reportedly, the earlier RealSense cameras SR300, R200, and F200 are being discontinued, while the D400 series will be expanded to include passive and active stereo models:
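For developers, here is a minimal depth-readout sketch using Intel's open-source librealsense SDK (pyrealsense2), which supports the D400 series; the stream settings are illustrative and a connected camera is required:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in meters at the center pixel of the depth map.
    print(depth.get_distance(320, 240))
finally:
    pipeline.stop()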

Wednesday, October 11, 2017

Velodyne More Than Quadruples LiDAR Manufacturing

BusinessWire: Velodyne has more than quadrupled production for its LiDAR sensors to meet strong global demand. As a result, Velodyne LiDAR’s sensors are immediately available via distribution partners in Europe, Asia Pacific, and North America, with industry standard lead-times for direct contracts.

To support that expansion, Velodyne has doubled the number of its full-time employees over the past six months. These employees operate across three facilities in California, including the company’s new Megafactory in San Jose, its long-standing manufacturing facility in Morgan Hill, and the Velodyne Labs research center in Alameda.

“Velodyne leads the market in real-time 3D LiDAR systems for fully autonomous vehicles,” said David Hall, Velodyne LiDAR Founder and CEO. “With the tremendous surge in autonomous vehicle orders and new installations across the last 12 months, we scaled capacity to meet this demand, including a significant increase in production from our 200,000 square-foot Megafactory.”

Velodyne Megafactory in San Jose, CA

Looking at the GM autonomous driving fleet, one can understand why Velodyne needs so much production capacity:

Samsung Announces 0.9um Pixel Sensor for Smartphones, More

BusinessWire: Samsung introduces two new ISOCELL sensors: the 1.28μm 12MP Fast 2L9 with Dual Pixel technology, and the ultra-small 0.9μm 24MP Slim 2X7 with Tetracell technology.

The Fast 2L9 features a pixel size reduced from the previous Dual Pixel sensor’s 1.4μm to 1.28μm.

At 0.9μm, the Slim 2X7 is said to be the first sensor in the industry with pixel size below 1.0μm. The pixel uses improved ISOCELL technology with deeper DTI that reduces color crosstalk and expands the full-well capacity to hold more light information. In addition, the small 0.9μm pixel size enables a 24Mp image sensor to be fitted in a thinner camera module.

The Slim 2X7 also features Tetracell technology. Tetracell improves performance in low-light situations by merging four neighboring pixels to work as one, increasing light sensitivity. In bright environments, Tetracell uses a re-mosaic algorithm to produce full-resolution images. This enables consumers to use the same front camera to take photos in various lighting conditions.
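The low-light merging step is essentially 2x2 binning of same-color pixels. A toy numpy sketch of the idea (Samsung's on-chip implementation and re-mosaic algorithm are proprietary; this only shows the principle):

import numpy as np

def bin_2x2(raw):
    # Average each 2x2 same-color group: 24Mp in, 6Mp out, with roughly
    # 4x the collected signal per output pixel.
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.random.poisson(lam=4.0, size=(16, 16)).astype(float)  # dim scene
binned = bin_2x2(raw)
print(raw.shape, "->", binned.shape)  # (16, 16) -> (8, 8)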

“Samsung ISOCELL Fast 2L9 and ISOCELL Slim 2X7 are new image sensors that fully utilize Samsung’s advanced pixel technology, and are highly versatile as they can be placed in both front and rear of a smartphone,” said Ben K. Hur, VP of System LSI Marketing at Samsung.

In earlier news, Samsung's Tetracell technology received the Korea Multimedia Technology Award:

ON Semi Announces Two 1MP Sensors

BusinessWire: ON Semi announces the 3um pixel-based AS0140 and AS0142, 1/4-inch 1MP sensors with integrated ISP for automotive applications. The new sensors support 45 fps at full resolution or 60 fps at 720p. Key features include distortion correction, multi-color overlays, and both analog (NTSC) and digital (Ethernet) interfaces. Both SoC devices enhance image quality with adaptive local tone mapping (ALTM) to eliminate artifacts that impinge on the acquisition process, while achieving a DR of 93 dB.
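As a quick sanity check on the 93 dB figure (my own arithmetic with illustrative full-well and noise values, not ON Semi's specs):

import math

def dr_db(max_signal_e, noise_floor_e):
    # Image sensor dynamic range in dB = 20*log10(max signal / noise floor).
    return 20.0 * math.log10(max_signal_e / noise_floor_e)

# 93 dB corresponds to a ~45,000:1 intra-scene contrast ratio:
print(dr_db(45_000, 1.0))  # -> ~93 dB
print(10 ** (93 / 20))     # -> ~44,700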

Both new devices are said to have class-leading power efficiency; when running at 30 fps in HDR mode, they consume just 530 mW. Operating temperature range is -40°C to +105°C. Engineering samples are available now. The AS0140 will be in production in 4Q17, and AS0142 will be in production in 1Q18.

AS0140 ISP pipeline

Tuesday, October 10, 2017

Image Fusion in Dual Cameras

Corephotonics publishes a presentation on image fusion in dual cameras:

Eldim Supplies iPhone X Face ID Components

VentureBeat reports that Apple CEO Tim Cook visited the French optical component maker Eldim. A local reporter said the two companies had been working together for almost a decade, mostly in an R&D capacity. Only with the release of the iPhone X is the facial recognition system being baked into a product.

Eldim CEO Thierry Leroux told reporters that working with Apple was “an incredible adventure,” but added that there have also been huge technical challenges over the years. “For us, it was a little like sending someone to the moon,” Leroux told reporters. Cook responded, “It’s great what you have done for us.”


Thanks to JB for the link!

GM Acquires Strobe

Reuters quotes GM autonomous driving head Kyle Vogt, CEO of Cruise, saying that the company has acquired Pasadena, CA-based coherent LiDAR startup Strobe:

"LIDARs (sensors that use laser light to measure the distance to objects) are currently the bottleneck.

...we’ve acquired Strobe, a company that has quietly been building the leading next-generation LIDAR sensors. Strobe’s new chip-scale LIDAR technology will significantly enhance the capabilities of our self-driving cars. But perhaps more importantly, by collapsing the entire sensor down to a single chip, we’ll reduce the cost of each LIDAR on our self-driving cars by 99%.

Strobe’s LIDAR sensors provide both accurate distance and velocity information, which can be checked against similar information from a RADAR sensor for redundancy.

Our new sensors are robust to interference from sunlight, even in extreme cases, which means they can continue to operate in situations where camera-based solutions fail. When the sun is low in the sky and reflects off wet pavement, camera systems (and humans) are almost completely blinded. And when a person in all black is walking on black pavement at night, even the human eye has trouble spotting them soon enough.

Our acquisition of Strobe is a significant step toward our mission to deploy self-driving cars at scale. The founders, Julie Schoenfeld and Dr. Lute Maleki, and their team bring decades of sensor development experience to Cruise. Strobe, Cruise, and GM engineers will work side by side along with our optics and fabrication experts at HRL (formerly Hughes Research Labs), the GM skunkworks-like division that invented the world’s first laser. Together we’ll significantly reduce the time needed to create a safer and more affordable form of transportation and deploy it at scale.
"

Strobe LiDAR early prototype

Update: from Strobe web site:


Update #2: Ars Technica publishes its analysis of Strobe's LiDAR technology based on the published papers. The Strobe LiDAR scans a chirped laser beam using an optical phased array in one dimension and an optical grating in the other. The detector is a coherent type that determines both the target's distance and velocity.
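The textbook FMCW relations behind this are sketched below with illustrative chirp parameters (generic coherent LiDAR math, not Strobe's actual design):

# Triangular-chirp FMCW: the up- and down-chirp beat frequencies together
# yield both range and radial velocity.
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # assumed telecom-band laser, m
BANDWIDTH = 1.0e9     # chirp bandwidth, Hz (illustrative)
T_RAMP = 10e-6        # chirp duration, s (illustrative)

def range_and_velocity(f_beat_up, f_beat_down):
    slope = BANDWIDTH / T_RAMP
    f_range = (f_beat_up + f_beat_down) / 2.0    # range-induced beat
    f_doppler = (f_beat_down - f_beat_up) / 2.0  # Doppler shift
    distance = C * f_range / (2.0 * slope)
    velocity = f_doppler * WAVELENGTH / 2.0      # positive = approaching
    return distance, velocity

print(range_and_velocity(6.6e6, 6.8e6))  # -> (~10 m, ~0.08 m/s)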

Monday, October 09, 2017

Omnivision Announces Nyxel NIR Sensing Technology

PRNewswire: OmniVision introduces Nyxel NIR technology that increases QE by up to 3x at 850nm and 5x at 940nm compared with the company's legacy NIR-capable sensors, while maintaining all other image-quality metrics. Nyxel technology is aimed at a wide variety of applications, including surveillance, machine vision, and automotive.

"Conventional approaches to NIR rely solely on thick silicon to improve NIR image-sensor sensitivity. However, this results in crosstalk and reduces the modulation transfer function (MTF). Attempts to overcome this by introducing deep trench isolation (DTI) often lead to defects that corrupt the dark area of the image," explained Lindsay Grant, VP of process engineering at OmniVision. "We have worked to overcome these challenges in an exclusive engagement with our foundry partner, leveraging technologies in its 300mm wafer fab. Initial results are very promising, and have generated a great deal of interest with our OEM customers."

OmniVision's approach to NIR imaging combines thick-silicon pixel architectures with careful management of wafer surface texture to improve QE, and extended DTI to help retain MTF without affecting the sensor's dark current.
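Why thicker silicon helps: NIR absorption follows the Beer-Lambert law, so the attainable QE scales roughly as 1 - exp(-d/L) with silicon thickness d and absorption length L. A rough sketch with textbook absorption lengths for silicon (not OmniVision's data):

import math

ABS_LENGTH_UM = {850: 18.0, 940: 55.0}  # approximate absorption length, um

def max_qe(wavelength_nm, thickness_um):
    return 1.0 - math.exp(-thickness_um / ABS_LENGTH_UM[wavelength_nm])

for d in (3.0, 6.0):  # typical vs. thickened photodiode depth (illustrative)
    print(d, round(max_qe(850, d), 2), round(max_qe(940, d), 2))
# Doubling the thickness roughly doubles the attainable NIR QE, which is
# why it must be paired with DTI to contain the resulting crosstalk.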


The company video demos Nyxel NIR advantages:

DiffuserCam 3D Imager

UC Berkeley researchers publish an open-access paper "DiffuserCam: Lensless Single-exposure 3D Imaging" by Nick Antipa, Grace Kuo, Reinhard Heckel, Ben Mildenhall, Emrah Bostan, Ren Ng, and Laura Waller (Ren Ng is the Lytro founder). From the abstract:

"We demonstrate a compact and easy-to-build computational camera for single-shot 3D imaging. Our lensless system consists solely of a diffuser placed in front of a standard image sensor. Every point within the volumetric field-of-view projects a unique pseudorandom pattern of caustics on the sensor. By using a physical approximation and simple calibration scheme, we solve the large-scale inverse problem in a computationally efficient way."

Sunday, October 08, 2017

Apple Face ID Ignites 3D Imaging Interest

MacRumors quotes KGI securities analyst Ming-Chi Kuo saying that Apple Face ID has "tilted interest in the mobile industry away from under-display fingerprint recognition, and instead towards camera-based 3D sensing technologies as the ideal user authentication solution.

While under-display optical fingerprint recognition is only a spec upgrade from capacitive solutions, 3D sensing embodies a revolutionary user experience and warrants a premium on gross margin.
"

Currently, the solutions for Android phone makers are said to come from Qualcomm-Himax, Orbbec, and Mantis Vision, with the Qualcomm-Himax one attracting the most attention.

Bloomberg reports a drop in sales forecasts for fingerprint device makers. Sweden's Fingerprint Cards AB warned that its Q3 revenue will be much lower than analysts' estimates. Synaptics' fingerprint business also showed "softness," executives said during the company's earnings call.

AI Co-Processors Coming to Most Flagship Smartphones in 2018

InstantFlashNews compiles a nice table showing that most application processor makers either already integrate AI co-processors on their chips or plan to in the near future. This will greatly increase the vision processing capabilities of flagship smartphones:


Update: WSJ runs an article on that too.

Saturday, October 07, 2017

Wire Bond Engineering

Omnivision patent application US20170280075 "Flare-reducing imaging system and associated image sensor" by Chao-hung Lin, Hong Jun Li, Ping-hsu Chen, and Denis Chu proposes careful bonding-wire angle engineering to avoid flare caused by light reflecting from the wires:

Sony Proposes Moisture-Collecting Holes Between Microlenses

Sony patent application US20170278889 "Solid-state imaging device, method of manufacturing the same, and electronic apparatus" by Takashi Nakashikiryo, Yoshiaki Kitano, Yuuji Nishimura, Kouichi Itabasi, Ryou Chiba, Yosuke Takita, Mitsuru Ishikawa, Toyomi Jinwaki, Yuichi Seki, Masaya Shimoji, Yoichi Ootsuka, and Takafumi Nishi says that sensors with an AR coating on top of the microlenses have a problem:

"In a case where an antireflection film is provided on the surfaces of the microlenses, however, if BSIs are left in a high-temperature, high-humidity condition for a long period of time, the moisture generated in part of the regions of the interfaces between the microlenses and the antireflection film might not permeate through the antireflection film but remain therein, resulting in generation of water droplets. In this case, the captured image is stained by the water droplets, and the quality of the image is degraded."

So, Sony proposes moisture-collecting holes in the corners between the microlenses: