A DVP-Based Bridge Architecture to Randomly Access Pixels of High-Speed Image Sensors
© T. H. Khan and K. A. Wahid. 2011
Received: 14 October 2010
Accepted: 17 January 2011
Published: 26 January 2011
A novel bridge is proposed to interface digital-video-port (DVP) compatible image sensors with popular microcontrollers. Most commercially available CMOS image sensors send image data at high speed and in a row-by-row fashion. Commercial microcontrollers, on the other hand, run at relatively low speed, and many embedded system applications need random access to pixel values. Moreover, commercial microcontrollers may not have sufficient internal memory to store a complete high-resolution image. The proposed bridge addresses these problems and provides an easy-to-use and compact way to interface image sensors with microcontrollers. The proposed design is verified in FPGA and later implemented using CMOS 0.18 µm Artisan library cells. The design costs 4,735 gates and 0.12 mm² of silicon area. The synthesis results show that the bridge can support a data rate of up to 254 megasamples/sec. Its applications include pattern recognition, robotic vision, tracking systems, and medical imaging.
In recent years, image sensors have increased in quality and capability while decreasing in price, making them desirable for inclusion in small electronic devices and systems. However, these image sensors are difficult to interface with most commercial microcontrollers (MCUs), because they produce data at a rate too high to be processed in real time, and no buffering is provided inside the image sensors. As a consequence, most high-speed image sensors are difficult to use in low-power, low-speed embedded systems. Most MCUs have limited internal memory and may not be able to store a complete frame unless external memory is provided. Moreover, these image sensors send image data in a row-by-row fashion; as a result, the data cannot be accessed randomly: the first row must be read before the second row to avoid data loss. Many image processing algorithms, such as transform coding using the Discrete Cosine Transform (DCT) and pattern recognition for robotic vision, need to access pixel values in random order. Besides, a high-speed clock must be provided to operate the image sensors properly.
It should be noted that the commercial high-speed image sensors may be interfaced with more advanced MCUs (such as AT91CAP7E, AT91SAM7S512 from Atmel ). However, these microcontrollers contain many additional features (such as six-layer advanced high-speed bus (AHB), peripheral DMA controller, USB 2.0 full-speed device, and configurable FPGA Interface) that may not be required for simple imaging applications. Besides, programming such microcontrollers and implementing the required protocols increase the design cycle time. The purpose of the proposed bridge hardware is to provide a compact, ready-made, and easy-to-use solution that enables interfacing of commercial general-purpose image sensors with simple microcontrollers that are low-cost and easy-to-program (such as 8051 [2, 3], AVR , and PIC ). Thus the bridge hardware helps to shorten the design/development cycle time and facilitates rapid system level prototyping.
Several VLSI designs of CMOS image sensors with random access are presented in [6–10]. In [11, 12], the authors present two different designs of a random access image sensor based on a data-address bus structure. The work in  presents a low-power full-custom CMOS digital pixel sensor array designed for a wireless endoscopy capsule . The proposed architecture reduces the on-chip memory requirement by sharing pixel-level memory in the sensor array with the digital image processor. A dental digital radiographic (DDR) system using a high-resolution charge-coupled device (CCD) imaging sensor was developed, and its performance for dental clinic imaging evaluated, in . The work in  presents a novel smart CMOS image sensor integrating a hot-pixel-correcting readout circuit to preserve the quality of the captured images for biomedical applications. In , an image sensor with an image compression feature using the DCT is presented. In , a CMOS image sensor is designed to perform the front-end image decomposition in a Prediction-SPIHT image compression scheme. In , an image sensor unit with a sensor to detect the direction of gravity and a built-in image rotation algorithm is presented; the system rotates the captured image in the direction of gravity for better viewing, which can be used in rescue robots. The paper in  discusses a range image sensor using a multispot laser projector for robotic applications. In , a pointing device using a motion detection algorithm and its system architecture are presented; the proposed motion detection pointing device uses only the binary images of a binary CMOS image sensor (BCIS). In , a smart image sensor for real-time, high-resolution three-dimensional (3D) measurement using sheet light projection is presented. A facial image recognition system based on 3D real-time facial imaging using a correlation image sensor is discussed in ; differential geometry theory is employed to find the key points of the face image.
A design of an image sensor focusing on image identification by adjusting the brightness is presented in ; it has GPRS connectivity and can be used in vehicle surveillance systems. In , a single-chip image sensor for mobile applications realized in a standard 0.35 µm CMOS technology is presented. In , a solution that reduces the computational complexity of image processing by performing some low-level computations on the sensor focal plane is presented. An autonomous image sensor for real-time target detection and tracking is presented in . In , the authors describe and analyse a novel CMOS pixel for high-speed, low-light imaging applications. An 8.3-M-pixel digital-output CMOS active pixel image sensor (APS) for ultra-definition TV (UDTV) applications is discussed in . In , a hardware accelerator for image reconstruction in digital holographic imaging is presented that focuses on maximizing computational efficiency and minimizing the memory transfer overhead to the external SDRAM.
Some commercial image sensors, such as the MT9V011 from Aptina  and the OVM7690 from OmniVision , support partial access to image segments, known as "windowing". By configuring the control registers, the top-left and bottom-right corners of the desired area can be specified; the image sensor then captures and sends an image of the specified rectangle. However, with this feature it is not possible to access (and capture) other segments of the same frame, which is required in several image coding applications such as transform coding. This approach has two further disadvantages: firstly, the internal control registers must be reconfigured every time an image capture request is sent, which is an extra overhead; secondly, because of the time taken for this reconfiguration, the sensor will capture a frame from a different time instant. Besides, "windowing" is limited to rectangles; the image data cannot be accessed in any other shape.
In summary, the works mentioned above discuss different designs of image sensors targeted at specific applications; however, they are not available for general-purpose use. In this paper, we present a novel concept: the design of a bridge architecture that connects commercial MCUs to any commercial DVP-based general-purpose image sensor. The bridge needs to be configured only once with a set of addresses (provided by the manufacturer, as found in the datasheet) in order to communicate with the image sensor, which makes the design universal and suitable for general-purpose use.
Considering the application types (i.e., robotics vision, imaging, video, etc.) and availability of commercial microcontrollers (MCUs), in this work, we have set the following design objectives to facilitate the interfacing of high-speed image sensors with low-performance MCU.
(i)The bridge hardware should operate at very high speed (over 200 MHz) so that the image pixels can be accessed in real time through high-speed image sensors. As a result, the MCUs (or image processor) using the bridge need not be high performance and high speed.
(ii)The bridge should contain sufficient memory space to store image frames of different resolutions, such as CIF, QVGA, VGA, full HD, and UHDV. Thus, an MCU with limited on-chip memory may be able to access image pixels from the buffer memory of the bridge at the desired speed. Moreover, because of the memory buffer, any image segments of the same frame can be accessed without having to reconfigure the image sensor, which is required in many video coding applications. An example of such application is the Discrete Cosine Transform-based image coding, where several blocks of image segments of the same frame are required.
(iii)The bridge should provide an efficient way to access the image pixels randomly. A more convenient way is to access the 2D pixel arrays using parallel interfacing with row and column positions. This will be a significant improvement over the designs with typical data-address structure [11, 12].
(iv)The bridge should be flexible in use. It should provide efficient and easy ways to access image pixels in virtually any shape, such as rectangles, circles, ovals, and individual points. This facilitates fully random access in any random shape.
(v)Commercial image sensors from different vendors have unique device parameters along with internal control registers for proper configuration (e.g., of frame size, colour, and sleep mode). The bridge should be able to communicate with most available image sensors. As a result, the design should be universal so that it can be configured at the beginning with the proper set of parameters for a particular image sensor.
(vi)Commercial image sensors use I2C protocol and DVP interfacing. Hence, the desired bridge hardware must have I2C protocol already configured as well as support DVP interfacing.
(vii)Most commercial image sensors require a high-speed external clock for their operation. It is desirable that the bridge supply this clock so that it can be efficiently controlled during operation (i.e., full clock rate during regular operation, reduced rate during sleep or inactivity, etc.). At the same time, the bridge should be able to detect any inactivity and automatically enable the sleep mode of the image sensor, resulting in power savings.
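As a rough guide to objective (ii), the buffer memory needed for one frame can be estimated from the resolution and pixel depth. The following is a minimal sketch; the bytes-per-pixel values are assumptions (e.g., 1 for 8-bit grayscale, 2 for RGB565), not figures from the design.

```python
# Frame-buffer size required to store one complete frame at common
# resolutions. Dimensions are the standard ones; bytes_per_pixel is an
# assumption (1 for 8-bit grayscale, 2 for RGB565, 3 for RGB888).

RESOLUTIONS = {
    "CIF":     (352, 288),
    "QVGA":    (320, 240),
    "VGA":     (640, 480),
    "Full HD": (1920, 1080),
    "UHDV":    (7680, 4320),
}

def frame_buffer_bytes(resolution, bytes_per_pixel=1):
    width, height = RESOLUTIONS[resolution]
    return width * height * bytes_per_pixel

for name in RESOLUTIONS:
    kib = frame_buffer_bytes(name, 2) / 1024
    print(f"{name}: {kib:.0f} KiB at 2 bytes/pixel")
```

Even a QVGA frame at 2 bytes/pixel needs 150 KiB, which exceeds the on-chip RAM of most small MCUs; sizing the bridge's buffer this way keeps the MCU's own memory requirement independent of the image resolution.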
The VD (or VSYNC) and HD (or HSYNC) pins indicate the end of frame and the end of row, respectively. Pixel data bytes are available for sampling on the DOUT(0 : 7) bus at the positive edge of the DCLK signal. EXTCLK is the clock input for the image sensor; the frequency of DCLK is half or a quarter of the frequency of EXTCLK, depending on the configuration of the image sensor. Initialization and configuration of the image sensor are done over the 2-wire (SCL and SDA) I2C protocol; in the context of image sensors, this is often called the Serial Camera Control Bus (SCCB) interface . The frame size, colour, sleep mode, and wake-up mode can be controlled by sending I2C commands to the image sensor. RESET is an active-low reset signal for the image sensor. Some image sensors have a pin (PWDN) to control the active-sleep mode. Some HD image sensors may contain additional control pins (shown as dotted lines in Figure 2), which are used in special modes; these extra pins may be tied to VDD or GND or left unconnected in normal operation.
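The signalling just described can be mimicked in software. The following is an illustrative toy model, not the bridge's actual logic: DOUT is sampled on each DCLK rising edge, HD delimits rows, and VD delimits the frame.

```python
# Toy model of DVP frame capture: one (vd, hd, dout) tuple per rising
# DCLK edge. While HD is high the DOUT byte belongs to the active row;
# HD going low closes the row; VD high marks the end of the frame.

def capture_frame(samples):
    rows, current = [], []
    for vd, hd, dout in samples:
        if vd:                      # end of frame reached
            break
        if hd:                      # active row: sample the data bus
            current.append(dout)
        elif current:               # HD low after a row: row complete
            rows.append(current)
            current = []
    return rows

# Two 2-pixel rows followed by the VD pulse:
stream = [(0, 1, 10), (0, 1, 11), (0, 0, 0),
          (0, 1, 20), (0, 1, 21), (0, 0, 0),
          (1, 0, 0)]
rows = capture_frame(stream)        # → [[10, 11], [20, 21]]
```

In the real bridge this sampling happens in hardware at the sensor's DCLK rate, which is why the MCU never has to keep up with the pixel stream.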
The DVP interface is widely used in most commercially available SD CMOS image sensors, such as TCM8230MD from Toshiba , OVM7690 from OmniVision , MT9V011 from Aptina , LM9618 from National , KAC-9630 from Kodak , and PO6030K from Pixelplus .
Most native HD (720p and 1080p) image sensors such as OV10131 from OmniVision  and MT9P401 from Aptina  use the DVP interface. Some higher-resolution HD image sensors such as OV9810  use an additional interface, called the mobile industry processor interface (MIPI) along with the typical DVP interface. The data output bus DOUT is generally wider than 8 bits in these HD image sensors.
Configuration register mapping.
| CfgAdr(3 : 0) | CfgData(7 : 0) |
| … | Sleep Reg. Adr. |
| … | Sleep Reg. Data |
| … | Wake Reg. Adr. |
| … | Wake Reg. Data |
| … | Cmd1 Reg. Adr. |
| … | Cmd1 Reg. Data |
| … | Cmd2 Reg. Adr. |
| … | Cmd2 Reg. Data |
| … | Cmd3 Reg. Adr. |
| … | Cmd3 Reg. Data |
| … | Cmd4 Reg. Adr. |
| … | Cmd4 Reg. Data |
where t_wu is the time required for the image sensor to wake up from sleep mode, t_frame is the time required to store a complete frame in memory from the image sensor, N_bits is the number of bits that need to be sent to write the image sensor's internal registers, f_SCL is the frequency of the SCL pin of the I2C interface, B_bf is the number of blank bytes sent by the image sensor at the beginning of a new frame, B_row is the number of pixel bytes sent by the image sensor for one row, B_br is the number of blank bytes sent by the image sensor for one row, N_row is the number of rows sent by the image sensor for one frame, and n is the constant for dividing the frequency of DCLK.
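The timing quantities above can be combined numerically. The sketch below is hedged: all parameter values are hypothetical (in practice they come from the sensor's datasheet and configuration), and the computation simply counts one byte per DCLK cycle with DCLK = EXTCLK / n.

```python
# Sketch of the frame-capture timing described above. All values are
# hypothetical; the blank-byte and row counts depend on the sensor's
# datasheet and configuration.

def wakeup_time_s(n_i2c_bits, f_scl_hz):
    # Time to clock the wake-up command into the sensor over I2C.
    return n_i2c_bits / f_scl_hz

def frame_store_time_s(blank_frame, row_bytes, blank_row, rows, n, f_extclk_hz):
    # One byte is sampled per DCLK cycle, and DCLK = EXTCLK / n.
    total_bytes = blank_frame + rows * (row_bytes + blank_row)
    return total_bytes * n / f_extclk_hz

# Example: QVGA at 2 bytes/pixel (640 data bytes per row), 240 rows,
# hypothetical blanking intervals, EXTCLK = 24 MHz, DCLK = EXTCLK / 2:
t_total = wakeup_time_s(29, 100e3) + \
          frame_store_time_s(1600, 640, 160, 240, 2, 24e6)
```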
where t_acc is the time needed to access the required pixel bytes from the iBRIDGE's memory, t_proc is the time required to implement any desired image processing algorithm on the image processor, N_p is the number of random pixels that need to be accessed (in the worst case, N_p = W × H, where W and H are the image width and height, respectively), B_p is the number of bytes per pixel, C_r is the number of clock cycles required by the image processor to read a byte from the iBRIDGE's memory, and f_p is the frequency of the image processor's clock.
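The pixel-access time can be estimated from these quantities as pixels × bytes-per-pixel × cycles-per-byte-read divided by the processor clock. The parameter names and example values below are illustrative assumptions, not the paper's notation.

```python
# Worst-case time for the image processor to read pixels from the
# bridge's buffer: pixels x bytes/pixel x clock-cycles-per-byte-read,
# divided by the processor clock frequency.

def access_time_s(n_pixels, bytes_per_pixel, cycles_per_read, f_mcu_hz):
    return n_pixels * bytes_per_pixel * cycles_per_read / f_mcu_hz

# Reading every pixel of a QVGA (320 x 240) frame, 2 bytes per pixel,
# 4 MCU cycles per byte read, 16 MHz MCU clock:
t = access_time_s(320 * 240, 2, 4, 16e6)    # 0.0384 s, i.e., 38.4 ms
```

Because the frame is already buffered in the bridge, this read-out can proceed at whatever rate the MCU can manage, independently of the sensor's pixel clock.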
Data and blank bytes sent by Toshiba image sensor.
The iBRIDGE is also compatible with HD image sensors that use the parallel DVP interface. The procedure to configure the iBRIDGE is similar to that for the SD sensors discussed above; however, the full-precision Data(9 : 0) bus is used to access the pixel data. The following sections briefly describe the internal architecture of the iBRIDGE.
This module generates the I2C protocol bits in single-master mode . The protocol allows data to be communicated between I2C devices over two wires: one line for data (SDA) and one for the clock (SCL). In our application, the iBRIDGE acts as the master and the image sensor acts as the slave device. Only the required subset of the I2C protocol is implemented, to reduce the overall logic usage.
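The write transaction this module issues can be sketched at the byte level: START, the 7-bit device address with the write bit, the register address, the register data, then STOP. The device and register addresses below are hypothetical examples, not values from any particular sensor; ACK handling and clock generation are omitted.

```python
# Illustrative sketch of a single-master I2C register write, modeling
# only the bytes placed on SDA between the START and STOP conditions.

def i2c_write_frame(dev_addr7, reg_addr, reg_data):
    """Return the byte sequence for a 2-byte register write."""
    return [(dev_addr7 << 1) | 0,   # 7-bit address + R/W = 0 (write)
            reg_addr & 0xFF,
            reg_data & 0xFF]

# Example: put a hypothetical sensor at address 0x21 into sleep mode by
# writing 0x10 to its (hypothetical) control register 0x09:
frame = i2c_write_frame(0x21, 0x09, 0x10)   # [0x42, 0x09, 0x10]
```

Since the bridge only ever writes short register frames like this one, implementing just this subset of the protocol keeps the logic small.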
A single-port random access memory module is used to store a frame. Depending on the application's requirements, a different memory size can be chosen. In the iBRIDGE, one multiplexer for the address bus and two tristate buffers for the data bus are used for proper writing to and reading from the memory.
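The address multiplexing can be modeled behaviourally. In the sketch below (names are illustrative), the sensor-side write path and the MCU-side row/column read path share the single-port RAM, mirroring the address multiplexer and tristate buffers described above.

```python
# Behavioral model of the bridge's frame buffer: one single-port RAM,
# an address mux selecting between the sensor's write counter and the
# MCU's read address, and row/column translation for random access.

class BufferMemory:
    def __init__(self, width, height):
        self.width = width
        self.ram = bytearray(width * height)   # single-port RAM

    def sensor_write(self, addr, byte):
        # Capture mode: the mux routes the write counter to the RAM.
        self.ram[addr] = byte

    def mcu_read(self, row, col):
        # Access mode: the mux routes the MCU's (row, col) address.
        return self.ram[row * self.width + col]

mem = BufferMemory(320, 240)
mem.sensor_write(320 * 10 + 25, 0x7F)   # sensor streams pixel (10, 25)
value = mem.mcu_read(10, 25)            # MCU reads it back
```

The row/column translation is what gives the MCU the parallel, randomly addressable view of the frame, even though the sensor wrote it strictly row by row.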
Synthesis results on Xilinx FPGA.
| Xilinx FPGA device | Max freq. of DCLK (MHz) | Registers (% utilization) | Logic cells (% utilization) |
| Virtex E, XCV50ECS144 | … | … | … |
Synthesis results in ASIC.
| Technology | 0.18 µm CMOS |
| Die dimension | … |
| Core dimension | … |
| Number of cells | … |
| Number of gates | 4,735 |
| Max DCLK frequency | 254 MHz |
| Core power consumption | 13.8 mW @ 3.0 V |
Hardware comparisons with other sensors.
| Design | Pixel array (resolution) | Chip area (mm²) | Power (mW @ supply) | Type (application) |
| Zhang et al.  | … | … | 3.6 @ 1.8 V | S (Wireless endoscopy) |
| Nishikawa et al.  | … | … | … | S (Cosine transform) |
| Lin et al.  | … | … | … | S (Lossy wavelet) |
| Yoon et al.  | … | … | 20 @ 3.3 V | S (Mobile communication) |
| Elouardi et al.  | … | … | … | S (Retina based) |
| Ji and Abshire  | … | … | … | S (Low light) |
| Takayanagi et al.  | … | … | … | … |
| Teman et al.  | … | … | … | … |
| Oi et al.  | … | … | 6.8 @ 3.3 V | S (3D viewing) |
| Yadid-Pecht et al.  | … | … | … | … |
| Scheffer et al.  | … | … | … | … |
| Decker et al.  | … | … | … | … |
| Chapinal et al.  | … | … | 6.5 @ 5 V | … |
| Proposed iBRIDGE (without sensor) | … | … | 13.8 @ 3 V | … |
| iBRIDGE with OV HD sensor (OV2710)  | … | … | 363.8 @ 3 V | … |
Performance advantage of iBridge with commercial image sensors.
Performance advantage of iBRIDGE with high-performance MCU (high-performance MCUs vs. low-performance MCU + iBRIDGE).
Cost of programmer: …
Breadboard use: not available for the high-performance MCUs (one needs an adaptor to mount them on breadboards; requires circuit design on a PCB); available for the low-performance MCU + iBRIDGE (easily mounted on breadboards).
Firmware development (program complexity): the high-performance MCUs are relatively difficult to program, with more control registers to configure and longer development time; the low-performance MCU + iBRIDGE is simpler to program, with fewer configuration registers and shorter development time.
Feature utilization: low for the AT91CAP7E (many advanced features such as the six-layer advanced high-speed bus (AHB), peripheral DMA controller, USB 2.0 full-speed device, and FPGA interface may not be used for a simple imaging application); medium for the AT91SAM7S512 (some advanced features such as full-speed USB 2.0 and the Real-time Timer (RTT) may not be used for a simple imaging application); high for the low-performance MCU + iBRIDGE (the features are simple and may be adequate for a simple imaging application).
Power consumption: high for the high-performance MCU (the large MCU runs at a high clock speed at all times); low for the low-performance MCU + iBRIDGE (the small MCU runs at low speed at all times, while only the tiny iBRIDGE runs at a higher speed).
Image sensor configuration: …
Memory for frame storage: fixed for the AT91CAP7E (160 KB) and the AT91SAM7S512 (64 KB); variable for the low-performance MCU + iBRIDGE (the memory capacity can be varied depending on the application).
Real-time random access of pixels: simple with the iBRIDGE (row-column addressable).
Power saving mode: needs to be configured on the high-performance MCU.
Maximum speed (at which the image sensor can be interfaced): …
Types of image resolution supported: SubQCIF, QQVGA, and QVGA for the high-performance MCU; any resolution for the low-performance MCU + iBRIDGE (SubQCIF, QQVGA, QVGA, VGA, Full HD, UHDV, etc.).
In this work, the design of a bridge architecture, named iBRIDGE, is proposed to overcome the speed gap between commercially available CMOS image sensors and microcontrollers. The iBRIDGE can be configured to work with any DVP-based SD and/or HD image sensor. By using the proposed bridge, a slow, low-power microcontroller (or image processor) with little memory capacity can communicate with high-speed image sensors to capture images of large size. The pixel data can also be accessed in a random access fashion through a parallel memory access interface at the desired speed. The control and status registers provide comprehensive control of the image sensor. The I2C communication protocol is built into the iBRIDGE core. The design is power-efficient, as the iBRIDGE forces the image sensor into sleep mode when in data-access mode. An integrated clock generator provides the necessary clock signals, eliminating the need for an external clock source. When implemented using CMOS 0.18 µm Artisan library cells, the design costs 4,735 gates and 0.12 mm² of silicon area. The synthesis results show that the iBRIDGE supports a data rate of up to 254 megasamples/sec and is suitable for rapid prototyping in different high-speed and low-power embedded system applications.
The authors would like to acknowledge the Natural Science and Engineering Research Council of Canada (NSERC) for its support to this research work. The authors are also indebted to the Canadian Microelectronics Corporation (CMC) for providing the hardware and software infrastructure used in the development of this design.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.