  • Research Article
  • Open Access

A Platform-Based Methodology for System-Level Mixed-Signal Design

Abstract

The complexity of today's embedded electronic systems, as well as their demanding performance and reliability requirements, is such that their design can no longer be tackled with ad hoc techniques while still meeting tight time-to-market constraints. In this paper, we present a system-level design approach for electronic circuits, utilizing the platform-based design (PBD) paradigm as the natural framework for mixed-domain design formalization. In PBD, a meet-in-the-middle approach allows systematic exploration of the design space through a series of top-down mappings of system constraints onto component feasibility models in a platform library, which are built from bottom-up characterizations. In this framework, new designs can be assembled from the precharacterized library components, giving the highest priority to design reuse, correct assembly, and efficient design flow from specifications to implementation. We apply concepts from design centering to enforce robustness to modeling errors as well as to the process, voltage, and temperature variations that are currently plaguing embedded system design in deep-submicron technologies. The effectiveness of our methodology is finally shown on the design of a pipeline A/D converter and two receiver front-ends for UMTS and UWB communications.

1. Introduction

Modern electronic systems are becoming increasingly complex and heterogeneous. Telecommunication and multimedia applications require highly integrated, high-performance systems, where analog, RF, and digital components must be efficiently packaged into a single chip. Emerging sensor and actuator swarm applications likewise demand customized mixed-domain systems to be embedded into a myriad of extreme physical environments to provide a variety of personal or broad-use services. On the other hand, manufacturing technology is evolving deeper into the nanometer era, where leakage power, increasing process variations, shrinking supply voltages, and worsening signal integrity make it daunting even to assess the required performance specifications. To build future integrated systems, designers need to face several challenges at all levels of abstraction, from system conception to physical implementation. Design complexity is rising while, at the same time, time-to-market constraints are becoming tighter, and dependable systems need to be built out of increasingly unreliable components. Addressing the above challenges requires innovative solutions not only in manufacturing technologies and circuit architectures, but also in design methodologies and tools.

A disciplined design style that reduces iterations in the flow should be based on a rigorous formalism leveraging accurate and robust performance modeling techniques to guarantee that performance variables of each component are correctly propagated across the design hierarchy. Moreover, fast, global optimization techniques need to be deployed to provide the best design options, for a given application, within a well-constrained and characterized search space. Finally, a practical framework should promote design reuse, and the separation of design concerns to reduce system complexity and boost designers' productivity.

In this paper, we present a system-level design methodology for mixed-signal electronic circuits, which is inspired by the above principles and leverages the platform-based design (PBD) paradigm [1, 2] as the natural formalization framework. In PBD, a platform is expressed as a collection of components and composition rules. A design is obtained by composing components of the platform into a platform instance. The refinement process consists of mapping a functional description into a set of interconnected components. The design space is systematically explored through a meet-in-the-middle approach in which top-down design constraints of the system are mapped onto bottom-up performance characterizations of the components in the platform library. Based on this paradigm, we provide a unified framework to assist designers at all levels of abstraction. At the system level, a global optimization technique provides the best design options by leveraging tradeoffs among all components, rather than composing systems using locally optimized components. At the component level, designs in different domains (e.g., RF, analog, or digital) can be concurrently characterized to provide an interface that offers smooth system integration while, at the same time, hiding implementation details. This orthogonalization of concerns allows making design decisions at the system level, where system tradeoffs can be evaluated across all RF, analog, and digital components. Moreover, the design process can be significantly shortened because of the hierarchical approach enabled by our methodology, which progressively reduces the number of design variables. To ensure that reliable systems are produced, accurate and robust circuit performance models are crucial in our methodology, since high-level models should directly correspond to feasible physical implementations. Designs should therefore be robust to both modeling errors and process, voltage, and temperature (PVT) variations, which become increasingly important as process parameters (minimum channel length, device threshold, supply voltage, etc.) shrink. As presented in [3], we incorporate into our formulation techniques from design centering, traditionally adopted for digital design. With respect to [3], we add details on our performance models, comparing them with other modeling approaches, as well as on the mathematical derivation of the performance margin evaluation algorithm used in our robust optimization. Moreover, we apply our methodology to an additional example.

This paper is organized as follows. Section 2 gives an overview of the PBD methodology applied to the analog and mixed-signal domains. In Sections 3 and 4, we discuss the robust system-level design problem and provide its mathematical formulation within the PBD paradigm. In Section 5, we illustrate our methodology using three case studies, namely, a pipeline A/D converter and two RF front-ends, for UMTS and UWB receivers. Finally, we draw some conclusions in Section 6.

2. Analog Platform-Based Design

Performing system-level design space exploration and optimization in a systematic way can have a great impact on system performance and cost. In a wireless receiver, for example, it allows design requirements (e.g., gain, NF, linearity) to be distributed among the building blocks of the chain, and enables early evaluation of several tradeoffs, such as preselect filter selectivity and power consumption versus front-end linearity, or base-band filter selectivity versus ADC resolution.

In traditional analog and mixed-signal design flows, experienced architects conduct system-level design, and system specifications are empirically partitioned among the various functional blocks that circuit designers then have to implement. Since an effective system-level optimization is not achievable without accurate knowledge of the achievable performance of the individual building blocks, the final system performance may largely deviate from the expected one, which can result in silicon respins. To simultaneously achieve high-quality system integration starting from accurate circuit characterizations, analog PBD (APBD) has been proposed and formulated in [4–7] as a meet-in-the-middle recursive process consisting of top-down optimizations (platform mapping) and bottom-up characterizations, which export the feasible design spaces of platform components to higher levels of abstraction.

Optimization is performed on behavioral models, that is, mathematical representations of electronic circuits, capturing their functionality as a function of a set of input, output, and configuration parameters. To allow information hiding and intellectual property (IP) protection, a feasible performance model is also provided for each circuit block, which exports the performance achievable by any available implementation of the block (in the platform library), without propagating implementation details. Performance models are built in a characterization process, as described in Section 2.1. Both models are accompanied by validity laws, that is, a set of constraints and inequalities delineating the validity regions of all component models and their compositions. An Analog Platform (AP) is therefore a library of components, each one decorated by the above set of models and laws. A design is a platform instance, that is, a correct composition of elements, implementing the desired function and, at the same time, optimizing a set of quality metrics.
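For concreteness, the following sketch shows one possible way such a library could be organized in software: a component record bundling a behavioral model, a feasible-performance classifier, and validity laws. The class name, fields, and the toy LNA entry are our own illustrative assumptions, not an interface defined by the methodology.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List
import numpy as np

@dataclass
class PlatformComponent:
    """One entry of an Analog Platform library (hypothetical interface)."""
    name: str
    # Behavioral model: maps configuration parameters to performance figures.
    behavioral_model: Callable[[Dict[str, float]], Dict[str, float]]
    # Feasible-performance model: returns True if a performance vector is
    # achievable by some implementation of this component.
    performance_model: Callable[[np.ndarray], bool]
    # Validity laws: constraints that must hold for the models (and their
    # composition with other components) to be meaningful.
    validity_laws: List[Callable[[Dict[str, float]], bool]]

    def is_valid(self, config: Dict[str, float]) -> bool:
        return all(law(config) for law in self.validity_laws)

# Example: a toy LNA entry with invented models, purely for illustration.
lna = PlatformComponent(
    name="LNA",
    behavioral_model=lambda cfg: {"gain_db": 15.0 + cfg["ibias_ma"],
                                  "nf_db": 2.5 - 0.1 * cfg["ibias_ma"]},
    performance_model=lambda perf: bool(perf[0] < 20.0),   # placeholder classifier
    validity_laws=[lambda cfg: 0.5 <= cfg["ibias_ma"] <= 5.0],
)
print(lna.is_valid({"ibias_ma": 2.0}), lna.behavioral_model({"ibias_ma": 2.0}))
print(lna.performance_model(np.array([18.0, 2.3])))
```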

2.1. Analog Performance Models

Performance models play a critical role in analog system-level design and particularly in platform-based design. Performance models are used to constrain the optimization process to achievable performances within the considered architecture space. Therefore, system-level design approaches have to consider the nature of performance models explicitly during system optimizations.

In recent years, a number of papers have appeared on the generation of performance models [8, 9] and even on direct modeling of the feasibility region [10, 11]. The latter set of works aims at providing a classifier that separates feasible n-tuples of performances from unfeasible ones, without resorting to a regression-based approach. From the system-level perspective, feasibility models allow casting exploration problems in the more intuitive performance space rather than mapping down to implementation parameters. The number of variables in the optimization problems is consequently reduced (at least in nondegenerate cases), and architecture selection becomes readily available, as different implementation topologies may share common performance spaces.

There are two basic model generation schemes, equation-based and simulation-based (Figure 1). The first approach requires deriving analytical expressions to estimate performance from configuration (regression case) or to model the performance space (classification case). The second approach is based on statistical approximation techniques, where a set of performance samples is evaluated and exploited to build a performance model approximation. In order to compare the different schemes, it is useful to introduce some figures of merit for performance models. The cost of generating a performance model can be decomposed into different contributions: model setup, model generation, and model retargeting. In particular, the first two contributions are usually at odds and need to be traded off in real models. Another fundamental figure of merit is accuracy. Accuracy is usually assessed through some function (e.g., average or maximum) of the estimation error. We can further distinguish between two different kinds of error, the error on the training data and the generalization error. The last figure of merit we consider is the generality of the approach, both in terms of the classes of circuits and of the performance figures that can be captured. Equation-based and simulation-based models are at opposite ends of the spectrum of performance modeling schemes.

Figure 1

Equation-based and simulation-based approaches for generating performance models compared with multiple metrics. The 0–10 axes encode the difficulty of each figure of merit.

Analog platform performance models rely on Support Vector Machines (SVMs) as a way of approximating the classifier discriminating the feasible performance space. Given a set of simulated performance vectors (as detailed in [10]), SVM training selects a subset of vectors (support vectors) and corresponding weight coefficients so that the classifier function is obtained as

$$\phi(\mathbf{x}) \;=\; \operatorname{sgn}\!\left(\sum_{i \in \mathrm{SV}} \alpha_i\, e^{-\gamma \lVert \mathbf{x} - \mathbf{x}_i \rVert^2} + b\right) \tag{1}$$

where $b$ is a biasing term (also determined during training) and $\gamma$ is an SVM kernel parameter. As schematically shown in Figure 2, a set of design configurations (e.g., the transistor size of the input differential pair and its bias current) is generated as points in the configuration parameter space. Electrical simulation maps these points into vectors in the performance space. SVMs are then used to classify the simulated points and generate a feasible performance model. Performance vectors are obtained through simulation, so that maximum generality is available in terms of allowable circuits and performance figures. Moreover, SVMs can be generated so as to minimize the impact of false positives, that is, unfeasible performances classified as feasible. In fact, several case studies have shown that the approximation around support vectors is usually restricted to small regions, so that optimal predicted performances are very close to some actually simulated performance vectors. This is a desirable feature, enabling effective hierarchical design with minimum risk of incurring iterations and redesign.
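As a hedged illustration of this step, the sketch below trains an RBF-kernel SVM on synthetic "simulated" performance vectors and uses it as a feasibility classifier in the spirit of (1). scikit-learn stands in for whatever SVM implementation is actually used, and the data, labels, and kernel parameters are invented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "simulated" performance vectors, e.g. (gain [dB], bandwidth [MHz]):
# feasible points (label +1) cluster in one region, unfeasible ones (-1) outside.
feasible = rng.normal(loc=[60.0, 100.0], scale=[5.0, 20.0], size=(200, 2))
unfeasible = rng.normal(loc=[80.0, 250.0], scale=[5.0, 30.0], size=(200, 2))
X = np.vstack([feasible, unfeasible])
y = np.hstack([np.ones(200), -np.ones(200)])

# Precondition: normalize each performance figure to a common interval,
# as discussed for the margin computation later in the paper.
lo, hi = X.min(axis=0), X.max(axis=0)
Xn = (X - lo) / (hi - lo)

# RBF-kernel SVM: its decision function has the form of (1),
# sum_i alpha_i * exp(-gamma * ||x - x_i||^2) + b, thresholded by sgn().
clf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(Xn, y)

query = (np.array([[62.0, 110.0]]) - lo) / (hi - lo)
print("feasible?", clf.predict(query)[0] > 0,
      "decision value:", clf.decision_function(query)[0])
```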

Figure 2

Schematic view of the performance model generation for a simple differential amplifier.

3. Robust System-Level Design

Robust design and optimization have traditionally been closely related subjects. In fact, it is almost impossible to consider an aggressive optimization scheme without considering the robustness of the achieved solutions. System-level design should embrace robust approaches for two separate reasons. From the system-level perspective, mixed-signal design has to cope with model inaccuracies that are intrinsic to the behavioral models exploited in design explorations. The more complex the system, the larger the hierarchical structure of the design and the higher the risk of performing nominal design optimizations. In fact, the composition of high-level models may provide results whose accuracy is not easily bounded, so that either a costly iterative scheme between top-down system-level design and bottom-up verification or a relaxed (robust) constraint propagation has to be adopted. From the implementation-level perspective, any performance model is subject to two kinds of inaccuracies: intrinsic modeling errors and process variability. While some control is available on the former source (even if potentially very expensive or restrictive), the latter cannot be addressed with deterministic approaches.

Early approaches to computer-aided design centering in an analog context date back to the early 1980s [12–14]. All these approaches share a common dependency on the model used to estimate performance degradation as a function of design parameters and, if yield is actually considered, on the joint probability functions used to compute yield expectations. However, robust optimization for analog design has not been developed to the same level as nominal optimization. The largest obstacle is the complexity of the resulting optimization problem, which is usually captured as a semi-infinite programming problem. In [15], a simulation-based circuit optimizer is enriched with robust design features, showing significant improvements, albeit constrained by scaling issues for complex circuits. The lesson learned from early attempts at including process variations and mismatch in automated circuit design is the tremendous complexity of the resulting problem.

Models generated with classic approaches based on Response Surface Methodology (RSM) [16] can become too expensive to build because of the number of primal parameters and the complexity of the necessary simulations. We therefore propose an alternative approach, based on approximate models developed at the system level.

3.1. Previous Approaches

Several robust approaches to analog design have been proposed during the past few years. Without claiming to be exhaustive, we review those that we consider most relevant to the approach presented in this paper (Figure 3). Initially, relaxation of system constraints during top-down optimizations was exploited as an attempt to overcome poor architecture models. The first rigorous attempt in this direction dates back to the top-down constraint-driven methodology presented in [17] and demonstrated in [18, 19]. Since in pure top-down approaches no detailed implementation information is available (architectures have not yet been selected in the first design steps), the methodology formulates the optimization problem (constraint propagation problem) as the maximization of a set of flexibility functions. Flexibility functions are introduced to capture the complexity of implementing a specific set of performances. Therefore, in place of optimizing for power or area, the optimization problems maximize the "flexibility" of achieving the optimum set of performances (i.e., minimize the "effort" of implementation). Albeit rigorously formulated, the methodology was rather limited in performing aggressive optimizations because of the halo inherently introduced by the heuristic flexibility functions.

Figure 3

Main approaches to robust system-level analog and mixed-signal design in the last few years.

More recently, AMGIE [20] proposed to carry out hierarchical design via a set of optimization problems where, at each abstraction level, component performances are bounded to predefined ranges. A robust approach is achieved by inserting margins on all performances, so as to compensate for modeling inaccuracies. However, the margin has to be determined a priori, so that its final value is not the result of an optimization problem. In particular, the cost of meeting the margin on performances is not traded off against the potential improvements in system performance; that is, the sensitivity of the goal function to the margin is not evaluated at all, leaving wide discretion in determining performance margins.

Recent advances in convex optimization [21] have revitalized analytical approaches to analog design and, consequently, to robust design. ROAD [22] introduces a robust optimization approach based on posynomial performance models. To improve accuracy, a simulator-in-the-loop approach is selected and local posynomial models are generated around design points. It is then possible to deal with nonconvex design spaces while exploiting the ability to solve large-scale convex programs exactly. OPERA [23] introduces a robust geometric optimization problem to maximize yield over statistical variations. Process variations are captured with confidence ellipsoids and approximated so as to yield a convex problem. The robust design formulation computes optimal design parameters to meet a predetermined yield target. Convex optimization approaches, however, tend to limit designers in selecting the cost function and formulating their problems. The efficiency achieved in actually solving the problem may then be counterbalanced by the effort required to model the system and validate the analytical expressions used to set up the problem. Moreover, classic approaches to system design with convex optimization are based on generating a flat optimization problem in which all circuit topologies have been selected, which becomes challenging as system complexity grows and mixed-signal designs are approached.

Recently, a hierarchical approach to robust system-level analog design has been presented [24]. Performance centering is sought through concurrent maximization of system-level flexibility, based on behavioral models, and of implementation-level performance margins, based on performance models. A possible limitation of the approach is the requirement of posynomial models to capture both system-level and implementation-level constraints. While this assumption is certainly acceptable for some classes of analog systems, it may in practice be a hard one to satisfy, since it becomes increasingly difficult to guarantee (or even assess) model convexity as the design hierarchy becomes deeper and high-level behavioral models are exploited in mixed-signal design space explorations.

In our framework, we extend the hierarchical approach by removing the posynomial constraints on the design formulation. As in [23, 24], robustness is achieved through maximization of margins with respect to system specifications. Extending the approach to analog platforms, we obtain a twofold advantage. First, very accurate performance models (not constrained to be convex, posynomial, or even in explicit form) can be exploited to estimate implementation margins. It is then possible to accurately weigh implementation margins, since model inaccuracies are kept to minimum levels. Second, arbitrary system behavioral models and constraints can be used to formulate the optimization problem, since analog platform-based design relies on global stochastic optimization approaches to find optimal implementations. Designers can then specify their systems without resorting to posynomial approximations, capturing arbitrary nonconvex constraints.

4. Mathematical Formulation

The essence of APBD in its general formulation is pictorially represented in Figure 4 and consists of a bottom-up platform generation phase, where architectural constraints are characterized and exported to higher levels, and a top-down optimization phase, where system constraints are intersected with architectural constraints and the system cost is minimized. At the end of the optimization, system specifications are mapped on the available platform library and the process is repeated.

Figure 4

Platform mapping optimization process from one abstraction level (denoted as the application level in this case) to the next (denoted as the architecture level).

4.1. Nominal Optimization

In a nominal formulation, the optimization process mapping one platform level onto the next is mathematically captured as

$$\begin{aligned}
\min_{\kappa}\;\; & f(p,\kappa)\\
\text{s.t.}\;\; & p = \varphi(\kappa), \qquad S(p) \le 0, \qquad A(\kappa) \le 0,
\end{aligned} \tag{2}$$

where $p$ is the set of system performance indices, $\kappa$ is the set of platform configuration parameters, $\varphi$ is the behavioral model used to map $\kappa$ into $p$, $S(p) \le 0$ represents the set of constraints imposed on $p$ by the system specifications, and $A(\kappa) \le 0$ captures the set of constraints on the configuration parameters imposed by the architecture space. The set of constraints in (2) can be visualized by defining two sets in the optimization space. The system constraints define the set of feasible performances from the system perspective. The architectural constraints define, through the behavioral model $\varphi$, the set of achievable performances with the current architecture (platform). Figure 5 shows a pictorial representation of the two sets and of how mapping amounts to minimizing the cost function over their intersection. Nominal design optimization computes the configuration vector $\kappa^*$ that produces the minimum cost in (2). At the optimum, the Karush-Kuhn-Tucker conditions require the active constraints to hold with equality, which means that the optimized system is, in general, at the "edge" of implementability on several constraints from both a system and an architecture perspective. However, any modeling error in $\varphi$ may cause the actual performances (computed with accurate models) to violate the system constraints. Similarly, any modeling error in $A$ may result in platform performances that are unfeasible. When such events occur, the system design either needs to be iterated or degraded performances have to be accepted. Degradation may be rather severe and force costly redesigns when aggressive specifications are addressed. Even accurate models may fail if performance degradation is due to process parameter dispersion or temperature variation. In general, it is deemed unfeasible to export this information with performance models, since for each circuit configuration a function would have to be provided that computes the probability density function of the performances given the circuit sizing. As the approximation of such a density usually relies on expensive Monte Carlo simulations around each configuration, its generation over the entire configuration space is hardly doable.
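The toy sketch below casts a miniature version of the nominal mapping problem (2) as a constrained minimization with SciPy: a black-box behavioral model maps configuration parameters to performances, system constraints bound the performances, and box bounds play the role of the architecture constraints. The model, numbers, and cost are placeholders, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Toy behavioral model phi: configuration kappa = (bias current [mA], cap [pF])
# -> performances p = (gain [dB], bandwidth [MHz], power [mW]). Invented numbers.
def phi(kappa):
    ibias, cap = kappa
    return np.array([40.0 + 10.0 * np.log10(ibias),   # gain
                     500.0 * ibias / cap,             # bandwidth
                     1.2 * ibias])                    # power

cost = lambda kappa: phi(kappa)[2]                    # minimize power, cf. f in (2)

constraints = [                                       # system constraints S(p) <= 0
    {"type": "ineq", "fun": lambda k: phi(k)[0] - 38.0},    # gain >= 38 dB
    {"type": "ineq", "fun": lambda k: phi(k)[1] - 200.0},   # bandwidth >= 200 MHz
]
bounds = [(0.1, 5.0), (0.5, 10.0)]                    # architecture constraints A(kappa) <= 0

res = minimize(cost, x0=[1.0, 2.0], bounds=bounds, constraints=constraints,
               method="SLSQP")
print("kappa* =", res.x, " p* =", phi(res.x), " cost =", res.fun)
```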

Figure 5

Pictorial representation of the system and architecture constraint sets in the performance space; mapping corresponds to minimizing the cost function over their intersection.

4.2. Robust Optimization

To address this problem, an alternative formulation of the optimization problem is required. The system and architecture constraints have to be satisfied with some margin, so as to compensate for modeling inaccuracies. Denoting by $\Phi = \{\kappa : A(\kappa) \le 0\}$ the feasible configuration region exported by the performance models, we can write the new constraints as $S_i(p) + \rho_{S,i} \le 0$ for the system constraints and $\operatorname{dist}(\kappa, \partial\Phi) \ge \rho_A$ for the performance constraints, where $\rho_S$ and $\rho_A$ are margin variables. Margins have an intuitive interpretation, defining a sphere (in the norm adopted) around the optimal performance and configuration points, of radius $\rho_S$ for the system constraints and $\rho_A$ for the performance constraints. The objective of the optimization problem is then changed so as to maximize the margins $\rho_S$ and $\rho_A$, which corresponds to maximizing the volumes of the spheres around the optimum configuration and performance points. The original cost function is inserted as an added constraint with a dedicated margin $\rho_f$: given a minimum cost target $f_0$, at the optimum $\rho_f$ is maximized together with the other margin variables, so that a tradeoff between cost value and robustness is evaluated during the optimization. Therefore, problem (2) becomes

$$\begin{aligned}
\max_{\kappa,\,\rho_S,\,\rho_A,\,\rho_f}\;\; & u(\rho_S, \rho_A, \rho_f)\\
\text{s.t.}\;\; & p = \varphi(\kappa), \qquad S_i(p) + \rho_{S,i} \le 0, \quad i = 1, \ldots, m,\\
& A(\kappa) \le 0, \qquad \operatorname{dist}(\kappa, \partial\Phi) \ge \rho_A,\\
& f(p,\kappa) + \rho_f \le f_0, \qquad \rho_S, \rho_A, \rho_f \ge 0.
\end{aligned} \tag{3}$$

Here $u(\cdot)$ is a function combining the margins (a concrete prototype is given in (9)). System-level constraints are usually available in explicit form; therefore, the margined system constraints can be immediately written as

$$S_i(p) + \rho_{S,i} \le 0, \qquad i = 1, \ldots, m, \tag{4}$$

and included in the optimization problem. Additional constraints may be inserted to enforce specific relations among the margin variables. The problem is more involved for the performance models, since analog platforms provide the feasible region $\Phi$ in implicit form through a nonlinear function. In this case, we interpret the margin in the following way. For a performance model, the frontier $\partial\Phi$ defines the boundary of the feasible region. Given a configuration point $\kappa_0$ satisfying the performance constraints, its margin can be obtained by finding the closest point $\kappa^*$ of $\partial\Phi$ to $\kappa_0$ and computing the norm of $\kappa^* - \kappa_0$. If all components of $\kappa$ have the same weight, then $\rho_A = \lVert \kappa^* - \kappa_0 \rVert_2$ (the performance constraint is consistent with the formulation in (2), since it is equivalent to requiring the argument of the sgn function in (1) to be nonnegative which, after a sign change, matches the $\le 0$ convention of (2)). In this case, maximizing $\rho_A$ is equivalent to maximizing the volume of the sphere around $\kappa_0$ that is enclosed in the feasible space (within its boundary $\partial\Phi$). The general case of different weights on different performance components can be immediately obtained by adopting a different norm when computing the distance. Since the different performances in the performance vectors used to generate $\Phi$ can differ by orders of magnitude, they are all preconditioned and normalized to a common interval. In the following paragraph, we show how to compute $\rho_A$ based on the SVM representation of $\partial\Phi$.
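Before the exact projection of Section 4.3, the following sketch gives a cheap, hedged estimate of this margin: the distance from a feasible point to the classifier boundary is bracketed by bisection along a few directions (the axis-wise bound also used as a heuristic in Section 4.3), optionally with diagonal weights to emulate an ellipsoidal norm. The classifier here is a toy stand-in for the SVM of (1).

```python
import numpy as np

def directional_margin(decision, x0, directions, r_max=1.0, tol=1e-4, weights=None):
    """Upper-bound estimate of the distance from x0 (feasible) to the boundary
    of the feasible region, obtained by bisection along the given directions.
    `decision(x) > 0` means x is classified feasible. A diagonal `weights`
    vector turns the sphere into an axis-aligned ellipsoid."""
    w = np.ones_like(x0) if weights is None else np.asarray(weights)
    best = np.inf
    for d in directions:
        d = d / np.linalg.norm(d)
        lo, hi = 0.0, r_max
        if decision(x0 + hi * d) > 0:      # boundary not bracketed: skip direction
            continue
        while hi - lo > tol:               # bisection on the crossing radius
            mid = 0.5 * (lo + hi)
            if decision(x0 + mid * d) > 0:
                lo = mid
            else:
                hi = mid
        best = min(best, np.linalg.norm(w * (lo * d)))
    return best

# Toy feasible region: unit ball around the origin (stands in for the SVM of (1)).
decision = lambda x: 1.0 - np.linalg.norm(x)
x0 = np.array([0.3, 0.1])
dirs = np.vstack([np.eye(2), -np.eye(2)])   # reference axes, as in Section 4.3
print("estimated margin:", directional_margin(decision, x0, dirs, r_max=2.0))
```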

4.3. Performance Margin Evaluation

The problem of finding $\rho_A$, given $\kappa_0$ and the performance model, is analogous to the problem of finding the largest hyperellipsoid enclosed by $\partial\Phi$. We initially solve the case of hypersphere enclosure, extending to the general case at the end of this section. By definition, the sought point is the point on the boundary $\partial\Phi$ at minimum distance from $\kappa_0$. To simplify notation, we set $\mathbf{x} = \kappa$ and $\mathbf{x}_0 = \kappa_0$. Therefore, we can obtain the closest boundary point $\mathbf{x}^*$ by solving the following optimization problem:

$$\mathbf{x}^* \;=\; \arg\min_{\mathbf{x}}\; \lVert \mathbf{x} - \mathbf{x}_0 \rVert \qquad \text{s.t.} \quad g(\mathbf{x}) = 0, \tag{5}$$

where $g(\mathbf{x})$ is implicitly defined from (1) as

$$g(\mathbf{x}) \;=\; \sum_{i \in \mathrm{SV}} \alpha_i\, e^{-\gamma \lVert \mathbf{x} - \mathbf{x}_i \rVert^2} + b \;=\; 0. \tag{6}$$

The optimization problem obtained by substituting (6) into (5) is evidently nonlinear and can be interpreted as the projection of a vector onto a nonconvex set. In fact, while the cost function in (5) is strictly convex, the equality constraint in (6) is nonlinear (and nonconvex). At the optimum, the Karush-Kuhn-Tucker conditions require that

$$\begin{cases}
\displaystyle \sum_{i \in \mathrm{SV}} \alpha_i\, e^{-\gamma \lVert \mathbf{x} - \mathbf{x}_i \rVert^2} + b = 0,\\[2mm]
\displaystyle x_j - x_{0,j} - \lambda \gamma \sum_{i \in \mathrm{SV}} \alpha_i\, (x_j - x_{i,j})\, e^{-\gamma \lVert \mathbf{x} - \mathbf{x}_i \rVert^2} = 0, \qquad j = 1, \ldots, n,
\end{cases} \tag{7}$$

where $\lambda$ is the Lagrange multiplier, the first equation states the feasibility condition for $\mathbf{x}$, and the other equations enforce that the gradient of the Lagrangian function vanishes at any optimal point. System (7) originates from the equivalent problem obtained from (5) by squaring the cost function. For each $j$, the $j$th component of $\mathbf{x}$ can therefore be computed as follows:

$$x_j \;=\; x_{0,j} - \frac{\lambda}{2}\,\frac{\partial g(\mathbf{x})}{\partial x_j}, \qquad j = 1, \ldots, n. \tag{8}$$

By substituting (6) into (8), for each $j$, we finally obtain the equations in (7).

The nonlinear system (7) can be solved with the Newton-Raphson (NR) method, which provides quadratic convergence if the initial guess is "close" to the solution $\mathbf{x}^*$. The resulting distance $\rho_A = \lVert \mathbf{x}^* - \mathbf{x}_0 \rVert$ is therefore the radius of the largest hypersphere centered at $\mathbf{x}_0$ and enclosed in the feasible region. However, the nonlinear nature of (7) generates two problems. First, a multitude of solutions may exist, so we could achieve convergence on a point of the boundary which is not the closest to $\mathbf{x}_0$; second, NR may not converge at all if a sufficiently good initial guess is not provided. To cope with the above problems, we first adapt to our problem a more sophisticated implementation of the NR method, similar to the damped Newton method [21], which tries to improve on the poor global convergence of basic NR. Then we add some ad hoc heuristics to generate a good initial guess.

Solving one of the equations in (7) and substituting the result into the other equations, we obtain an $n$-dimensional system in the unknown vector $\mathbf{x}$, which can be denoted as $F(\mathbf{x}) = 0$. We then combine the NR method with the minimization of the merit function $h(\mathbf{x}) = \tfrac{1}{2}\lVert F(\mathbf{x}) \rVert^2$, in the sense that we accept the solution provided by each NR step only if the step considerably reduces $h$. If this does not happen, we backtrack along the NR direction, starting from the old point, until we have an acceptable new point. Since the NR step is a descent direction for $h$, we are guaranteed to find an acceptable point by backtracking. The backtracking routine is based on the line minimization rule [25, 26] and consists of defining the restriction of $h$ along the NR direction and finding the step length that minimizes it. To save on the number of function evaluations, a cubic approximation of this restriction is actually computed, based on the available information on $h$ and its derivative. Since the improved NR method can still occasionally fail, converging on a local minimum of $h$, we try a new starting point according to the following heuristics:

(i) we compute the distance from $\mathbf{x}_0$ to the boundary along the reference axes using bisection-based one-dimensional methods. It is then possible to bound the distance of $\mathbf{x}^*$. We observed that, in practical cases, whenever this bound is smaller than a threshold (whose actual value depends on the normalization of the performance space), convergence is always achieved and the correct $\mathbf{x}^*$ is returned by Newton-Raphson;

(ii) we choose the starting point for the iterations under the expectation that the feasible space is relatively "thin." Whenever the previous heuristic is not satisfied, we rerun the NR iterations, perturbing the initial point in the direction of the axis along which the minimum distance was found in the previous step (iterations are aborted after a predetermined number), until the minimum-distance solution is reached. We observed that this is generally sufficient to achieve convergence;

(iii) in case of nonconvergence, we return the bound computed in the first step. In practice, this has no adverse consequence because, in our tests, it only happened for points deep inside the feasible region.

The above procedure can be extended to hyperellipsoid enclosure by scaling $\mathbf{x}$ with a unitary matrix and applying the previous approach in the transformed space. Margins found in this way need to be scaled back to the initial space through the inverse transformation. This allows selecting different margins on different performances.

The overall algorithm complexity has been computed to be $O(n \cdot n_v \cdot c_{\exp})$, where $n$ is the number of performance figures in $\mathbf{x}$, $n_v$ is the number of performance vectors, and $c_{\exp}$ is the cost of evaluating the exponential function in (1).
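The sketch below is a compact, self-contained version of the projection step described above: it finds the point on an implicit boundary g(x) = 0 closest to x0 by applying a damped Newton iteration with a backtracking line search on the residual norm to the stationarity system of (5)-(7). The boundary used here is a toy stand-in for the SVM surface of (6), the Jacobian is computed numerically, and the simple halving backtracking replaces the cubic line minimization used by the authors.

```python
import numpy as np

def project_to_boundary(g, grad_g, x0, x_init, max_iter=50, tol=1e-8):
    """Find x on {g(x) = 0} closest to x0 (cf. (5)-(7)):
    solve F(x, lam) = [ g(x); (x - x0) + lam * grad_g(x) ] = 0
    with a damped Newton iteration (backtracking on ||F||)."""
    n = x0.size
    z = np.concatenate([x_init, [0.0]])              # unknowns: x and lambda

    def F(z):
        x, lam = z[:n], z[n]
        return np.concatenate([[g(x)], (x - x0) + lam * grad_g(x)])

    def J(z, eps=1e-6):                              # finite-difference Jacobian
        f0, Jm = F(z), np.zeros((n + 1, n + 1))
        for j in range(n + 1):
            dz = np.zeros(n + 1)
            dz[j] = eps
            Jm[:, j] = (F(z + dz) - f0) / eps
        return Jm

    for _ in range(max_iter):
        f = F(z)
        if np.linalg.norm(f) < tol:
            break
        step = np.linalg.solve(J(z), -f)
        t = 1.0                                      # backtracking line search
        while np.linalg.norm(F(z + t * step)) > (1 - 1e-4 * t) * np.linalg.norm(f):
            t *= 0.5
            if t < 1e-6:
                break
        z = z + t * step
    return z[:n], np.linalg.norm(z[:n] - x0)         # boundary point and margin

# Toy boundary: unit circle, g(x) = ||x||^2 - 1 (stands in for the SVM surface (6)).
g = lambda x: float(np.dot(x, x) - 1.0)
grad_g = lambda x: 2.0 * x
x0 = np.array([0.3, 0.1])
x_star, rho = project_to_boundary(g, grad_g, x0, x_init=np.array([0.9, 0.3]))
print("closest boundary point:", x_star, "margin rho =", rho)
```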

5. Examples

In this section we apply the previous results to the case studies reported in [4, 27, 28]. The original designs are reformulated according to (3). The selection of good cost functions is a crucial issue in system-level optimization, with implications that may become subtle when maximizing robustness. In our experiments, we used the following cost prototype:

(9)

A few considerations may help explain the form of (9). First, the volumes of the ellipsoid and of the hypercube increase with the number of dimensions for a constant margin; therefore, an overall normalization is achieved through the exponents applied to the products of system and architecture margins. As far as architecture margins are concerned, we can partition the architecture margin vector by platform component. The elements associated with one component are strongly related, describing an ellipsoid embedded in its performance space; therefore, a single element is sufficient to describe the margin of each component. If we consider that the composition of blocks is only as robust as its weakest block, we can obtain a different cost function by taking the minimum over the component margins. A saturation function is used to limit the sensitivity to the architecture margin, since overly wide margins may cause degenerate robustness/performance tradeoffs. Finally, if we analyze the Pareto optimal curves as a function of the cost parameters, we find that the relative importance of two parameters is controlled by

(10)

which sets the relative impact of variations of the two margins involved. For the margin pairs of interest, we obtain (for small margins)

(11)

which makes it clear how this parameter can be used to control the sensitivity to the architecture margin without resorting to exponent ranges that may generate numerical issues during optimization. Equations (10) and (11) can be used as guidelines to set the parameters in (9), as exemplified in the following case studies.
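Since the exact form of (9) is not reproduced above, the following sketch only illustrates the idea discussed in this section: system and architecture margins are combined into a single robust objective, with a saturation that caps the reward for overly large architecture margins and a tunable exponent setting their relative weight. The functional form and all numbers are our own assumptions, not the paper's cost prototype.

```python
import numpy as np

def robust_cost(rho_sys, rho_arch, alpha=1.0, sat=0.15):
    """Illustrative robust cost (to be maximized): geometric mean of the
    normalized system margins times a saturated architecture margin.
    `sat` caps the reward for architecture margins beyond, e.g., 15%;
    `alpha` sets the relative weight of the architecture term."""
    rho_sys = np.clip(np.asarray(rho_sys, dtype=float), 1e-9, None)
    sys_term = np.exp(np.mean(np.log(rho_sys)))       # geometric mean of margins
    arch_term = min(rho_arch, sat) ** alpha           # saturated architecture margin
    return sys_term * arch_term

# Example: margins on (power, DNL, INL, SNR) plus one architecture margin.
print(robust_cost([0.10, 0.05, 0.08, 0.12], rho_arch=0.27))
print(robust_cost([0.10, 0.05, 0.08, 0.12], rho_arch=0.27, alpha=2.0))
```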

As a final remark, we notice that architecture performance margins are taken on lower-dimensional models than the corresponding platform ones. In fact, some parameters are simply "ancillary" parameters required for correct composition of platform models, and as such are not related to the robustness of the solution. One other parameter, which we did not include when computing margins at the component (architectural) level, is power. Power may be considered as an annotation on circuit performances. In fact, in our case studies, if a given circuit exhibits a larger (or smaller) power consumption with respect to the estimated one, it does not affect circuit performances (which is obviously not true if, for example, gain is not met). We remark that this is an arbitrary design choice and is not related to the presented methodology. On the other hand, in our examples we introduce margins on power at the system level to trade the global power consumption against the robustness of the solution. Also, area has not been exploited as a robustness criterion, but it could be seamlessly introduced into the robust optimization scheme to export to the system level the area penalties involved in topology selection.

5.1. Pipeline ADC

In [4] we performed design space exploration of a 14-bit, 80 MS/s pipeline analog-to-digital converter (ADC) in a 0.13 μm CMOS technology with a 2.5 V analog supply. The simplified block diagram of the system is represented in Figure 6. The ADC is made up of 4 multibit stages and includes digital calibration circuits to enhance performance. In particular, the digital-to-analog subconverter (DAC) errors are canceled with the DAC Noise Cancelation (DNC) technique [29], and the first-stage Sample-and-Hold Amplifier (SHA) errors are corrected through a Gain and Distortion Error Correction (GDEC) algorithm as in [30]. The SHA gain and third-order distortion coefficients are first estimated from the digital back-end by a PolyEstim circuit. At the same time, the distorted SHA characteristic is effectively inverted (rectified) by the PolyInv circuit. As shown in Figure 7, the interstage residue amplifiers are fully differential switched-capacitor systems (FB C in Figure 6) implemented with a telescopic Operational Transconductance Amplifier (OTA). Loading effects and switch nonidealities are also included in the model. The OTA optimization needs to be performed under the hypothesis that the digital calibration circuits are operating, as detailed in [5]. In order to perform efficient high-level exploration across the analog/digital boundary while reducing the complexity of the problem, we provided characterizations and feasible performance models for the main blocks, that is, the digital calibration logic and the first-stage residue amplifier. The remaining part of the converter was considered ideal. Indeed, the first stage in a multibit pipeline ADC is the most critical block, since the accuracy required in terms of gain and linearity is the highest; the remaining stages have been lumped into one block in our macromodel. Since the first stage provides the first 4 bits, a nominal gain of 8 is required of the SHA. However, the presence of the digital correction circuit relaxes this constraint, enabling power savings. In the nominal optimization, the cost function aims at minimizing the power consumption of the overall ADC, subject to the performance models and to minimum system requirements on DNL, INL, and the signal-to-noise ratio (SNR) due to thermal noise. The architectural space includes four correction algorithms to invert the polynomial nonlinearity, corresponding to different accuracy and power consumption levels, based on [5]. Performances are evaluated through the behavioral model of the mixed-signal platform library, in which each component is embedded.
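As a toy illustration of the analog/digital interplay described above, the sketch below models the first-stage characteristic as a gain term plus a third-order distortion term and digitally "rectifies" it by inverting the polynomial, in the spirit of the GDEC correction; the coefficient values and the fixed-point inversion are illustrative assumptions, not the implemented calibration algorithm.

```python
import numpy as np

# First-stage SHA characteristic: y = a1*x + a3*x^3 (gain error and 3rd-order
# distortion), for a nominal interstage gain of 8. Coefficients are invented.
a1, a3 = 7.82, -0.35
x = np.linspace(-0.5, 0.5, 11)   # normalized residue input
y = a1 * x + a3 * x**3

# Digital correction: invert the polynomial by fixed-point iteration,
# x_hat <- (y - a3 * x_hat^3) / a1, assuming the linear term dominates.
x_hat = y / a1
for _ in range(5):
    x_hat = (y - a3 * x_hat**3) / a1

print("max residual error assuming ideal gain of 8:", np.max(np.abs(y / 8.0 - x)))
print("max residual error after digital correction:", np.max(np.abs(x_hat - x)))
```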

Figure 6

Pipelined converter simplified block diagram—feasible performance models have been generated for the blocks in green: the SHA and the gain error digital calibration block (GDEC).

Figure 7

Single-ended equivalent (simplified) circuit of the switched capacitor SHA.

The extension of the optimization problem to the robust approach has been achieved through the following formulation, based on the cost template in (9):

(12)

where the system margins are taken on power, DNL, INL, and SNR, respectively. The architecture margin is normalized and is computed by exploiting an ellipsoid in which the weights for the OTA bandwidth (BW) and open-loop gain are twice those of the other performance indices. A dedicated cost parameter controls the cost function sensitivity to the architecture margin and, hence, the architecture margin at the optimum.

Several optimizations with different cost parameter values were efficiently performed through simulated annealing, with an average time of 13 hours per run. Three representative results are reported in Table 1 to demonstrate how the tradeoffs between system margins (especially on power) and architecture margins (especially on gain and bandwidth) can be thoroughly explored within our methodology. In the first configuration, more emphasis has been given to the architectural constraint margins, thus obtaining higher values (e.g., up to 27% on bandwidth). In the other two configurations, the focus is more on system margin maximization: in the second, we obtained a 17% margin on bandwidth, which drops to 0.8% in the third, the overall minimum-power solution.
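To indicate how such runs could be scripted, the sketch below drives a global stochastic optimizer (SciPy's dual annealing, standing in for the simulated annealing engine used in the experiments) over a toy two-dimensional configuration space, minimizing the negative of a margin-based cost with a penalty for infeasible points. Every model, bound, and constant in it is a placeholder.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Toy configuration space: (OTA bias current [mA], sampling cap [pF]).
def margins(kappa):
    ibias, cap = kappa
    bw_margin = 500.0 * ibias / cap / 200.0 - 1.0     # bandwidth margin vs. 200 MHz
    power_margin = 1.0 - 1.2 * ibias / 6.0            # power margin vs. 6 mW budget
    return np.array([bw_margin, power_margin])

def neg_robust_cost(kappa):
    m = margins(kappa)
    if np.any(m <= 0.0):                   # infeasible point: heavy penalty
        return 1e3 - np.sum(np.minimum(m, 0.0))
    return -np.prod(np.minimum(m, 0.15))   # maximize saturated margin product

res = dual_annealing(neg_robust_cost, bounds=[(0.1, 5.0), (0.5, 10.0)], seed=1)
print("best configuration:", res.x, "margins:", margins(res.x))
```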

Table 1: Performance of the optimal ADC, OTA, and GDEC circuits for three different cost functions, together with the corresponding system and architecture margins.

We notice how, in lower-power designs, the system-level margin on SNR tends to decrease as well. Moreover, the unity-gain frequency (and the bandwidth), which is the key parameter influencing the settling behavior of the SHA, tends to decrease, thus impacting the accuracy of the system (i.e., INL, DNL, and SNR) and mandating more accurate and power-expensive calibration circuits. Finally, we compare the results in Table 1 with the optimal design reported in [4]. Using a nominal optimization technique, we obtained 52.5 mW ADC power consumption with approximately 9% margins. This implies that, in the nominal formulation, it was still possible to obtain reasonable architectural and system margins by acting on both the optimization constraints and the feasible performance model generation constraints as viable safety margin knobs. However, we had no means to quantitatively explore and efficiently control the performance/margin tradeoffs involved, as we have demonstrated here with the robust formulation.

5.2. UMTS Front-End

In this and the following subsections, we demonstrate our methodology on RF systems. We start with the robust optimization of the UMTS receiver front-end presented in [27]. The receiver consists of a Low-Noise Amplifier (LNA) and a mixer for a direct-conversion UMTS receiver. All components were characterized and embedded in a platform library. In the nominal optimization, the cost function aims at minimizing the power consumption of the overall receiver, subject to compliance with standard UMTS tests and to the performance models. The architecture space is formed by two LNA topologies and one direct-conversion mixer, as reported in Figures 8, 9, and 10. The system-level constraints (directly derived from UMTS specifications) are compactly formulated as

(13)

where the terms are, respectively, the output-referred second- and third-order distortion powers, the output-referred noise power, the output-referred power due to reciprocal mixing, and the front-end gain. The standard specifies the conditions under which system performance has to be assessed. All quantities are evaluated through the receiver behavioral model described in [27]. Exploiting the robust formulation (3) and the cost function template (9), the following robust optimization problem has been obtained:

(14)

A dedicated parameter has been used to control the amount of margin on the architecture constraints and thus the architecture margin at the optimum. The corresponding term has been set so as to saturate at margins larger than 15% (the architecture margin is normalized). Another margin variable determines the power consumption margin, with a weight controlled by its own parameter, while two further margins are set on the minimum interference requirements. Since second-order terms are crucial in a direct-conversion receiver, we increased the corresponding weight by squaring its margin term. Finally, an additional term measures the mismatch on the interface capacitance between the LNA and the mixer, and has to be minimized, as detailed in [27], in order to guarantee correct platform composition.
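A minimal, hedged sketch of how a constraint of the kind in (13) could be evaluated from behavioral-model outputs: the output-referred desired-signal power is compared with the power sum of output-referred noise, second- and third-order distortion, and reciprocal mixing under one test condition. The power levels and required SNR below are illustrative, not the UMTS specification values.

```python
import numpy as np

def db_to_mw(p_dbm):            # dBm -> mW
    return 10.0 ** (p_dbm / 10.0)

def umts_like_test(gain_db, p_sig_dbm, p_n_out_dbm, p_im2_out_dbm,
                   p_im3_out_dbm, p_rm_out_dbm, snr_req_db):
    """Return the margin (in dB) by which the output SNR of the front-end
    exceeds the required SNR under one interference test condition.
    All interferer contributions are output-referred and summed in power."""
    p_sig_out = db_to_mw(p_sig_dbm + gain_db)
    p_interf = (db_to_mw(p_n_out_dbm) + db_to_mw(p_im2_out_dbm)
                + db_to_mw(p_im3_out_dbm) + db_to_mw(p_rm_out_dbm))
    snr_db = 10.0 * np.log10(p_sig_out / p_interf)
    return snr_db - snr_req_db

# Illustrative numbers only (not the UMTS specification values).
print("SNR margin [dB]:",
      umts_like_test(gain_db=30.0, p_sig_dbm=-99.0, p_n_out_dbm=-80.0,
                     p_im2_out_dbm=-85.0, p_im3_out_dbm=-87.0,
                     p_rm_out_dbm=-90.0, snr_req_db=7.0))
```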

Figure 8

Schematic for the n-type input stage LNA used in the UMTS receiver front-end.

Figure 9

Schematic for the np-type input stage LNA used in the UMTS receiver front-end.

Figure 10

Schematic for the mixer used in the UMTS receiver front-end.

An optimization trace projected onto the Power-NF plane for the LNA is reported in Figure 11. The robust approach is able to perform architecture selection between the LNA topologies, as shown in Table 2. Larger values of the cost parameter allow more aggressive optimizations, as shown by the lower power consumption levels. Moreover, it is evident that the optimal point does not lie on the Pareto optimal curve of the LNA performances, as was the case for the nominal design in [27]. In this example, area occupation is not directly traded with system robustness against variations. Table 2 shows the performances at the optimum together with the main performance indices and corresponding margins. In this case, since direct-conversion architectures are extremely sensitive to second-order distortion, we exploited an ellipsoid to compute the architecture margin so that the weight of the second-order distortion coefficient is 3 times that of the other performance indices. Overall, compared to the optimal nominal design, a significant increase in power is observed (+32% for one of the cases in Table 2), but the final system allows for wide margins to compensate for modeling inaccuracies and layout effects.

Table 2: UMTS receiver robust optimization results as a function of the cost parameter. Larger values decrease the sensitivity to the architecture margin in (9). Note that the LNA topology selection is also affected by the robust optimization.
Figure 11

Optimization results compared with the nominal optimization trace (projections on the LNA NF-Power space). Red dots correspond to npMOS instances, blue dots to nMOS instances. Robust results do not lie on the Pareto optimal curve.

5.3. UWB Front-End

In this subsection, we proceed with the optimization, under robustness constraints, of a UWB receiver based on the architecture in [28]. The RF front-end includes two main building blocks, which were both characterized and embedded into a platform library. The first block, shown in Figure 12, consists of the Tx/Rx switch, the wideband (3.1–4.8 GHz) input matching network, and the LNA, which features a stagger-tuning technique to achieve gain flatness over the wide band of interest. The second block, represented in Figure 13, includes a passive mixer and a low-noise buffer amplifier to boost the mixer gain.

Figure 12

Schematic of the LNA used in the UWB receiver front-end together with its input matching network.

Figure 13

Schematic of the passive mixer and buffer used in the UWB receiver.

In the nominal optimization, we aimed at minimizing the power consumption of the RF front-end while meeting system constraints on linearity, total gain, and noise figure (NF). Similar to (14), the robust optimization problem is formulated as follows:

(15)

where the system performance figures (power, linearity, gain, and NF) are calculated from the LNA and mixer block performances using RF cascade equations, as follows:

(16)

The behavioral model is then built out of (16), with some additional validity laws enforcing correct block composition. As in the UMTS optimization problem, a dedicated parameter has been used to control the amount of margin on the architecture constraints and, hence, the architecture margin at the optimum. For this particular application, power, noise figure, and linearity, the most critical performance parameters, have been given the same relative weight. One margin variable dictates the power consumption margin, while robustness with respect to gain and second-order distortion variations is less of a concern for our UWB communication system. Two further variables set the system-level margins on the noise figure and linearity of the optimum system design. As in (14), the saturation function caps margins that become too large (above 15%).
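The cascade relations referenced in (16) are standard; the sketch below computes the cascaded gain, noise figure (Friis equation), and IIP3 of a two-stage LNA-mixer chain. The numerical values are placeholders, not the performances of the circuits in Figures 12 and 13.

```python
import numpy as np

def cascade_two_stage(g1_db, nf1_db, iip3_1_dbm, g2_db, nf2_db, iip3_2_dbm):
    """Cascaded gain, noise figure (Friis), and IIP3 of two stages.
    Gains/NF in dB, IIP3 in dBm; linear (power) quantities used internally."""
    g1 = 10 ** (g1_db / 10)
    f1, f2 = 10 ** (nf1_db / 10), 10 ** (nf2_db / 10)
    a1, a2 = 10 ** (iip3_1_dbm / 10), 10 ** (iip3_2_dbm / 10)   # IIP3 in mW

    gain_db = g1_db + g2_db
    nf_db = 10 * np.log10(f1 + (f2 - 1) / g1)                   # Friis formula
    iip3_dbm = -10 * np.log10(1 / a1 + g1 / a2)                 # cascaded IIP3
    return gain_db, nf_db, iip3_dbm

# Placeholder LNA and mixer+buffer performances.
print(cascade_two_stage(g1_db=15.0, nf1_db=3.0, iip3_1_dbm=-5.0,
                        g2_db=10.0, nf2_db=12.0, iip3_2_dbm=5.0))
```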

In Table 3, we report the optimal performance as a function of the cost parameter, together with the main performance figures and their margins. As in the UMTS case, larger values imply more aggressive optimizations, better performance, and lower margins. However, in this case this does not necessarily translate into lower power consumption, since the system-level margin with respect to power has the same weight as the other margins. Since the system performance is more sensitive to noise and linearity, the ellipsoids used to compute the architecture margins were selected so that the LNA noise figure and the mixer linearity have a weight twice that of the other performance indices. Overall, the final system allows for much wider margins with respect to the nominal solution, albeit at the cost of increased power consumption (30% for one of the reported cases).

Table 3: UWB RF front-end receiver robust optimization results as a function of the cost parameter. Larger values decrease the sensitivity to the architecture margin in (15).

As a final comment on the results, we could not perform a Monte Carlo analysis on the actual circuits for any design since the complexity of our systems rules out the possibility of performing any reasonable number of simulations to get meaningful results. In fact, this was an important motivation to introduce robustness early in the design cycle starting from the system level.

6. Conclusions

Platform-Based Design (PBD) is a promising methodology for embedded system design, aiming to improve design productivity by encouraging design reuse, orthogonalization of concerns, and system-level optimization. In this paper, we have illustrated the extension of PBD to mixed-signal systems. Furthermore, to ensure robustness with respect to both model and design uncertainties, we have proposed the application, within the PBD framework, of design-centering techniques. The proposed approach allows robust hierarchical design without any assumption on the mathematical properties of the system models, leading to a general formulation that can be used for robust automatic design-space exploration.

To demonstrate the effectiveness of the proposed design methods in different domains, we presented three case studies: a mixed-signal pipeline ADC and two RF front-ends, respectively, for UMTS and UWB receivers. In all cases, designs were efficiently composed from precharacterized components, as well as optimized at the system level, demonstrating the flexibility of the approach and significant improvements in terms of robustness.

References

  1. Sangiovanni-Vincentelli A: Quo vadis, SLD? Reasoning about the trends and challenges of system level design. Proceedings of the IEEE 2007,95(3):467-506.

  2. Sangiovanni-Vincentelli A: Defining platform-based design. EE-Design 2002.

  3. De Bernardinis F, Nuzzo P, Sangiovanni-Vincentelli A: Robust system level design with analog platforms. Proceedings of IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers (ICCAD '06), November 2006, San Jose, Calif, USA 334-341.

  4. De Bernardinis F, Nuzzo P, Sangiovanni-Vincentelli A: Mixed signal design space exploration through analog platforms. Proceedings of the 42nd Design Automation Conference (DAC '05), June 2005, Anaheim, Calif, USA 875-880.

  5. Nuzzo P, De Bernardinis F, Sangiovanni-Vincentelli A: Platform-based mixed signal design: optimizing a high-performance pipelined ADC. Analog Integrated Circuits and Signal Processing 2006,49(3):343-358. 10.1007/s10470-006-9067-8

  6. De Bernardinis F: System level mixed signal design with analog platforms, Ph.D. dissertation. University of California, Berkeley, Calif, USA; 2005.

  7. Rabaey JM, De Bernardinis F, Niknejad AM, Nikolić B, Sangiovanni-Vincentelli A: Embedding mixed-signal design in systems-on-chip. Proceedings of the IEEE 2006,94(6):1070-1087.

  8. Liu H, Singhee A, Rutenbar RA, Carley LR: Remembrance of circuits past: macromodeling by data mining in large analog design spaces. Proceedings of the 39th Design Automation Conference (DAC '02), June 2002, New Orleans, La, USA 437-442.

  9. Kiely T, Gielen G: Performance modeling of analog integrated circuits using least-squares support vector machines. Proceedings of Design, Automation and Test in Europe Conference and Exhibition (DATE '04), February 2004, Paris, France 1: 448-453.

  10. De Bernardinis F, Jordan MI, Sangiovanni-Vincentelli A: Support vector machines for analog circuit performance representation. Proceedings of the 40th Design Automation Conference (DAC '03), June 2003, Anaheim, Calif, USA 964-969.

  11. Stehr G, Graeb H, Antreich K: Analog performance space exploration by Fourier-Motzkin elimination with application to hierarchical sizing. Proceedings of IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers (ICCAD '04), November 2004, San Jose, Calif, USA 847-854.

  12. Brayton R, Hachtel G, Sangiovanni-Vincentelli A: A survey of optimization techniques for integrated-circuit design. Proceedings of the IEEE 1981,69(10):1334-1362.

  13. Director SW, Hachtel GD: The simplicial approximation approach to design centering. IEEE Transactions on Circuits and Systems 1977,24(7):363-372. 10.1109/TCS.1977.1084353

  14. Low KK, Director SW: A new methodology for the design centering of IC fabrication processes. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 1991,10(7):895-903. 10.1109/43.87599

  15. Ochotta E, Rutenbar R, Carley R: Synthesis of High Performance Circuits. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1996.

  16. Box G, Draper N: Empirical Model-Building and Response Surfaces. John Wiley & Sons, New York, NY, USA; 1987.

  17. Chang H, Charbon E, Choudhury U, et al.: A Top-Down Constraint-Driven Design Methodology for Analog Integrated Circuits. Kluwer Academic Publishers, Dordrecht, The Netherlands; 1997.

  18. Chang H, Liu E, Neff R, et al.: Top-down, constraint-driven design methodology based generation of n-bit interpolative current source D/A converters. Proceedings of the Custom Integrated Circuits Conference (CICC '94), May 1994, San Diego, Calif, USA 369-372.

  19. Vassiliou I, Chang H, Demir A, Charbon E, Miliozzi P, Sangiovanni-Vincentelli A: Video driver system designed using a top-down, constraint-driven methodology. Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers (ICCAD '96), November 1996, San Jose, Calif, USA 463-468.

  20. Van Der Plas G, Gielen G, Sansen W: A Computer-Aided Design and Synthesis Environment. Kluwer Academic Publishers, Dordrecht, The Netherlands; 2004.

  21. Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, Cambridge, UK; 2004.

  22. Gopalakrishnan P, Xu Y, Pileggi LT: Robust analog/RF circuit design with projection-based posynomial modeling. Proceedings of IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers (ICCAD '04), November 2004, San Jose, Calif, USA 855-862.

  23. Xu Y, Nausieda I, Hsiung K-L, Boyd S, Li X, Pileggi L: OPERA: optimization with ellipsoidal uncertainty for robust analog IC design. Proceedings of the 42nd Design Automation Conference (DAC '05), June 2005, Anaheim, Calif, USA 632-637.

  24. Wang J, Pileggi LT, Chen T-S, Chiang W: Performance-centering optimization for system-level analog design exploration. Proceedings of IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers (ICCAD '05), November 2005, San Jose, Calif, USA 421-428.

  25. Press WH, Teukolsky SA, Vetterling WT, Flannery BP: Numerical Recipes in C. 2nd edition. Cambridge University Press, Cambridge, UK; 1992.

  26. Bertsekas DP: Nonlinear Programming. Athena Scientific, Nashua, NH, USA; 1995.

  27. De Bernardinis F, Gambini S, Vincis F, Svelto F, Castello R, Sangiovanni-Vincentelli A: Design space exploration for a UMTS front-end exploiting analog platforms. Proceedings of IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers (ICCAD '04), November 2004, San Jose, Calif, USA 923-930.

  28. Li Y, Wu C-C, Sangiovanni-Vincentelli A, Rabaey JM: Design and optimization of an MB-OFDM ultra-wideband receiver front-end. Proceedings of the 4th IEEE International Conference on Circuits and Systems for Communications (ICCSC '08), May 2008, Shanghai, China 502-506.

  29. Galton I: Digital cancellation of D/A converter noise in pipelined A/D converters. IEEE Transactions on Circuits and Systems 2000,47(3):185-196. 10.1109/82.826744

  30. Murmann B, Boser BE: A 12-bit 75-MS/s pipelined ADC using open-loop residue amplification. IEEE Journal of Solid-State Circuits 2003,38(12):2040-2050. 10.1109/JSSC.2003.819167

Author information

Correspondence to Pierluigi Nuzzo.

Rights and permissions

Open Access. This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Nuzzo, P., Sun, X., Wu, CC. et al. A Platform-Based Methodology for System-Level Mixed-Signal Design. J Embedded Systems 2010, 261583 (2010). https://doi.org/10.1155/2010/261583