In 1995, the future was clear. During the prior decade, digital ICs had grown from thousands of gates to hundreds of thousands, and thus design had outgrown the schematic diagram. Fortunately, with the advent of hardware description languages and tools to simulate and synthesize them, our level of abstraction had been raised. We could now describe in a few hundred lines of VHDL or Verilog what had previously taken hundreds of pages of painstaking graphical schematic entry. Life was good. Leverage was gained. Obviously, then, by the end of the next decade, we’d all have chased Mr. Moore to the next level of abstraction with another 10X boost in productivity.

Well, there’s still the guy with the sandwich sign on the corner reading “Behavioral Synthesis is Nigh!” but the rest of us have grown more skeptical over the past few years. Where is the promised land of 100K gates per line of code?

Despite promises that both rival and resemble those of cold fusion, no company has yet successfully marketed a production-worthy design tool flow that can produce and verify high-quality designs from behavioral descriptions. The design community is a pragmatic one that won’t leap to a new methodology based on promises alone. There must be a clearly demonstrated track record of improved productivity to justify a difficult and expensive methodology switch, and high-level language design has yet to accomplish that.

What exactly are the benefits of high-level design? Most proponents agree on three key advantages of raising the level of abstraction. First, far fewer lines of code are required to specify any given level of functionality. Typical ratios range from 10:1 to 100:1. It would be reasonable to expect that a design requiring 10,000 lines of VHDL or Verilog at the RTL level might be described in only 100-200 lines of algorithmic C or C++ code. Obviously, fewer lines of code mean fewer lines to debug and considerable time saved in the specification and debug phase.
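To make the ratio concrete, here is a hypothetical sketch (not taken from any vendor's flow) of the kind of algorithmic C description proponents have in mind: an 8-tap FIR filter whose behavior fits in a few lines, while an RTL equivalent would spell out the registers, the multiply-accumulate datapath, and the control state machine explicitly.

```c
#include <stddef.h>

/* Algorithmic (behavioral) description of an 8-tap FIR filter.
   The loop says only WHAT is computed; an RTL version would also
   say HOW: register declarations, a MAC unit, and a control FSM,
   typically spread over many times as many lines of VHDL/Verilog. */
int fir8(const int coeff[8], const int sample[8])
{
    int acc = 0;
    for (size_t i = 0; i < 8; i++)
        acc += coeff[i] * sample[i];  /* multiply-accumulate */
    return acc;
}
```

The point is not this particular filter but the density: behavior only, with no architectural commitments.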

Second, higher levels of abstraction simulate faster. While much is made of the myth that one language has some superiority in speed over another, the real difference in simulation performance is in the level of detail of the model. A high-level description has less functional detail and can be simulated much faster, almost without regard to what simulation technology is used. Typical gains claimed in simulation speed range from 10X on the conservative side to over 1,000X in the extremes. These gains are particularly important in system-on-chip designs, where hardware models must be executed at very high speeds to allow system-level transactions to be simulated.

Third, high-level models are more technology independent. In the pure hardware domain, this means that the lack of architectural and structural information in a behavioral hardware model allows it to be seamlessly re-targeted to a variety of implementation technologies such as ASIC or FPGA without re-work. Normally, an architecture (and its associated RTL) optimized for a register-rich FPGA will not work well on an ASIC, and vice versa. With behavioral high-level design, the architecture is created during the implementation phase, so the original model is free of technology-specific information. In the best case, this applies to independence at the hardware-software level as well. Certainly the ultimate goal of co-design is to have specifications in some high-level language such as C or C++ that can be targeted to software or to any hardware implementation technology. This implementation independence affords the maximum flexibility in reconfiguring and reusing modules to meet a wide variety of performance, cost, and power goals.

So why hasn’t all this benefit found its way into our design environment? Let’s examine the causes behind the delayed migration to the next generation of hardware description languages and the next level of hardware-design abstraction. Then, let’s take a look at the current entries on the market that are attempting to change the status quo.

First on our list of causes is designer reluctance. As we’ve mentioned before, the driving factor in most ASIC designers’ lives is risk avoidance, or, more specifically, blame avoidance. If the ASIC prototype comes back with a bug, and another design spin is required to fix it, it can mean hundreds of thousands of dollars of direct costs and weeks of schedule slip (which can be even more expensive in terms of missed market opportunity). No one wants to be blamed for this. As a result, those choosing the methodologies for ASIC design teams tend to stick to wide, paved, well-traveled roads. Trying a new methodology might give you a leg up, but if anything goes wrong, the full burden of failure will surely land on the hapless adventurer who suggested the process change. The potential reward of a few weeks gained is not worth the risk of blame in the highly probable scenario where something goes tragically wrong.

In our happy world of FPGA design, however, we do not have this ubiquitous Sword of Damocles dangling over our heads. There is no massive cost or schedule penalty for an additional design turn, and a week saved is truly a week earned. Design teams often choose FPGAs precisely because they want the shortest possible schedule. As a result, FPGA designers have demonstrated that they are much more willing than their ASIC counterparts to try new tools and methodologies in order to optimize time-to-market.

The willingness of FPGA teams to try new methodologies creates fertile ground for behavioral design advocates. The problem until recently has been that FPGA designs were generally too simple to demand the higher productivity promised by behavioral design. If the original content portion of an FPGA design required no more than a few thousand lines of RTL description, the RTL phase of the design was reasonably short. Even a huge productivity gain in the language was rather insignificant in the grand scheme of a system design project.

Now that FPGA designs are reaching millions to tens of millions of equivalent gates, however, the pain is increasing. This increased pain puts smiles on the faces of those who develop and market new design tools and methodologies.

The second problem plaguing adoption of high-level language design is lack of a complete solution. While it seemed reasonable that you could accelerate the implementation portion of your design by creating your language-based specification at a higher level of abstraction, the verification folks were left scratching their heads. A behavioral description is missing precisely the kind of architectural and structural information needed by verification engineers to validate the design through its various phases. Changing the design entry paradigm would demand a completely fresh approach to verification, and no one was volunteering a solution.

Slowly, the tools industry has responded with a variety of design verification flows that attempt to capture design intent at a high level along with the behavioral specification, and automatically or semi-automatically generate more detailed test benches as the architecture is created and refined.

The third, and perhaps most important, reason for slow adoption of high-level design languages is implementation tool inadequacy. In RTL-land, creating a suite of tools to synthesize and realize a design from an HDL description is a daunting task requiring a huge investment in software engineering and placing a premium on hard-to-find experts who can straddle the line between experienced logic designer and master software developer. While the algorithms for logic synthesis may be difficult, those for behavioral design are considerably more subtle and complex.

In RTL-based design, the input specification already contains the notion of the micro-architecture of the design. The synthesis tool is left primarily to optimize combinational logic between registers within the confines of a micro-architecture created by a talented engineer. In high-level design, however, only the behavior of the desired circuit is specified. It is up to the synthesis software to understand the desired behavior and to then create a suitable hardware architecture to realize that behavior.

In RTL, the design goals are easy to characterize. The synthesis tool needs to create the smallest, most efficient design possible that still meets all of the timing constraints supplied by the designer. Behavioral design tools, however, must create an architecture to satisfy a fuzzy, unspecified set of constraints in a tradeoff between overall throughput, power consumption, and cost.

Also, in RTL, the range of variation in quality of results (QoR) from various optimization strategies and tools is comparatively small. 20-50% from specification would be considered a large deviation. In behavioral design, however, it is relatively trivial to create variations of thousands of percent in performance, area, and power by altering tradeoffs involving latency, degree of parallelism, pipelining, partitioning, and memory and control architectures. This means that the challenge faced by a tool to create an architecture that performs as well as a “hand-coded” architecture is substantial, and the penalty for failure could easily be a design that’s 100X too large or 10X too slow.
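The architectural latitude described above can be illustrated with a hedged sketch (the functions and the area/latency figures in the comments are illustrative assumptions, not measurements from any tool): two functionally identical dot-product implementations that a behavioral synthesis tool might choose between, one resource-shared and one fully parallel.

```c
/* Two architectures for the same behavior. A behavioral tool picks
   between them (and many intermediates) based on constraints; the
   roughly 8x swing in multiplier count and cycle count hinted at
   here is exactly the kind of variation the text describes. */

/* Resource-shared: one multiply-accumulate unit reused across
   eight "cycles" -- small area, long latency. */
int dot8_shared(const int a[8], const int b[8])
{
    int acc = 0;
    for (int i = 0; i < 8; i++)
        acc += a[i] * b[i];
    return acc;
}

/* Fully parallel: eight multipliers and an adder tree -- large
   area, short latency. Same result, very different hardware. */
int dot8_parallel(const int a[8], const int b[8])
{
    int p0 = a[0]*b[0], p1 = a[1]*b[1], p2 = a[2]*b[2], p3 = a[3]*b[3];
    int p4 = a[4]*b[4], p5 = a[5]*b[5], p6 = a[6]*b[6], p7 = a[7]*b[7];
    return ((p0 + p1) + (p2 + p3)) + ((p4 + p5) + (p6 + p7));
}
```

Add pipelining, memory banking, and partitioning choices on top of this, and the thousands-of-percent QoR spread becomes easy to see.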

A final obstacle to the widespread migration to high-level design is progress in RTL design itself. In a continuing effort to make designers more productive, FPGA and EDA vendors have dramatically improved the efficiency of RTL design over the past decade. Sophisticated inferring of high-level constructs such as memories and arithmetic operators from RTL descriptions, along with rich libraries of IP, have made the traditional design flow considerably more productive and have lessened the pain and thus the need to migrate to higher-level design languages. “We have worked with our EDA partners for years to improve the efficiency and productivity of the HDL-based design flow,” says Altera’s James Smith, Director of EDA Partnerships. “Advancements in FPGA tools in inferring IP and blocks and leveraging modes of logic structure have deferred the problem.”

Today, a number of companies are claiming to have made significant progress in resolving the issues with high-level design. Typically, their solutions are constrained to a specific design domain so that the problem is more bounded. Control-dominated designs, for example, are more challenging architecturally than datapath-dominated designs. For this reason, many companies have focused on datapath-intensive applications areas such as digital signal processing (DSP).

Other entrants address the QoR challenge by choosing application domains that sidestep the performance issue somewhat. Areas such as reconfigurable computing and hardware/software prototyping are much more lax in their efficiency and performance standards than high-performance, high-volume hardware applications. A design module that, for example, performs only marginally well in software might get a welcome performance boost from even a mediocre to poor hardware implementation.

Synopsys has definitely been the leader among large EDA suppliers in introducing new languages, tools, and methodologies to attack high-level design. Beginning with their introduction in the mid-1990s of Behavioral Compiler (a tool which synthesizes behavioral-level HDL), Synopsys has continued for almost a decade to press the sometimes glacial migration toward higher levels of abstraction.

Most recent, and arguably most successful, among those efforts is their SystemC language, which bridges the gap between traditional HDL-based hardware design and behavior by allowing designs to be described at the more familiar (to hardware designers) register-transfer level as well as the more powerful behavioral level. This allows hardware designers to reap some of the benefits of high-level language while designing at a familiar level of abstraction, then to migrate to more abstract specifications, as both the designer and the design environment grow more sophisticated. “Over 50% of SystemC adopters start out at the register-transfer level,” says Aaik van der Poel, product manager for SystemC at Synopsys, “but the biggest benefits kick in at higher levels of abstraction where system-level simulation becomes more feasible. Simulation will run only as fast as its lowest-level model, so behavioral and transaction-level models of hardware blocks can accelerate system-level simulation by several orders of magnitude.”

Synopsys has pushed SystemC as an industry standard in an effort to accelerate its adoption. Many other vendors support at least a subset of the SystemC standard in a variety of tools and design flows.

Approaching the problem from a different angle is Accellera’s SystemVerilog. Some argue that SystemVerilog is a complement to higher-abstraction languages like C/C++ and SystemC, providing a more robust RTL platform to connect with higher-abstraction models. Others point out that much of the incremental capability gained with SystemVerilog simply brings Verilog on a par with VHDL in terms of its high-level expressive capability.

There is considerable debate, centered on the target audience, about the efficacy of general-purpose high-level languages such as SystemC and SystemVerilog. Naysayers claim that only system architects have an interest in these languages. Since system architects today represent only a tiny fraction of hardware designers, they don’t represent a substantial market for tool vendors. On the other side of the argument, proponents claim that the architectural exploration capabilities of high-level language environments can eliminate hundreds of hours of design and re-design at the register-transfer level. HDL designers, much of whose expertise is in the art of architecture creation in RTL, are understandably more resistant to new methodologies that threaten to obsolete and replace much of their hard-earned expertise, even when those methodologies may offer ultimately higher productivity. Like the legend of John Henry, they may continue with their HDL hammers in their hands until the steam-drill of high-level languages unequivocally proves itself a more productive and reliable substitute.

In the domain-specific arena, several companies have products and methodologies that bring high-level design to a select audience by narrowing the problem to a manageable size. The most obvious problem to attack with high-level language is digital signal processing. DSP designs typically revolve around optimizing a datapath for maximum efficiency in a tradeoff between latency, throughput, fidelity, hardware cost, and power. Behavioral design tools are very good at the resource utilization, loop optimization, and quantization steps required to effect this tradeoff. In addition, DSP is a lucrative segment of the design market with designers who are not already wedded to RTL-based design methodologies.

AccelChip is attacking the DSP-on-FPGA market with a design flow that takes DSP designers from Matlab’s M language directly to FPGA. AccelChip automates the quantization and architectural synthesis steps and removes the requirement for DSP designers (whose expertise is traditionally more in the software domain) to learn details of hardware implementation and description in HDL at the register-transfer level. This strategy avoids a pitfall of other high-level design methodologies: because DSP designers already commonly use Matlab to create their designs, they are not required to migrate to an unfamiliar way of expressing the design in order to take advantage of the high-level flow.

Both Altera and Xilinx offer solutions with their design kits to smooth the path from DSP to hardware. As discussed in our “Beyond Processors” article, both companies offer flows based on Mathworks’ Simulink product. These flows allow assembly of high-level IP blocks to create datapaths for signal-processing applications, then directly map those IP blocks to optimized FPGA hardware.

Also in the arena of DSP design, Mentor Graphics has reportedly developed new technology in the area of high-level synthesis. Mentor’s product, in early use at some large customer sites, uses high-level design constraints to synthesize untimed C++ into an RTL implementation. Mentor’s approach allows the designer to apply interactive constraints and perform algorithm analysis. Mentor claims improved design quality and automatic RTL generation, along with up to 20X productivity improvement over conventional RTL methods. They say their approach often produces results better than hand-coded RTL due to their ability to optimize across the algorithm and hardware implementation.

Celoxica has focused their attention on the embedded systems development process. Embedded systems developers typically deal in the higher-leverage tradeoff space of software versus hardware implementations of various performance-critical modules of their design. Celoxica provides an environment for system-level co-design and co-verification that allows the system designer to evaluate the performance and functionality of the system while experimenting with tradeoffs between various hardware and software options. Jeff Jussel, VP of Marketing for Celoxica, says their solution has seen considerable acceptance in systems companies developing applications such as high-end imaging systems. These complex and demanding applications can reap considerable benefit from the architectural flexibility offered by a co-design methodology. Celoxica’s solution focuses on embedded systems utilizing FPGAs and supports the popular system and platform FPGA solutions. By targeting FPGA hardware directly from C code, Celoxica’s co-design methodology is an excellent platform for quickly evaluating and optimizing embedded designs at the system level.

Altera’s SOPC Builder approaches the embedded design problem with an IP-directed solution that lets the hardware designer work in block-mode, assembling a design from bus architecture peripherals. “We see this technology as useful in profiling a design to see where hardware acceleration is needed,” says Joe Hanson, Director of Marketing for System-Level Tools at Altera. Solutions like Altera’s raise the effective level of abstraction by leveraging the power of pre-designed plug-and-play style IP blocks, without relying on high-level languages. The effects on the design process both in reducing development time and debug complexity are similar. SOPC Builder is included with Altera’s Quartus II development tools.

Will high-level languages change your life in the next two years? The answer is probably “no,” but chances are your design team will want to give them a look soon, as design complexity rises, schedules shrink, and tool capabilities improve.

Kevin Morris, FPGA and Programmable Logic Journal

November 18, 2003



All material on this site copyright © 2003-2005 techfocus media, inc. All rights reserved.
FPGA and Programmable Logic Journal