DAC attendees this year were treated to an informative panel moderated by Adapt-IP’s John Sanguinetti titled, “The Great Simulation/Emulation Faceoff,” a lively debate about whether the simulation/formal verification flow works more effectively than a simulation/emulation flow. Panelists were Stephen Bailey of Mentor Graphics, Dave Kelf from OneSpin, IBM’s Ronny Morad, Frank Schirrmeister at Cadence and Alex Starr of Advanced Micro Devices.
Right out of the chute, Steve Bailey, Mentor’s director of emerging technologies and first panelist to present his position, rejected the premise of the panel, noting that there is no faceoff between emulation and simulation or anything else in verification. The most important problem, he said, is the rising cost of doing verification. He posited that the most expensive aspect of creating a chip today is functional verification. Design teams need every engine and every capability possible to reach the required productivity.
The belief that hardware acceleration, which covers emulation and FPGA prototyping, is taking simulation cycles away from software simulators (that there is a fixed pie we have to divvy up) is a fallacy, Steve continued. The pie for verification cycles is not fixed; it is growing exponentially. The number of software simulation cycles is growing, though perhaps more slowly than the number of cycles going into hardware acceleration or formal. This needs to be addressed to bring the cost of verifying chips under control, he concluded.
Ronny Morad, manager of post-silicon validation technologies and verification analytics group at the IBM Haifa Research Lab, remarked that simulation, emulation, formal are all important for verification, proven methods and tools for many years. He believes there is synergy between emulation and silicon, and emulation plays a bigger role today than it ever did in the past. This trend will continue.
His position is that the silicon itself is essential for verification. He pointed out that the Wilson Research Group and Mentor Graphics did a study in 2014 and found that 70 percent of designs don't make it to first silicon success and require a respin. Other sources indicate that at least one percent of bugs escape to silicon, which gives some motivation for post-silicon validation. Silicon is not just part of a product, but a platform to leverage for verification and for finding the bugs that escaped pre-silicon. Post-silicon validation is a mature discipline and comes with its own set of tools and methodologies, Ronny added.
Next up, Frank Schirrmeister, senior group director, product management at Cadence, described the core engines (formal, and dynamic engines with static pieces to them) and noted the intent to make things faster. That's where hardware acceleration comes in. Yes, he said, there is acceleration, emulation and FPGA prototyping, but there is another dimension, smart, that is not all about speed. Hardware acceleration is brute force: put a rocket on it and make it faster for testbenches, but add the ability to make it smarter with metrics for verification. As a result, different verification flows are emerging for IP, subsystems and the full SoC with its environment, all requiring different combinations of engines. Within the context of smart, design teams can combine engines and different abstractions, and use software as an instrument for verification.
Formal, simulation, acceleration, emulation and FPGA based prototyping all have individual advantages, Frank observed, and there is no real faceoff. Each has its specific ability and design teams need the combination to be successful.
Alex Starr is an AMD fellow and took a few minutes to describe how AMD uses simulation and emulation. He freely offered that AMD has done emulation for quite a long time, starting with post-silicon. He echoed what Ronny said as well –– AMD works in real silicon with JTAG debuggers, platform-level validation workloads then pre-silicon debug using an emulator. The team puts the SoC in the emulator and, he stated, it is tremendously successful. AMD uses emulation on every product it makes.
What has been interesting, commented Alex, is seeing the evolution, from post-silicon shifting left into more verification-style emulation that has expanded into two dimensions. One is IP-level or subsystem-level emulation. The other is SoC emulation, where AMD tackles difficult corner cases and long-running tests. A further angle is software and time-to-market pressure: the faster the product is released, the faster the company makes money. And all of the product features are effectively part of verification, he added.
Dave Kelf is OneSpin’s vice president of marketing and began by saying that everyone knows about verification performance: simulation is running out of steam, and emulation is usually considered purely from a performance viewpoint. He questioned what is really going on in verification and whether the industry has identified the right resources to solve its problems. Simulation, emulation and FPGA prototyping form a nice continuum. Adding formal to handle some of the complexity issues in verification could complete the story and make a difference on real problems.
Verification performance is an issue, but performance requirements seem to be slowing down, Dave stated. The real problem is the complexity of testing. And it’s not just the chip: safety, power, security, cross-team issues and all the other problems associated with design development. More performance is necessary, yes, and emulation for software/hardware co-verification, but he appealed to the industry to get smarter and figure out how to tackle these problems in a slightly different way with a slightly different technology. Formal is the answer, he confided, and will become the third leg of the approaching three-legged stool of verification.
While the discussion didn’t turn out to be a faceoff on simulation, emulation or formal after all, John Sanguinetti and his panel agreed that they are key pieces of any verification strategy and methodology. That’s not to say the verification challenges have been solved. Each panelist pointed to a challenge yet to be tackled. They include verification cost, performance and complexity, and recognition that silicon is essential for verification and post-silicon validation. The need to work smarter and to continue growing deployment modes for emulation was highlighted as well, as was moving formal into the three-legged stool of verification. Certainly, attendees walked out of the meeting room at the Austin Convention Center with a host of ongoing verification challenges to think about.
About Lauro Rizzatti
Lauro Rizzatti is a verification consultant and industry expert on hardware emulation. Previously, Dr. Rizzatti held positions in management, product marketing, technical marketing, and engineering.
In our last edition we asked you to submit answers to a Formal Verification FAQ, and the answers keep coming in.
This set is from Ankit Jain, an avid contributor to FormalWorld.org.
What is Formal Verification?
Ans. Formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
In other words, formal verification is a systematic process that uses mathematical reasoning to verify that design intent (specification) is preserved in implementation (RTL).
What is the difference between Formal and simulation based verification?
Ans. In simulation it is not possible to exercise all possible states of a design, whereas formal verification explores every reachable state. Simulation focuses on scenarios, while formal verification focuses on the correctness of the intended functional behaviour.
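The difference can be illustrated with a toy sketch in Python (purely conceptual: real formal tools operate on synthesized netlists with SAT and BDD engines, not on hand-written transition dictionaries). An exhaustive reachability search visits every reachable state exactly once, so a corner state that simulation would rarely stumble into cannot be missed:

```python
from collections import deque

# Invented toy example: a counter that normally cycles 0..5, but from
# state 5 one rarely-taken edge leads to the "buggy" state 7.
def next_states(s):
    if s == 5:
        return {0, 7}   # state 7 is reachable only via this one edge
    return {(s + 1) % 6}

def reachable(initial):
    """Formal-style exhaustive search: visit every reachable state once."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

print(7 in reachable(0))  # True: exhaustive exploration always finds state 7
```

A directed or random simulation only finds state 7 if some test happens to drive the 5-to-7 edge; the exhaustive search finds it regardless of how unlikely that path is.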
In simulation, we must have an idea of all the scenarios and vectors, and a strong testbench, in order to verify the design adequately, while formal verification does not require a testbench or stimulus.
We may miss corner-case bugs in simulation, while formal verification exhaustively covers the corner cases of the behaviour captured by its properties.
A testbench takes time to create, while formal verification can start early in the design cycle.
In simulation, a bug must propagate to an output pin, or we must write assertions local to the testcase, for it to be found. Formal verification automatically isolates the root cause of a bug with a counterexample.
Simulation requires a large number of engineers to be employed. On the other hand, we can train engineers to write assertions and teach them the formal concepts. After that, much of the verification becomes automatic.
What is the difference between Combinational and Sequential Equivalence checking?
Ans. Combinational synthesis changes only the combinational logic and keeps the PO (Primary Output) and NS (Next State) functions (as functions of PI and CS) unchanged, although the network computing these functions can be completely restructured. In this case, equivalence checking consists of just checking the equivalence of the combinational parts of the two circuits.
In contrast, sequential synthesis additionally uses information about the states unreachable from the initial state to modify the combinational logic further. This can consist of (a) using the unreached states as don’t cares, (b) using the fact that two states are equivalent, (c) using a different encoding for the reachable states, (d) retiming the FFs, or (e) merging two signals in the circuit if in all reachable states the signals have the same values.
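The combinational case can be sketched in a few lines of Python (a conceptual toy; production equivalence checkers use SAT solvers and BDDs rather than brute-force enumeration, and the two functions here are invented). Two structurally different networks are combinationally equivalent if they agree on every input vector:

```python
from itertools import product

# Two structurally different implementations of the same 3-input function:
# spec in a mux form, impl in a sum-of-products form.
def spec(a, b, c):
    return b if a else c

def impl(a, b, c):
    return (a and b) or ((not a) and c)

def comb_equiv(f, g, n_inputs):
    """Combinational equivalence: compare outputs over all input vectors."""
    return all(bool(f(*v)) == bool(g(*v))
               for v in product([False, True], repeat=n_inputs))

print(comb_equiv(spec, impl, 3))  # True
```

The same check, applied to the PO and NS functions of two circuits, is exactly what combinational equivalence checking does; sequential checking must additionally reason about which states are reachable.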
How do you do formal verification in physical design (or post layout)?
Ans. The logical functionality of the post-layout netlist (including any layout-driven optimization) is verified against the pre-layout, post-synthesis netlist.
What is global clocking in Formal Verification?
Ans. A clocking block may be declared as the global clocking for all or part of the design hierarchy. The main purpose of global clocking is to specify which clocking event in simulation corresponds to the primary system clock used in formal verification (see IEEE Std 1800, Annex F.3.1). Such a specification may be done for a whole design, or separately for different subsystems in a design, when there are multiple clocks and building a single global clocking event is challenging.
What is ‘witness’ in Formal Verification?
Ans. A ‘witness’ is a sequence of input vectors that makes the property true while satisfying all the constraints. A witness shows that the property is reachable at least once, before the entire state space has been explored by the formal tool. The first scenario that fully satisfies the property is a witness.
A witness does not depend on the pass/fail status of the property. Even if the property is falsified, a witness can still be found.
If no witness is reachable, the property can never be satisfied; it is always falsified.
What is ‘counterexample’ in Formal Verification?
Ans. When a model does not satisfy a given specification, a counterexample is produced by the model checker to demonstrate the failure. A user must then examine the counterexample trace, in order to visually identify the failure that it demonstrates.
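A minimal sketch of how a counterexample trace falls out of a reachability check (a toy Python illustration; the two-client arbiter encoding and its injected "bug" are invented for the example). Breadth-first search with parent pointers yields a shortest path from the initial state to a state violating the invariant:

```python
from collections import deque

# Toy model: a two-client arbiter encoded as (grant0, grant1).
# A buggy transition can assert both grants at once.
def next_states(s):
    succs = {(1, 0), (0, 1)}
    if s == (0, 1):
        succs.add((1, 1))  # bug: both grants asserted after granting client 1
    return succs

def check_invariant(initial, invariant):
    """Return None if the invariant holds in every reachable state,
    otherwise a counterexample trace starting at the initial state."""
    parent = {initial: None}
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            trace = []
            while s is not None:   # walk parent pointers back to the start
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in next_states(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None

mutex = lambda s: not (s[0] and s[1])  # never grant both clients
print(check_invariant((0, 0), mutex))  # [(0, 0), (0, 1), (1, 1)]
```

The returned list is the counterexample trace: the shortest sequence of states demonstrating the failure, which is what a user examines to identify the bug.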
There are a number of events in the Formal Verification calendar where the latest thinking on this technology gets highlighted. The Test & Verification Services (TVS) Formal Day is particularly interesting as it combines hard-core formal technology advances with practical usage examples from both Formal vendors and users alike. It has something for the experts as well as new adopters, and everyone in between.
It’s not surprising, given its heritage. Mike Bartley, the founder and CEO of TVS, is one of the leading promoters of this approach and his knowledge of verification in general, together with his insights into Formal applications makes him a perfect host for this one-day event.
The annual TVS Formal Day has been running for four years now, and every year it gets a little larger and attracts more people. The 2016 Formal Day did not disappoint, with presentations from all the main vendors and a good group of user experts. All the presentations may be found on the TVS Formal Day website.
John Colley from the University of Southampton in the UK kicked off the event with a deep dive into model-based safety and security analysis in, as he put it, “High Consequence” system development. John, who is well known in EDA circles after holding senior positions at TransEDA and Imperas, has spent time researching Formal methods for use in aeronautical systems.
After some useful definitions including a look at the various standards governing this electronic industry sector, he examined a model-based approach for specifying safety requirements, considering them as a control rather than a reliability problem. He considered a real life example of this and dug into the actual specification of the example problem (landing gear door subsystem).
He went on to describe System Theoretic Early Concept Analysis and how a model-based approach may be used to specify aircraft trajectory and other factors. This was a fascinating presentation, the ideas of which might be applied in a number of areas of semiconductor and electronic system design as we continue to consider how requirements might be described in a more machine readable, verifiable fashion.
John’s keynote was followed by another notable Formal expert, Elchanan Rappaport of GilaLogic, who has consulted for multiple companies on Formal Verification solutions including IBM, ARM, and TI. Elchanan penned an article for FormalWorld just a few months ago on evaluating Formal tools. At this event he talked about Security issues and the use of Formal to exhaustively test vulnerabilities in devices. He used a string of case studies to examine how sensitive information propagation may be analyzed using model-checking techniques.
Holger Busch, the well-known formal verification expert from Infineon, gave a very interesting talk on safety compliance verification, an area in which he is particularly expert. In this presentation he focused on hardware safety mechanisms for registers and register files, and how these could be verified under ISO 26262 conditions. He described various techniques to add error correction to registers and then demonstrated how formal might be used, along with fault injection, to ensure any fault could be trapped and handled up to the ASIL D level described in the ISO standard.
Ashish Darbari is another well-known Formal expert, this time from Imagination. He followed with a presentation on work that he and his team at Imagination had performed on the use of model abstractions to improve their Formal-based methodology. Ashish discussed various forms of model abstractions, starting with the relatively simple case of an abstract FIFO model, then moving to more complex examples. He was able to demonstrate how abstraction, together with techniques such as assume-guarantee, could provide exhaustive proofs of a range of design types.
Finally Alex Orr of Broadcom gave an excellent talk on his experience getting to grips with Formal. At last year’s event Alex talked about his experience as a new user of the technology in one of the most entertaining presentations on the subject that this author has seen. Alex, as a self-described “DAVE” (Design And Verification Engineer), talked about his experiences one year on, and did not disappoint. In a highly realistic, practical fashion Alex dug into his use of the tools, leveraging them on smaller as well as bigger tasks, helping the designers reduce implementation bugs, and offering other Formal perspectives in general.
Throughout the day the sponsoring EDA vendors (OneSpin, Cadence, Synopsys and Mentor) presented ideas around their own tools and experiences. OneSpin’s Christian Burisch discussed layering formal into simulation environments and new techniques to help general verification engineers use the tools, including advanced new coverage techniques. Asa Ben-Tzur of Cadence talked about deep state bug hunting, explaining various bug-hunting strategies and how to apply them. Hans-Jorg Peter from Synopsys discussed formal closure and mechanisms available to achieve it. Abdelouahab Ayari from Mentor Graphics provided a primer on formal engines.
TVS continues to host one of the most informative Formal events, freely available to engineering teams. A look at the presentations and videos on the website is time well spent for anyone involved in verification. See you at Formal Day, 2017.
Security has become a major concern across a number of classes of hardware design, and formal verification is proving to be an effective technology for ensuring that these systems are fully secure.
Security in this context can refer to protection against malicious attack or unintended operation brought about by an external influence. It may relate to the reading of some secret data within a device by a non-authorized third party (confidentiality) or a change in device execution through an unintended input (integrity). Whatever the effect, testing for hardware vulnerabilities is complex and error prone.
Security protection is provided through the use of a number of mechanisms, from the simple encryption of a secret key to protected regions within a device, and linked to trusted virtual environments in software, such as with ARM’s TrustZone® technology. While various mechanisms have been exploited to target software vulnerabilities, hardware security verification is relatively new and requires exhaustive code analysis.
Modern semiconductor designs have been found vulnerable to invasive physical access (for example, de-packaging or deep probe insertions) and logical access (for example, extracting information through misconfigured on-chip debug ports, scan paths, electromagnetic radiation or power consumption). With the growing complexity of modern SoCs operating multi-access software platforms, multiple stakeholders with mixed trust levels often share resources on the same SoC. Providing this functionality while also maintaining platform integrity is often the core issue for security systems and their verification.
A few examples of common vulnerabilities include misconfigured on-chip debug ports, leakage down scan paths, and unintended software access to protected registers.
For many of these situations, the general verification approach is to identify the vulnerable area and its trusted accessibility options and then to ensure, in an exhaustive fashion, that no other access mechanisms exist.
An easy example is a protected key within a device that should be read in an encrypted form when addressed through one specific output port. This detail needs to be specified and the design analyzed for sources of vulnerability. Then a full verification of the design, to see if the key may be accessed through any of these vulnerabilities, needs to take place.
Formal techniques are particularly good at tracking signals exhaustively through a design, and understanding all the possible paths a signal might take. It is also easy with Formal to simply ask the question of the form: “can this data register be read through any port except this one, on any sequence of control signals?” and have it test every possible option without an engineer having to specify a test for every option. Formal is an ideal technology for this purpose and, not surprisingly, security solutions are emerging that leverage formal platforms.
Some of these security solutions make use of data tagging techniques and, indeed, it is possible for a Formal knowledgeable engineer to create a small set of security properties that can tag a security key held in a specific register and then check that there is no unexpected leakage of information of the key to any output. Leakage can occur, for example, through a specialized debug interface, by a memory read for software access, or down a scan path in an unexpected fashion.
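At its core, this tagging check is a reachability question over the design's dataflow. The sketch below (a deliberately simplified Python toy; the signal names are invented, and real tools track taint per bit through logic cones, with sanitizers for legitimate encryption) propagates a taint tag from the secret register and reports every output it can reach:

```python
# Invented toy netlist as a directed graph: signal -> signals it drives.
netlist = {
    "key_reg":   ["aes_core", "dbg_mux"],
    "aes_core":  ["crypt_out"],          # crypt_out is the only allowed sink
    "dbg_mux":   ["jtag_tdo"],           # unintended leak path via debug
    "crypt_out": [],
    "jtag_tdo":  [],
}

def tainted_sinks(source, outputs):
    """Propagate a taint tag from the secret source and report which
    outputs it can reach."""
    seen, stack = set(), [source]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        stack.extend(netlist.get(s, []))
    return sorted(seen & set(outputs))

leaks = tainted_sinks("key_reg", {"crypt_out", "jtag_tdo"})
print(leaks)  # ['crypt_out', 'jtag_tdo']: jtag_tdo is an unexpected leak
```

A formal tool performs the equivalent analysis exhaustively over all control sequences, so a leak that only occurs under a particular debug configuration is still found.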
Of course security experts can create a more refined set of security properties based on methods used to attack a device, and their knowledge is invaluable when looking for new methods in which a chip may be hacked.
On June 5th, DAC once again descends on a semicon-centric city, this time Austin, TX. With a focus on IoT, IP and embedded systems in addition to traditional topics, this year’s show looks pretty cool! And Formal Verification is at the forefront.
Not surprisingly, areas of interest such as security and automotive electronics feature heavily at DAC, and a quick check of the papers and panels suggests information of value for all engineering specializations. Formal has a significant role in all these areas.
The Design Track is bursting with details on new verification techniques and ideas, so in an effort to inform attendees and save valuable time, we went through all the papers, panels, discussions and exhibits to extract the ones that appear to feature Formal Verification in some way, shape or form.
Here, for your perusing pleasure, are the must-see happenings at DAC around Formal Verification. If we have missed anything, or you would like to comment, contact us.
Firstly a FormalWorld theater at OneSpin’s booth 1249 will have presentations from various experts on subjects related to Formal Verification. Check back on FormalWorld.org to see a complete schedule of presentations throughout the show.
DESIGNER TRACK SESSIONS FOCUSED ON FORMAL
Simulation And Formal: The Best Of Both Worlds
Almost every paper in this session is formal-related, and they cover some of the toughest design verification topics. A must-see for those looking at using both manual and automated formal techniques with some well-known formal experts.
How To Verify The Gordian's Knot Of System Complexity
Chaired by Ashish Darbari, a well known formal expert and proponent, and featuring two noted verification consultants and a top Intel verification engineer, this session promises an exploration of some interesting Formal angles.
Decoding Formal Training Day: “Achieving Formal Sign-Off”
Oski Technologies has arranged a full-day primer on formal verification, well worth attending for anyone looking to get a taste of the technology.
FORMAL VERIFICATION RELATED SESSIONS
Tutorial 8: How Portable Stimulus Addresses Key Verification, Test Reuse, And Portability Challenges - Part 1 (and Part 2 later in the week)
Cooley's DAC Troublemaker Panel
Tutorial 7: The Continuing Arms Race: A Journey In The World Of Runtime Exploits And Defenses
Slice And Dice - Advances In Verification And Test
Steering Safety Innovation In Autonomous Car Electronics
IP Track: Evolving IP Interconnects & Verification
Hardware Verification Of Security Aspects
IP Track: System IP Configuration And Verification
Design Automation On The Road Towards Automated Driving
Design/IP Track Poster Session
FORMAL TOOLS ON SHOW IN THE EXHIBITS
Leading Formal suppliers will feature their tools in action:
Cadence, Booth 107
Mentor, Booth 949
OneSpin, Booth 1249
Real Intent, Booth 527
Synopsys, Booth 149
Check out the floorplan here.
DAC is shaping up to provide a lot of great information for engineers to take advantage of Formal for many applications and purposes. If you are going to the show, enjoy! If not, check back to this site after the show and we’ll give you the scoop on what we saw!
I need to vent a little.
I’m a big believer in Formal. I’ve been using it productively since the tools were barely strong enough to fight their way out of a paper bag. And we’ve come a very long way since then.
So I find it incredibly frustrating when I ask colleagues who aren’t using Formal why they're not, and for the umpteenth time I hear “Well we did an evaluation and we found it wasn’t worth the overhead.”
And I never understood it, because I KNOW what Formal can do. How are the design shops missing the boat, and how are the vendors letting it happen?
Recently, I got a close view of an evaluation performed at a colleague’s company.
I was torn between laughing and crying, at the irony of the good effort and intentions juxtaposed with the result. I did some of both, then decided to write about it, in hopes that I might provide a little productivity insight for future evaluations.
Here’s my story.
About 3 months ago, I was contacted by a friend of a friend whose company was interested in Formal. I was invited to visit by her manager, and prepared to give them a basic vendor-agnostic understanding of the concepts. Why? Well let’s face it, Formal is a major paradigm shift, which is often counter-intuitive. Not every able Verification Engineer is capable of making the shift easily.
I wanted them to know what they were getting into.
When I got there, they weren’t interested in the concepts. This was a confident group of self-starters, who didn’t have time for that. The manager had a list of questions on which of their designs would be appropriate for an evaluation, and I tried to answer as well as I could. What I really should have done was to insist that they understood the concepts better. This was the first warning sign, which I ignored. Mea culpa.
So here is my Observation #1: There’s a bit of a Catch 22 here.
The groups that are brave enough to try Formal are by definition the ones who are self-confident enough not to be too intimidated to try it. But that same self-confidence leads them to underestimate the mental preparation required to be effective at using the technology.
They ended up picking a bridge with an industry standard interface on one side and a proprietary interface on the other. This is a classic design for Formal, and I agreed that it would make an excellent test case. Rather than asking, I just assumed (yes, I know) that their method of attack, and success criteria, would magically spring into existence somewhere between them and the Formal vendor. Mea culpa #2.
Their next step was to contact one of the Formal vendors (no, I’m not going to say which one) and arrange the licensing, as well as FAE support, for the evaluation.
Observation #2: Here’s where we dig a deeper Catch 22 hole.
The vendor’s sales guy has worked very hard to get his foot in the door.
The very last thing he’s going to do is to put off the customer by warning him of challenges in getting Formal to be effective. He’s going to promise the sun, the moon, and the stars, and minimize any fears of being able to use Formal.
If he does have any educational material, it’s all going to be devoted to the features that differentiate his tool from his competitors, nothing deep on the concepts of how to use the technology.
So the FAE came, and they spent a few months doing the evaluation. As a professional courtesy, they let me take a look at the Formal environment and properties they had written during this period. It was clear a lot of work went into it, mostly by the Verification Engineer, with some supervision from the FAE.
I would argue that a long evaluation is a good thing. It gives the engineers a chance to get some real education. With Formal there is a definite point of critical mass where enough expertise is acquired to do some really good work with the technology, inspiring confidence and a willingness to do more. However, evaluations are expensive, so inevitably time limited. This issue needs to be solved.
Observation #3: An FAE’s life is not an easy one.
I should say that I’ve worked with some true supermen in this position, and some who didn’t quite achieve this level. Their primary training, and purpose in life, is to show the customer how to use the tool, and to resolve issues when they get stuck.
Conceptual training for Formal is both beyond the scope of what they’re allotted time for, as well as possibly beyond the scope of their training.
They had written tens of constraints, and tens of assertions.
They even found a few bugs using those assertions.
All of their properties either passed or timed out, which justifiably concerned them.
Success, right? Wrong.
So I dug into the property files, almost causing me to fall off my chair. Here are some of the things that I saw just on the surface:
- Simulation-style constraints that effectively built a testbench, driving specific stimulus into the DUT
- Properties using the unbounded $ delay to wait forever for a response
- The design brought up at full size, with no abstraction or reduction of structures like FIFOs
Observation #4: Verification Engineer on the ball
I should note that the last thing I’m doing here is faulting the Verification Engineer. I’ve got independent confirmation that she’s very capable, and very sharp. So what went wrong?
In the absence of close, conceptual guidance, she forged ahead with the methods that worked for her in her non-Formal past, and DID achieve results. So why was I bothered by those results?
Let's take a look at why these items are issues for a Formal environment:
So to best exploit Formal, you should start off with NO constraints, then only add them as necessary to deal with false failures (i.e. scenarios which, by definition, can never occur). If you implement a testbench (e.g. a state machine which drives the stimulus to the DUT) then you're reducing your search space (i.e. the scenarios that will be checked) to what simulation would execute, SO YOU MIGHT AS WELL BE RUNNING SIMULATION!
But there are a number of gotchas with this:
- This can result in much longer run times
- This can result in MASKING REAL BUGS.
One of the issues was the use of the unbounded $ in SVA delay ranges, which tells the tool a response may arrive any number of cycles later. ‘Forever’ is a really long time. In the case of this evaluation, the property was identifying and accepting the response to the next request, and not flagging cases where the current request never received a response. Once we replaced the $ with a reasonable finite number, a bug jumped right out.
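The effect can be mimicked on a finite trace (a Python toy, not SVA; in a real environment this would be roughly the difference between an unbounded `##[1:$]` and a bounded delay such as `##[1:3]`). The unbounded check never flags a request that is simply never answered, while the bounded check does:

```python
def responses_ok(trace, bound=None):
    """Check that every 'req' in a trace is followed by a 'rsp'.
    bound=None mimics an unbounded ('$') eventuality: a request still
    waiting for a response can never be flagged as a failure."""
    pending = None  # cycle of the outstanding request, if any
    for cycle, event in enumerate(trace):
        if event == "req":
            pending = cycle
        elif event == "rsp":
            pending = None
        if pending is not None and bound is not None:
            if cycle - pending >= bound:
                return False  # response took too long: a real failure
    return True

# Second request never gets a response.
trace = ["req", "rsp", "req", "idle", "idle", "idle", "idle"]
print(responses_ok(trace))           # True:  unbounded check misses the bug
print(responses_ok(trace, bound=3))  # False: bounded check flags it
```

The same reasoning applies inside the formal tool: with $ the tool can always postpone the obligation, so a missing response never produces a counterexample.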
As an example, assume we are working on a 64-wide by 32-deep FIFO. As such, there are 2K bits for the Formal tool to handle. To reduce the flops to a manageable number, we could reduce the depth of the FIFO to 4, say, leaving only 256 flops. But at a depth of 4, some of the problem scenarios we're hoping to catch may never be examined. On the other hand, if we can identify that all columns of the FIFO are treated identically, end-to-end, in our DUT (as is often the case), we can cut our width to 1 and leave the FIFO depth at 32. This allows deeper scenarios to be examined, while still only having a FIFO size of 32 bits. Note that this is a gross over-simplification that avoids issues such as sequential depth, but I use it to illustrate the point.
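The arithmetic behind that trade-off is worth spelling out (counting only the FIFO's data bits, and ignoring pointers and sequential depth exactly as the simplification above does):

```python
# Rough state-space sizes for the FIFO data bits alone.
full    = 2 ** (64 * 32)   # 64 wide x 32 deep: 2048 state bits
shallow = 2 ** (64 * 4)    # depth reduced to 4: 256 state bits
one_col = 2 ** (1 * 32)    # width reduced to 1: 32 state bits

print(f"full FIFO:     2^{full.bit_length() - 1} states")     # 2^2048
print(f"depth of 4:    2^{shallow.bit_length() - 1} states")  # 2^256
print(f"single column: 2^{one_col.bit_length() - 1} states")  # 2^32
```

Both reductions shrink the space enormously, but only the width reduction preserves the full depth of 32, and with it the deep scenarios the depth reduction throws away.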
So back to the story… The customer’s perspective after the evaluation was somewhat lukewarm. They still had hopes of using Formal for their next project, but didn’t quite see it as an indispensable wonder tool. Only time will tell if that’s enough to get it introduced into their process, given all the other resource pressures.
From my perspective, this was a golden opportunity wasted.
After much thought, here are my criteria for a successful Formal evaluation:
- Positive steps in the user’s “HOW to use Formal” education. While absorbing the concepts of Formal is a large “ask,” and I wouldn't expect the user to get it all in just one evaluation, I would hope that the user would have picked up the correct methods for writing SVA assertions for Formal, writing SVA constraints for an interface, etc. This clearly wasn’t done here. Worse, they were taught some bad habits. Almost like teaching BASIC to a fresh programmer.
- Ending up with a clean design. If the design picked for the evaluation is reasonable in character and scope for Formal, I'd expect us to end up with a set of Formal constraints and properties which guarantees that the design is clean. We weren’t even close in this case. It might help if there was agreement beforehand as to the design functionality that should be verified as part of the Formal evaluation.
- Positive steps in the user’s “WHEN to use Formal” education. I’ll admit it’s not reasonable from a short evaluation to expect the user to see ALL the benefits of Formal, or to be able to judge EXACTLY when to apply Formal. These both come with time and experience. But certainly the user should have come away with the answers for these as relates to THIS design. And that didn’t happen either.
- Ensure a rousing success. This was a design that was well suited to Formal. The user should have come away from it with the feeling that Formal is a wonder tool, that it cleaned up their design in a way that simulation could never have done, and that the overhead required to get there was small enough that they could never live without Formal again. But they ended up still on the fence.
Coming back to my question above, everyone involved in the effort was a sincerely motivated professional. So what went wrong?
There’s probably material here for a good “Game Theory” paper, but at the simplest level, everyone went for short-term goals without considering the long-term opportunities. The customer worried more about the limited time and resources consumed by the evaluation than about getting the training necessary for long-term success with Formal, so they never prepared themselves for the mindset that Formal intrinsically requires. The vendor worried more about losing the sale, and his FAE’s time, than about making sure the customer would achieve long-term success with Formal, so they skipped a few steps. And naturally we'll skip talking about my mea culpas ;-)
Even if the sale is made, the odds of the customer continuing to use Formal effectively enough to justify its existence, let alone receiving the full advantages it can offer, are small.
And that’s how these opportunities go by the boards.
Someone in the process, be it the vendor, the customer, or likely both, needs to make sure that evaluations of this tool also address what it takes for the customer to be successful with Formal long-term, not just running it for a few weeks.
I'm told by one of my astute vendor contacts that while many evaluations historically have been mismanaged, it's beginning to change. The vendors are learning from experience and making sure that evaluations are set up properly for success. They are also engaging in training programs for the entire team. Sadly, this change isn't commonplace [yet].
One answer that seems to have worked for Formal, as well as other new technology introductions, is a healthy level of expert consulting help to get the first project moving in the right direction. This could start at evaluation time and continue through the first project.
The vendor could provide this, or, better still, an independent contractor with deep knowledge of Formal could help guide the verification team in the right direction, bridging the learning-curve gap while also shortening the curve itself. Some vendors have tried to employ their own consulting teams for this purpose, while others have formed consulting-partner programs with experts trained on their tools.
I've re-read the previous paragraph several times, hoping for a way to make it sound less self-serving. Sadly, I wasn't able to. In my defense, I've really seen this work, at customers of mine who started with zero experience in Formal, and today are successfully using it as an integral part of their methodology, at multiple development sites, using their own in-house engineers, with no outside help.
To sum up, Formal can provide a huge benefit if applied correctly. While it is becoming easier to use, some knowledge of how to apply it effectively is what turns it from a useful addition to simulation into a wonder tool. A wise investment of time and effort during the evaluation can pay off in spades, providing long-term value and benefit to the customer as well as to the rest of the ecosystem.
Elchanan Rappaport is the founder of Gila Logic, Inc. He has 30 years’ experience in both simulation and formal-based verification, having worked with a number of semiconductor companies, including IBM, TI, National, Microsoft and others. For the last 15 years he has focused on high-end formal verification projects and is a frequent speaker at formal conferences and events. He may be contacted at firstname.lastname@example.org.