Tuesday, May 27, 2008

Constrained Random Verification

After stewing for a bit on constrained random verification, it's beginning to lose a bit of its sheen. Let me explain...

The first question is: What gets randomised? Well, there are two types of inputs to our chips: control and data, ignoring supplies. So let's think about what randomising control and data inputs might entail.

Randomising Control Inputs

For control inputs we can randomise timing, order, or address-data pairs. Randomising the timing between control writes has caught bugs for us in the past, so we find it useful. Randomising the order of control writes doesn't make sense for us, as we give customers specific power-up sequences to avoid various unwanted transients.
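As a rough sketch of the timing idea, here's what randomising the delay before each control write might look like in a testbench generator. The register addresses, data values and delay bounds are invented for illustration; they're not from any real chip:

```python
import random

# Hypothetical bounds on the gap between control writes (invented values).
MIN_DELAY_NS = 0
MAX_DELAY_NS = 500

def timed_write_sequence(writes, seed=None):
    """Attach a random delay (in ns) before each (addr, data) control write."""
    rng = random.Random(seed)
    return [(rng.randint(MIN_DELAY_NS, MAX_DELAY_NS), addr, data)
            for addr, data in writes]

seq = timed_write_sequence([(0x10, 0x01), (0x11, 0xFF)], seed=42)
for delay, addr, data in seq:
    print(f"wait {delay} ns, then write 0x{data:02X} to 0x{addr:02X}")
```

The write order stays fixed (matching the power-up sequencing constraint above); only the gaps between writes vary from seed to seed.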

Throwing constrained random address-data pairs at the chip seems like A Good Thing, but a lot of infrastructure is needed to get the full benefit. At a minimum you'll need a high-level model of your chip against which you can check its behaviour. But the very point of a high-level model is that it is not as complicated as the chip itself. I worry that in this case we'll end up designing each chip twice - once in RTL and once as a model. I may be getting confused here, so I should try to gather my thoughts on high-level modelling at a later time.
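To make the infrastructure worry concrete, here's a minimal sketch of the checking side: a high-level register model used as a scoreboard for random address-data pairs. Everything here is invented (the read-only chip-ID register, the reset value, the 7-bit address and 8-bit data constraints); the point is that even this trivial model encodes chip behaviour, and it only grows from here:

```python
import random

class RegModel:
    """Toy high-level model: a register file with one read-only register."""
    READ_ONLY = {0x00}  # hypothetical chip-ID register

    def __init__(self):
        self.regs = {0x00: 0xA5}  # invented reset value

    def write(self, addr, data):
        if addr not in self.READ_ONLY:
            self.regs[addr] = data

    def read(self, addr):
        return self.regs.get(addr, 0x00)

def random_stimulus(n, rng):
    """Constrained random address-data pairs: 7-bit address, 8-bit data."""
    return [(rng.randrange(0x80), rng.randrange(0x100)) for _ in range(n)]

rng = random.Random(1)
model = RegModel()
for addr, data in random_stimulus(100, rng):
    model.write(addr, data)
    # In a real bench the DUT readback would be compared here:
    # assert dut.read(addr) == model.read(addr)

assert model.read(0x00) == 0xA5  # read-only register survived random writes
```

Each special-case behaviour (read-only bits, self-clearing bits, mode-dependent fields) adds another branch to the model - which is exactly the design-it-twice concern.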

Randomising Data Inputs

I'm failing to see the benefits of randomising the input to datapaths. I have issues with the high-level modelling again, and anyway truly random data is nonsense when piped through filters (GIGO). So what would constrained random data look like? Usual signals with noise on top? 'Usual signals' are what we're trying to do away with, though... (Could I use that trick where you set a maximum dx/dt?)
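The max-dx/dt trick might look something like this: a random walk whose per-sample step is capped, so the stimulus is random but not broadband garbage. The full-scale and step bounds are invented for illustration:

```python
import random

FULL_SCALE = 32767   # 16-bit signed full scale
MAX_STEP = 1000      # hypothetical maximum change per sample (the dx/dt cap)

def slew_limited_noise(n, seed=None):
    """Random data whose sample-to-sample step never exceeds MAX_STEP."""
    rng = random.Random(seed)
    samples, x = [], 0
    for _ in range(n):
        x += rng.randint(-MAX_STEP, MAX_STEP)
        x = max(-FULL_SCALE, min(FULL_SCALE, x))  # clamp to signal range
        samples.append(x)
    return samples

data = slew_limited_noise(1000, seed=7)
# The dx/dt constraint holds across the whole sequence:
assert all(abs(b - a) <= MAX_STEP for a, b in zip(data, data[1:]))
```

Tightening or loosening MAX_STEP trades off how "usual" the signal looks against how much of the input space it explores - which is really the same question the paragraph above is asking.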

I'm not sure what constrained random input signals would look like in our case. And I'm not sure what type of errors they could catch in the datapath (assuming we already stress them with types of signals that we know can over-range our sums).

Random Chip Configurations

Maybe I'm thinking about this at too low a level. Maybe we should be randomising the configurations of our chips. For example, our serial data ports can work in a variety of modes: I2S, LJ, RJ, etc. We have sims to check the correct functionality of each of these serial formats. But when it comes to other sims - for example, checking out the DAC signal chain - we usually feed it data in the default serial format (I2S). Maybe it's things like serial formats and the number of DACs powered up that should be randomised? Then again, maybe that's a bad example, as the interfaces between our serial ports and the rest of the chip are well defined.

Conclusion

I haven't come to one, really - the jury's looking to get put up in a plush hotel. I might explore randomising our chips' configurations, and maybe make sure we're stressing our datapaths. And I haven't even touched upon functional coverage, which, if I'm not careful, could fall prey to the same traps as code coverage.
