Tuesday, June 3, 2008

Drawing Circuit Diagrams

If you've been browsing some of my previous posts, you'll know that I'm interested in writing an open source tool to generate schematics from some Verilog RTL. And you'll also probably remember that I was trying to come up with the layout & routing algorithms for the schematics myself.
You may also remember that I was failing miserably. This is as far as I got with the genetic algorithm layout before abandoning it on speed and reproducibility grounds:


So, I've honoured the pragmatic promise I made to myself, and I've turned to the interwebs for help.


Vocabulary

Automatically drawing pictures of relationships goes by the name of Graph Drawing in Computer Science, a branch of Graph Theory. According to this stuff, I'm looking to draw Layered Orthogonal Directed Graphs:
  • 'Layered' from the fact that I can arrange the instances into columns. Sugiyama seems to be the main man when it comes to algorithms for this sort of graph.
  • 'Orthogonal' because I want the nets to run at right angles.
  • 'Directed' because there's a flow in the drawing. For us EEs, this flow is left to right, but in graph theory it's usually top to bottom. So in graph-theory terms, my y-placement problem becomes an x-placement problem.
In graph theory, my RTL module instantiations are nodes and nets are edges.
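To make the 'layered' idea concrete, here's a minimal sketch (the netlist and instance names are invented for illustration) of the first Sugiyama step: longest-path layering, which assigns every node a layer so that every edge flows from a lower layer to a strictly higher one.

```python
def assign_layers(nodes, edges):
    """Return {node: layer} with every edge going from a lower layer
    to a strictly higher one (left-to-right for us EEs).
    Assumes an acyclic netlist, as a combinational one would be."""
    preds = {n: [] for n in nodes}
    for driver, load in edges:
        preds[load].append(driver)

    layer = {}
    def depth(n):
        if n not in layer:
            # A node sits one layer to the right of its deepest driver.
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=0)
        return layer[n]

    for n in nodes:
        depth(n)
    return layer

# Invented three-instance example: two nets chained, one skipping a layer.
layers = assign_layers(
    ["in_reg", "alu", "out_reg"],
    [("in_reg", "alu"), ("alu", "out_reg"), ("in_reg", "out_reg")],
)
```

The y-coordinates within each layer are then a separate (and much harder) crossing-minimisation problem.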

Existing Code

The first thing I did with my new-found pragmatism was to look for open-source code I could rob, er, use. Preferably this would be a C/C++ library (for speed) with Python bindings (for handiness), but I'd settle for pure Python. I didn't find exactly what I was looking for: either there was a lack of examples and screenshots, no Python bindings, or the library was closed source. That said, if I'm willing to learn SWIG to create Python bindings, or to create my own examples, there are a few libraries to investigate further.

Some of the proprietary stuff could've been exactly what I need: tomsayer.com (sorry, this tries to resize your browser window) had a teaser of a circuit diagram, and yFiles had an intriguingly-named ChannelEdgeRouter class.

Even if none of the above open source libraries end up suiting my project, at least I have the freedom to look at the code and study the algorithms they use when cooking my own.


Literature Search

Then I stuck a whole pile of terms into the search engine to see what turned up. I tried various combinations of terms including 'graph', 'drawing', 'routing', 'layout', 'channel', 'layered', '2d' etc., adding more as they turned up. Although I got some useful introductory slide decks from university courses, I did bang my head against sites such as IEEE Xplore and SpringerLink, which expected me to pay for stuff.

The searching did throw up a pair of papers by Eschbach, Günther & Becker which seem promising. One of them, Orthogonal Circuit Visualization Improved by Merging the Placement and Routing Phases, especially so.


Homework

I think the next stage of my endeavour is to read the papers by EGB (hehe, Eternal Golden Braid) I mentioned above, and to have a look at those graph drawing libraries; igraph seems the most appealing at first cut.

Tuesday, May 27, 2008

Constrained Random Verification

After stewing for a bit on constrained random verification, it's beginning to lose a bit of its sheen. Let me explain...

The first question is: What gets randomised? Well, there are two types of inputs to our chips: control and data, ignoring supplies. So let's think about what randomizing control and data inputs might entail.

Randomizing Control Inputs

For control inputs we can randomise timing, order, or address-data pairs. Randomising the timing between control writes has caught bugs for us in the past, so we find it useful. Randomising the order of control writes doesn't make sense for us, as we give customers specific power-up sequences to avoid various unwanted transients.

Throwing constrained random address-data pairs at the chip seems like A Good Thing, but there's a lot of infrastructure needed to get the full benefits. At a minimum you'll need a high-level model of your chip against which to check its behaviour. But the very point of high-level models is that they are not as complicated as the chip itself. I worry that in this case we'll end up designing each chip twice - once in RTL and once as a model. I may be getting confused here, so I should try to gather my thoughts on high-level modelling another time.

Randomizing Data Inputs

I'm failing to see the benefits of randomizing the input to datapaths. I've issues with the high-level modelling again, and anyway truly random data is nonsense when piped through filters (GIGO). So what would constrained random data look like? Usual signals with noise on top? 'Usual signals' are what we're trying to do away with though... (Could I use that trick where you can set a maximum dx/dt?)

I'm not sure what constrained random input signals would look like in our case. And I'm not sure what type of errors they could catch in the datapath (assuming we already stress them with types of signals that we know can over-range our sums).

Random Chip Configurations

Maybe I'm thinking about this at too low a level. Maybe we should be randomizing the configurations of our chips. For example, our serial data ports can work in a variety of modes: I2S, LJ, RJ etc. We've sims to check the correct functionality of each of these serial formats. But when it comes to other sims, for example, checking out the DAC signal chain, we usually feed it with data in the default serial format (I2S). Maybe it's things like serial formats and number of DACs powered up that should be randomised? Maybe that's a bad example as the interfaces between our serial ports and the rest of the chip are well defined?

Conclusion

I haven't come to one, really - the jury's looking to get put up in a plush hotel. I might explore the randomisation of our chips' configurations and make sure we're stressing our datapaths. And I haven't even touched upon functional coverage, which, if I'm not careful, could fall prey to the same traps as code coverage.

Friday, May 16, 2008

SystemVerilog

I've just come back from a week-long SystemVerilog course, presented by one of the folks at Doulos. The course was, I'd have to admit, very interesting and extremely well delivered - J_ certainly knew his stuff. SystemVerilog has a lot of cool features, plus some slightly underwhelming stuff, that I want to rant about.

A Fistful of Features...

SystemVerilog is basically Verilog 2001 with a shedload of new ideas, features and keywords, system tasks, mini-languages, etc, etc. Although there are one or two new language features to make your RTL look prettier, to my mind the majority of the shiny new things are for verification engineers.

The Good

I'm mostly a verification engineer, and SystemVerilog offers me 3 huge and genuinely exciting powers that I want to try out right away; these being assertions, constrained random testing and functional coverage.

Assertions

Assertions are great for making sure your design does what you wanted it to do. They can check the value of a signal or two at a point in time. But more interestingly, by using a regular-expression type mini-language, they can also check signal behaviour during a sequence of clock cycles.

The idea is that you sprinkle assertions all around your RTL in interesting places (synthesis tools will ignore them), and they'll let you know if whatever they're monitoring steps out of line. They'll also help you get to the source of a bug far quicker than a traditional chip-as-a-black-box testbench setup - in which case you have to wait for bugs to propagate to the outputs, then follow the chain of events back to the bug.
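To give a flavour of that sequence mini-language, here's a small concurrent assertion. Treat it as a sketch: the signal names are invented, and the handshake rule (every rise of req must be answered by ack within 1 to 3 clocks) is just an example protocol.

```systemverilog
// Sketch: every rise of req must be answered by ack within 1 to 3 clocks.
// clk, rst_n, req and ack are invented names for illustration.
property req_gets_ack;
  @(posedge clk) disable iff (!rst_n)
    $rose(req) |-> ##[1:3] ack;
endproperty

assert property (req_gets_ack)
  else $error("req was not acked within 3 clocks");
```

Synthesis tools skip over this, but in simulation it fires the moment the handshake steps out of line, right at the scene of the crime.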

Another win for assertions is when you code up a module, and a colleague ends up using it. If the module has assertions on its inputs, it can complain if it is not being fed with the correct signals. Now any bugs that are reported to you are real bugs, and your time is not wasted with bugs due to a misunderstanding of the module's input specs.

Functional Coverage

Functional coverage is a new angle on design verification. Currently our verification plan consists of a big list of sims that must pass before we can tape out. This list is mostly derived from the specs - we go through them and try to write a sim testcase that covers each bit of functionality.

Functional coverage is different because first up, you describe to the simulator every bit of functionality that you want to see. Then it tells you what behaviours in your list it has encountered during the course of a sim (coverage results are usually aggregated over a bunch of sims). If you're careful when writing the functionality descriptions, you can say that functional verification is finished when 100% of targets are hit!
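Describing 'every bit of functionality you want to see' looks roughly like this covergroup sketch. The register names and encodings are invented; the point is the shape: coverpoints for each interesting control, bins for the values, and a cross for the combinations.

```systemverilog
// Sketch only: serial_fmt, volume and the enum values are invented names.
typedef enum bit [1:0] {I2S, LJ, RJ} fmt_t;

covergroup cfg_cov @(posedge clk);
  coverpoint serial_fmt {
    bins fmts[] = {I2S, LJ, RJ};   // see every serial format...
  }
  coverpoint volume {
    bins lo = {[0:63]};            // ...both halves of the volume range...
    bins hi = {[64:127]};
  }
  cross serial_fmt, volume;        // ...and every combination of the two
endgroup
```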

The big payoff for functional coverage is when it's used with constrained random testing.

Constrained Random Testing

Constrained random testing makes it possible to trade verification engineer brain cycles for CPU cycles. It involves throwing random-yet-tuned stimulus at your design, shaking the innards of the chip in more ways than any verification engineer could engineer given a reasonable amount of time.

The fun starts when assertions are added to our design and our list of functional coverage points has been defined. Instead of tuning bunches of testcases to exercise each behaviour, we can just run a few randomised testbenches for longer and let luck stumble across all our behaviours. (Could we breed testcases?) Of course, purely random stimulus is not going to be helpful here due to the GIGO principle, hence we guide or constrain the randomness. And we're probably going to need a bus functional model to check the outputs of our design too.
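'Guiding the randomness' boils down to rand variables plus constraint blocks. A sketch of a randomised control write (the address map, field names and weightings are all invented):

```systemverilog
// Sketch: a randomised control write, constrained to stay legal.
class ctrl_write;
  rand bit [7:0] addr;
  rand bit [7:0] data;
  rand int unsigned gap;    // idle clocks to wait before the write

  constraint legal_addr { addr inside {[8'h00:8'h3F]}; }  // invented reg map
  constraint short_gaps { gap dist { [0:3] := 8, [4:20] := 1 }; }
endclass
```

A testbench would then do something like `ctrl_write w = new; assert (w.randomize());` and drive the result through the bus functional model.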

The Bad and The Ugly

This is where I descend into rant mode, so be warned...

The Tower of Babel

Nothing in SystemVerilog is new under the sun; everything has been magpied from elsewhere: assertions are based on Sugar, OOP sort of follows C++, and other bits and pieces come from OpenVera, Superlog and other things I can't quite remember. This leaves the whole SystemVerilog thing looking a bit un-integrated (or uneven, or inconsistent) to me. In most places you use begin-end, in others curly brackets. In most places you finish lines off with a semicolon, in constraint blocks you don't. While some of this is not too bad and is perfectly understandable (for example, the PSL-derived assertions), for the most part it just feels like a hodge-podge mish-mash.

Classes

OK, I can see how this might get slightly controversial, and maybe it's more related to the inconsistency I've already noted, but I think that object-oriented programming has been kludged into SystemVerilog. And it's an ugly kludge.

The number of hoops that need to be jumped through just to use a class that wiggles a few pins, while keeping it reusable, is crazy! First, define an interface; give it a clocking block and a modport; throw the handle to the interface all around the place; instantiate your classes, interfaces and DUT; and probably a few other things that I've forgotten too.
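For the record, the hoops look something like this (a sketch with invented names, and doubtless missing a few of the things I've forgotten):

```systemverilog
// Sketch of the hoops: an interface with a clocking block and modport,
// plus a class wiggling pins through a virtual interface handle.
interface spi_if (input bit clk);
  logic cs_n, sclk, mosi;
  clocking cb @(posedge clk);
    output cs_n, sclk, mosi;
  endclocking
  modport tb (clocking cb);
endinterface

class spi_driver;
  virtual spi_if vif;               // the handle thrown all around the place
  function new(virtual spi_if vif);
    this.vif = vif;
  endfunction
  task run();                       // concurrency regained by forking run()
    vif.cb.cs_n <= 1'b1;
  endtask
endclass
```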

OK, so you only have to do all this once, but it seems ugly and unintuitive. The concurrency that's an essential feature of an HDL is lost for classes, and has to be regained by forking a .run() method on all your classes. The connectivity that's an essential feature of an HDL is lost, and has to be faked using references to interfaces through which pins are accessed. Crazy.

Couldn't we poor hardware engineers have been introduced to the benefits of object-oriented programming in a gentler way? Why couldn't modules be our 'classes', and be inherited as well as instantiated? I've a nagging feeling that I'm missing something huge about the way OOP needs to be implemented, and that maybe it had to be done this way - I'd love to know why.

A Few Features More


Overall, I'm genuinely excited by some of the possibilities that SystemVerilog has opened up for our verification setups. I plan to try out some of these things in our current testbench and report on progress - I won't be changing it to a class-based architecture any time soon though!

Wednesday, March 5, 2008

Some Verilog Tips & Tricks

I thought I'd share a few Verilog tips & tricks I discovered recently that help when you're trying to build a simulation that doesn't care where it's run from in your directory structure.

Gather Files & Environmental Variables

Gather files are lists of simulator commands that are included using the -f flag. You know these, though you may have a different name for them. In these gather files I list all the Verilog module files I'm using in the simulation, as well as the include directories needed. I specify each file relative (eg ../../../) to a simulation base directory ($BASEDIR) and actively avoid absolute paths: this gives designers the freedom to set up their simulations anywhere, and the simulations can be run from any of our company's sites. As long, of course, as the designer checks the code out from our versioning system...

run_sim.sh:
#! /usr/bin/sh
export BASEDIR="../../../" # setenv in csh

simulator -f "${BASEDIR}/config/sim.gather"


${BASEDIR}/config/sim.gather:
//
// TITLE: A Simulation Gather File
//
+incdir+${BASEDIR}/block1
${BASEDIR}/block1/rtl/module1.v
${BASEDIR}/block1/rtl/module2.v
${BASEDIR}/block2/rtl/module1.v
${BASEDIR}/block2/rtl/module2.v

The magic here is the 'export' command in the shell script. For best results, the script calling the simulator can calculate what $BASEDIR should be from the current working directory ($PWD). Both the simulator and Debussy will correctly substitute any environment variables they see in gather files. Our designs are fairly complicated, and I've used nested gather files successfully to mimic Verilog-2001's 'configurations'. But don't get me started on using Verilog configurations with NC-Verilog and Debussy...
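As a sketch of the calculate-it-from-$PWD idea: given the sim directory's path below the tree root (the helper name and the sims/<block>/<test> layout are my own inventions, not our real setup), emit the matching chain of ../ components.

```shell
#! /usr/bin/sh
# Sketch: map a path below the tree root (eg sims/dac/test1) to the
# matching "../../../" string. The directory layout is assumed.
rel_to_base() {
    echo "$1" | awk -F/ '{ s = ""; for (i = 1; i <= NF; i++) s = s "../"; print s }'
}

# eg at the top of run_sim.sh:
export BASEDIR=$(rel_to_base "sims/dac/test1")
```

In practice you'd derive the argument by stripping a known prefix off $PWD rather than hard-coding it.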

Specifying bitmap files for $readmemb and $readmemh

The next problem is making the bitmap files for the $readmem tasks portable. The snag is that macros are not expanded within strings, nor can a macro supply just part of a string. For example, the code below won't work:

`define BASEDIR ../../../
`define BASEDIR_STR "../../../ // unterminated string - may not even get this far...

module test();
reg [3:0] mem [3:0];

initial begin
  $readmemh("`BASEDIR/rom.dat", mem);    // macro won't be expanded inside a string
  $readmemb(`BASEDIR_STR.rom.dat", mem); // won't be accepted - string split across the macro
end

endmodule

I came up with a solution using a reg vector to hold a string and then using $sformat, but it seemed ugly and I thought there must be a better way. And I found it: concatenations.
module test();
reg [3:0] mem [3:0];

initial begin
  $readmemb({`BASEDIR_STR, "/rom.dat"}, mem); // string literal concatenation!
end

endmodule


This works like a charm so long as `BASEDIR_STR is a string literal - "../" is OK but plain ../ is not. To keep things directory-agnostic, I pass the base directory string as an argument to the simulator (escaping the quotes is important here, so the path reaches the simulator as a string literal):

run_sim.sh:
#! /usr/bin/sh
export BASEDIR="../../../" # setenv in csh

simulator -f \
"${BASEDIR}/config/sim.gather" \
+define+BASEDIR_STR=\"${BASEDIR}\"

Monday, December 10, 2007

RTL Visualiser

I'm in the midst of writing a Visualiser for Verilog RTL. It'll take in a Verilog description of a circuit design and produce the corresponding schematics. I hope it will be a joy to use, and produce 'nice' schematics.

Automatic Schematics

I've found that producing nice automatic schematics is difficult.

As for the App: so far, I have a very basic GUI running. It reads in and parses very basic Verilog, builds the hierarchy tree and displays very basic schematics of very basic RTL, with very basic ratsnest-type flightlines representing the block-to-block connections. It's all very basic. I've a nice recursive algorithm to place module instantiations on the x-axis, but y-axis placement is a whole other ball game. Y-placement is not very basic. (At least as far as I can tell.)

I've been fighting the Y-placement problem on and off for over four months, with no successful outcome. I want to see what I'm capable of from a program-design point of view, so I have not yet consulted the interweb on how to solve it.

Genetic Algorithms

Another interest of mine is genetic algorithms, so I threw one at the problem to see if it could get some nice y-axis values to stick. The GA was s.l.o.w. (about a minute to place ~13 blocks - this is far from 'joy to use' territory), although there is room for some tuning to speed things up. And even though it reduced net crossovers, it did not minimise them. Also, things that should've been connected with a straight line weren't.

This led me to think about genetic algorithms and when it's a good idea to use them. I never came to any general conclusions, except that a GA is probably a bad idea for this app, for a few reasons.

First of all, there's the speed issue. I'm not convinced that even if I farmed the GA out to a 'C' routine I'd get through enough genotypes and generations in a GUI-friendly timeframe to produce a nice schematic. And since the length of the genome depends directly on the number of things needing a y-axis value, the search space grows exponentially as the schematic gets bigger. Add the complication of needing heuristics to pick a population size and generation count for each genome size, and it all just gets too much to deal with.

Another issue with running a GA here is that there's no guarantee you'll hit a genome with 'maximum' fitness (ie no crossovers where there needn't be any, etc). And because of that, you can't get consistent schematics for the same RTL from one GA run to the next if you can't consistently hit the fitness maxima.

The fitness function takes up the most programming time. And how in hell do you write a fitness function for 'nice schematics'? To produce nice schematics, I think it's necessary to minimise net crossovers and ensure that modules are not drawn over the top of each other. It also seems important to minimise the sum of the gradients of the connections. I have included these measures in the fitness function, and have even tried tweaking the weighting given to each, but all to not much avail.
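For what it's worth, the shape of such a fitness function is roughly this. It's a simplified sketch, not my real code: the weights are arbitrary stand-ins, nets are treated as straight point-to-point segments, and block overlap is crudely approximated by identical positions.

```python
def _orient(p, q, r):
    """Twice the signed area of triangle pqr (for the crossing test)."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def segments_cross(s1, s2):
    """True if two line segments strictly cross each other."""
    (p1, p2), (p3, p4) = s1, s2
    d1, d2 = _orient(p3, p4, p1), _orient(p3, p4, p2)
    d3, d4 = _orient(p1, p2, p3), _orient(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def fitness(placement, nets, w_cross=10.0, w_overlap=100.0, w_slope=1.0):
    """placement: {block: (x, y)}; nets: [(driver_block, load_block), ...].
    Penalise crossovers, overlapping blocks and sloped nets; return the
    negated weighted penalty so fitter placements score higher."""
    segs = [(placement[a], placement[b]) for a, b in nets]

    crossings = sum(segments_cross(s1, s2)
                    for i, s1 in enumerate(segs) for s2 in segs[i + 1:])
    positions = list(placement.values())
    overlaps = sum(p1 == p2
                   for i, p1 in enumerate(positions)
                   for p2 in positions[i + 1:])
    slope = sum(abs(y2 - y1) for (x1, y1), (x2, y2) in segs)

    return -(w_cross * crossings + w_overlap * overlaps + w_slope * slope)
```

The GA then just tries to maximise this, and the weights are exactly the knobs I found myself endlessly tweaking.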

So?

So I've gone back to basics, and am going to try to draw simple 2 & 3 gate circuits to see if I can get a handle on automatic schematics. Wish me luck...

Monday, August 27, 2007

BEDROOMNET: Subversion & Samba

As I mentioned on a previous post, one of the projects I have on the go is an RTL Visualiser. As I want this to be cross-platform, I suppose I had better test it on a few platforms. To this end, I set up a small local network where my mintLinux desktop could talk to my XP laptop. The idea was to host a Subversion repository on the desktop, where I'd be doing most of the development, and use a mix of Samba, Subversion and TortoiseSVN to get the XP laptop to access the repo.

Samba

Out of the box (and with a crossover cable) the desktop could read shared folders on the laptop, but the laptop couldn't see the desktop. So after installing Samba, running the Network Setup Wizard on the laptop and sharing a few folders on both machines, things were running well. I think I disabled the password stuff on Samba because I still haven't figured out how to add accounts - I don't need them anyway for this network. I'll poke about on it a bit more once I fork out for an internet connection.

Subversion

Getting this running was fairly easy too. First I installed Subversion on the desktop and set up a repository on an ext3 partition. Then I installed TortoiseSVN on the laptop - an SVN client that hooks into Windows Explorer and gives extra SVN command options when you right-click on a folder or file. After this, I easily checked out the SVN repo and ran my RTL Visualiser (Version 0.1!) successfully on the laptop!

I couldn't check any changes in though. But after adding a user and a password to the repo's passwd file and enabling password authentication, I was soon checking stuff into the desktop repo from the laptop.

At the end of the day...

All in all I'm fairly happy with this setup, and it wasn't too difficult to get going after a bit of digging around in the Subversion docs.

Tuesday, August 21, 2007

LinuxMint

Distros and Me

As mentioned in my last post, I had a rotten time trying to find a Linux distro that suited me straight out of the box. I suppose this is a big ask, but the reasoning is that I know I'm going to want to try out tons of distros, and I don't want the hassle of configuring each one to suit. I suppose I'll have to settle on one distro for "everyday use" and leave the other ext3 partition on my hard drive for my new-distro fixes. I've settled on LinuxMint (MintLinux??) 'cos I like the codec support and the themes, and I like the fact that it can use the Ubuntu repos.

NVIDIA Drivers

I hate freedom, and I want my NVIDIA drivers. My Linux box remains unconnected to the net, which makes it a pain to install the NVIDIA drivers on LinuxMint. Luckily, I stumbled across an easy fix. It requires an Ubuntu Feisty CD (which matches LinuxMint Cassandra), which I got with a Linux magazine...

* Fire up Synaptic and disable all the repositories pointing to the web.
* Select 'Add a CDROM' and insert the Ubuntu CD when prompted.
* Close Synaptic.
* Open Restricted Driver Manager.
* Enable the NVIDIA drivers, which it now grabs from the CD drive.
* (Reboot? - I can't remember exactly)

OpenOffice

I've spotted a few funnies with the locale settings in OpenOffice. I selected Dublin, Ireland as my timezone (locale?) when installing LinuxMint, and I'm assuming the language packs installed depend on the locale in some logical way. Unfortunately, I had to manually install the help files and dictionaries for OpenOffice. It looks like the installer was looking for 'English (Ireland)', couldn't find it, and instead of falling back to 'English (UK)' or 'English (US)' it installed nothing. What I'm getting at is that it'd be nice if there were some kind of graceful fallback for the OpenOffice help and dictionary files.