Wednesday, January 19, 2011

DCC Firmware for Arduino

Firmware

So now that I had assembled the hardware, it was firmware time. I wanted to send an address:direction:speed string (e.g. "A001:F:S3") over the serial connection to the Arduino, and have the Arduino build the corresponding DCC packet and drive the H-Bridge accordingly.
The Arduino firmware I wrote to implement the DCC spec is interesting in two respects: it uses timer interrupts and it writes to the microcontroller ports directly. But I'm getting ahead of myself a little...

DCC Specification

Before going any further, we'd probably need to have a look at the DCC spec. DCC sends 1's and 0's as square waves of different lengths. A short square wave (two 58us half-periods) represents a 1, and a longer one (half-periods of at least 95us) represents a 0.
These 1's and 0's are then collected into packets and transmitted on to the rails. Each packet contains (at least):
  1. A preamble of eleven 1's
  2. An address octet. This is the address of the train you want to control on the layout.
  3. A command octet. This is 1 bit for direction and 7 bits for speed.
  4. An error checking octet. This is the address octet XORed with the command octet (a small worked example follows below).
Each of these sections is separated by a "0" and the packet ends with a "1" bit.
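For instance, the error-check octet is simply the XOR of the other two octets. The values below are purely illustrative, not from any particular train:
// Worked example of the error-check octet (illustrative values only).
byte address_byte = 0x03;                         // loco address 3
byte command_byte = 0x74;                         // some direction/speed command
byte error_byte   = address_byte ^ command_byte;  // 0x03 ^ 0x74 = 0x77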
If a train picks up a control packet that is not addressed to it, the command is ignored - the train keeps doing what it was last instructed to do, all the while still taking power from the rails. Power must be supplied to the trains continuously, so even when nothing has to change, packets are still broadcast on the rails - either repeats of the previous commands or idle packets.

Driving the H-Bridge

First, I had to figure out a way of driving the H-Bridge signals. Driving both legs of the H-Bridge incorrectly won't short out the power supply, but it will give ugly transitions on the rails, and DCC decoders may not be able to decode the packet. The H-Bridge control signals should be driven differentially - both must change at the same time. This ruled out using digitalWrite() to set pin states, for two reasons: it can only change one pin at a time, and it's too slow.
So I needed to manipulate one of the microcontroller's digital ports directly. I chose pins 11 and 12, which are both in PORTB. By directly manipulating PORTB with a macro, I could now change both pins at the same instant.
#include <avr/io.h>
#define DRIVE_1() PORTB = B00010000
#define DRIVE_0() PORTB = B00001000
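For these macros to work, pins 11 and 12 must first be configured as outputs. On a standard ATmega168/328-based Arduino they map to PORTB bits 3 and 4, so the setup would look something like the sketch below (the function name is mine, not from the original firmware):
// Pins 11 and 12 are PORTB bits 3 and 4 on an ATmega168/328 Arduino.
void setup_dcc_pins() {
  DDRB |= B00011000;   // make pins 11 and 12 outputs
  DRIVE_1();           // start with a defined level on the rails
}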

When to use these macros was the next problem.

Timing

As the DCC spec places quite a tight timing requirement on the 1 and 0 waveforms, I decided I should use a timer on the Arduino's microcontroller. Using the timer, I could place the transitions on the outputs accurately. So I set up the timer so that its interrupt would trigger every 58us. To simplify things, I defined the time of a 0 bit to be twice that of the 1 bit, i.e. 116us between transitions. For example, to send a 1 I would drive LO HI, and to transmit a 0 I'd drive LO LO HI HI. The timer setup routine is shown below.
void configure_for_dcc_timing() {
/* DCC timing requires that the data toggles every 58us
  for a '1'. So, we set up timer2 to fire an interrupt every
  58us, and we'll change the output in the interrupt service
  routine.

  Prescaler: set to divide-by-8 (B'010)
  Compare target: 58us / ( 1 / ( 16MHz/8) ) = 116
  */

  // Set prescaler to div-by-8
  bitClear(TCCR2B, CS22);
  bitSet(TCCR2B, CS21);
  bitClear(TCCR2B, CS20);
  
  // Set counter target
  OCR2A = timer2_target;
   
  // Enable Timer2 interrupt
  bitSet(TIMSK2, OCIE2A); 
}
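The definition of timer2_target isn't shown above; from the derivation in the comment it should simply hold the compare value of 116 - my assumption of how it's defined:
// 58us / (1 / (16MHz / 8)) = 116 timer ticks between output transitions.
const byte timer2_target = 116;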
The interrupt service routine (ISR) for the timer is shown below. For accurate timing when using a count target for a timer, I have to reset the timer counter straight away. Straight after that, I figure out which level I need to drive and drive it. The point is, there's a fixed number of processor cycles needed from when the ISR fires until I drive the pins. After this, I can be a little more relaxed about anything else I need to do during the ISR, like updating the pattern count or loading a new frame (explained later).
#include <avr/interrupt.h>

...

ISR( TIMER2_COMPA_vect ){
  TCNT2 = 0; // Reset Timer2 counter straight away for accurate timing

  boolean bit_ = bitRead(dcc_bit_pattern_buffered[c_buf>>3], c_buf & 7 );

  if( bit_ ) {
    DRIVE_1();
  } else {
    DRIVE_0();
  }  
  
  /* Now update our position */
  if(c_buf == dcc_bit_count_target_buffered){
    c_buf = 0;
    load_new_frame();
  } else {
    c_buf++;
  }
};

Building Control Packets

There are two steps to getting a packet's UI (unit interval - one 58us timer tick) data ready for transmission. First, the UI pattern must be constructed using the latest address, speed and direction data that the firmware has received over the serial link. Then, when the driver interrupt is ready for it, the packet is copied to a buffer area so that output data is never updated midway through the transmission of a packet. The picture on the right gives the general idea.
To keep things simple for the interrupt routine, I built a list of the highs and lows that must be transmitted for a given packet. Now, each time the ISR fires, it just outputs the next level in the list. For example, if I wanted to drive a packet of 1001, I'd actually be driving 12 UIs (LO HI, LO LO HI HI, LO LO HI HI, LO HI) on the pins. So I set up an array of bytes called dcc_bit_pattern to hold this HI LO HI ... sequence. It was sized so that it would hold the worst-case packet length: transmitting all 0's.
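Sizing that out (my own arithmetic, not from the original post): the preamble is 11 ones (22 UIs), the three separator zeros take 12 UIs, three all-zero octets are the worst case at 96 UIs, and the packet-end one adds 2 UIs - 132 UIs in total, which fits in 17 bytes:
// Worst case: 22 (preamble) + 12 (separators) + 96 (three all-zero octets)
// + 2 (packet-end one) = 132 UIs -> 17 bytes. The size here is my estimate.
byte dcc_bit_pattern[17];
byte dcc_bit_pattern_buffered[17];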
So after receiving a new instruction, I'd determine the frame data and write it to this packet buffer in UI format. All the while, I'd keep a count of the number of UIs in the packet, and when I'd finished building the packet, I'd squirrel this final UI count away for use later. To build a packet from the address, speed and direction data, I call build_packet(), which in turn calls a general-purpose frame builder function called _build_frame(), shown next:
void _build_frame( byte byte1, byte byte2, byte byte3) {
   
  // Build up the bit pattern for the DCC frame 
  c_bit = 0;
  preamble_pattern();

  bit_pattern(LOW);
  byte_pattern(byte1); /* Address */

  bit_pattern(LOW);
  byte_pattern(byte2); /* Speed and direction */

  bit_pattern(LOW);
  byte_pattern(byte3); /* Checksum */

  bit_pattern(HIGH);  
  
  dcc_bit_count_target = c_bit;
}
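The preamble_pattern() function isn't listed in this post; presumably it just lays down the eleven '1' bits that open the frame, something along these lines:
// A sketch of preamble_pattern(): eleven '1' bits to open the frame.
void preamble_pattern() {
  for (byte i = 0; i < 11; i++) {
    bit_pattern(HIGH);
  }
}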
The byte_pattern() function takes a byte and converts it to a string of UIs. For example, given an address of 10, which is b0000_1010 in binary, the byte_pattern() function would add the UIs {LO LO HI HI, LO LO HI HI, LO LO HI HI, LO LO HI HI, LO HI, LO LO HI HI, LO HI, LO LO HI HI} to the packet currently being constructed.
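byte_pattern() isn't reproduced here either; it's presumably just a loop over the octet, most significant bit first - a sketch:
// A sketch of byte_pattern(): emit the octet MSB-first via bit_pattern().
void byte_pattern(byte mybyte) {
  for (char i = 7; i >= 0; i--) {
    bit_pattern(bitRead(mybyte, i));
  }
}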
The function byte_pattern() uses bit_pattern(), which really does all the donkey work: the actual logic-to-UI conversion. Starting at the position held in the variable c_bit, bit_pattern() will lay down LO HI or LO LO HI HI for each bit and will increment the UI counter c_bit as it goes.
void bit_pattern(byte mybit){
    bitClear(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
    c_bit++;
    
    if( mybit == 0 ) {
       bitClear(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
       c_bit++;   
    }
    
    bitSet(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
    c_bit++;
    
    if( mybit == 0 ) {
       bitSet(dcc_bit_pattern[c_bit>>3], c_bit & 7 );
       c_bit++;   
    }
    
}
The position of a given UI in the packet's byte array dcc_bit_pattern is decoded from the UI counter. The three LSBs, c_bit[2:0], give the position within the byte and the remaining MSBs give the byte address - so, for example, UI number 21 lives at bit 5 of byte 2. This explains the bitClear(dcc_bit_pattern[c_bit>>3], c_bit & 7 ) stuff that's going on both here and in the ISR.
When the packet is built and the driver interrupt is ready for it, the packet is copied to a buffer area so that a packet is never updated midway through being transmitted. The function load_new_frame() takes care of copying the new UI data and updating the buffered UI target count.
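load_new_frame() isn't listed in the post, but from the names used in the ISR it boils down to two copies, roughly:
// A sketch of load_new_frame(): copy the freshly built pattern and its
// UI count into the buffered copies that the ISR actually reads.
void load_new_frame() {
  memcpy(dcc_bit_pattern_buffered, dcc_bit_pattern, sizeof(dcc_bit_pattern));
  dcc_bit_count_target_buffered = dcc_bit_count_target;
}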

Reading Control Strings via Serial I/O

To read a control string from the serial port, I've used the Serial module and a finite state machine (FSM). The FSM detects a string in the form: "A" digit digit digit ":" ("F" or "B") ":" "S" digit. If there's a handier way to do this, I'm all ears. The FSM diagram for this is shown below, with the red transitions being the main loop and the dashed transitions being followed when there's an error. I snuck a few test modes in there too: one so I could drive the rails constantly for long enough to put a multimeter on them, and another to tweak the timer target count.
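For flavour, here's a minimal way of picking the fields out of a 9-character command once a full string has been buffered. This is not the firmware's actual FSM, and handle_command() is just a made-up name for whatever rebuilds the DCC packet:
// Minimal parse of an "Annn:D:Sn" command, e.g. "A001:F:S3".
// Illustrative sketch only - not the firmware's FSM.
void parse_command(const char *buf) {
  if (buf[0] == 'A' && buf[4] == ':' && buf[6] == ':' && buf[7] == 'S') {
    int address     = (buf[1] - '0') * 100 + (buf[2] - '0') * 10 + (buf[3] - '0');
    boolean forward = (buf[5] == 'F');
    byte speed      = buf[8] - '0';
    handle_command(address, forward, speed);  // hypothetical handler
  }
}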
Having the firmware controlled by strings passed over the serial port opens up some interesting capabilities. For instance, I didn't know the address of the train initially, so I wrote a small Python script to cycle through all the addresses and wait a while to see if the train responded (it turned out to be '1'):
#! /usr/bin/env python
""" Try to find the address of dad's train... """
from time import sleep
import serial
link = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=2)

def search_address():
 for address in range(127):
  print "Address %03d" % (address)
  link.write("A%03d:F:S3" % address )
  sleep(10)
 
if __name__ == '__main__':
 search_address()
I also wrote one to move the train back and forth along the track:
#! /usr/bin/env python
from time import sleep
import serial

link = serial.Serial('/dev/ttyUSB0', baudrate=9600, timeout=2)
print "Link:", link
for i in xrange(10):
    link.write("A001:F:S5")
    sleep(10)
    link.write("A001:B:S6")
    sleep(14)

The Grand Opening

So after all this, you might be interested in what my dad thought of the whole endeavour. I took it back home and showed him, and he was like "Meh, that's nice I suppose. I'm more interested in the wireless control that's about these days...". Fair play, no point in using old tech, I suppose!

Saturday, January 15, 2011

Controlling Model Trains with an Arduino

Hear My Train a Coming

I was back home a few months ago, and I was in the auld fella's shed. He was giving me the grand tour of the model railway setup he was building (OO gauge, I believe). Dad's kinda more into the scenery, building buildings and wiring the tracks, rather than playing with the trains. But what interested me was the operation of the trains - he could have a couple of trains on the tracks and control them separately, going at different speeds and directions. But there's only two wires! What kind of magic was this?
Turns out it was Digital Command Control, or DCC.

The Golden Age of Steam

Back in olden times, the motors onboard model trains got their power (either AC or DC) from the tracks that the train ran on. This was cool if you had only the one train: you could control its speed by varying the voltage on the tracks and, if you had a DC setup, its direction by flipping the polarity. But if you wanted to run two or more trains at the same time on the same tracks, they'd go at the same speed in the same direction. Not too realistic. Or fun, I can imagine.
That's unless you split up the track layout into separate zones electrically. So a train on zone 1 say, would go at a different speed from a train on zone 2. This setup worked but was very flakey in a number of dimensions. It was especially troublesome at the boundaries between these sections, usually at the points. Points, if you don't know, are those things on a railway which direct a train onto one branch of a track or the other. In model railway land, with the tracks being electrically conductive and all, the points are essentially DPDT switches which can end up shorting the zones if things are not properly controlled. I'm a bit fuzzy on the details here to be honest, so I'll continue...

DCC

Anyways, DCC is the solution to all this. It's quite cool. Instead of DC or a sine wave on the rails, you drive a digital control packet at roughly ±15V. The motor on the train takes its power from this DCC signal (rectifies it, I think), and a chip onboard each train decodes the control packet to set the direction and speed of the train. Since each DCC train can be programmed with an address, each train on a layout can be individually addressed and controlled, all without tricky zone wiring! Brill! A train that's not being addressed can still rectify the signal on the rails to power its motor - it just keeps doing what it was last told to do.

I had a spare Arduino

This was very interesting to me. Digital control, eh? I had a spare Arduino - I'd brought my RGB LED project to show the nephew/nieces. Digital Control. A spare Arduino. A plan was forming. Could I possibly program my Arduino to digitally control my dad's trains?

Power

The first problem was electrical. The Arduino pumps out 5V, and the trains would require a swing of ideally ±15V and quite a bit of current. So I was thinking of a MOSFET H-Bridge switching a hefty power supply, controlled by the Arduino's outputs. But I had no MOSFETs to hand. Luckily, my dad had a few L293Ds lying about (he's cool like that). So with a bit of stripboard and a chopped-up DIL socket, I had a quick and dirty power driver circuit ready to go. A dusty wall wart rated for 12V DC (giving me ±6V) sourced from the bottom drawer in my dad's shed would supply the necessary power. The general idea of the circuit is shown below:

I used two of the four H-Bridge legs in the L293D to steer the 12V across the tracks. By controlling inputs 1A and 2A carefully, I could put +12V on one rail and 0V on the other, and vice versa, giving a swing of ±6V. This is not exactly to spec, but seemed to work for two trains at least.

The Grand Plan

Now that I was happy with the physics, it was time to get metaphysical. The basic DCC spec defines a packet made up of the train address, its direction and its speed. So I thought it would be nice if I could send an address:direction:speed triplet from a computer GUI to the Arduino via the USB/serial port. My firmware on the Arduino would then convert this command triplet string into voltage waveforms on its output pins, which would drive the power H-Bridge made from the L293D to, in turn, control the train.

So that's what I did. I didn't get it completed at home though, so the auld fella tacked a few sections of track onto a length of 2x1 and let me borrow a train.
(Warning! As pointed out by Sergei in the comments, if you build this circuit on a breadboard and use it for long periods of time, the chip will heat up and melt your breadboard! So please build it on stripboard and connect pins 4, 5, 12 & 13 to as much copper as you can to act as a heatsink.)

Firmware

So when I got back to base, I started on the firmware. The firmware to implement the basic DCC spec is interesting enough to deserve a post of its own. So that's what I'll do.

Tuesday, June 15, 2010

SystemVerilog is a Big Mistake

I think we dropped the ball with SystemVerilog.
* It's based on old tech (but at least it has garbage collection). Why is it not more Python-like, y'know, easier?
* It's a mishmash of languages
* It's getting 'unattainable'. For example, if you want to plug away at it on your own, there's no free simulator that you can practice with.

Toward a Fully Featured Programming Language


The Verilog standard should've only been updated to make it more useful from a HARDWARE DESCRIPTION point of view. SystemVerilog is an effort to grow Verilog towards a more traditional OOP programming language - and that's back to front. We should've taken Python (with yield) - or even Go, which after all is built around concurrency and compiles PDQ (not TCL, please) - and grown it to include a Verilog DUT.
SV adds useful stuff like hashes and foreach loops that make it a lot more expressive - stuff that's empirically proven to increase productivity by 100.09%. But why not just start from a real programming language in that case? It's not like OOP testbenches do connectivity and timing like traditional RTL - SV testbenches expect you to call .run() on all your class instantiations and pass around handles to interfaces for connectivity. And since we're back to forking a load of .run() methods, why not start from a 'real' programming language and allow it to twiddle the inputs of RTL descriptions of hardware?

Adding Broken Things


Since SV is a huge amalgamation of things by an amalgamation of vested interests, things were added to the SV standard that should not have been.

program Block Fail


Also, what's with the program blocks? That's a fail right there. And we still have problems with time-0 initialisation - there are still possible race conditions at the start of a sim if you want a monitor module to have reasonable defaults and then change them at the start of an initial block.

final Blocks


I don't get these. They're supposed to be able to let you do things at the end of the simulation. But like most Verilog procedural blocks, you've no visibility on the order that they'll execute. So say you want to open a file at the end of a simulation and have all your testbench monitors write their status to it. Yay, so put a final block in each of your monitor blocks to write to the file... uh, hold on, how do you know that file has been opened? How do you keep the order consistent? Ah, I know, call a .summary() function/method for each of your monitors. But now to call these functions you need to know what monitors you have, so monitors have to register themselves somewhere because SV has no introspection. So now you've a single final block calling a bunch of .summary() functions and if you've only one final block, what's the point? You may as well just have a function that you call at the end of your 'main()' initial procedure.

Open Verification? Hmmm...


SV testbench-building methodologies seem to be settling around the UVM - a nice 'open' standard that's being put together by the Accellera consortium. Yeah, you can download the code for free and have a peek at it, and maybe send some patches back to fix things that trouble you, but it ain't open, baby. If you have to pay loads of cash for a simulator to run this, I'm not sure that you can claim that it's open.
This is another good reason for going the {Real_Programming_Language, Verilog} route. With just an open source Verilog-2001 simulator, an open source programming language and some tasty interfacing, you'd be able to run fancy testbenches on pre-existing RTL from the comfort of your own home. No expensive licenses needed. And more than that, you wouldn't have to limit the maximum concurrent jobs on the compute farm to 10 when doing regressions because co-workers write pleading e-mails to you not to hog the licenses...

Assertions, Coverage & Constrained Randomisation


I admit that I haven't used assertions, coverpoints or constrained randomisation in anger, and I suspect that this weakens my argument somewhat. But couldn't all this be done in a Python module instead of, y'know, bolting together several existing languages? I've a feeling I underestimate the amount of work needed to get all this stuff working. Yip, I admit it - this portion of my argument is weak.

Companies


Companies. Why would they do {Real_Programming_Language, Verilog} when they could build SystemVerilog to steer us away from the open source Verilog simulators that were somewhat catching up, and make us all move to something where we need to study feature-vs-price matrices to see which portions of the bright new thing we can afford to run? Companies - I suppose I can have nothing against them, after all I do work for one! They have to make a buck, I suppose.

So...


It's interesting to think about what a "Real Programming Language + Verilog 2001" SystemVerilog would look like. What Real Programming Language would we use? Would it actually improve productivity?

Tuesday, March 9, 2010

That Wiki Thing...

It's been roughly a year since my pet wiki went live on the company's intranet. It's definitely been useful, but I think it hasn't completely lived up to the hopes and dreams I had for it.

Usefulness to My Good Self

As I'd planned, I've been using it as a kinda design notebook, although I still scribble on real paper as it's the quickest way to record thoughts. When I write a wiki page, I find I write for an audience other than myself. And that's no bad thing, as I have to state assumptions and 'formally' defend any assertions. I'm convinced this is ok; my paper notebook is for exploration and the wiki is the crystallisation of the thought process that led to the final design. The wiki is the definitive source of information about a topic, not a discussion. The wiki has added a sense of rigor to the thinking behind the stuff I produce.

Y'know, maybe I shouldn't be setting up wiki pages willy-nilly. I shouldn't actually be doing my design in wiki pages. Wiki pages are supposed to be solid information, not cloudy half-thought-out explorations. It should not really be an extension of my paper notebook, should it?

Usefulness to My Teammates

This is harder to judge. I think it's somewhat useful to my teammates in a read-only sense, but that it's still considered as "Marty's wiki" and not "the wiki" as I'd hoped.
I have made an effort to let people know of its existence. After I complete a body of work, I check that the page in the wiki is reasonably accurate and then the link is sent around in the 'announcement' email. For example:
Hi All, I'm finished setting up the co-sim environment for our latest chip (which is the bee's knees, BTW, and going to make our company millions). See here (http://ourgroupswiki.some.address.com/) for info on the environment and instructions for launching a sim
That sort of thing. And there is evidence that people read it, but they don't edit it if something's amiss. I do get the odd query on the accuracy of instructions, but my teammates never change the information themselves. Maybe they've better things to be doing - maybe they don't feel they're enough of an expert in the field to change it without consensus. Who knows?

The Elephant in the Room - Sharepoint

The wiki's relationship with Sharepoint is still mostly undefined.

Sharepoint is our company's blessed online collaboration thingy. But it's become a dumping ground for powerpoints and word documents. And mostly Office 2007 versions of stuff I've no hope of opening on my linux workstation (vendor lock-in, much?). Rant aside, this is where the latest datasheets, latest marketing info, latest formal design documents go. And to be honest, it's probably the correct place for that info.

So...

I need to properly define the wiki's place in the grand scheme of things. I know it has one, but I haven't yet been able to articulate it. I also need to ask my teammates why it's not "the wiki" yet.

I dunno why I'm invested in this so much.

Tuesday, February 9, 2010

Canonical Signed Digit Representation

I've recently had the opportunity to play around with multiplierless filter designs. Here's some Python code to convert numbers to and from Canonical Signed Digit (CSD) representation. CSD writes a number using the digits {+1, 0, -1} ('+', '0' and '-' below), with no two adjacent digits non-zero, which minimises the number of add/subtract terms needed for a constant multiplier. It does fractional numbers too, as I like to keep track of my binary points with negative net indices in Verilog-land.

It's based on a short paper I can't remember the name of. More specifically, it's based on the pictures from a short paper I can't remember the name of as I couldn't really follow all the set theory in the text.

To use it, put it on your path somewhere and:

canavan% python
Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15)
[GCC 4.4.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import csd
>>> csd.to_csd(34)
'+000+0'
>>> csd.to_csd(34.75)
'+00+0-'
>>> csd.to_csd(34.75,4)
'+00+0-.0-00'
>>> csd.to_csd(34.75,6)
'+00+0-.0-0000'
>>> csd.to_decimal('+0000')
16.0
>>> csd.to_decimal('+0000.0-000+0-000+0000-')
15.761955261230469
>>>

Beware, I don't do any input validation yet...

Oh yeah, linkage: http://sourceforge.net/projects/pycsd/

Thursday, July 16, 2009

I Program Computers

A new housemate moved in recently. We were getting to know each other - talking about our backgrounds, our favourite football teams and all the usual getting-to-know-you good stuff. He'd half remembered from our initial meeting that I did something vaguely technical for a living, and asked did I "program computers or what?".
"I'm an electronics engineer. I help to design the digital parts of chips", I NACKed.
"Ah", says he, "so how do you do that then?"
"Emm", I was caught out. "By ah, programming computers...", I sheepishly admitted.
It brings up a topic close to my heart - are Electronic Engineers (EEs) learning as much as they should from Computer Science and Software Engineering?

Digital Design is Programming


Software Engineering is important to EEs because digital designers, and especially functional verification engineers, are in essence specialised software engineers. For digital designers, our thoughts are necessarily grounded in hardware, but those thoughts are expressed in software. The special requirements of concurrency and timing for describing hardware require dedicated Hardware Description Languages (HDLs), but these are programming languages nonetheless - computers can be made to execute them.

If computers can run our HDLs as programs, then it's natural as engineers to want to check the arse off our designs before they make it to manufacturing. We want to make sure that we've expressed our ideas correctly. We're obsessive about checking, so we put our functional verification engineer hats on and run simulations, and now we're suddenly programming for real. Our testbenches and testcases are now software proper. It no longer matters if the code we write is translatable into flip-flops and NOR gates, so long as the input signals are wiggled in the correct way and the outputs wiggle as we'd expect. And even better (maybe?), we're allowed to abstract now.

I'm of the opinion that a lot of Electronic Engineers don't read as much about software development as they should. Software seems to be, or at least seemed to be, a minor detail that we could get the co-op to sort out. And as far as my own university course was concerned - why did I have to independently discover the joys of source control? I've read a few books like "Code Complete", "Emergent Design" and "Pragmatic Programmer" and wished with every line I read that an equivalent existed for us digital designers. Maybe there is one - it's just that programming-related resources are easier to find on the web.

Since we're all programmers now, we should learn how to program. From what I read, real software programmers seem to have a small niggling worry that they're somehow inferior to 'real' engineers. That's backwards though - us 'real' engineers need to start befriending real programmers and learning from them. We're so dependent on computers that we need to learn how to program for real. We need source control, we need unit tests, we need to learn to refactor and we need to learn to spot code smells. We need to write scripts to generate RTL, scripts to launch batches of sims over the network, and Makefiles to automate synthesis. We're software engineers and we haven't the slightest clue that we are - at least, we've no idea we will be when we leave college.

Monday, May 4, 2009

Drawing Circuit Diagrams - Update

Well, after a bit of wrangling with the EGB layout algorithm - things are working out!



There are still a few crossovers on the outputs of U8 & U9 which I haven't got to the bottom of yet...

Animation


To help get to the bottom of such things, I've implemented a bit of animation to show me how the layout is progressing at each step. Using Python's generators to unroll the main layout loop was the key here. First, the circuit data structure is drawn, then after a small delay .next() is called on the generator, and the circuit is redrawn. This continues until the generator is spent. Pretty nifty if I do say so myself...

Improvements


At the minute, the layout algorithm is sweeping from the inputs of the circuit to the outputs. I'm worried that this won't be optimum for untangling all types of circuits. So, once I debug my EGB algorithm implementation, I'll experiment with the following to see what gives the best results:

  • inputs to outputs

  • outputs to inputs

  • inputs to outputs to inputs


I'm also concerned about the initial state of the circuit data structures. Maybe I'm giving the algorithm too easy a time. The instantiations in the circuit data structure are more or less in the order that they appear in the verilog file. Maybe I should mix up the instantiation order in the verilog files. Or maybe have a switch to randomize the instantiation order in the data structure...

I've also to trawl/profile the code and look for optimizations...

Next Steps


After playing around with the layout algorithm, I think I'll add a final stage to tidy up the drawing of the nets. Once I get something half-pretty going, I'll concentrate on parsing a bigger subset of the verilog language.