Sunday, 30 September 2007
Not a very good day today. I started by trying to lay down a 50mm straight line of HDPE. I completely failed and ended up smoking my machine!
The first problem I decided to tackle was extruding just the right amount of filament. This should be easy because I can instruct my extruder controller to turn the pump an exact amount. Using the equations I described last time, I know what feed rate is required to give a particular diameter of filament and what its exit speed will be. The problem is that when the extruder stops, the filament continues to extrude slowly for a while afterwards. This is because the molten plastic, being non-Newtonian, is compressible.
To start with I was getting about 12mm of overrun. I have noticed that the flexible drive made from steel wire gets wound up and stores some energy. With no power applied to the motor it actually unwinds a bit driving the motor backwards. By default my software was preventing that because it monitors the shaft position and applies increasing power as the shaft moves backwards until equilibrium is reached.
The host can instruct the controller to turn off the motor completely and let the wire unwind. That reduces the overrun to about 4mm. The shaft encoder sees the motor go backwards so, when it's told to move again, it regains all the backlash as fast as it can before settling down to the desired speed. Therefore, there is no cumulative loss of accuracy in letting the wire unwind and wind up again.
I expect the amount of filament overrun could be reduced further, or even eliminated completely by running the pump backwards a bit at the end. Unfortunately I can't do that because this is what happens to the steel wire when it is turned the wrong way:-
Because of this I designed my electronics to only be able to go forwards. Apparently this effect is not observed on the RepRap at Bath University. They are using 3mm wire, whereas mine is only 2.5mm, so that might account for it. I may see if I can get better wire that won't unwind. If so I will have to upgrade my drive to an H-bridge to allow the motor to be reversed. There isn't any spare room on my Vero board so I will either have to make a new one or make some sort of 3D creation.
In the meantime I decided to bodge round the problem. As well as the 4mm overrun when the motor stops, it also extrudes about 15mm when the heater is allowed to cool down and is then warmed up again. This is usually accompanied by a sharp cracking sound which sounds like trapped air bursting through the HDPE. I am not sure of the exact mechanism, but air must get in when the plastic is cold and contracted and then get trapped while it is heating up again, forcing some molten plastic out. Perhaps I have discovered a new type of pump with no moving parts!
So, before I can start extruding I need to remove the excess filament hanging from the nozzle. I did this by attaching a scalpel blade to one corner of my XY-table and having the machine visit it to wipe its nose just before starting to extrude. It is just a lash-up at the moment; it would be better mounted 20mm above the table, and a razor blade might work better, but it seems to work OK.
Of course, once the overrun has occurred and been removed, there is a net deficit of material which manifests itself as a delay before extrusion starts when the motor is switched on again. That has to be made up by starting the extruder in advance of moving the table for the first line segment.
So the next step was to lay down the filament on the table in a straight line. The first problem was that I discovered a bug in my software that meant the table only moved at half the specified rate. So any previous references to milling feed rates in this blog need to be halved!
The bug was easily fixed of course, but I could not get the filament to stick to my table. When it hits the table it curls upwards into a loop and sticks to the side of the hot nozzle. The table surface I used for milling is made of upside-down laminate flooring. It is covered with a textured layer of what I assume is probably some sort of vinyl, so it is no great surprise it didn't stick. The next thing I tried was paper, a post-it note to be precise. That did not work either, so the next thing to try was MDF. I taped an 18mm block to the table for a quick test and raised the Z position by 18mm, but I forgot to program it to raise up to clear the block after visiting the knife. The result was that the nozzle collided with the block, which pushed the thermistor wires so they touched the heater wires.
The result was quite spectacular, the thermistor wires, being quite thin, lit up like a light bulb before burning out. The thermistor is toast and so is the micro. Three volt micros don't like 12V up 'em!
I should have insulated the wires but I didn't have any insulation handy that would stand the temperature. Also three 3A diodes in series across the thermistor would have saved the day but it's a bit late now.
Fortunately I have a couple more micros and a spare thermistor but the machine will be out of action for 24 hours while the JB-Weld cures.
It is very easy to get a tool crash with a 3D machine and it usually causes a lot of damage. When I was using it as a milling machine I got into the habit of getting it to mime what it was going to do by running the program with a Z offset higher than the workpiece. I should have done the same thing this time.
Friday, 28 September 2007
Equations of Extrusion
When I first tested my extruder I found that the filament diameter varied with the flow rate and temperature. This was contrary to what others have experienced so I decided to investigate further. It turns out that this is known as die swell and is caused by non-Newtonian fluids expanding after they have been squeezed through a hole. Apparently it is a very complicated subject.
To get an idea of what was going on I designed my extruder controller to be able to make measurements. Rather than drive the motor with open loop PWM I used a shaft encoder with proportional feedback. Instead of specifying what PWM setting to use, the host specifies how many shaft encoder steps to move and at what rate. The extruder controller then adjusts the PWM to maintain the correct shaft position at any given instant. Assuming the filament does not slip against the drive screw, that means I can extrude a known volume of plastic in a known time to the tolerance of the original feed material. The host can then ask the controller what the total on and off times have been so that it can calculate the average power that has been used.
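The real firmware is written in C, but purely as an illustration of the scheme, a minimal sketch in Python might look like this (the tick period, gain and PWM range are invented for the example, and read_encoder() and set_pwm() are made-up stand-ins for the real hardware access) :-

import time

TICK = 0.001      # control loop period in seconds (assumed)
GAIN = 8          # PWM counts per encoder step of error (assumed)
PWM_MAX = 255     # full duty cycle (assumed)

def run_move(steps, rate, read_encoder, set_pwm):
    "Move 'steps' encoder steps at 'rate' steps per second, correcting the PWM each tick."
    demand = read_encoder()
    target = demand + steps
    while demand < target:
        demand = min(demand + rate * TICK, target)        # where the shaft should be by now
        error = demand - read_encoder()                   # how far the shaft is lagging behind
        set_pwm(max(0, min(PWM_MAX, int(GAIN * error))))  # proportional correction, forward only
        time.sleep(TICK)                                  # the real controller runs from a timer interrupt
    set_pwm(0)                                            # or keep correcting, to hold position against the backlash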
My temperature control works in a similar way. The host calculates the resistance the thermistor should have at the desired temperature, and from that, what voltage reading the ADC should produce. It sends that setting to the controller which turns the heater on and off. Again it keeps track of the total on and off times so that the host can calculate the average power.
My heater has a resistance of 8.5Ω and has 11.8V across it after the drop in the MOSFET switch and the wiring. That gives a power of 16.4W. This is a graph of the temperature reading from the thermistor plotted against the heater duty cycle :-
As you can see it is not quite a straight line. This is because the resistance of the nichrome heating element increases slightly as it gets hotter, so power does not quite rise in line with the duty cycle. I measured the resistance at 200°C to be 9.7Ω. Using the formula:
R = R₀[1 + α(T − T₀)]
that gives a temperature coefficient α of 7.8 × 10⁻⁴ per °C, which is about twice the figure I found on the web for nichrome. I expect that it varies widely according to the exact alloy being used.
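For the record, that comes from rearranging the formula: α = (R/R₀ − 1) / (T − T₀) = (9.7/8.5 − 1) / (200 − 20) ≈ 7.8 × 10⁻⁴ per °C.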
Here is a graph of temperature against power, calculated using the above formula for resistance :-
It is a lot closer to the straight line I was expecting.
I decided to investigate how much extra power is needed to heat the incoming plastic when extruding. I found that while feeding the filament in at 1mm/s, which is about the maximum my motor can do, the PWM to maintain 200°C increased from 44.6% to 61.2%, an increase of 16.6% corresponding to an extra 2.4W. Feeding a 3mm filament at 1mm/s gives a flow rate of 7.1 × 10⁻³ cc/s. HDPE has a density of around 1, so that is about 7 × 10⁻³ g/s. The specific heat capacity is 2.2 J/g°C, which gives 2.8W to heat 7 mg per second from 20°C to 200°C. I think that is reasonably close to the value I measured, given that HDPE has quite a wide range of densities.
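A quick back-of-the-envelope check of that sum in Python (the 0.95 g/cc density is my assumption; the post above only says "around 1", which is why the answer moves around a little) :-

from math import pi

diameter = 0.3       # filament diameter in cm (3mm)
feed = 0.1           # feed rate in cm/s (1mm/s)
density = 0.95       # g/cc, assumed; HDPE covers a range around this
heat_capacity = 2.2  # specific heat capacity in J/g°C
delta_t = 200 - 20   # temperature rise in °C

flow = pi * (diameter / 2) ** 2 * feed            # about 7.1e-3 cc/s
power = flow * density * heat_capacity * delta_t  # heating power required
print("%.1fW" % power)                            # about 2.7W, against the 2.4W measured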
Next I decided to look at the effect of temperature on the motor power required to extrude at a given rate :-
I concluded that temperature has little effect on the motor power required, except when it gets close to the melting point, where it rises rapidly as expected. That was how I broke my extruder!
Next I looked at filament diameter against temperature :-
No real correlation, so it seems temperature is not very important as long as it is above the melting point. This was a surprise to me as I imagined molten plastic would get less viscous as temperature increased. It may become more critical when I start laying down filament as it will affect how it fuses together and shrinks. I did all the subsequent measurements at 200°C.
Feed rate (in mm/s) against PWM was another surprise. I expected power to rise rapidly with feed rate but, in fact, it rises fairly linearly :-
Presumably the 30% intercept is the power required to overcome static friction in the system.
Here you can see the output rate versus the feed rate :-
It does not increase in proportion, so if conservation of matter holds then the filament must be getting bigger in diameter. Indeed it does; here is output diameter against output rate :-
Either it is a very complex relationship with multiple inflexions or it is just linear with lots of measurement error. I made three measurements per test with digital calipers and took the average but the deviation between samples was quite high.
I prefer to think it is a simple linear relationship which means I can make a simple mathematical model of my extruder. As you can see it will hit the Y axis at about 0.93 mm. I think that must be the size of the hole in my nozzle. I drilled it 0.5mm but perhaps I drilled the hole from the back too far and opened it out a bit. It seems to have got bigger with use because I could get 0.8mm filament when I first tested it but I don't seem to be able to now, even at very low extrusion rates.
So if the filament diameter equals hole size plus a constant times extrusion rate then from conservation of volume I can relate the output rate to the feed rate.
dₒ = dₕ + kvₒ
vₒdₒ² = vᵢdᵢ²
So: vₒ(dₕ + kvₒ)² = vᵢdᵢ², a cubic equation in vₒ!
Where dₒ is the output filament diameter, dᵢ is the input filament diameter, dₕ is the nozzle hole diameter, vₒ is the output filament speed and vᵢ is the input filament speed.
With these equations I can calculate the output rate to get a particular filament diameter. That also tells me how fast to move the head. From the output rate I can also calculate the feed rate required.
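Putting that into code, a small sketch of the calculation (the 0.93mm hole size and 3mm feedstock are the figures above; the die swell slope k is just a placeholder, not a measured value) :-

def extruder_settings(d_out, d_hole=0.93, d_in=3.0, k=0.1):
    "Return (head speed, feed rate) in mm/s for a wanted output filament diameter in mm."
    v_out = (d_out - d_hole) / k            # invert d_out = d_hole + k * v_out
    v_in = v_out * d_out ** 2 / d_in ** 2   # conservation of volume
    return v_out, v_in

The head then has to move at v_out so the filament is neither stretched nor compressed, and the pump is fed at v_in.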
Conclusion? Well I definitely have die swell which increases with extrusion rate but other people have reported constant die swell. The only explanation I can think of is that I drilled my nozzle too deep from the back so the aperture has almost zero thickness instead of the 0.5 to 1mm expected.
I have a simple mathematical model which allows me to exploit the variable filament width if I need to. This may all become irrelevant when I start laying down filament to build things because the filament can be stretched or compressed if the head movement does not match the output rate.
Tomorrow I will try laying down the filament.
Monday, 24 September 2007
Self destruction
Not managed any self replication yet but my machine has done a bit of self destruction!
While doing some experiments running my extruder at different speeds and temperatures, I managed to run it at such a low temperature that it forced the PTFE barrel out of its clamp. That broke off one of the heater wires under the JB Weld. Fortunately I was able to dig out the end of the nichrome and reconnect it. I soldered the joint, but that is not the best idea as solder melts at 183°C, I am running my barrel at about 200°C and the heater gets a bit hotter than that. Presumably molten solder is still a good conductor. The ideal way to make the connection would be with a miniature barrel crimp but I don't know if they exist. Here it is repaired :-
Clamping a very slippery plastic rod with a clamp made out of a slightly less slippery plastic is probably not the best design.
I seem to spend as much time stripping down and rebuilding my extruder as I do running it. Looking on the bright side at least the thermistor didn't fall off again despite some rough treatment. Here it is all back together again and working :-
Saturday, 22 September 2007
Temperature drop
My extruder controller is working much better after I cured the motor noise problem. The 10 bit SAR ADC also seems to work better than the 16 bit sigma-delta version did. With the 16 bit one there was a lot of noise on the readings, even when the motor wasn't running. I had to average over many samples to get a consistent reading which delayed the response. With the 10 bit ADC I just read it and compare it with the temperature set point value to decide if the heater should be on or off. That gives a temperature swing of about ± 3°C with the heater going on and off every four or five seconds.
The temperature is calculated from the ADC reading and vice versa by the PC with the following Python class :-
from math import *

class Thermistor:
    "Class to do the thermistor maths"

    def __init__(self, r0, t0, beta, r1, r2):
        self.r0 = r0                           # stated resistance
        self.t0 = t0 + 273.15                  # temperature at stated resistance, e.g. 25C
        self.beta = beta                       # stated beta
        self.vref = 1.5 * 1.357 / 1.345        # ADC reference, corrected
        self.vcc = 3.3                         # supply voltage to potential divider
        self.vs = r1 * self.vcc / (r1 + r2)    # effective bias voltage
        self.rs = r1 * r2 / (r1 + r2)          # effective bias impedance
        self.k = r0 * exp(-beta / self.t0)     # constant part of calculation

    def temp(self, adc):
        "Convert ADC reading into a temperature in Celsius"
        v = adc * self.vref / 1024             # convert the ADC value to a voltage
        r = self.rs * v / (self.vs - v)        # resistance of thermistor
        return (self.beta / log(r / self.k)) - 273.15  # temperature

    def setting(self, t):
        "Convert a temperature into an ADC value"
        r = self.r0 * exp(self.beta * (1 / (t + 273.15) - 1 / self.t0))  # resistance of the thermistor
        v = self.vs * r / (self.rs + r)        # the voltage at the potential divider
        return round(v / self.vref * 1024)     # the ADC reading

It is instantiated as follows :-

thermistor = Thermistor(10380, 21, 3450, 1790, 2187)

10380 is the resistance of the thermistor measured by my multimeter at a room temperature of 21°C. 3450 is the beta of the thermistor taken from the data sheet. The last two values are the two resistors forming a potential divider with the thermistor wired across the second one, again the values are measured with a multimeter. The fudge factor of 1.357 / 1.345 corrects the MSP430 internal reference voltage so that it agrees with the multimeter.
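As an aside, the host-side usage then amounts to something like the lines below. The on/off comparison itself actually lives in the MSP430 firmware as described above; it is written here in Python just to show the sense of it, read_adc() is a made-up stand-in for fetching a sample, and 200°C is simply the running temperature used elsewhere in this blog :-

setpoint = thermistor.setting(200)   # ADC value corresponding to 200°C
reading = read_adc()                 # hypothetical: the latest sample from the controller
heater_on = reading > setpoint       # NTC bead: a higher reading means a lower temperature
print(thermistor.temp(reading))      # report the current temperature in °C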
The result seems to track the temperature measured by a thermocouple to within about 5°C, good enough for me. Just as I had finished checking it, I knocked the thermistor leads with my thermocouple and it fell off. It was stuck to the brass nozzle with JB Weld but I forgot to roughen the surface first. I am now waiting 16 hours for it to set again.
The extruder controller firmware is only about 400 lines of C. As well as temperature control it also controls the DC motor precisely using the shaft encoder and handles the I²C protocol.
I have now completed all the mechanical parts, the electronics and the firmware. I just need to get the RepRap host code to talk to my non standard hardware to complete the machine.
Tuesday, 18 September 2007
DC to daylight
Well my machine is not going to pass any EMC regulations; my wife is complaining it is interfering with the digital TV downstairs! The amount of noise coming from the little GM3 gear motor is astonishing. This is the motor switching waveform on the top trace, with the other lead of the motor, which sits at 12V, on the bottom trace. The vertical scale is 20V and the timebase 0.4 ms.
In this instance the motor is being powered for about 300 µs every 1.3 ms when its negative lead is driven to ground. When the motor is switched off the voltage shoots up above 12V due to the back EMF. It gets capped at 48V by the over-voltage protection of the BTS134 low side switch that I am using to drive it. It then has a damped oscillation at about 6 kHz before settling down to 12V for the remainder of the off period. This will be due to the inductance of the motor windings resonating with the 100nF capacitor I put across the motor terminals. Although it looks violent it is actually the smaller burst of noise on the right which is causing all the problems.
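As a rough sanity check on that explanation, rearranging the resonance formula f = 1/(2π√(LC)) gives L = 1/((2πf)²C) = 1/((2π × 6000)² × 100 × 10⁻⁹) ≈ 7mH, which seems a believable winding inductance for a small gear motor.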
Here is a close-up of a similar burst of noise with a timebase of 10 µs.
This is around 20 MHz and you can see it gets onto the 12V rail. It is caused by the sparks at the motor brushes. Sparks emit RF energy from DC to daylight, as I was once told by an EMC expert. My guess is that 20 MHz is the resonant frequency of the motor windings with their own stray capacitance when they are momentarily disconnected from my suppression capacitor by the commutator.
One nasty aspect of this sort of noise is that it tends to get less as the motor brushes wear in and then get worse again as they start to wear out. I remember a project where a small motor was mounted close to a PIC. The PIC would frequently crash when the device was first run, but it would soon become impossible to recreate the problem until a new motor was fitted. I read that it is a good idea to "break in" DC motors by running them without any load at a low voltage for a few hours to allow the brushes to become a good fit to the commutator. Too late for mine though!
This is what the noise that gets onto the I²C lines looks like :-
A tough challenge then to make I²C reliable in this environment!
I began by stopping the comms from locking up so that I could add a retry scheme. To do that I had to put timeouts in all the points where I was waiting for the master controller to do something. When it times out I have to reset the controller and do one manual clock pulse to free up the slave before delaying 100 µs and then re-enabling the controller. That stopped the comms locking up but did nothing to preserve data integrity. For example, while I was sending motor commands and reading the temperature, the heater came on of its own accord. Not good!
The next thing I did was add an 8 bit CRC to the end of each message so that I can detect when a message has been corrupted. 8 bits should be sufficient because the messages are only a few bytes long, i.e. less than 2⁸ bits, and the bursts of noise are only a few bits long, i.e. less than 8. I used a table driven method so the software overhead is just a 256 byte table, one XOR and a table lookup.
I also added a sequence flag in the top bit of the command byte. It alternates when a new command is sent but not on a retry, which lets the slave ignore a command the master has resent because the slave's previous reply was corrupted.
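The comms code itself is C on the micro, but a minimal sketch of the same scheme in Python looks like this (the polynomial 0x07 is an assumption, the post does not say which one I used) :-

def make_crc8_table(poly=0x07):
    "Build the 256-entry lookup table for an 8 bit CRC."
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        table.append(crc)
    return table

CRC_TABLE = make_crc8_table()

def crc8(data):
    "One XOR and one table lookup per byte."
    crc = 0
    for byte in data:
        crc = CRC_TABLE[crc ^ byte]
    return crc

def frame(command, payload, sequence):
    "Put the sequence flag in the top bit of the command byte and append the CRC."
    msg = bytes([(command & 0x7F) | (sequence << 7)]) + bytes(payload)
    return msg + bytes([crc8(msg)])

The master toggles sequence for each new command and leaves it alone on a retry, which is what lets the slave recognise and ignore a duplicate.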
The result seems to be robust even with the massive amount of noise present but I don't like to paper over hardware problems with software. To make systems like this completely reliable I aim to get no retries in normal operation and only rely on the protocol to handle exceptional events. The root cause is the noise from the motor so I decided to have a go at tackling that.
I took a closer look at the noise on the motor leads without any suppression :-
It looks pretty random and different on each wire which is to be expected because the two brushes spark independently. Here is a spectrum analysis :-
It peaks at 23 MHz but must in fact go all the way up to over 600 MHz to affect the television. There is also a lot of noise on the can. My first attempts to suppress it were to put a 100nF disc ceramic across the terminals and earth the can. That did not work well at all. I found that a more modern 1nF capacitor across the terminals worked better and leaving the can floating was better than grounding it because that just put noise on the ground rail. The old and new capacitors are shown below :-
It is no surprise to me that the smaller one works better at higher frequencies: being physically so much smaller, its inductance will be lower. It is also much kinder to the MOSFET driving it!
Doing a bit of research I found that it is common practice to connect a capacitor from each terminal to the can, so I added two more 1nF caps forming a triangle. That worked well as it got the retries on the I²C bus down to zero, and also stopped the TV interference. I could have stopped there but there was still plenty of noise visible on the scope. I added two small ferrite bead inductors that I salvaged from a very old disc drive, one in series with each lead, and put a small 10nF ceramic across the cable. That made a fantastic filter, leaving no noise visible on the scope.
I also decided to add a back-EMF clamping diode rather than rely on the over-voltage protection of the MOSFET. 48V across a 5V motor is a bit much after all, and is high enough to give an electric shock.
Here is the resulting filter mounted on Vero board and fitted to the motor :-
The 1nF cap across the motor is hidden by it and the other two are underneath :-
And here is the new switching waveform with pretty much all overshoot, ringing and noise eliminated :-
If only all EMC problems were that easy!
Saturday, 15 September 2007
Bus stops
The I²C bus not working was more of a problem. I had some issues when I just had the spindle controller connected, see bodge-it-and-move-on, but at that time I did not have a storage scope so I could not get to the bottom of it. In the meantime I bought a cheap 100 MHz 2 channel USB scope so I was able to find out what was going wrong.
As I have indicated above, the clock line has two glitches where it should have a proper clock pulse. The master generates the clock but the slave is allowed to stretch it by extending the low portion. Normally it is hard to tell which device is driving the bus, but in this case it is obvious because I chose pull-up resistors too low in value for the MSP430. I forgot that the MSP430 is aimed at low power applications so has an unusually low drive capability.
I still wasn't sure which end was the problem so I coded the host end in software so I could see exactly what the slave was doing. It is relatively easy to do an I²C master in software because it does not have any strict timing deadlines. The slave does, so it is more difficult to implement purely in software.
It turns out the problem lies with the mask revision of the MSP430F2013s that I am using. Revision B has several I²C bugs such as pulling the clock low when it shouldn't and no workarounds. I have had my chips for some time, long before I started this project. Two are actually labeled with a mask revision of X which is undocumented. It seems to have at least all the bugs in mask B. The other is labeled as mask B, so none are of any use for I²C!
Very disappointing that these days a device has to have three or four mask revisions just to get something as simple as an I²C module working. This seems to be the way things are going: hardware is becoming just as buggy as software. I wasted some time at work recently discovering that the UART in a PIC18F65J10 occasionally inserts zero bytes in the middle of your packet. These things were lab exercises when I was an undergraduate, now big companies can't get them right.
Fortunately, I had some MSP430F2012s that were mask revision B and the I²C bugs were fixed one revision earlier on that chip. They are slightly different in that they have a 10 bit SAR ADC instead of a 16 bit sigma-delta ADC. This is actually more appropriate for my application but it has a completely different software interface and voltage range.
Once I had swapped to the MSP430F2012 and modified the firmware for the new ADC, the I²C bus sprang into life. That was until I started to run the motor, whereupon I got occasional bus lockups. This is due to the massive amount of noise coming from the brushes of the DC motor. I get about 2.5V of noise on the I²C lines.
Using I²C without buffers for anything other than inter-IC comms is a really bad design decision; I am not sure what came over me. I normally use RS485 differential comms when linking boards that drive motors or other high current loads and have never had a problem with noise. I even think I have publicly criticized the RepRap design for converting the RS232 to 5V signals before sending it around the ring of control boards. It was tempting to give I²C a try because I already had micros that support it, and didn't have UARTs, and my screened cable is only about a foot long. I²C is particularly susceptible to noise though, because it is only actively driven low and because it is edge sensitive. Also, the data rate I chose is five times faster than RepRap uses and I am using 3.3V logic rather than 5V. I²C also has the nasty feature that corruption can cause the bus to lock up, which doesn't happen with asynchronous comms.
One thing I noticed was that earthing the can of the motor made things a lot worse by coupling the noise onto the ground rail. I established that the noise is conducted rather than radiated by running the motor from a separate bench PSU. I also managed to get it to run reliably by adding some 2200pF capacitors to the I²C lines, but that is a horrible bodge! Other things I will try :-
- Change the pull-up resistors from 1K to 2K2 so that the MSP430 can pull them fully low. That will increase the logic low noise immunity but make the logic high immunity worse.
- See if I can program the master to clear the bus lockup.
- Add a CRC and packet sequence flags so I can detect errors and do retries.
- See if I can suppress the motor better.
Friday, 14 September 2007
Toast
The first problem was easy enough to diagnose. Since I had rebuilt and rewired the circuit it had to be the thermistor itself. After removing it, I could see the underside was looking a bit toasted. I don't think it was designed for use up to 250°C. The insulation on the wire was not up to it and I suspect it was soldered to the actual device, and that the solder melted. There seems to be some solder on the brass nozzle now where it was mounted.
The remedy was easy: I just replaced it with the recommended glass bead thermistor which had arrived from back order in the meantime. It is rated to 300°C. Its characteristics are different so I had to change my resistor values, but I had anticipated that by mounting them on the connector rather than the board.
Thursday, 13 September 2007
Getting nowhere fast
Two weeks ago I had my extruder controller built on breadboard controlling the heater temperature and motor speed. All I had left to do was link it to my main controller and talk to it from my host software. This should have been easy as I already had I²C working to my spindle controller ...
The first thing that went wrong was that the temperature reading from the thermistor started to become erratic. I decided this might be due to a bad connection, as my breadboard layout was getting a bit messy.
The hot resistance of the thermistor is only about 12Ω, so I was willing to believe a bad connection was possible as I had not used the breadboard for over 10 years. I was also getting a lot of noise from the motor, so I decided to rebuild the circuit on Vero board and shorten all the connections.
I paid careful attention to the layout to keep the high power stuff away from the sensitive inputs and the micro, and route the ground currents sensibly. The connectors on the far left are the outputs for the heater, motor and possibly a fan. Next is the power in connector followed by 3.3V and 5V regulators. The shaft encoder is 5V but the micro is 3.3V, the four resistors handle level shifting. Next are the input connectors for the shaft encoder, thermistor and filament exhausted sensor. The far connectors are for the I²C bus.
I mounted it on the z-axis together with my spindle controller so that the only moving wires are a 12V feed and the I²C bus.
All the wires are now much shorter and screened. I also earthed the casing of the motor. On testing I was very disappointed to find :-
- The thermistor was still erratic.
- The I²C bus did not work much at all.
- The noise around the circuit was just as bad, if not worse. Until I added the earth connection to the z-carriage, the micro crashed when the motor was running.
Saturday, 1 September 2007
Caught in the act
You may remember that I reported a ribbon of swarf coming out of the side of my extruder :-
It wasn't obvious to me how this was formed. When I stripped it down today I caught it in the act :-
It appears that when the threaded rod cuts into the plastic it displaces a corrugated ribbon of material sideways. This remains attached to the filament on the leeward side, and the ridges formed by the thread remain joined to each other by very thin webs. As it progresses down the pump it gets separated from the filament, presumably where it enters the barrel, and finds its way out through the side. I think the root cause is that when polymers like HDPE are stretched the long molecules get aligned lengthways and the ribbon becomes very strong even though it is very thin.