Wednesday, July 28, 2010
This is a helpful little list I found on various soft processor cores.
http://www.1-core.com/library/digital/soft-cpu-cores/
Tuesday, June 29, 2010
$4 Development Kit
I was tipped off by someone at work that TI has this little development kit, the MSP430 LaunchPad, available for the low, low price of $4.30. But so many people have jumped on it that it's backordered.
It would be nice to be able to get my hands on this for tinkering at home. You can't beat the price, and the development kit is included. I'm assuming it's powered right off the USB, so no external power supply is needed.
Story is here:
http://ee.cleversoul.com/news/tis-amazing-430-launchpad.html/
Friday, May 21, 2010
Digital Feedback Control
Linear feedback systems are well understood. In the continuous domain, the Laplace transform can be used to characterize the system and find a compensator that gives the desired response. But in a digital world, this analysis can break down: if the output of the system is sampled at discrete time intervals, the sampling lag can cause instability that the Laplace transform would not predict.
With some simple continuous feedback systems, the gain of the feedback control can be increased to improve the response time. There might not be any theoretical limit to how high the gain can go. But in a discrete time system, the time lag can cause oscillations and instability if the gain is raised beyond some limit.
As a simple example, suppose you have an integral feedback controller. This type of feedback accumulates error and uses this to drive the thing being controlled. It can be a little slower, but it is simpler than a full-fledged PID controller. It is very stable -- it tends to asymptotically approach the set point for all gains, and it tends towards zero steady-state error, which might be important in some systems. A write-up of that kind of system is here [pdf].
In the continuous domain, the open loop gain is just the integrator, 1/s, times the feedback gain K. In closed-loop form, assuming unity feedback, the transfer function comes out to K / (K + s). As you increase K, you increase the speed of the response.
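For reference, the step response implied by that closed-loop transfer function works out to a simple exponential (standard first-order algebra, spelled out here for completeness):

```latex
\[
H(s) = \frac{K/s}{1 + K/s} = \frac{K}{s + K}, \qquad
Y(s) = H(s)\,\frac{1}{s} = \frac{1}{s} - \frac{1}{s + K}
\quad\Longrightarrow\quad
y(t) = 1 - e^{-K t},
\]
```

so a larger K gives a faster exponential approach to the set point, with no overshoot for any K.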
In the discrete domain, you can simply do this in a tabular fashion, as if you had a digital controller that samples the output, compares against the set point, and accumulates the error. For a small value of K, say 0.2, you get a nice exponential decay towards the set point.
If K gets larger, you will start to see ringing but the system is still stable.

Finally, increasing K by too much will make the system unstable.
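The tabular experiment described above can be sketched in a few lines. This is a toy loop of my own construction, in which the accumulated error drives the measured value directly, and the gain values are illustrative:

```python
def simulate(K, setpoint=1.0, steps=30):
    """Discrete-time integral control: each sample, accumulate K times the
    error between the set point and the sampled output."""
    y = 0.0
    history = []
    for _ in range(steps):
        y += K * (setpoint - y)  # integrator update: y[n+1] = y[n] + K*e[n]
        history.append(y)
    return history

# K = 0.2: smooth exponential approach to the set point
# K = 1.5: overshoot and ringing, but the oscillation decays (stable)
# K = 2.5: the oscillation grows every sample (unstable)
for K in (0.2, 1.5, 2.5):
    print(K, [round(v, 3) for v in simulate(K, steps=8)])
```

In this toy loop the residual error after n samples is (1 - K)^n, so the stability boundary sits at K = 2 -- unlike the continuous case, where raising K has no theoretical limit.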

There are analytical ways of figuring the response of a discrete time system. That is for another post.
Wednesday, May 12, 2010
Another Orbit Simulator
This is one I came across while researching orbital mechanics and looking to see what simulators were out there. It's pretty impressive, and it's free!
http://orbit.medphys.ucl.ac.uk/
Basically, it gives you a first-person view from different types of vehicles on different missions. It can get pretty involved, though. Make sure you have lots of free time.
Sunday, April 25, 2010
Orbit simulator
Reading Neal Stephenson's Anathem got me interested in orbital mechanics. It's a little non-intuitive. If you are trying to dock with a space station and it's ahead of you in orbit, you need to slow down (fire thrusters opposite the direction you are going, i.e. retrograde). This puts you in a slightly lower orbit, which has a shorter period, so your angular velocity increases. Kind of like running in the inside lane around a track. If the station is behind you, speed up to let it catch up.
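The "lower orbit is faster" intuition follows directly from Kepler's third law. A quick sketch, where the constants and the altitudes are my own illustrative choices:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_period(radius_m):
    """Period of a circular orbit of the given radius (Kepler's third law)."""
    return 2 * math.pi * math.sqrt(radius_m ** 3 / MU_EARTH)

# A ship that brakes from a 400 km orbit down to 390 km picks up a shorter
# period, so it gains on a station left behind in the 400 km orbit.
t_station = circular_period(R_EARTH + 400e3)
t_ship = circular_period(R_EARTH + 390e3)
print(t_station - t_ship)  # seconds gained per lap
```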
So I studied the equations and came up with an orbital simulator where you chase a station with your ship and see how close you can get, by firing thrusters to speed up or slow down. By matching the orbital elements, you close in on the station.
A couple of other tips: to get to a circular orbit, get the eccentricity close to zero. Do this by firing thrusters prograde (in the direction you are going) near the hollow square which is apoapsis. This is when you are farthest from the earth (or whatever planet you want the blue circle to be.) Or fire retrograde near the solid square which is periapsis, when you are closest to the earth.
Try to match the period and radius of the station. I still need to add some help on the other elements shown, but they are less crucial. You can also just practice "flying" and see what changing speed at various points does to the orbit. The blue circle planet is virtual, so you'll pass right through instead of crashing or burning up in the atmosphere. Try not to fly off the screen though.
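Matching "the period and radius" of the station amounts to matching orbital elements. A minimal 2-D sketch of recovering them from a position/velocity state vector, using the vis-viva and angular-momentum relations (the demo numbers are my own, not taken from the simulator):

```python
import math

def orbital_elements(r, v, mu):
    """Semi-major axis, eccentricity, and period from 2-D position and
    velocity vectors, via specific orbital energy and angular momentum."""
    rmag = math.hypot(r[0], r[1])
    vmag = math.hypot(v[0], v[1])
    energy = vmag ** 2 / 2 - mu / rmag   # vis-viva: specific orbital energy
    a = -mu / (2 * energy)               # semi-major axis
    h = r[0] * v[1] - r[1] * v[0]        # angular momentum (z-component)
    e = math.sqrt(max(0.0, 1 + 2 * energy * h ** 2 / mu ** 2))
    period = 2 * math.pi * math.sqrt(a ** 3 / mu)
    return a, e, period

# Demo: circular speed at a 7000 km radius gives eccentricity ~ 0,
# which is the "circular orbit" goal in the tip above.
MU = 3.986004418e14
a, e, T = orbital_elements((7.0e6, 0.0), (0.0, math.sqrt(MU / 7.0e6)), MU)
print(a, e, T)
```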
The simulation is here and runs on Java:
http://users.wowway.com/~jrlivermore/orbit/orbitpage.htm
Friday, April 2, 2010
Impressive company blog
If you run a design company and encourage employees to contribute concise, informative articles about the things they are working on, this is the way to do it. It looks organized, clean, and professional.
http://www.dmcinfo.com/Blog.aspx
Monday, March 8, 2010
Error Correction Coding Receiver - Part 4
Back to the problem of decoding a linear block code with multiple correctable bits.
As described on page 15 of the [pdf] mentioned in Part 3, every such system has a set of minimal polynomials, which can be looked up in a table. This example shows the three unique polynomials covering the six syndrome equations of a triple-error-correcting code:
m1(x) = m2(x) = m4(x) = x^5 + x^2 + 1
m3(x) = m6(x) = x^5 + x^4 + x^3 + x^2 + 1
m5(x) = x^5 + x^4 + x^2 + x + 1
The syndrome vector can be obtained by taking the received codeword v(x) modulo each of the minimal polynomials.
With the LFSR, it is a simple matter of setting the reduction polynomial to each of the minimal polynomials, shifting in the codeword, and saving the result as S1(x), S2(x), and so on.
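A bit-serial sketch of that step: a standard division LFSR, with polynomials held as integer bitmasks. This mirrors what the hardware register does, one codeword bit per shift:

```python
def lfsr_remainder(code_bits, m):
    """Shift the codeword MSB-first through a division LFSR whose feedback
    taps are the coefficients of m(x); the register is left holding
    code(x) mod m(x) over GF(2)."""
    deg = m.bit_length() - 1
    mask = (1 << deg) - 1
    reg = 0
    for b in code_bits:
        msb = (reg >> (deg - 1)) & 1  # bit about to leave the register
        reg = ((reg << 1) | b) & mask
        if msb:
            reg ^= m & mask           # feed back: XOR in the divisor
    return reg

# Sanity check: x^5 mod (x^5 + x^2 + 1) = x^2 + 1
print(bin(lfsr_remainder([1, 0, 0, 0, 0, 0], 0b100101)))  # 0b101
```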
Next, a system of equations in α is needed. This is achieved by taking S1(α), S2(α^2), S3(α^3) ... S6(α^6). Replacing x with α^i is the same as spreading out the "bits" in the polynomial in x with (i - 1) zeroes in between, and taking the result modulo the reduction polynomial for this field.
In this example, that means S1 just replaces x with α. S2 through S6 can each be fed into an LFSR with the spacing mentioned above, and with the reduction polynomial set appropriately. In this case that is x^5 + x^2 + 1.
Example: If S3(x) = x^3 + x^2 + 1, then feed in x^9 + x^6 + 1. Binary 01101 becomes 1001000001; the extra zeroes (two between each pair of adjacent original bits) are the ones inserted to get the equation in α. This is fed into the LFSR to reduce it so it fits in the field.
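The spread-and-reduce step in that example can be checked in a few lines (polynomials as integer bitmasks; the reduction here is done arithmetically rather than with a literal shift register):

```python
def substitute_power(s, i):
    """Expand S(x) into S(x^i): the bit at position j moves to position i*j,
    inserting (i - 1) zeroes between adjacent coefficients."""
    out, j = 0, 0
    while s:
        if s & 1:
            out |= 1 << (i * j)
        s >>= 1
        j += 1
    return out

def gf2_mod(a, m):
    """Reduce polynomial a modulo m over GF(2), bitmask representation."""
    deg_m = m.bit_length() - 1
    while a and a.bit_length() - 1 >= deg_m:
        a ^= m << (a.bit_length() - 1 - deg_m)
    return a

# S3(x) = x^3 + x^2 + 1 -> S3(x^3) = x^9 + x^6 + 1, reduced mod x^5 + x^2 + 1
s3 = 0b1101
expanded = substitute_power(s3, 3)
print(bin(expanded))                      # 0b1001000001, as in the example
print(bin(gf2_mod(expanded, 0b100101)))  # the reduced field element S3(a^3)
```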
Now there is a set of six equations in α. We are ready for the Berlekamp-Massey algorithm.